Crohn's disease is a chronic, autoimmune, inflammatory condition affecting the digestive tract. It affects approximately 300 out of every 100,000 people in Western Europe, imposing a significant financial burden on healthcare systems. The available treatments are costly and have a high failure rate within the first year. Uncontrolled inflammation in the bowel can lead to significant damage, with 10-30% of patients requiring surgery within five years of diagnosis.
Crohn's disease location and behavior are routinely assessed using magnetic resonance imaging (MRI) or magnetic resonance enterography (MRE) to evaluate the presence, extent, and activity of the disease. One important feature evaluated during MRE is the thickness of the bowel wall, which is strongly linked to the severity of the disease. Bowel wall thickness measurement has limitations, and it is believed that segmentation of abnormal bowel volume may facilitate more objective, quantitative assessment for tracking disease activity and treatment response. This is not feasible without automation due to constraints on clinicians' time. Previous automated segmentation tools have used images acquired with gadolinium contrast agents, which numerous studies suggest should be avoided.
Existing methods for assessing disease activity based on bowel wall thickness and other MRI features are considered time-consuming, limiting their clinical utility. Notably, changes in the length and volume of an abnormal segment of the bowel may provide more accurate information about the response to treatment compared to bowel wall thickness alone, which is valuable both for clinical use and drug development.
Some imaging techniques allow for automatic calculation of bowel wall thickness on certain MRI scans, reducing the need for manual input. Studies have used complex computer algorithms to automatically identify and measure abnormal bowel segments, achieving high accuracy and reproducibility. However, these previous studies have relied on contrast-enhanced imaging, which has raised concerns due to potential long-term health risks associated with the contrast agent used. In response to these concerns, there is a need for automated methods to analyze unenhanced structural imaging as part of routine MRE, ultimately reducing reliance on contrast-enhanced imaging and addressing associated safety concerns.
In one embodiment, a method includes receiving, via a user interface, three-dimensional centerline input on a noncontrast T2-weighted magnetic resonance imaging (MRI) image and representing voxels within a threshold distance of the three-dimensional centerline as feature vectors. The method further includes generating clusters of the voxels using the feature vectors, binarizing the clusters into positive and negative groups of voxels based on a threshold value, and generating a segment of abnormal bowel within the noncontrast T2-weighted MRI image based at least on the binarized clusters.
In one embodiment, a method includes: receiving patient imaging including one or more voxels; determining a lumen indicator based on the patient imaging; representing the one or more voxels within a distance of the lumen indicator as one or more feature vectors; generating, based on the one or more feature vectors, a cluster including at least one of the one or more voxels; binarizing the cluster into one or more groups based on a threshold value; and generating a bowel segment model based at least on the cluster.
Optionally, in some embodiments, the one or more groups includes at least a positive group and a negative group; voxels in the positive group are included in the bowel segment model; and voxels in the negative group are excluded from the bowel segment model.
Optionally, in some embodiments, the bowel segment model represents a segment of abnormal bowel.
Optionally, in some embodiments, the patient imaging includes at least one of a magnetic resonance image (MRI), an ultrasound, or a computer tomography (CT) image.
Optionally, in some embodiments, the MRI includes a noncontrast T2-weighted MRI image.
Optionally, in some embodiments, the lumen indicator includes a three-dimensional centerline of the lumen.
Optionally, in some embodiments, the method further includes: receiving segmentation data; and evaluating the bowel segment model based on the segmentation data.
Optionally, in some embodiments, evaluating the bowel segment model includes comparing the bowel segment model to the segmentation data.
Optionally, in some embodiments, the comparison of the bowel segment model to the segmentation data includes at least one of a Dice score, a symmetric Hausdorff distance, a mean contour distance, a volume, or a length normalized volume.
Optionally, in some embodiments, the length normalized volume is based at least in part on a length of the lumen indicator.
Optionally, in some embodiments, the segmentation data includes manual segmentation data determined by a medical provider.
In one embodiment, a method for training an artificial intelligence (AI) model to segment portions of a bowel of a patient includes: receiving, by a processing element, patient imaging data associated with a lumen; receiving, by the processing element, a lumen indicator configured to mark a portion of the lumen; receiving, by the processing element, segmentation data based on the lumen indicator, wherein the patient imaging data, the lumen indicator, and the segmentation data comprise training data; providing the training data to an artificial intelligence algorithm executed by the processing element; training, by the processing element, the artificial intelligence algorithm using the training data to learn a correlation between the lumen indicator and the segmentation data associated with the lumen within the patient imaging; determining, by the processing element, a bowel segment model based on the training data; and evaluating the bowel segment model based on validation data.
Optionally, in some embodiments, the patient imaging includes one or more voxels, and determining the bowel segment model includes: representing the one or more voxels within a distance of the lumen indicator as one or more feature vectors; generating, based on the one or more feature vectors, a cluster including at least one of the one or more voxels; binarizing the cluster into one or more groups based on a threshold value; and generating a bowel segment model based at least on the cluster.
In one embodiment, a non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a processing element, cause the processing element to: receive patient imaging including one or more voxels; receive a lumen indicator based on the patient imaging; represent the one or more voxels within a distance of the lumen indicator as one or more feature vectors; generate, based on the one or more feature vectors, a cluster including at least one of the one or more voxels; binarize the cluster into one or more groups based on a threshold value; and generate a bowel segment model based at least on the cluster.
Additional embodiments and features are set forth in part in the description that follows, and will become apparent to those skilled in the art upon examination of the specification and may be learned by the practice of the disclosed subject matter. A further understanding of the nature and advantages of the present disclosure may be realized by reference to the remaining portions of the specification and the drawings, which form a part of this disclosure. One of skill in the art will understand that each of the various aspects and features of the disclosure may advantageously be used separately in some instances, or in combination with other aspects and features of the disclosure in other instances.
Improved methods and systems of classifying or segmenting diseased and healthy portions of the bowel are disclosed. In various examples, the system captures imaging data of a patient's bowel, such as from an MRI scan, computer assisted tomography scan, ultrasound, x-ray, or the like. Given that full segmentation of Crohn's disease is prohibitively time-consuming for clinicians, the systems and methods disclosed herein take as input one or more 3D lumen indicators (e.g., a centerline of an intestinal lumen) per abnormal bowel segment, and return a volumetric segmentation of the bowel. The system receives input from a user such as a medical provider or a technician indicating a centerline of a portion of the bowel. A user-drawn lumen indicator is generally rapid for a medical provider (such as a radiologist) to place and serves as a foundation for automated segmentation. The system uses the centerline to determine a three-dimensional representation of the bowel. The system segments the bowel into healthy and diseased portions. The term "bowel" refers to the long tube-like portion of the digestive tract that extends from the stomach to the anus. It includes the small intestine and the large intestine (colon), and may include interfaces with those structures as well, such as the ileum and terminal ileum.
In one example, a disclosed method includes a segmentation protocol and semi-automated pipeline for use on coronal T2-weighted MRI, routinely acquired during Crohn's monitoring. Using a novel annotation protocol, expert radiologists may draw lumen indicators (e.g., 3D centerlines) through segments of abnormal bowel, and a volumetric segmentation of the wall may be determined by the segmentation system 100 (see, e.g.,
In one example, a method for segmentation of abnormal bowel wall volume in Crohn's disease patients uses non-contrast T2 weighted images, commonly available in treatment records of Crohn's patients. The method can use real-world relatively low resolution data to develop an approach that may be generalizable to the clinical setting.
In one example, the method includes a bowel annotation protocol, creating ground truth data, a lumen indicator (e.g., lumen centerline) for a section of abnormal bowel, and a corresponding segmentation of the surrounding intestinal wall. In one example, an artificial intelligence (AI) or machine learning (ML) algorithm takes a manually-created lumen indicator as input. A processing element performs unsupervised clustering to divide an image into small, contiguous clusters. Each cluster is characterized using a custom feature set, which is used as input to a random forest regressor that predicts the fraction of the cluster's voxels belonging to the bowel segment model. In some examples, the algorithm includes a neural network including convolutional layers and fully connected layers.
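For illustration only, the unsupervised clustering step described above may be sketched as follows. This is a simplified, SLIC-like scheme (k-means on intensity plus spatial coordinates, then splitting each k-means cluster into spatially contiguous components); the function name, cluster count, and spatial weighting are hypothetical and are not the claimed implementation.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy import ndimage

def contiguous_clusters(image, n_clusters=8, spatial_weight=0.5, seed=0):
    """Divide a 2D slice into small, contiguous clusters by running
    k-means on (intensity, y, x) features, then relabeling so every
    output cluster is a single connected region."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        image.ravel().astype(float),
        spatial_weight * ys.ravel(),
        spatial_weight * xs.ravel(),
    ])
    _, labels = kmeans2(feats, n_clusters, seed=seed, minit="++")
    labels = labels.reshape(h, w)
    # Split each k-means cluster into its connected components so the
    # final clusters are spatially contiguous.
    out = np.zeros_like(labels)
    next_id = 1
    for k in range(n_clusters):
        comps, n = ndimage.label(labels == k)
        for c in range(1, n + 1):
            out[comps == c] = next_id
            next_id += 1
    return out

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
seg = contiguous_clusters(img)
```

In practice such clusters would be computed per coronal slice within the region of interest around the lumen indicator, rather than over a random toy image as here.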
To validate the system, inter-reader agreement for two readers (e.g., medical providers 104 such as radiologists, physicians, or technicians) segmenting the bowel wall based on the same centerline may be quantified to verify that the output of the system agrees with humans at a level close to inter-reader agreement, with a nonsignificant difference found when assessed using the Dice score and symmetric Hausdorff distance, but a significant difference seen with the mean contour distance. See, e.g.,
In one example, the system may calculate a bowel wall thickness automatically with minimal user interaction on T1-weighted post-contrast imaging. In addition, the system may include multi-stage, multi-scale feature extraction and classification models to perform fully-automated segmentation of abnormal bowel, in particular using support vector machines and random forests. In still further examples, the method may add details of spatial context and use active learning to reduce data requirements and time for model training. Such a method and system has achieved a Dice score of 0.924 on a detection task including the bowel lumen.
In statistics, specifically in the context of evaluating the performance of image segmentation or object detection algorithms, the Dice score, also known as the Sørensen-Dice coefficient, is a measure of the similarity or overlap between two sets of data. The Dice score may be used to assess the agreement between the predicted and ground truth segmentation masks or delineations obtained from medical imaging data, such as MRI or CT scans.
The mathematical formula for the Dice score is: Dice = 2|A∩B|/(|A|+|B|), where: |A| denotes the cardinality of set A (e.g., the number of pixels or voxels in the predicted segmentation mask); |B| denotes the cardinality of set B (e.g., the number of pixels or voxels in the ground truth segmentation mask); and |A∩B| denotes the cardinality of the intersection of sets A and B (e.g., the number of overlapping pixels or voxels between the predicted and ground truth segmentation masks). The Dice score ranges from 0 to 1, where a score of 1 signifies complete overlap between the predicted and ground truth masks (and generally a higher-quality model), while a score of 0 indicates no overlap (and generally a lower-quality model). For example, the Dice score may be used to quantitatively assess the accuracy and performance of the segmentation algorithms and systems disclosed herein, providing a measure of how well the algorithm's predictions align with the ground truth segmentation, thereby informing the quality of the algorithm's performance in delineating structures or abnormalities within medical images.
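The Dice computation above may be illustrated by the following minimal sketch over binary masks (the function name is hypothetical; the convention of returning 1.0 for two empty masks is an assumption, not part of the disclosure).

```python
import numpy as np

def dice_score(pred, truth):
    """Sørensen-Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) ≈ 0.667
```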
In one such example the methods and systems disclosed herein used manual lumen indicators with region-growing and refinement based on active contours, using contrast-enhanced T1-weighted images, automatically extracting bowel wall volume and thickness measurements. In this example, even with independent manual lumen indicator inputs from multiple radiologists, the system increased reproducibility of extracted biomarker statistics relative to manual delineation. In another example, the system achieved segmentation of Crohn's lesions by combining manual lumen indicator input, curviplanar reformatting and deep learning using a 3D convolutional neural network (e.g., a U-net), achieving a Dice score of 0.75 for inflamed bowel wall.
The segmentation system 100 provides an efficient method of quantifying Crohn's disease burden, providing a novel, powerful tool for assessing disease severity and treatment response from routine clinical data. The benefits of the segmentation system 100 support proactive monitoring and precision management of patients and therapeutic development.
Turning to the figures,
The segmentation system 100 may include a user device 110 such as a desktop computer, laptop computer, smart phone, tablet, or other device suitable to enable a medical provider 104 to interact with the segmentation system 100.
The segmentation system 100 may include a server 112. The server 112 may have computational and/or storage capabilities greater than that of the user device 110. The server 112 may include or be in communication with a database 114 suitable to store medical imaging information from the imaging device 106, diagnostic information, medical records, and/or segmentation data developed by the segmentation system 100.
The devices of the segmentation system 100 may be in direct communication with one another, or may be in communication via a network 108. The network 108 may be implemented using one or more of various systems and protocols for communications between computing devices. In various embodiments, the network 108 or various portions of the network 108 may be implemented using the Internet, a local area network (LAN), a wide area network (WAN), and/or other networks. In addition to traditional data networking protocols, in some embodiments, data may be communicated according to protocols and/or standards including near field communication (NFC), Bluetooth, cellular connections, universal serial bus (USB), Wi-Fi, Zigbee, and the like.
Turning to
In the context of medical imaging AI/ML techniques, ground truth data may include authoritative or reference data that serves as the standard against which the performance of the segmentation system 100 is evaluated. This ground truth data 202 typically includes accurately annotated or labeled medical images, which have been reviewed and validated by medical providers 104 such as expert radiologists or practitioners. For example, these annotations may include delineations of relevant anatomical structures, such as the terminal ileum 206 and ileum 208 shown in
Overlaid on the patient imaging 200, the ground truth data 202 may include one or more lumen indicators (e.g., lumen indicators 210a and/or lumen indicator 212a) intersecting the imaging plane. In the context of the human body, the term “lumen” refers to the inside space or cavity within a tubular structure such as a blood vessel, intestine, or other hollow organ. More broadly, it can also refer to the interior of any tubular structure within the body, such as the central space within a bronchus in the lungs or the space within the spinal cord. The lumen is the open space within a tubular structure through which air, fluid, or other substances can pass.
The segmentation system 100 may be trained, validated, and/or tested on an annotation protocol whose output is shown in
In the training method, a medical provider 104 such as an expert radiologist identified the coronal T2-weighted non-fat-saturated images. The medical provider 104 placed a lumen indicator (e.g., the lumen indicator 210a and/or lumen indicator 212a) through the lumen of an abnormal segment of small bowel. This training method is discussed in more detail with respect to
In one example, to train the model, readers (e.g., medical providers 104) annotated training cases before being processed by the segmentation system 100 algorithm. Summary data of example training, testing, and validation datasets can be seen in Tables 1 and 2
Main datasets (train/validation/test). Imaging data from patients with confirmed small bowel Crohn's disease were collected from six different hospitals (see Table 1). Abnormal bowel segments, identified based on pre-existing clinical information, had lumen indicators drawn by medical providers 104 such as expert radiologists (segmentation was performed based on these by a technician or radiologist). The technicians could consult the radiologists for clarification of the lumen indicators, and modify their segmentations accordingly. Data were checked for consistency with the protocol by an experienced researcher and expert radiologist before signoff. See
Table S, below, shows demographic information for inter-reader datasets.
In one example, patients 102 were randomly split to training, validation, and testing data sets. The training set (60 segments, 49 patients) was used for explicit optimization (parameter fitting). The validation set (20 segments, 18 patients) was used for interim performance quantification (and thus algorithm optimization) of the whole pipeline or its constituent parts. The test set (37 segments, 34 patients) was held out for the entire development.
In one example, to correctly examine inter-reader agreement of lumen indicator (e.g., centerline) based segmentation, readers worked from the same lumen indicator. Eighteen cases from an example dataset (see Table S2) were annotated on coronal T2-weighted non-contrast MRI, with centerlines drawn through the terminal ileum 206 by a first reader, which were then independently segmented by the first reader and a second reader in isolation from one another.
Turning to
In the context of medical imaging, a voxel may refer to a three-dimensional (3D) pixel, which is the basic unit of volume in an image dataset obtained through techniques such as computed tomography (CT), magnetic resonance imaging (MRI), or other 3D medical imaging modalities. A voxel may represent a data point within a 3D grid that defines the spatial characteristics of the imaged anatomy. Each voxel encapsulates information about a small volume element within the scanned region, and it is characterized by its position in 3D space as well as the intensity or density of the tissue or material it represents. Voxels serve as the building blocks of the 3D image, allowing the visualization and analysis of anatomical structures and pathological findings with spatial detail. The segmentation system 100 may analyze voxels within a medical image to enable assessment of tissue properties, the identification of abnormalities, and the visualization of anatomical structures in a fine-grained and volumetric manner. For example the segmentation system 100 may process and analyze image data at the voxel level to extract features, segment structures, or classify pathologies, thereby leveraging the 3D information contained within the voxel grid for comprehensive and detailed image interpretation.
Ground truth data 202 is shown including examples of a bowel segment model 302 and a bowel segment model 304, based respectively on the lumen indicator 210a, marked portion 210b, lumen indicator 212a, and marked portion 212b as previously discussed. The bowel segment model 302 and the bowel segment model 304 may be three-dimensional models of segments or portions of the bowel of the patient 102 as determined by the segmentation system 100 and the methods disclosed herein.
Turning to
In some examples, e.g., shown in
In some examples of clustering calculations shown for example in
In some examples, each cluster 404, 406 is described by a vector of the following features. Fifteen features originate from three sets of values based on the voxels in the cluster: raw voxel intensities; intensity after filtering with a Laplacian-of-Gaussian filter (kernel size, e.g., 2 mm); and 3D distance between each cluster 404, 406 voxel and the lumen indicator; with the minimum, maximum, range, standard deviation, and mean computed for each set. Features may further include: the minimum 3D distance of the cluster 404, cluster 406 centroid from the respective lumen indicator 210a, lumen indicator 212a; the difference between the mean intensity of the cluster 404, 406 voxels and the voxels surrounding the closest point where the respective lumen indicator intersects the current imaging plane; the minimum distance, in the direction orthogonal to the coronal plane, between the cluster 404, 406 and the lumen indicator 210a, lumen indicator 212a; and the difference between the mean of the voxel intensities and the mean of the voxel intensities within a threshold (e.g., three connected voxels in-plane). In some examples, the feature vectors for each cluster 404, cluster 406 are fed to a random forest regression model executed by a processing element of the segmentation system 100. The regressor may be trained as follows: training and validation sets may be used to generate clusters 404, 406 as described herein, and the degree of overlap with the ground truth data 202 quantified (e.g., the fraction of voxels within the cluster 404, 406 which were segmented in the ground truth data 202). A random forest regressor may be trained to predict this fraction based on the feature vector, for the training set. Modifications to the feature set, pipeline, and hyperparameters may be based on results for the validation set.
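For illustration only, the fifteen summary-statistic features and the random forest regression step may be sketched as below. This toy example uses non-overlapping blocks as stand-in clusters, random placeholder targets in place of the ground-truth overlap fractions, and scikit-learn's RandomForestRegressor as a stand-in for the disclosed regressor; all names, sizes, and parameter values are hypothetical.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestRegressor

def cluster_features(image, log_image, dist_to_line, cluster_mask):
    """15 per-cluster features: {min, max, range, std, mean} over raw
    intensities, the Laplacian-of-Gaussian response, and the 3D distance
    of each voxel to the lumen indicator."""
    feats = []
    for vol in (image, log_image, dist_to_line):
        vals = vol[cluster_mask]
        feats += [vals.min(), vals.max(), np.ptp(vals), vals.std(), vals.mean()]
    return np.array(feats)

# Toy volume, LoG filtering, and a straight stand-in centerline.
rng = np.random.default_rng(0)
image = rng.normal(size=(16, 16, 16))
log_image = ndimage.gaussian_laplace(image, sigma=2.0)
line = np.zeros(image.shape, dtype=bool)
line[8, :, 8] = True
dist = ndimage.distance_transform_edt(~line)

# Treat non-overlapping 4x4x4 blocks as stand-in clusters; targets are
# placeholder "fraction of cluster voxels inside the ground truth" values.
X, y = [], []
for i in range(0, 16, 4):
    for j in range(0, 16, 4):
        for k in range(0, 16, 4):
            m = np.zeros(image.shape, dtype=bool)
            m[i:i+4, j:j+4, k:k+4] = True
            X.append(cluster_features(image, log_image, dist, m))
            y.append(rng.random())

reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = reg.predict(X)  # continuous per-cluster overlap predictions
```

A cluster would then be included in the segmentation when its predicted fraction exceeds the chosen threshold (e.g., `pred >= 0.35`, per the thresholding discussed herein).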
In some examples, to decide whether a cluster's constituent voxels are included in the final segmentation, a threshold is applied to the continuous predictions from the regressor. In some examples, the threshold is set at 0.35, selected by examining the mean Dice score that would result from different values on the validation set (see, e.g.,
In some examples, system output 300 may be subject to post-processing. For example, the union of included clusters for each coronal slice may be median-filtered (e.g., 3×3 kernel). The voxel count of each contiguous region of voxels may be calculated, and any region with a count below a threshold (e.g., 33% of the maximum) may be removed.
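The post-processing just described (per-slice median filtering followed by removal of small contiguous regions) may be sketched as follows; the function name and the toy mask are hypothetical, and the 3x3 kernel and 33% threshold mirror the example values above.

```python
import numpy as np
from scipy import ndimage

def postprocess(mask, min_fraction=0.33):
    """Median-filter each coronal slice (3x3 kernel), then drop any
    connected region whose voxel count is below min_fraction of the
    largest region's count."""
    out = np.stack([ndimage.median_filter(s.astype(np.uint8), size=3)
                    for s in mask]).astype(bool)
    labels, n = ndimage.label(out)
    if n == 0:
        return out
    counts = np.bincount(labels.ravel())[1:]  # per-region voxel counts
    keep = np.flatnonzero(counts >= min_fraction * counts.max()) + 1
    return np.isin(labels, keep)

mask = np.zeros((1, 20, 20), dtype=bool)
mask[0, 2:8, 2:8] = True      # larger candidate bowel region
mask[0, 12:15, 12:15] = True  # small spurious region
result = postprocess(mask)
```

In this toy case the small region falls below 33% of the largest region's voxel count after filtering and is removed, leaving a single contiguous region.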
For example, as shown in
In the example shown in
Each of
As shown for example in
As shown for example in
In some examples, the method 1000 may be represented in computer code such as C, C++, Python, or other languages and may be compiled or passed to an interpreter for execution by one or more processors, such as a processing element 1302. In one example, the method 1000 may be written in Python and may undergo formal code review, e.g., compliant with IEC 62304. To quantify algorithmic performance, standard segmentation metrics such as the Dice score (a metric for overlap of segmentations bounded between 0 and 1), the symmetric Hausdorff distance (SHD, the maximum distance between the outlines of two shapes), and the mean contour distance (MCD, the average distance between the outlines of two shapes) may be calculated by the method 1000, as discussed herein.
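The SHD and MCD metrics named above may be sketched over point sets sampled on two segmentation outlines; the function name is hypothetical and this is a simplified illustration, not the claimed implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def surface_metrics(contour_a, contour_b):
    """Symmetric Hausdorff distance (maximum boundary-to-boundary
    distance) and mean contour distance (average nearest-point distance,
    taken in both directions) between two contour point sets."""
    shd = max(directed_hausdorff(contour_a, contour_b)[0],
              directed_hausdorff(contour_b, contour_a)[0])
    d_ab = cKDTree(contour_b).query(contour_a)[0]  # a -> nearest b
    d_ba = cKDTree(contour_a).query(contour_b)[0]  # b -> nearest a
    mcd = (d_ab.mean() + d_ba.mean()) / 2.0
    return shd, mcd

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + [0.0, 1.0]  # the same contour shifted by 1 unit
shd, mcd = surface_metrics(a, b)
print(shd, mcd)  # 1.0 1.0
```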
The method 1000 may begin in operation 1004 and a patient 102 presents to a medical provider 104 with symptoms. The medical provider 104 may determine, or the patient 102 may provide, clinical information 1002, such as medical history, symptoms, demographic information (e.g., ethnicity, socioeconomic status, etc.) or biometric information (height, weight, blood pressure, etc.). The patient 102 may also include patient imaging 200, either determined by the imaging device 106 of the segmentation system 100 or by another system. If the symptoms are indicative of small bowel disease such as Crohn's disease, the method 1000 may proceed to operation 1006. If the symptoms are not associated with small bowel disease, the method 1000 may end.
In operation 1006, the patient imaging 200 may be captured by, or fed into, the segmentation system 100. For example, the imaging device 106 of the segmentation system 100 may be used to perform an MRE on the patient 102. The segmentation system 100 may determine if additional analysis is needed. For example, one form of additional analysis may include performing Magnetic Resonance Index of Activity (sMaRIA) analysis on the patient 102. If the segmentation system 100 determines that additional analysis is needed, the method 1000 may proceed to operation 1008.
In many embodiments, the operation 1008 may be optional. In operation 1008, additional analysis may be conducted. For example, a sMaRIA study may be completed. In other examples, additional imaging of the patient 102 may be completed with other imaging technologies, such as ultrasounds, CT scan, etc.
The method 1000 may proceed to operation 1010 from either operation 1006 or operation 1008. In the operation 1010, a user, such as a medical provider 104, may use the segmentation system 100 to generate one or more lumen indicators (e.g., the lumen indicator 210a or lumen indicator 212a). In many examples, the lumen indicator 210a, 212a may mark an interior portion of the lumen of the small intestine. See, e.g., the dashed lines in
With reference to
For example, the medical provider 104 may interact with the user device 110 and mark sections of the bowel 204 that appear to be diseased. See, e.g., the nodes or dots on the solid lines covering the dashed lines. For example, the user device 110 may display successive slices of the ground truth data 202 and the medical provider 104 may mark (e.g., with a mouse, stylus, finger, or other input tool) a point in each slice through which the lumen indicator passes. The segmentation system 100 may generate a polyline, spline, etc. through these successive point markers to generate a two-dimensional or three-dimensional lumen indicator 210a/212a. The medical provider 104 may mark the diseased portion either when marking each slice of the ground truth data 202, or may mark portions of the lumen indicator 210a/212a after the segmentation system 100 generates the same.
For example, a user such as a medical provider 104 may add a set of sequential points along the length of the lumen of an abnormal enteric segment. In some examples, the sequential points may be near a centerline of the lumen. In some embodiments, a cubic Bézier curve may be generated between each adjacent pair of points (e.g., on different slices of the imaging study). The control points may be determined by enforcing C2 continuity (e.g., smoothness and differentiability of a curve, surface, or function) between adjacent curves, and solving for one or more curves simultaneously (e.g., by using the Thomas algorithm, also known as the tridiagonal matrix algorithm). The resultant set of cubic Bézier curves is the lumen indicator 210a, lumen indicator 212a (see, e.g.,
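The C2-continuous Bézier spline construction described above may be sketched as follows: the tridiagonal system for the first control points is solved with the Thomas algorithm, and the second control points follow from the C1/C2 continuity conditions. The function names, end conditions, and toy knot coordinates are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def bezier_control_points(knots):
    """First/second control points for a C2 cubic Bezier spline through
    `knots` (n+1 points, n segments), via the Thomas algorithm on the
    tridiagonal system enforcing C2 continuity at interior knots."""
    K = np.asarray(knots, dtype=float)
    n = len(K) - 1
    # Right-hand side of the tridiagonal system for first control points P1.
    rhs = np.empty_like(K[:-1])
    rhs[0] = K[0] + 2 * K[1]
    rhs[1:n-1] = 4 * K[1:n-1] + 2 * K[2:n]
    rhs[n-1] = 8 * K[n-1] + K[n]
    a = np.ones(n); a[n-1] = 2.0                   # sub-diagonal
    b = 4.0 * np.ones(n); b[0] = 2.0; b[n-1] = 7.0  # diagonal
    c = np.ones(n)                                  # super-diagonal
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        rhs[i] -= m * rhs[i-1]
    P1 = np.empty_like(rhs)
    P1[n-1] = rhs[n-1] / b[n-1]
    for i in range(n - 2, -1, -1):
        P1[i] = (rhs[i] - c[i] * P1[i+1]) / b[i]
    # Second control points follow from the continuity conditions.
    P2 = np.empty_like(P1)
    P2[:n-1] = 2 * K[1:n] - P1[1:n]
    P2[n-1] = (K[n] + P1[n-1]) / 2
    return P1, P2

def bezier_point(p0, p1, p2, p3, t):
    # Closed form of a cubic Bezier curve at parameter t in [0, 1].
    return ((1-t)**3 * p0 + 3*(1-t)**2 * t * p1
            + 3*(1-t) * t**2 * p2 + t**3 * p3)

knots = np.array([[0.0, 0, 0], [10, 5, 2], [20, 3, 4], [30, 8, 6]])
P1, P2 = bezier_control_points(knots)
```

Segment i then runs from knots[i] to knots[i+1] with control points P1[i] and P2[i], and the resulting curves meet with matching first and second derivatives at each interior knot.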
The method 1000 may proceed from operation 1010 to either operation 1014 or to operation 1012. The method 1000 may proceed to operation 1012, for example, if the segmentation system 100 determines that one or more lumen indicators could benefit from additional evaluation. In operation 1012, the segmentation system 100 compares the one or more lumen indicators generated in operation 1010 against a threshold. For example, the segmentation system 100 may compare the length of a lumen indicator against a length threshold. In one example, the segmentation system 100 compares the length of the lumen indicator against the threshold of 10 mm. In other examples, the segmentation system 100 may compare the length of the lumen indicator against thresholds such as 1 mm, 2 mm, 3 mm, 4 mm, 5 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm, 11 mm, 12 mm, 13 mm, 14 mm, 15 mm, 16 mm, 17 mm, 18 mm, 19 mm, 20 mm, or longer. If the length of the lumen indicator is less than the threshold, that lumen indicator may be excluded from further analysis. If that is the only lumen indicator, the method 1000 may terminate. If there are additional lumen indicators whose length is greater than or equal to the threshold, the method 1000 may continue to operation 1016 with respect to those lumen indicators.
The method 1000 may proceed from operation 1010 to operation 1014, for example, if the segmentation system 100 determines that one or more lumen indicators do not need additional evaluation, such as the length comparison performed in operation 1012. In operation 1014, the segmentation system 100 determines if the bowel 204 segmentation is complex. Examples of complex segmentation may include many segments, segments of diseased or abnormal bowel with segments of healthy bowel therebetween, asymmetric bowel disease, the presence of one or more pseudosacculations 810, etc. (e.g., as shown in
In operation 1016, the segmentation system 100 performs the automated segmentation described herein, e.g., as discussed with respect to
Operation 1018 may be similar to operation 1016 in the use of an AI/ML model to analyze the voxels of the patient imaging 200; however, the algorithm may be specifically adapted to perform brush segmentation based on different training data (e.g., data for complex segmentations), at a higher resolution, or with a deeper neural net.
The method 1000 may proceed to operation 1020 and the segmentation system 100 optionally presents a system output 300, such as a bowel segment model, for review to a medical provider 104. The medical provider 104 may decide to exclude or keep the particular system output 300. If the system output 300 is kept, the method 1000 may proceed to operation 1014 with respect to the system output 300. In some examples, in operation 1020, the segmentation system 100 may determine one or more quality metrics, such as an overlap or Dice score, the symmetric Hausdorff distance, or the mean contour distance, as discussed herein.
The method 1000 may proceed from the operation 1018 or operation 1020 to operation 1022 and the segmentation system 100 examines the resolution of a system output 300 (e.g., a bowel segment model). If the resolution is below a threshold, the bowel segment model may be excluded. If the resolution is at or above a threshold, the method 1000 may proceed to operation 1024. For example, the segmentation system 100 may evaluate a number of voxels represented in a particular bowel segment model. If the number of voxels is below the threshold (e.g., below 100), the bowel segment model may be excluded.
The method 1000 may proceed to operation 1024, and the segmentation system 100 determines if the data set for a particular patient 102 should be locked. For example, if all of the lumen indicators generated in operation 1010 have either been analyzed and converted into bowel segment models or discarded, the data set may be locked. If the data set is incomplete or additional analysis is needed, the segmentation system 100 may continue to process the patient imaging 200, lumen indicators, and bowel segment models as discussed. If the data set is locked, the method 1000 may store the system output 300 and/or the patient imaging 200 in a database 114 for later analysis, retrieval, long-term storage, and/or sharing with other medical providers 104.
In operation 1102, the method 1100 receives patient imaging 200 including one or more voxels. In operation 1104, the segmentation system 100 determines a lumen indicator based on the patient imaging 200. In operation 1106, the segmentation system 100 represents the one or more voxels within a threshold distance of the lumen indicator as one or more feature vectors. For example, the segmentation system 100 may include voxels within 0 mm (e.g., if a portion of the lumen indicator overlaps with a portion of the bowel 204 wall), 1 mm, 2 mm, 3 mm, 4 mm, 5 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm, 11 mm, 12 mm, 13 mm, 14 mm, 15 mm, 16 mm, 17 mm, 18 mm, 19 mm, 20 mm, or longer (e.g., up to and including 50 mm) of a portion of the lumen indicator.
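The voxel selection of operation 1106 can be sketched as a brute-force minimum-distance test; in practice a distance transform would likely be used instead, and the function name, array shapes, and 10 mm threshold below are illustrative assumptions:

```python
import numpy as np

def voxels_within_distance(voxel_coords_mm, indicator_points_mm, threshold_mm=10.0):
    """Return a boolean mask over voxels whose minimum Euclidean distance to
    any point of the lumen indicator is within `threshold_mm`.

    voxel_coords_mm:     (N, 3) array of voxel centre positions in mm.
    indicator_points_mm: (M, 3) array of points along the lumen indicator.
    """
    # Pairwise (N, M) distances via broadcasting, then the minimum per voxel.
    diffs = voxel_coords_mm[:, None, :] - indicator_points_mm[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1) <= threshold_mm

# Usage: three voxels against an indicator running along the x-axis.
voxels = np.array([[0.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 30.0, 0.0]])
indicator = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
near = voxels_within_distance(voxels, indicator, threshold_mm=10.0)
# near -> [True, True, False]
```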
In operation 1108, the segmentation system 100 generates, based on the one or more feature vectors, a cluster including at least one of the one or more voxels. The clusters may be based on, or described by, one or more vectors or one or more of the following hyperparameters. Fifteen features originate from three sets of values based on the voxels in the cluster: raw voxel intensities; intensities after filtering with a Laplacian-of-Gaussian filter with a given kernel size (e.g., 2 mm); and the 3D distance between each cluster 404, 406 voxel and the lumen indicator, with the minimum, maximum, range, standard deviation, and mean computed for each set. The vectors may further include the minimum 3D distance of the cluster 404, cluster 406 centroid from the respective lumen indicator 210a, lumen indicator 212a; the difference between the mean intensity of the cluster 404, 406 voxels and the voxels surrounding the closest point where the respective lumen indicator intersects the current imaging plane; the minimum distance, in the direction orthogonal to the coronal plane, between the cluster 404, 406 and the lumen indicator 210a, lumen indicator 212a; and the difference between the mean of the voxel intensities and the mean of the voxel intensities within a threshold (e.g., three connected voxels, in-plane).
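The fifteen summary features described above (five statistics over each of three per-voxel value sets) can be sketched as follows. The Laplacian-of-Gaussian filtering and distance computation are assumed to have been performed already, so this hypothetical helper only aggregates precomputed per-voxel values:

```python
import numpy as np

def cluster_summary_features(raw_intensities, log_intensities, lumen_distances_mm):
    """Build the 15 summary features for one cluster: minimum, maximum, range,
    standard deviation, and mean over each of three per-voxel value sets
    (raw intensity, Laplacian-of-Gaussian-filtered intensity, and 3D distance
    from each voxel to the lumen indicator)."""
    features = []
    for values in (np.asarray(raw_intensities, dtype=float),
                   np.asarray(log_intensities, dtype=float),
                   np.asarray(lumen_distances_mm, dtype=float)):
        features.extend([values.min(), values.max(),
                         values.max() - values.min(),
                         values.std(), values.mean()])
    return np.array(features)  # shape (15,)
```

The additional hand-crafted features (centroid distance, intensity differences relative to surrounding voxels, and so on) would be appended to this vector as described above.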
In operation 1110, the segmentation system 100 binarizes the cluster into one or more groups based on a threshold value. In operation 1112, the segmentation system 100 generates a bowel segment model based at least on the cluster.
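The binarization of operation 1110 can be sketched as thresholding a per-cluster score into kept and discarded groups; the function name and the 0.5 threshold below are illustrative assumptions rather than values stated in the disclosure:

```python
import numpy as np

def binarize_clusters(cluster_scores, threshold=0.5):
    """Split clusters into two groups based on a threshold value: True for
    clusters whose score meets the threshold (retained for the bowel segment
    model), False for clusters that are discarded."""
    return np.asarray(cluster_scores) >= threshold

# Usage: four cluster scores -> [False, True, True, False]
kept = binarize_clusters([0.1, 0.7, 0.5, 0.49])
```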
In operation 1202, a processing element of the segmentation system 100 receives patient imaging 200 associated with a lumen. In operation 1204, the processing element receives a lumen indicator configured to mark a portion of the lumen. In operation 1206, the processing element receives segmentation data based on the lumen indicator. For example, the segmentation data may include one or more lumen indicators created as disclosed herein. Additionally or alternatively, the segmentation data may include manual segmentation data determined by the medical provider 104 using manual segmentation techniques. The patient imaging 200 data, the lumen indicator, and/or the segmentation data may be used as training data for the AI algorithm. In operation 1208, the processing element 1302 provides the training data to the AI algorithm.
In operation 1210, the processing element 1302 trains the AI algorithm using the training data to learn a correlation between the lumen indicator and the segmentation data associated with the lumen within the patient imaging 200. For example, the feature vectors for each cluster 404, cluster 406 are fed to a random forest regression model executed by a processing element of the segmentation system 100. The regressor may be trained as follows: training and validation sets may be used to generate clusters 404, 406 as described herein, and the degree of overlap with the ground truth data 202 quantified (e.g., the fraction of voxels within the cluster 404, 406 which were segmented in the ground truth data 202). A random forest regressor may be trained to predict this fraction based on the feature vector, for the training set. Modifications to the feature set, pipeline and hyperparameters may be based on results for the validation set.
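The regressor training of operation 1210 can be sketched with scikit-learn. The feature matrix and overlap fractions below are synthetic stand-ins for the per-cluster feature vectors and the ground-truth overlap fractions described above, so this is a structural sketch rather than the disclosed training pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 clusters x 15 summary features each, and a target
# in [0, 1] representing the fraction of each cluster's voxels that were
# segmented in the ground truth data.
X_train = rng.normal(size=(200, 15))
y_train = rng.uniform(0.0, 1.0, size=200)

# Train a random forest regressor to predict the overlap fraction from the
# feature vector, as in operation 1210.
regressor = RandomForestRegressor(n_estimators=100, random_state=0)
regressor.fit(X_train, y_train)

# Predicted fractions for unseen clusters; thresholding these predictions
# would then decide which clusters enter the bowel segment model.
predicted_fraction = regressor.predict(rng.normal(size=(5, 15)))
```

Because a random forest regressor averages training targets at its leaves, the predicted fractions remain within the range of the training targets, which keeps them interpretable as overlap fractions.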
In operation 1212, the processing element determines a bowel segment model based on the training data. In operation 1214, the processing element evaluates the bowel segment model based on validation data. For example, the processing element 1302 may compare the generated bowel segment model against the segmentation data, such as by calculating one or more of a Dice score, a symmetric Hausdorff distance, a mean contour distance, a volume of the bowel segment model, or a length-normalized volume of the bowel segment model.
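Of the evaluation metrics named in operation 1214, the Dice score is straightforward to sketch; representing the generated model and the reference segmentation as boolean voxel masks is an assumption:

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean voxel masks:
    2*|A intersect B| / (|A| + |B|), from 0 (no overlap) to 1 (identical)."""
    pred = pred_mask.astype(bool)
    truth = truth_mask.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Usage: two masks of three voxels each, sharing two -> Dice = 4/6 ~ 0.667
a = np.array([1, 1, 1, 0], dtype=bool)
b = np.array([1, 1, 0, 1], dtype=bool)
```

The symmetric Hausdorff distance and mean contour distance would typically be computed over the mask surfaces with a library routine rather than by hand.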
The processing element 1302 may be any type of electronic device capable of processing, receiving, and/or transmitting instructions. For example, the processing element 1302 may be a central processing unit, microprocessor, processor, or microcontroller. Additionally, it should be noted that some components of the computing system 1300 may be controlled by a first processing element 1302 and other components may be controlled by a second processing element 1302, where the first and second processing elements may or may not be in communication with each other.
The I/O interface 1304 allows a user to enter data into the computing system 1300 and provides an input/output for the computing system 1300 to communicate with other devices or services. The I/O interface 1304 can include one or more input buttons, touch pads, touch screens, and so on.
The external devices 1312 are one or more devices that can be used to provide various inputs to the computing system 1300, e.g., mouse, microphone, keyboard, trackpad, etc. The external devices 1312 may be local or remote and may vary as desired. In some examples, the external devices 1312 may also include one or more additional sensors.
The memory components 1308 are used by the computing system 1300 to store instructions for the processing element 1302 such as for executing the method 1000, an AI/ML algorithm, a user interface, as well as store data (e.g., patient imaging 200, ground truth data 202, system outputs 300, etc.). The memory components 1308 may be, for example, magneto-optical storage, read-only memory, random access memory, erasable programmable memory, flash memory, or a combination of one or more types of memory components.
The network interface 1310 provides communication to and from the computing system 1300 to other devices. The network interface 1310 includes one or more communication protocols, such as, but not limited to Wi-Fi, Ethernet, Bluetooth, etc. The network interface 1310 may also include one or more hardwired components, such as a Universal Serial Bus (USB) cable, or the like. The configuration of the network interface 1310 depends on the types of communication desired and may be modified to communicate via Wi-Fi, Bluetooth, etc.
The display 1306 provides a visual output for the computing system 1300 and may be varied as needed based on the device. The display 1306 may be configured to provide visual feedback to a user such as a patient 102 and/or a medical provider 104, and may include a liquid crystal display screen, light emitting diode screen, plasma screen, or the like. In some examples, the display 1306 may be configured to act as an input element for the user through touch feedback or the like.
Any description of a particular component being part of a particular embodiment is meant as illustrative only and should not be interpreted as being required to be used with a particular embodiment or requiring other elements as shown in the depicted embodiment.
All relative and directional references (including top, bottom, side, front, rear, and so forth) are given by way of example to aid the reader's understanding of the examples described herein. They should not be read as requirements or limitations, particularly as to position, orientation, or use, unless specifically set forth in the claims. Connection references (e.g., attached, coupled, connected, joined, and the like) are to be construed broadly and may include intermediate members between a connection of elements and relative movement between elements. As such, connection references do not necessarily imply that two elements are directly connected and in fixed relation to each other, unless specifically set forth in the claims.
The present disclosure teaches by way of example and not by limitation. Therefore, the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall there between.
The technology described herein may be implemented as logical operations and/or modules in one or more systems. The logical operations may be implemented as a sequence of processor-implemented steps directed by software programs executing in one or more computer systems and as interconnected machine or circuit modules within one or more computer systems, or as a combination of both. Likewise, the descriptions of various component modules may be provided in terms of operations executed or effected by the modules. The resulting implementation is a matter of choice, dependent on the performance requirements of the underlying system implementing the described technology. Accordingly, the logical operations making up the embodiments of the technology described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
In some implementations, articles of manufacture are provided as computer program products that cause the instantiation of operations on a computer system to implement the procedural operations. One implementation of a computer program product provides a non-transitory computer program storage medium readable by a computer system and encoding a computer program. It should further be understood that the described technology may be employed in special purpose devices independent of a personal computer.
The above specification, examples, and data provide a complete description of the structure and use of exemplary embodiments of the invention as defined in the claims. Although various embodiments of the claimed invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, it is appreciated that numerous alterations may be made to the disclosed embodiments without departing from the spirit or scope of the claimed invention. Other embodiments are therefore contemplated. It is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative only of particular embodiments and not limiting. Changes in detail or structure may be made without departing from the basic elements of the invention as defined in the following claims.
This application claims the benefit of priority under 35 U.S.C. § 119(e) and 37 C.F.R. § 1.78 to provisional application No. 63/489,363 filed on Mar. 9, 2023, titled “Semi-automated Segmentation of Inflamed Bowel Wall on Noncontrast T2-weighted MRI”, and is related to provisional application No. 63/331,448 filed on Apr. 15, 2022, titled “System to Characterize Topology and Morphology of Fistulae from Medical Imaging Data”, provisional application No. 63/447,910 filed on Feb. 24, 2023, titled “Virtual Examination System (vEUA)”, and application Ser. No. 18/135,019 filed on Apr. 14, 2023, titled “System to Characterize Topology and Morphology of Fistulae from Medical Imaging Data”, all of which are hereby incorporated herein by reference in their entireties.