Not applicable.
This disclosure relates to medical image operations that develop and validate a post-processing method to improve upon model-based organ segmentation.
Automatic segmentation of organs by delineating their boundaries from medical images is a key step in radiation treatment planning for cancer patients, as it can reduce human effort and bias. Segmentation methods based on computational models, such as the deep convolutional neural network (DCNN), have shown great success in this task. However, because such models often perform the segmentation voxel-by-voxel, the results can contain anatomically impossible regions. The proposed method aims to improve the segmentation results by using image post-processing to identify and fix these errors.
Examples of the present disclosure provide a method to improve model-based automatic segmentation of organs from CT or MR images in radiation treatment planning using image post-processing algorithms.
According to a first aspect of the present disclosure, a computer-implemented method for image post-processing to improve model-based organ segmentation is provided. The method may include receiving three-dimensional (3D) voxel-by-voxel segmentation results as the output of a segmentation model based on 3D computed tomography (CT) or magnetic resonance (MR) images, analyzing the segmentation results on a 3D basis, processing the results to identify regions with anatomically incorrect segmentations, fixing the incorrect segmentations, analyzing the remaining segmentation results on a slice-by-slice basis, processing the results to identify slices with anatomically incorrect segmentations, and fixing the incorrect segmentations to obtain the final results.
According to a second aspect of the present disclosure, an apparatus for image post-processing to improve model-based organ segmentation is provided. The apparatus may include one or more processors, a display, and a non-transitory computer-readable memory storing instructions executable by the one or more processors, wherein the instructions are configured to receive three-dimensional (3D) voxel-by-voxel segmentation results as the output of a segmentation model based on 3D computed tomography (CT) or magnetic resonance (MR) images, analyze the segmentation results on a 3D basis, process the results to identify regions with anatomically incorrect segmentations, fix the incorrect segmentations, analyze the remaining segmentation results on a slice-by-slice basis, process the results to identify slices with anatomically incorrect segmentations, and fix the incorrect segmentations to obtain the final results.
According to a third aspect of an example of the present disclosure, a non-transitory computer-readable storage medium having stored therein instructions is provided. When the instructions are executed by one or more processors or one or more graphic processing units of the apparatus, the instructions cause the apparatus to receive three-dimensional (3D) voxel-by-voxel segmentation results as the output of a segmentation model based on 3D computed tomography (CT) or magnetic resonance (MR) images, analyze the segmentation results on a 3D basis, process the results to identify regions with anatomically incorrect segmentations, fix the incorrect segmentations, analyze the remaining segmentation results on a slice-by-slice basis, process the results to identify slices with anatomically incorrect segmentations, and fix the incorrect segmentations to obtain the final results.
Other aspects and features according to the example embodiments of the disclosed technology will become apparent to those of ordinary skill in the art, upon reviewing the following detailed description in conjunction with the accompanying figures.
Reference will now be made to the accompanying drawings, which are not necessarily drawn to scale.
Reference will now be made in detail to example embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the disclosure. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the disclosure as recited in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in the present disclosure and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It shall also be understood that the term “and/or” used herein is intended to signify and include any or all possible combinations of one or more of the associated listed items.
It shall be understood that, although the terms “first,” “second,” “third,” etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one category of information from another. For example, without departing from the scope of the present disclosure, first information may be termed as second information; and similarly, second information may also be termed as first information. As used herein, the term “if” may be understood to mean “when” or “upon” or “in response to a judgment” depending on the context.
The present disclosure relates to an image post-processing framework to improve model-based organ segmentation from CT or MR images.
The processing component 120 typically controls overall operations of the computing environment 130, such as the operations associated with display, data acquisition, data communications, and image processing. The processor 131 may include one or more processors to execute instructions to perform all or some of the steps in the above-described methods. Moreover, the processor 131 may include one or more modules which facilitate the interaction between the processor 131 and other components. The processor may be a Central Processing Unit (CPU), a microprocessor, a single-chip machine, a GPU, or the like. GPU 134 can include one or more GPUs interconnected to execute one or more GPU executable programs.
The memory 132 is configured to store various types of data to support the operation of the computing environment 130. Examples of such data comprise instructions for any applications or methods operated on the computing environment 130, CT datasets, image data, etc. The memory 132 may be implemented by using any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic or optical disk.
In an embodiment, the computing environment 130 may be implemented with one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), graphical processing units (GPUs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the above methods.
The image post-processing method to improve model-based organ segmentation is programmed as one set of predetermined software 133 and installed on the computing environment 130. When the computing environment 130 receives CT/MRI images from scanner controller 120, the predetermined software 133 is executed to generate the segmentation results.
In step 210, three-dimensional (3D) CT/MRI images covering a specific body region and voxel-by-voxel segmentations from a computational model containing the original segmentation labels of different organs are received.
Anatomically, most body organs, such as the heart, conform to a specific shape and have rather smooth boundaries. Furthermore, in clinical practice, the segmentation needs to follow certain guidelines even if the actual boundary of the organ is ambiguous. Therefore, the final 3D voxel maps of one organ should abide by the following rules: 1) the 3D voxel maps form one spatially connected region; 2) on each axial slice, the contours must be convex and must not contain any holes; and 3) for certain organs, such as the heart, there should be a distinctive “cut-off” slice on the superior end to follow the guideline. However, as the computational model provides voxel-by-voxel segmentations, these rules are not necessarily followed. We propose to use image post-processing to solve this issue.
The following criterion is used to define connectivity: two pixels/voxels are connected when they are neighbors and have the same value. In 2D, pixels can be neighbors in either a 1-connected or a 2-connected sense, where the value refers to the maximum number of orthogonal hops allowed when considering a pixel/voxel a neighbor. Equivalently, 4-connectivity means connected pixels must share an edge, whereas with 8-connectivity they need only share an edge or a vertex; in 3D, the analogous neighborhoods are 6-connectivity (shared face) and 26-connectivity (shared face, edge, or vertex).
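The connectivity convention above can be illustrated in 2D with `skimage.measure.label`, where `connectivity=1` corresponds to the 4-neighborhood and `connectivity=2` to the 8-neighborhood. This is a minimal sketch assuming scikit-image is available; it is not part of the disclosure itself:

```python
import numpy as np
from skimage.measure import label

# Two diagonally adjacent pixels: not neighbors under 1-connectivity
# (4-neighborhood), but neighbors under 2-connectivity (8-neighborhood).
img = np.array([[1, 0],
                [0, 1]])

labels_4 = label(img, connectivity=1)  # diagonal pixels: two separate regions
labels_8 = label(img, connectivity=2)  # diagonal pixels: one region

print(labels_4.max())  # 2
print(labels_8.max())  # 1
```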
In step 212, the 3D connectivity analysis is performed on the voxel-by-voxel segmentation results. All voxels within the organ are labeled as 1 and voxels outside the organ are labeled as 0. 8-connectivity is used to find connected regions.
In step 214, the volume of each connected region is calculated by counting the voxels it contains.
In step 216, the maximum volume of all regions is calculated. Only the region whose volume is the maximum is retained. All other regions are relabeled as 0.
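Steps 212 through 216 amount to a largest-connected-component filter. The sketch below is a hypothetical implementation: the function name `keep_largest_region` and the use of scikit-image's full 26-neighborhood (`connectivity=3`) for the 3D labeling are illustrative assumptions, not terms of the disclosure:

```python
import numpy as np
from skimage.measure import label

def keep_largest_region(mask, connectivity=3):
    # Label connected regions in the binary 3D mask (step 212), count the
    # voxels of each region (step 214), and retain only the region with
    # the maximum volume, relabeling all others as 0 (step 216).
    labeled = label(mask.astype(np.uint8), connectivity=connectivity)
    if labeled.max() == 0:
        return mask  # nothing segmented
    counts = np.bincount(labeled.ravel())
    counts[0] = 0  # ignore the background label
    largest = counts.argmax()
    return (labeled == largest).astype(mask.dtype)

# A 27-voxel cube plus one spurious isolated voxel
mask = np.zeros((6, 6, 6), dtype=np.uint8)
mask[1:4, 1:4, 1:4] = 1
mask[5, 5, 5] = 1
cleaned = keep_largest_region(mask)
print(int(cleaned.sum()))  # 27
```

Counting voxels with `np.bincount` on the label map avoids a Python-level loop over regions.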
In step 218, the remaining segmentation results are analyzed on a slice-by-slice basis along the axial direction. First, the holes within each slice are identified and filled via morphological closing; then, the boundaries are processed via a convex hull operation; finally, the 2D connectivity analysis is performed, using 2-connectivity to find connected regions. The resulting area of the organ on each slice is then calculated.
The morphological closing on an image is defined as a dilation followed by an erosion. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features.
The convex hull is the set of pixels included in the smallest convex polygon that surrounds all white pixels in the input image.
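Step 218 can be sketched for a single axial slice with standard scipy/scikit-image operations. The helper name `clean_slice` and the 3×3 structuring element for the closing are illustrative assumptions; a closing of this size fills only holes smaller than the structuring element:

```python
import numpy as np
from scipy.ndimage import binary_closing
from skimage.measure import label
from skimage.morphology import convex_hull_image

def clean_slice(slice_mask):
    # Step 218 for one axial slice: fill small holes via morphological
    # closing (dilation followed by erosion), take the convex hull of the
    # result, then relabel with 2-connectivity (the 8-neighborhood) and
    # measure the resulting organ area on the slice.
    closed = binary_closing(slice_mask.astype(bool), structure=np.ones((3, 3)))
    if not closed.any():
        return np.zeros_like(slice_mask), 0
    hull = convex_hull_image(closed)
    labeled = label(hull, connectivity=2)
    area = int((labeled > 0).sum())
    return (labeled > 0).astype(slice_mask.dtype), area

# A 5x5 organ cross-section with a one-pixel hole
s = np.zeros((7, 7), dtype=np.uint8)
s[1:6, 1:6] = 1
s[3, 3] = 0  # hole
cleaned_slice, area = clean_slice(s)
print(area)  # 25: the hole is filled
```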
In step 220, the axial slices are stacked together and traversed from one direction to the other (e.g., from head to foot). For each slice, if its area is smaller than ½ of the area of the next consecutive slice, the next slice is regarded as the “cut-off” slice, i.e., the edge slice whose contours are retained. The segmentations on all slices beyond this slice are regarded as incorrect segmentations.
In step 222, the incorrect segmentations are removed by setting the corresponding voxels to 0.
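Under one reading of steps 220 and 222 (the disclosure leaves the traversal direction and the slices considered “beyond” the cut-off open to the example, e.g. the spurious slices above the heart's superior end), the cut-off logic can be sketched as follows; `apply_cutoff` is a hypothetical helper, not a term of the disclosure:

```python
import numpy as np

def apply_cutoff(mask, areas):
    # Scan slices in traversal order (e.g., head to foot). If a slice's
    # area is smaller than half that of the next slice, the next slice is
    # the "cut-off" slice; all earlier slices are treated as incorrect
    # segmentations and set to 0 (step 222). Modifies mask in place.
    for i in range(len(areas) - 1):
        if areas[i] < 0.5 * areas[i + 1]:
            mask[:i + 1] = 0
            break
    return mask

# Hypothetical per-slice areas: small spurious contours, then the organ
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[0].flat[:2] = 1   # slice 0: area 2
mask[1].flat[:3] = 1   # slice 1: area 3 (3 < 16/2, so slice 2 is cut-off)
mask[2] = 1            # slice 2: area 16, the "cut-off" slice, retained
mask[3] = 1            # slice 3: area 16
areas = [int(s.sum()) for s in mask]
result = apply_cutoff(mask, areas)
print([int(s.sum()) for s in result])  # [0, 0, 16, 16]
```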
As shown in
This invention was made with government support under Grant No. R43EB027523-01A1 awarded by the National Institutes of Health. The government has certain rights in the invention.