Computed tomography (CT) remains a valuable technology across a range of diagnostic and interventional imaging situations. New CT systems continue to be developed using new technologies, including novel sources, detectors, and system geometries. Both traditional and novel systems are subject to various physical effects that make the actual data acquisition deviate from the idealized reconstruction model. The potential sources of such modeling errors are widespread and can include inexact calibrations of the x-rays produced by the source, the filtration of those x-rays, physical effects induced by the patient (e.g. beam hardening and scatter), and detector effects. While a multitude of approaches can be used to mitigate such errors, there are often residual biases present—e.g. inexact knowledge of the x-ray spectrum for beam hardening correction, drift in detectors, tube warm-up effects, inexact modeling of scatter due to a limited number of photons tracked and simulated via Monte Carlo, etc. Any of these unmodeled biases has the potential to propagate through reconstruction to create artifacts and errors in the estimation of attenuation coefficients. Such degradations have the potential to impact clinical diagnoses and can confound quantitative imaging that relies on accurate estimation of attenuation (e.g. for attenuation correction in PET/CT and in radiation therapy treatment planning [Schneider et al., “The calibration of CT Hounsfield units for radiotherapy treatment planning,” Phys Med Biol. 1996; 41:111]).
While there has been a great deal of effort to address individual biases through specific modeling and correction efforts, it has proven difficult to find a solution that covers all sources of bias. Therefore, there remains a need to address these unknown or “impossible-to-model” biases based, for example, only on the acquired projection data. A significant amount of research has gone into sinogram consistency criteria [Lesaint, “Data consistency conditions in X-ray transmission imaging and their application to the self-calibration problem,” Functional Analysis [math.FA], Communauté Université Grenoble Alpes, 2018] in which specific properties of legitimate sinograms are expressed mathematically. For example, the integral of the logarithm of the projections is constant in each slice of parallel projection data. More general consistency criteria exist, but they involve more complicated mathematical expressions and may not hold under all circumstances (e.g. truncated data). However, the existence of such criteria means that not all sinograms are possible, and there is a potential to identify inconsistent sinograms and correct them to be consistent.
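The zeroth-order consistency condition mentioned above (the integral of each log-domain parallel projection is the same at every angle) can be checked numerically. The following sketch uses an analytic parallel-beam projection of an off-center uniform disk; the disk parameters, grids, and attenuation value are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

# Parallel-beam projection of a uniform disk of radius r and attenuation mu
# centered at (cx, cy): p(s, theta) = 2*mu*sqrt(r^2 - (s - d)^2) for |s - d| <= r,
# where d = cx*cos(theta) + cy*sin(theta) is the disk center's signed offset
# along the detector axis.
def disk_projection(s, theta, cx=12.0, cy=-7.0, r=20.0, mu=0.02):
    d = cx * np.cos(theta) + cy * np.sin(theta)
    under = np.clip(r**2 - (s - d) ** 2, 0.0, None)
    return 2.0 * mu * np.sqrt(under)

s = np.linspace(-64.0, 64.0, 4001)                 # detector coordinate (mm)
angles = np.linspace(0.0, np.pi, 180, endpoint=False)

# Zeroth-order consistency: the integral of each projection equals the total
# attenuation "mass" (mu * pi * r^2 here), independent of projection angle.
ds = s[1] - s[0]
masses = [disk_projection(s, th).sum() * ds for th in angles]
spread = max(masses) - min(masses)
print(spread)  # ≈ 0: the sinogram satisfies the consistency condition
```

Off-center geometry matters here: for a centered disk the constancy would hold trivially by symmetry, whereas this check exercises the condition at every angle.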
Accordingly, there is a need for additional methods, and related aspects, for reconstructing CT images that mitigate biases, particularly unknown or unmodeled biases.
The present disclosure relates, in certain aspects, to methods of reconstructing computed tomography (CT) images that mitigate unmodeled biases. In some aspects, the present disclosure also provides methods of generating artificial neural networks (ANNs) to estimate unmodeled bias from acquired CT projection data. Related systems and computer readable media are also provided.
In one aspect, for example, the present disclosure provides a method of reconstructing a CT image. The method includes receiving acquired CT projection data of an object (e.g., a subject or the like). The method also includes substantially removing at least one unmodeled bias from the acquired CT projection data using at least one trained artificial neural network (ANN) and at least one loss function that incorporates intermediate CT reconstruction information to produce corrected CT projection data. In addition, the method also includes generating a reconstructed CT image from the corrected CT projection data, thereby reconstructing the CT image.
In another aspect, the present disclosure provides a method of generating an artificial neural network (ANN) to estimate unmodeled bias from acquired computed tomography (CT) projection data. The method includes training at least one ANN comprising at least one loss function that incorporates intermediate CT reconstruction information to estimate unbiased projections from a plurality of biased and/or unbiased sinogram pairs, thereby generating an ANN to estimate unmodeled bias from acquired CT projection data.
In another aspect, the present disclosure provides a computed tomography (CT) system that includes at least one x-ray energy source and at least one x-ray detector configured and positioned to detect x-ray energy transmitted through an object (e.g., a subject or the like) from the x-ray energy source. The system also includes at least one trained artificial neural network (ANN) and at least one loss function that incorporates intermediate CT reconstruction information that are configured to remove unmodeled bias from acquired CT projection data. In addition, the system also includes at least one controller that is operably connected, or connectable, at least to the x-ray detector and to the trained ANN. The controller comprises, or is capable of accessing, computer readable media comprising non-transitory computer executable instructions which, when executed by at least one electronic processor, perform at least: receiving acquired CT projection data of an object; substantially removing at least one unmodeled bias from the acquired CT projection data using the trained ANN and the loss function to produce corrected CT projection data; and, generating a reconstructed CT image from the corrected CT projection data.
In another aspect, the present disclosure provides computer readable media comprising non-transitory computer executable instructions which, when executed by at least one electronic processor, perform at least: receiving acquired CT projection data of an object; substantially removing at least one unmodeled bias from the acquired CT projection data using at least one trained artificial neural network (ANN) and at least one loss function that incorporates intermediate CT reconstruction information to produce corrected CT projection data; and generating a reconstructed CT image from the corrected CT projection data.
In some embodiments, a source of the unmodeled bias is unknown or substantially indeterminable. In some embodiments, the methods include (or the instructions of the system or computer readable media further perform at least) using at least one physical model of a CT data collection process to identify the unmodeled bias from the acquired CT projection data. In some embodiments, the methods and other aspects disclosed herein include using at least one physical model of a CT data collection process, and/or CT data collected from physical calibration phantoms (e.g., existing or custom designed phantoms) to identify the unmodeled bias from the acquired CT projection data.
In some embodiments, the unmodeled bias has the potential to propagate through a reconstruction process to create one or more artifacts and/or one or more errors in an estimation of attenuation coefficients. In some embodiments, the unmodeled bias is selected from the group consisting of, for example, a drift in an x-ray energy source, a drift in an x-ray energy detector, an incomplete scatter rejection, an inexact scatter correction, an inexact x-ray energy source calibration, a filtration of x-rays from an x-ray energy source, a physical effect induced by the object, a reconstruction algorithm effect, a beam hardening effect, a scattering effect, and a detector effect.
In some embodiments, the unmodeled bias comprises at least one residual bias. In some embodiments, the residual bias is selected from the group consisting of, for example, an inexact knowledge of an x-ray spectrum for beam hardening correction, a drift in one or more x-ray detectors, a tube warm-up effect, and an inexact modeling of scatter. In some embodiments, the unmodeled bias is one-dimensional (1D). In some of these embodiments, the 1D unmodeled bias is a function of projection angle. In some embodiments, the unmodeled bias is two-dimensional (2D). In some of these embodiments, the 2D unmodeled bias is a function of radial detector bin and projection angle. In some embodiments, the unmodeled bias is substantially specific to an anatomy of the object. In some embodiments, the unmodeled bias comprises scatter and/or differential beam hardening.
In some embodiments, the loss function comprises a spatial-frequency loss function, an image domain loss function, and/or a projection domain loss function. In some embodiments, the loss function comprises a mean squared error between ramp-filtered ANN-corrected and unbiased projection data in a frequency domain. In some embodiments, the spatial-frequency loss function comprises the Formula:
In some embodiments, the acquired CT projection data of the object is generated using a single energy CT (SECT) technique. In some embodiments, acquired CT projection data of the object is generated using a spectral CT technique. In some of these embodiments, for example, the spectral CT technique comprises a dual energy CT (DECT) technique and/or a photon counting CT technique.
In some embodiments, the ANN is trained to estimate unbiased projections from a plurality of biased and/or unbiased sinogram pairs. In some embodiments, the plurality of biased and/or unbiased sinogram pairs comprises about 10,000, 20,000, 30,000, 40,000, 50,000, 60,000, 70,000, 80,000, 90,000, 100,000, or more biased and/or unbiased sinogram pairs. In some embodiments, the ANN is not trained to estimate a specific bias type. In some embodiments, the ANN comprises a convolutional neural network (CNN) that comprises at least one encoding phase, at least one bridge, at least one decoding phase, and at least one two-dimensional (2D) convolutional layer. In some of these embodiments, the 2D convolutional layer is configured to learn a one-dimensional (1D) bias map. In some of these embodiments, the 2D convolutional layer is configured to learn a 2D bias map and/or a 3D bias map.
In some embodiments, the corrected CT projection data is improved in a projection domain, an image domain, or both a projection domain and an image domain relative to the acquired CT projection data. In some embodiments, the methods include (or the instructions of the system or computer readable media further perform at least) using a reconstruction technique selected from the group consisting of: a model-based iterative reconstruction (MBIR) technique, a deep learning technique, and a filtered-backprojection (FBP) technique. In some embodiments, the methods include receiving the x-ray CT data at one or more x-ray detectors configured and positioned to detect x-ray energy transmitted through the object from one or more x-ray energy sources to generate the acquired CT projection data of the object.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate certain embodiments, and together with the written description, serve to explain certain principles of the methods, systems, and related computer readable media disclosed herein. The description provided herein is better understood when read in conjunction with the accompanying drawings which are included by way of example and not by way of limitation. It will be understood that like reference numerals identify like components throughout the drawings, unless the context indicates otherwise. It will also be understood that some or all of the figures may be schematic representations for purposes of illustration and do not necessarily depict the actual relative sizes or locations of the elements shown.
In order for the present disclosure to be more readily understood, certain terms are first defined below. Additional definitions for the following terms and other terms may be set forth through the specification. If a definition of a term set forth below is inconsistent with a definition in an application or patent that is incorporated by reference, the definition set forth in this application should be used to understand the meaning of the term.
As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Thus, for example, a reference to “a method” includes one or more methods, and/or steps of the type described herein and/or which will become apparent to those persons skilled in the art upon reading this disclosure and so forth.
It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. Further, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In describing and claiming the methods, systems, and component parts, the following terminology, and grammatical variants thereof, will be used in accordance with the definitions set forth below.
Machine Learning Algorithm: As used herein, “machine learning algorithm,” generally refers to an algorithm, executed by computer, that automates analytical model building, e.g., for clustering, classification or pattern recognition. Machine learning algorithms may be supervised or unsupervised. Learning algorithms include, for example, artificial neural networks (e.g., back propagation networks), discriminant analyses (e.g., Bayesian classifier or Fischer analysis), support vector machines, decision trees (e.g., recursive partitioning processes such as CART-classification and regression trees, or random forests), linear classifiers (e.g., multiple linear regression (MLR), partial least squares (PLS) regression, and principal components regression), hierarchical clustering, and cluster analysis. A dataset on which a machine learning algorithm learns can be referred to as “training data.”
Subject: As used herein, “subject” refers to an animal, such as a mammalian species (e.g., human) or avian (e.g., bird) species. More specifically, a subject can be a vertebrate, e.g., a mammal such as a mouse, a primate, a simian or a human. Animals include farm animals (e.g., production cattle, dairy cattle, poultry, horses, pigs, and the like), sport animals, and companion animals (e.g., pets or support animals). A subject can be a healthy individual, an individual that has or is suspected of having a disease or a predisposition to the disease, or an individual that is in need of therapy or suspected of needing therapy. The terms “individual” or “patient” are intended to be interchangeable with “subject.”
Substantially: As used herein, “substantially,” “about,” or “approximately” as applied to one or more values or elements of interest, refers to a value or element that is similar to a stated reference value or element. In certain embodiments, the term “substantially,” “about,” or “approximately” refers to a range of values or elements that falls within 25%, 20%, 19%, 18%, 17%, 16%, 15%, 14%, 13%, 12%, 11%, 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, or less in either direction (greater than or less than) of the stated reference value or element unless otherwise stated or otherwise evident from the context (except where such number would exceed 100% of a possible value or element).
Unmodeled Bias: As used herein, “unmodeled bias,” in the context of computed tomography (CT) image reconstruction, refers to mismatches or discrepancies between a physical model of a CT data collection process and the corresponding actual system under consideration.
Proper reconstruction of computed tomography (CT) image volumes typically utilizes a physical model of the data collection process. Any mismatches between the physical model and the actual system represent unmodeled biases. Such unmodeled biases can arise from a multitude of sources including varying system “drifts” in the source and detector, as well as unmodeled physics (like incomplete scatter rejection or inexact scatter correction). Generally, these biases introduce artifacts and quantitation errors that degrade image quality and make quantitative data analysis more difficult.
While there is a great deal of work attempting to model specific biases (like scatter correction) and remove those effects, there are often residual errors that propagate through to reconstructed images. The effect of bias on the reconstruction is generally dependent on the nature of the residual bias as well as data processing. For example, model-based iterative reconstruction (MBIR) is potentially more dependent on an accurate model than filtered-backprojection (FBP).
To address these problems, the present disclosure, in certain aspects, provides a generalized strategy to estimate unmodeled biases for situations, for example, where the underlying source is either unknown or difficult to estimate directly. To illustrate, in some embodiments, two exemplary cases are considered: 1) a one-dimensional (1D) unmodeled bias that is only a function of projection angle (e.g., 1D cases caused by an unmodeled x-ray tube warm-up effect, where x-ray fluence drifts throughout the scan (common in many CBCT systems that do not have a dedicated detector to estimate bare-beam fluence)), and 2) a 2D bias that is an unknown function of both radial detector position (or bin) and projection angle (e.g., an unknown function comprising a weighted sum of Gaussian functions of unknown amplitudes and widths). In some embodiments, a convolutional neural network (CNN) framework for CT projection-domain de-biasing is used, which consists of the ResUNet architecture and a spatial-frequency loss function that incorporates intermediate information about the reconstruction. In certain embodiments, this exemplary framework is applied to the two bias scenarios and shows a reduction in reconstruction errors in both cases for both FBP and MBIR.
In some embodiments, developments in machine learning are leveraged to provide other artificial neural network (ANN) based solutions for estimating unknown biases in CT. In some of these embodiments, for example, an ANN is used to estimate unbiased projections from many example biased and unbiased sinogram pairs. This methodology is distinct from the application of machine learning to the estimation of specific biases—e.g. using convolutional neural networks to perform scatter correction of CT data [Maier et al., “Deep Scatter Estimation (DSE): Accurate Real-Time Scatter Estimation for X-Ray CT Using a Deep Convolutional Neural Network,” J Nondestruct Eval 2018; 37(3):57]. These and other features of the present disclosure will be apparent upon complete review of the present disclosure, including the accompanying figures.
The present disclosure provides various methods of reconstructing computed tomography (CT) images or of generating artificial neural networks (ANNs) to estimate unmodeled bias from acquired CT projection data. To illustrate,
In some embodiments, a source of the unmodeled bias is unknown or substantially indeterminable. In some embodiments, the methods include using at least one physical model of a CT data collection process to identify the unmodeled bias from the acquired CT projection data. In some embodiments, the unmodeled bias has the potential to propagate through a reconstruction process to create one or more artifacts and/or one or more errors in an estimation of attenuation coefficients. In some embodiments, the unmodeled bias is selected from the group consisting of, for example, a drift in an x-ray energy source, a drift in an x-ray energy detector, an incomplete scatter rejection, an inexact scatter correction, an inexact x-ray energy source calibration, a filtration of x-rays from an x-ray energy source, a physical effect induced by the object, a reconstruction algorithm effect, a beam hardening effect, a scattering effect, and a detector effect.
In some embodiments, the unmodeled bias includes a residual bias. In some of these embodiments, the residual bias is selected from the group consisting of, for example, an inexact knowledge of an x-ray spectrum for beam hardening correction, a drift in one or more x-ray detectors, a tube warm-up effect, and an inexact modeling of scatter. In some embodiments, the unmodeled bias is one-dimensional (1D). In some of these embodiments, the 1D unmodeled bias is a function of projection angle. In some embodiments, the unmodeled bias is two-dimensional (2D). In some of these embodiments, the 2D unmodeled bias is a function of radial detector bin and projection angle. In some embodiments, the unmodeled bias is substantially specific to an anatomy of the object (e.g., a given patient or other subject). In some embodiments, the unmodeled bias comprises scatter and/or differential beam hardening.
In some embodiments, the loss function comprises a spatial-frequency loss function, an image domain loss function, and/or a projection domain loss function. In some embodiments, the loss function comprises a mean squared error between ramp-filtered ANN-corrected and unbiased projection data in a frequency domain. In some embodiments, the spatial-frequency loss function comprises the Formula:
In some embodiments, the acquired CT projection data of the object is generated using a single energy CT (SECT) technique. In some embodiments, acquired CT projection data of the object is generated using a spectral CT technique. In some of these embodiments, for example, the spectral CT technique comprises a dual energy CT (DECT) technique and/or a photon counting CT technique.
In some embodiments, the ANN is trained to estimate unbiased projections from a plurality of biased and/or unbiased sinogram pairs (i.e., a plurality of biased sinogram pairs, a plurality of unbiased sinogram pairs, or a plurality of pairs of both biased and unbiased sinograms). In some embodiments, the plurality of biased and/or unbiased sinogram pairs comprises about 10,000, 20,000, 30,000, 40,000, 50,000, 60,000, 70,000, 80,000, 90,000, 100,000, or more biased and/or unbiased sinogram pairs. In some embodiments, the ANN is not trained to estimate a specific bias type. In some embodiments, the ANN comprises a convolutional neural network (CNN) that comprises at least one encoding phase, at least one bridge, at least one decoding phase, and at least one two-dimensional (2D) convolutional layer. In some of these embodiments, the 2D convolutional layer is configured to learn a one-dimensional (1D) bias map. In some of these embodiments, the 2D convolutional layer is configured to learn a 2D bias map and/or a 3D bias map.
In some embodiments, the corrected CT projection data is improved in a projection domain, an image domain, or both a projection domain and an image domain relative to the acquired CT projection data. In some embodiments, the methods include using a reconstruction technique selected from the group consisting of: a model-based iterative reconstruction (MBIR) technique, a deep learning technique, and a filtered-backprojection (FBP) technique. In some embodiments, the methods include receiving the x-ray CT data at one or more x-ray detectors configured and positioned to detect x-ray energy transmitted through the object (e.g., a given patient or other subject) from one or more x-ray energy sources to generate the acquired CT projection data of the object.
The present disclosure also provides various systems and computer program products or machine readable media. In some aspects, for example, the methods described herein are optionally performed or facilitated at least in part using systems, distributed computing hardware and applications (e.g., cloud computing services), electronic communication networks, communication interfaces, computer program products, machine readable media, electronic storage media, software (e.g., machine-executable code or logic instructions) and/or the like. To illustrate,
As understood by those of ordinary skill in the art, memory 306 of the server 302 optionally includes volatile and/or nonvolatile memory including, for example, RAM, ROM, and magnetic or optical disks, among others. It is also understood by those of ordinary skill in the art that although illustrated as a single server, the illustrated configuration of server 302 is given only by way of example and that other types of servers or computers configured according to various other methodologies or architectures can also be used. Server 302 shown schematically in
As further understood by those of ordinary skill in the art, exemplary program product or machine readable medium 308 is optionally in the form of microcode, programs, cloud computing format, routines, and/or symbolic languages that provide one or more sets of ordered operations that control the functioning of the hardware and direct its operation. Program product 308, according to an exemplary aspect, also need not reside in its entirety in volatile memory, but can be selectively loaded, as necessary, according to various methodologies as known and understood by those of ordinary skill in the art.
As further understood by those of ordinary skill in the art, the term “computer-readable medium” or “machine-readable medium” refers to any medium that participates in providing instructions to a processor for execution. To illustrate, the term “computer-readable medium” or “machine-readable medium” encompasses distribution media, cloud computing formats, intermediate storage media, execution memory of a computer, and any other medium or device capable of storing program product 308 implementing the functionality or processes of various aspects of the present disclosure, for example, for reading by a computer. A “computer-readable medium” or “machine-readable medium” may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks. Volatile media includes dynamic memory, such as the main memory of a given system. Transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications, among others. Exemplary forms of computer-readable media include a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a flash drive, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read.
Program product 308 is optionally copied from the computer-readable medium to a hard disk or a similar intermediate storage medium. When program product 308, or portions thereof, are to be run, it is optionally loaded from their distribution medium, their intermediate storage medium, or the like into the execution memory of one or more computers, configuring the computer(s) to act in accordance with the functionality or method of various aspects. All such operations are well known to those of ordinary skill in the art of, for example, computer systems.
To further illustrate, in certain aspects, this disclosure provides systems that include one or more processors, and one or more memory components in communication with the processor. The memory component typically includes one or more instructions that, when executed, cause the processor to provide information that causes at least one reconstructed CT image and/or the like to be displayed (e.g., via communication device 214 or the like) and/or receive information from other system components and/or from a system user (e.g., via communication device 214 or the like).
In some aspects, program product 308 includes non-transitory computer-executable instructions which, when executed by electronic processor 304 perform at least: receiving acquired CT projection data of an object; substantially removing at least one unmodeled bias from the acquired CT projection data using at least one trained artificial neural network (ANN) and at least one loss function that incorporates intermediate CT reconstruction information to produce corrected CT projection data; and generating a reconstructed CT image from the corrected CT projection data.
System 300 also typically includes additional system components (e.g., CT imaging device 318) that are configured to perform various aspects of the methods described herein. In some of these aspects, one or more of these additional system components are positioned remote from and in communication with the remote server 302 through electronic communication network 312, whereas in other aspects, one or more of these additional system components are positioned locally and in communication with server 302 (i.e., in the absence of electronic communication network 312) or directly with, for example, desktop computer 314. Although not within view, CT imaging device 318 includes at least one x-ray energy source and at least one x-ray detector configured and positioned to detect x-ray energy transmitted through an object (e.g., a subject or the like) from the x-ray energy source. System 300 also includes at least one trained artificial neural network (ANN) and at least one loss function that incorporates intermediate CT reconstruction information that are configured to remove unmodeled bias from acquired CT projection data.
Case 1: 1D bias as a function of projection angle: Based on observations of tube warm-up in a rotating anode radiography/fluoroscopic x-ray source, the following model for tube warm-up was devised:
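The warm-up model itself is given by the equation referenced above and is not reproduced here. Purely as an illustrative assumption, a drift of this general character can be sketched as an exponential approach of bare-beam fluence toward steady state, which appears as an additive, angle-dependent bias in log-domain projection data; the functional form and the parameters `a` and `tau` below are hypothetical, not the model from the disclosure.

```python
import numpy as np

# Hypothetical 1D gain drift g(theta): bare-beam fluence rises toward its
# steady-state value as the tube warms up over the scan. The exponential
# form and the parameters a (initial fluence deficit) and tau (time
# constant, in units of scan fraction) are illustrative assumptions.
def warmup_gain(theta, a=0.05, tau=0.25):
    frac = theta / theta.max()                  # scan progress in [0, 1]
    return 1.0 - a * np.exp(-frac / tau)

theta = np.linspace(0.0, 2.0 * np.pi, 360)
g = warmup_gain(theta)

# Applied multiplicatively to fluence, the drift becomes an additive bias
# on the log-domain projections: l_biased = l_true - log(g(theta)).
l_true = np.zeros((360, 512))                   # air scan (zero line integrals)
l_biased = l_true - np.log(g)[:, None]
print(l_biased[0, 0], l_biased[-1, 0])          # bias shrinks as the tube warms
```

Because the bias depends only on projection angle, every detector bin in a given view shares the same offset, matching the 1D case described above.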
Case 2: 2D bias as a function of both projection angle and (radial) detector bin: To create a more complicated unknown bias, we modeled a residual error from an inaccurate pre-log correction as a set of 2D Gaussian functions added to the CT projection data in the measurement domain, as follows:
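The referenced expression is not reproduced here. The following sketch generates a bias field of the general type described (a sum of 2D Gaussians over detector bin and projection angle); the number of Gaussians and the parameter ranges are illustrative assumptions, since the disclosure specifies only that the amplitudes and widths are unknown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Residual bias modeled as a sum of 2D Gaussian bumps over (projection
# angle theta, detector bin u) with random centers, amplitudes, and
# widths. n_gauss and the uniform parameter ranges are assumptions.
def gaussian_bias(n_theta=360, n_u=512, n_gauss=4):
    th, u = np.meshgrid(np.arange(n_theta), np.arange(n_u), indexing="ij")
    bias = np.zeros((n_theta, n_u))
    for _ in range(n_gauss):
        amp = rng.uniform(-0.05, 0.05)
        cu, cth = rng.uniform(0, n_u), rng.uniform(0, n_theta)
        su, sth = rng.uniform(10, 60), rng.uniform(10, 60)
        bias += amp * np.exp(-((u - cu) ** 2 / (2 * su**2)
                               + (th - cth) ** 2 / (2 * sth**2)))
    return bias

# The bias is added to the projection data in the measurement domain;
# an all-zeros "sinogram" stands in for real data here.
sino_biased = np.zeros((360, 512)) + gaussian_bias()
print(sino_biased.shape)  # (360, 512)
```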
Reconstruction methods: In our investigations, we used two reconstruction methods: 1) standard FBP using a Hamming-apodized ramp filter with a 0.8 cutoff; and 2) a specific MBIR approach—quadratic penalized-likelihood (PL) reconstruction using 100 iterations of separable quadratic surrogates, 12 subsets, and a first-order neighborhood penalty with a penalty weight of 10^5. All projection data in this work used a fan-beam geometry with a source-to-axis distance (SAD) of 800 mm, a source-to-detector distance (SDD) of 1000 mm, and square pixels of 0.278 mm to generate noiseless projection data. To illustrate the effects of the two bias scenarios, we conducted a small experiment summarized in
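As one hedged reading of the FBP filter described above, the frequency response of a Hamming-apodized ramp with a 0.8 cutoff can be sketched as follows; exact windowing and cutoff conventions vary between implementations, so this is an assumption-laden illustration rather than the exact filter used.

```python
import numpy as np

# Frequency response of a Hamming-apodized ramp filter: |f| multiplied by
# a Hamming window supported out to 0.8 of the Nyquist frequency, zero
# beyond the cutoff. Interpreting "0.8 cutoff" as a fraction of Nyquist
# is an assumption.
def hamming_ramp(n, cutoff=0.8):
    f = np.fft.fftfreq(n)                    # cycles/sample in [-0.5, 0.5)
    ramp = np.abs(f)
    fc = cutoff * 0.5                        # cutoff frequency
    window = np.where(np.abs(f) <= fc,
                      0.54 + 0.46 * np.cos(np.pi * f / fc),
                      0.0)
    return ramp * window

def filter_projections(sino):
    """Apply the apodized ramp filter along each row (one projection)."""
    H = hamming_ramp(sino.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=-1) * H, axis=-1))

sino = np.random.default_rng(1).normal(size=(360, 512))
filtered = filter_projections(sino)
print(filtered.shape)  # (360, 512)
```

The window equals 1 at DC and falls to 0.08 at the cutoff, which is the standard Hamming taper; this high-pass weighting is also what motivates the spatial-frequency loss function described later.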
We seek a general machine learning approach to estimate the underlying unbiased projections (e.g., by learning how to use intrinsic criteria, etc.) from unbiased and biased data pairs. For this effort, we adopt a CNN architecture inspired by Deep ResUNet [Zhang et al., “Road Extraction by Deep Residual U-Net,” IEEE Geoscience and Remote Sensing Letters, 2018; 15(5):749-753]. The residual units help ease training of the network, and the concatenations between residual units in the encoding and decoding phases facilitate information propagation without degradation, which helps preserve the structural information of the projection data. Specifically, a 9-level deep ResUNet architecture has been built, which is illustrated in
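As a minimal sketch of the residual units underlying such an architecture, the following shows a single-channel residual block in plain NumPy; the actual network's layer counts, activations, channel widths, and normalization differ, so this illustrates only the skip-connection structure.

```python
import numpy as np

def conv3x3(x, k):
    """'Same'-padded 3x3 convolution of a 2D array x with kernel k."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

# Simplified pre-activation residual unit: two ReLU->conv stages plus an
# identity skip connection. The skip requires the spatial size (and, in
# general, channel count) to be preserved through the unit.
def residual_unit(x, k1, k2):
    h = conv3x3(np.maximum(x, 0.0), k1)
    h = conv3x3(np.maximum(h, 0.0), k2)
    return x + h

rng = np.random.default_rng(2)
x = rng.normal(size=(64, 64))
k1 = rng.normal(size=(3, 3)) * 0.1
k2 = rng.normal(size=(3, 3)) * 0.1
y = residual_unit(x, k1, k2)
print(y.shape)  # (64, 64): spatial size preserved for the identity skip
```

The identity path is what "eases training": the unit only needs to learn a residual correction on top of its input, which mirrors the de-biasing task itself (a correction on top of the measured sinogram).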
The selection of an appropriate loss function requires care. A projection-domain metric is desirable for fast and efficient computation; however, the biases that matter are those that propagate through the reconstruction. Thus, one might prefer an image-domain loss function, but this requires a reconstruction as part of the metric. To balance these goals, we selected a loss function that weights the relative importance of projection-domain spatial frequencies. Specifically, since reconstructions explicitly (FBP) or implicitly (MBIR) perform high-pass filtering of projection data, we choose the mean squared error between ramp-filtered CNN-corrected and unbiased projection data in the frequency domain as the loss function:
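The formula itself is referenced above and not reproduced here. One plausible reading of the described loss, with the normalization choices as assumptions, is a frequency-domain MSE weighted by the ramp |f| along the detector axis:

```python
import numpy as np

# Assumed form of the spatial-frequency loss: MSE between ramp-filtered
# corrected and unbiased projections, computed in the frequency domain
# along the detector axis. Normalization and the discrete ramp definition
# are illustrative assumptions, not the disclosure's exact formula.
def spatial_frequency_loss(p_corrected, p_unbiased):
    n = p_corrected.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))                      # |f| weighting
    diff = np.fft.fft(p_corrected - p_unbiased, axis=-1)  # per-projection FFT
    return np.mean(np.abs(ramp * diff) ** 2)

rng = np.random.default_rng(3)
p_true = rng.normal(size=(360, 512))
print(spatial_frequency_loss(p_true, p_true))             # 0.0 for a perfect correction
loss = spatial_frequency_loss(p_true + 0.01 * rng.normal(size=p_true.shape), p_true)
print(loss > 0)  # True
```

The ramp weighting de-emphasizes low-frequency residuals, which reconstruction filters suppress anyway, and penalizes the high-frequency errors that dominate reconstructed-image artifacts.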
For training, we use a database of 45,000 CT axial slices from different patients in the DeepLesion dataset [Yan et al., “DeepLesion: Automated Mining of Large-Scale Lesion Annotations and Universal Lesion Detection with Deep Learning,” Journal of Medical Imaging 5(3), 036501 (2018)]. We divide these slices into a training set of 40,000 slices, a validation set of 4,000 slices, and a test set of 1,000 slices; the three sets share no patient anatomy. Inputs to the network for training and evaluation have a simple gain correction applied using the mean bare-beam fluence
We also evaluated the improvement of the CNN-corrected projection data in both the projection domain and the image domain. For 1D debiasing,
For image-domain improvements we consider FBP and PL reconstructions. Again, a representative sample is shown in
For 2D debiasing,
As shown in
In this example, we have investigated the impact of, and a mitigation strategy for, unknown biases in CT data. We considered two different classes of bias that might be found in CT projection data, representing potential unknowns that vary as a function of projection angle only, or jointly with projection angle and radial detector bin. Different kinds of bias can have significantly different impacts on a reconstruction, with additional dependencies on the data processing/reconstruction approach. We developed a machine learning approach to exploit intrinsic properties of sinogram data as well as the data-driven properties of the particular bias classes and CT datasets under investigation. We found that the ResUNet combined with the spatial-frequency loss function was able to predict these biases, allowing for correction and mitigation of the associated artifacts with improved quantitation. This methodology can be applied to physical CT data with unknown bias contamination.
While the foregoing disclosure has been described in some detail by way of illustration and example for purposes of clarity and understanding, it will be clear to one of ordinary skill in the art from a reading of this disclosure that various changes in form and detail can be made without departing from the true scope of the disclosure and may be practiced within the scope of the appended claims. For example, all of the methods, systems, and/or component parts or other aspects thereof can be used in various combinations. All patents, patent applications, websites, other publications or documents, and the like cited herein are incorporated by reference in their entirety for all purposes to the same extent as if each individual item were specifically and individually indicated to be so incorporated by reference.
This application is the national stage entry of International Patent Application No. PCT/US2022/052789, filed on Dec. 14, 2022, and published as WO 2023/114265 A1 on Jun. 22, 2023, which claims the benefit of U.S. Provisional Patent Application Ser. No. 63/289,708, filed on Dec. 15, 2021, which are hereby incorporated by reference in their entireties.
This invention was made with government support under grant R21EB026849 awarded by the National Institutes of Health. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind
PCT/US2022/052789 | 12/14/2022 | WO |
Number | Date | Country
63289708 | Dec 2021 | US