Medical imaging is a valuable tool in the practice of modern clinical medicine. Imaging is used in an extremely broad array of clinical situations, from diagnosis to delivery of therapeutics to guiding surgical procedures. Machine learning techniques requiring training data have been used for various tasks with medical imaging. However, these techniques require large data sets. For medical imaging tasks such as lesion detection and region-of-interest (ROI) quantification, obtaining high-quality diagnostic images is essential. Recently, neural network methods have been applied to transform low-quality images into images with improved signal-to-noise ratio (SNR). In some cases, collecting large amounts of training labels is relatively easy, such as static magnetic resonance (MR) reconstruction. However, this is not an easy task in other cases.
High-dose computed tomography (CT) raises safety concerns; long dynamic positron emission tomography (PET) scans are not employed in routine clinical practice; and in cardiac MR applications, it is practically impossible to acquire breath-hold, fully sampled 3D images. With limited amounts of high-quality patient data available, overfitting can be a potential pitfall. For example, if new patient data do not lie in the training space due to population differences, the trained network cannot accurately recover unseen structures. In addition, low-quality images are often simulated by artificially downsampling the full-dose/high-count data, which may not reflect the real physical conditions of low-dose imaging. This mismatch between training and the real clinical environment can reduce network performance.
Thus, systems and methods that overcome the drawbacks of present machine learning-based medical imaging analysis techniques that require large amounts of training data are desired.
The present disclosure provides systems and methods that improve image reconstruction and denoising without the use of training data. In one non-limiting example, systems and methods are provided for reconstructing and/or denoising images, for example, such as positron emission tomography (PET) images.
In accordance with one aspect of the disclosure, an image generation system is provided. The system includes at least one processor and at least one non-transitory, computer-readable memory accessible by the processor and having instructions that, when executed by the processor, cause the processor to receive a first patient image associated with a patient, receive a second patient image associated with the patient, train an untrained model based on the first patient image and the second patient image, provide the first patient image to the model, receive a third patient image from the model, and output the third patient image to at least one of a storage system or a display.
In accordance with another aspect of the disclosure, an image generation method is provided. The method includes receiving a first patient image associated with a patient, receiving a second patient image associated with the patient, training an untrained model based on the first patient image and the second patient image, providing the first patient image to the model, receiving a third patient image from the model, and outputting the third patient image to at least one of a storage system or a display.
The foregoing and other aspects and advantages of the invention will appear from the following description. In the description, reference is made to the accompanying drawings which form a part hereof, and in which there is shown by way of illustration configurations of the invention. Any such embodiment does not necessarily represent the full scope of the invention, however, and reference is made therefore to the claims and herein for interpreting the scope of the invention.
The present disclosure provides systems and methods that can generate certain types of images (e.g., reconstructed images, denoised images, and/or images of various imaging modalities) using certain imaging modalities without training data. In particular, the present disclosure provides systems and methods for generating new images, such as reconstructed positron emission tomography (PET) images, denoised PET images, arterial spin labeling (ASL) images, low-dose dual energy computed tomography (DECT) images, and PET parametric images associated with a patient, using combinations of certain medical images such as PET images, ASL images, CT images, MR images, and/or DECT images of the patient without training data. As one non-limiting example, a system and method is provided for generating a reconstructed PET image of a patient using a PET image of the patient and a simultaneously acquired MR image of the patient.
Previous techniques for reconstructing and/or denoising medical images generally have one or more drawbacks. As described above, machine learning approaches that rely on training data are prone to overfitting due to the limited amount of high-quality patient data sets available. Apart from using training pairs to perform supervised learning, some techniques use prior images acquired from the same patient to improve the image quality. The prior images can come from temporal information, different physics settings, or even other imaging modalities. They are incorporated into the maximum a posteriori estimation or sparse representation framework using predefined analytical expressions or pre-learning steps. The predefined expressions might not be able to extract all the useful information in a given image, and the pre-learned model might not be optimal for the later reconstruction because no data consistency constraint is enforced during pre-learning. Other techniques use neural networks to generate various distributions based on random noise input. However, random noise may not provide enough information to sufficiently reconstruct and/or denoise an image to a desired level.
The present disclosure provides systems and methods that overcome at least one of the drawbacks of the above techniques.
The computing device 104 can implement portions of a medical image analysis application 132, which can involve the computing device 104 transmitting and/or receiving instructions, data, commands, etc. from one or more other devices. For example, the computing device 104 can receive patient image data (e.g., PET images, ASL images, CT images, MR images, and/or DECT images) from the patient image database 120, receive patient image data (e.g., PET images, ASL images, CT images, MR images, and/or DECT images) from the imaging system 128, and/or transmit reports, raw data, and/or other information generated by the medical image analysis application 132 to the display 108 and/or the generated information database 124.
The supplemental computing device 116 can implement portions of the medical image analysis application 132. It is understood that the image generation system 100 can implement the medical image analysis application 132 without the supplemental computing device 116. In some aspects, the computing device 104 can cause the supplemental computing device 116 to receive patient image data (e.g., PET images, ASL images, CT images, MR images, and/or DECT images) from the patient image database 120, receive patient image data (e.g., PET images, ASL images, CT images, MR images, and/or DECT images) from the imaging system 128, and/or transmit reports, raw data, and/or other information generated by the medical image analysis application 132 to the display 108 and/or the generated information database 124. In this way, a majority of the medical image analysis application 132 can be implemented by the supplemental computing device 116, which can allow a larger range of devices to be used as the computing device 104 because the required processing power of the computing device 104 may be reduced. For example, a relatively inexpensive tablet computer or laptop computer may be used as the computing device 104, and a server may be used as the supplemental computing device 116.
The patient image database 120 can include patient image data previously generated by an imaging device (e.g., the imaging system 128). In some configurations, the imaging system 128 can be configured to generate PET images, ASL images, CT images, MR images, and/or DECT images. In some configurations, the imaging system 128 can include a single imaging device configured to produce multiple types of images simultaneously and/or concurrently. For example, the imaging system 128 can include a PET scanner that can also generate a CT scan concurrently. In this way, a single imaging machine can supply sufficient input data to the medical image analysis application 132 in order to generate certain types of images (e.g., a reconstructed PET image and/or a denoised PET image). In some configurations, the imaging system 128 can include multiple devices, each configured to generate a single type of image (e.g., a first device configured to generate PET images and a second device configured to generate MR images).
The generated information database 124 can include patient image data generated by the medical image analysis application 132. The patient image data may be formatted in the DICOM® standard (i.e., format) (e.g., as a DICOM® object). Each DICOM® object can include image data (e.g., pixel data) formatted in various standards such as JPEG, lossless JPEG, JPEG 2000, etc. Each DICOM® object can also include attributes about the patient and/or the image data (imaging modality and/or image type).
In some configurations, medical image analysis application 132 can automatically generate one or more images (e.g., PET images, ASL images, CT images, MR images, and/or DECT images) based on patient images without training data. The medical image analysis application 132 can also automatically generate one or more reports based on the generated images. The medical image analysis application 132 can output one or more of the generated images and/or reports to the display 108 (e.g., in order to display the generated image and/or report to a medical practitioner) and/or to a memory, such as a memory included in the generated information database 124 (e.g., in order to store the generated image and/or report).
As shown in
In some configurations, the display 140 can present a graphical user interface. In some configurations, the display 140 can be implemented using any suitable display devices, such as a computer monitor, a touchscreen, a television, etc. In some configurations, the inputs 144 of the computing device 104 can include indicators, sensors, actuatable buttons, a keyboard, a mouse, a graphical user interface, a touch-screen display, etc. In some configurations, the inputs 144 can allow a user (e.g., a medical practitioner, such as an oncologist) to interact with the computing device 104, and thereby to interact with the supplemental computing device 116 (e.g., via the communication network 112). The display 108 can be a display device such as a computer monitor, a touchscreen, a television, and the like.
In some configurations, the communication system 148 can include any suitable hardware, firmware, and/or software for communicating with the other systems, over any suitable communication networks. For example, the communication system 148 can include one or more transceivers, one or more communication chips and/or chip sets, etc. In a more particular example, the communication system 148 can include hardware, firmware, and/or software that can be used to establish a coaxial connection, a fiber optic connection, an Ethernet connection, a USB connection, a Wi-Fi connection, a Bluetooth connection, a cellular connection, etc. In some configurations, the communication system 148 allows the computing device 104 to communicate with the supplemental computing device 116 (e.g., directly, or indirectly such as via the communication network 112).
In some configurations, the memory 152 can include any suitable storage device or devices that can be used to store instructions, values, etc., that can be used, for example, by the processor 136 to present content using the display 140 and/or the display 108, to communicate with the supplemental computing device 116 via communications system(s) 148, etc. The memory 152 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, the memory 152 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, etc. In some configurations, the memory 152 can have encoded thereon a computer program for controlling operation of the computing device 104 (or the supplemental computing device 116). In such configurations, the processor 136 can execute at least a portion of the computer program to present content (e.g., user interfaces, images, graphics, tables, reports, and the like), receive content from the supplemental computing device 116, transmit information to the supplemental computing device 116, and the like.
Still referring to
In some configurations, the display 160 can present a graphical user interface. In some configurations, the display 160 can be implemented using any suitable display devices, such as a computer monitor, a touchscreen, a television, etc. In some configurations, the inputs 164 of the supplemental computing device 116 can include indicators, sensors, actuatable buttons, a keyboard, a mouse, a graphical user interface, a touch-screen display, etc. In some configurations, the inputs 164 can allow a user (e.g., a medical practitioner, such as an oncologist) to interact with the supplemental computing device 116, and thereby to interact with the computing device 104 (e.g., via the communication network 112).
In some configurations, the communication system 168 can include any suitable hardware, firmware, and/or software for communicating with the other systems, over any suitable communication networks. For example, the communication system 168 can include one or more transceivers, one or more communication chips and/or chip sets, etc. In a more particular example, the communication system 168 can include hardware, firmware, and/or software that can be used to establish a coaxial connection, a fiber optic connection, an Ethernet connection, a USB connection, a Wi-Fi connection, a Bluetooth connection, a cellular connection, and the like. In some configurations, the communication system 168 allows the supplemental computing device 116 to communicate with the computing device 104 (e.g., directly, or indirectly such as via the communication network 112).
In some configurations, the memory 172 can include any suitable storage device or devices that can be used to store instructions, values, and the like, that can be used, for example, by the processor 156 to present content using the display 160 and/or the display 108, to communicate with the computing device 104 via communications system(s) 168, and the like. The memory 172 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, the memory 172 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, etc. In some configurations, the memory 172 can have encoded thereon a computer program for controlling operation of the supplemental computing device 116 (or the computing device 104). In such configurations, the processor 156 can execute at least a portion of the computer program to present content (e.g., user interfaces, images, graphics, tables, reports, and the like), receive content from the computing device 104, transmit information to the computing device 104, and the like.
Referring now to
The event locator circuits 330, in some implementations, form part of a data acquisition processing system 332 that processes the signals produced by the acquisition circuits 326. The data acquisition processing system 332 includes a general controller 334 that controls communications, for example, by way of a backplane bus 336 and on the communications network 318. The event locator circuits 330 assemble the information regarding each valid event into a set of numbers that indicate precisely when the event took place, the position in which the event was detected and the energy deposited by the photon. This event data packet is conveyed to a coincidence detector 338 that is also part of the data acquisition processing system 332.
The coincidence detector 338 accepts the event data packets from the event locator circuit 330 and determines if any two of them are in coincidence. The coincidence detector 338 identifies the coincident event pairs located and records them as a coincidence data packet that is provided to a sorter 340. The function of the sorter in many PET imaging systems is to receive the coincidence data packets and generate memory addresses from the coincidence data packets for the efficient storage of the coincidence data. In that context, the set of all projection rays, or lines of response, that point in the same direction (θ) and pass through the scanner's field of view (FOV) is a complete projection, or “view.” The distance (R) between a particular line of response and the center of the FOV locates that line of response within the FOV. The sorter 340 counts all of the events that occur on a given line of response (R, θ) during the scan by sorting out the coincidence data packets that indicate an event at the two detectors lying on this line of response.
The sorter 340 provides the image dataset array to an image processing/reconstruction system 342, for example, by way of a communications link 344 to be stored in an image array 346. The image array 346 holds the dataset array for access by an image processor 348 that reconstructs one or more images corresponding to the dataset array. As will be described, images can then be viewed and enhanced using the workstation 316. In a manner such as described above, other networked workstations may be used as well, which may be remotely located and may even include mobile devices and portable workstations.
Referring now to
The flow 400 can include determining whether stop criteria have been reached 416. In some configurations, the flow 400 can train the model 408 for a predetermined number of epochs before determining that the stop criteria have been reached. For example, the stop criteria can be three hundred epochs. If the stop criteria have not been reached (e.g., "NO" at 416), the flow 400 can proceed to loss function calculation 420. In some configurations, the loss function can be an L2 loss function calculated between a training label 424 and the generated image 412. The flow 400 can include calculating a loss function value based on a predetermined loss function that can vary based on the image type of the input image 404 and/or the training label 424, as will be explained below. The loss function value can be calculated based on the loss function, the generated image 412, and the training label 424.
The flow 400 can include updating model parameters 428 of the model 408. For example, the flow 400 can adjust weights of the model 408. The updating of the model parameters 428 will be described in further detail below. Once the model has been trained using the same input image 404 and the same training label 424 until the stop criteria have been reached (e.g., at 416), the flow 400 can output the last generated image 412 generated by the model 408 as a final image 432. In some configurations, the final image 432 can be a reconstructed image and/or a denoised image. Specifics of the flow will be discussed below.
In the flow 400, the input image 404 can be prior images of the same patient, rather than random noise used in previous techniques. Instead of calculating the mean squared error (MSE) between the generated image 412 and the training label 424 (which can be considered as the original corrupted image), a training objective function can be formulated based on maximum likelihood estimation derived from imaging physics. As described above, in some configurations, the model 408 can include a modified 3D U-net neural network, which can increase the quality of the final image 432 generated by the flow 400 as compared to other techniques. To stabilize model training, the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS algorithm) can be used instead of adaptive stochastic gradient descent (SGD) methods.
In inverse problems, such as image deblurring and image reconstruction, the measured data y ∈ ℝ^M can be assumed to be a collection of independent random variables, and its mean ȳ ∈ ℝ^M can be written as

ȳ = Ax + s   (1)
where A ∈ ℝ^{M×N} is the transformation matrix and s ∈ ℝ^M is a known additive term. Supposing the measured random variable y_i follows a distribution p(y_i|x), the log likelihood for the measured data y can be written as:

L(y|x) = Σ_{i=1}^{M} log p(y_i|x)   (2)
In this proposed DIPRecon framework, the unknown image x is represented as
x=f(θ|z) (3)
where f represents the neural network, θ are the unknown parameters of the neural network, and z denotes the prior image and is the input to the neural network. When substituting x with the neural network representation in (3), the original data model shown in (1) can be rewritten as

ȳ = A f(θ|z) + s   (4)
Replacing x by (3), the log likelihood can be expressed using θ as

L(y|θ) = Σ_{i=1}^{M} log p(y_i | f(θ|z))   (5)
The maximum likelihood estimate of the unknown image x can be calculated in two steps as
θ̂ = arg max_θ L(y|θ)   (6)

x̂ = f(θ̂|z)   (7)
The objective function in (6) is difficult to solve directly due to the coupling between the transformation matrix and the neural network. The objective function in (6) can be transformed into the constrained format below
max_x L(y|x)   s.t.   x = f(θ|z)   (8)
The augmented Lagrangian format for the constrained optimization problem in (8) can be used as

L_ρ(x, θ, μ) = L(y|x) − (ρ/2)‖x − f(θ|z) + μ‖² + (ρ/2)‖μ‖²   (9)
where μ is the scaled dual variable and ρ is the penalty parameter. The problem can be solved by the alternating direction method of multipliers (ADMM) algorithm iteratively in three steps:

x^{n+1} = arg max_x L(y|x) − (ρ/2)‖x − f(θ^n|z) + μ^n‖²   (10)

θ^{n+1} = arg min_θ ‖f(θ|z) − (x^{n+1} + μ^n)‖²   (11)

μ^{n+1} = μ^n + x^{n+1} − f(θ^{n+1}|z)   (12)
An example of PET image reconstruction is now described as an example application of the flow 400. In PET image reconstruction, A is the detection probability matrix, with A_ij denoting the probability of photons originating from voxel j being detected by detector i, and s ∈ ℝ^M denotes the expectation of scattered and random events. M is the number of lines of response (LOR). Assuming the measured photon coincidences follow a Poisson distribution, the log-likelihood function L(y|x) can be explicitly written (up to a constant) as

L(y|x) = Σ_{i=1}^{M} ( y_i log([Ax]_i + s_i) − ([Ax]_i + s_i) )   (13)
Though the measurement data may follow different distributions for other inverse problems, only subproblem (10) needs to be reformulated; the other subproblems (11) and (12) can stay unchanged. Here subproblem (10) is a penalized image reconstruction problem, and the optimization transfer method can be used to solve it. Because the pixels of x are coupled in L(y|x), a surrogate function Q_L(x|x^n) for L(y|x) can be constructed to decouple the image pixels so that each pixel can be optimized independently. Q_L(x|x^n) can be constructed as follows:

Q_L(x|x^n) = Σ_{j=1}^{N} A_{.j} ( x̂_{j,EM}^{n+1} log x_j − x_j )   (14)

where A_{.j} = Σ_{i=1}^{M} A_ij and x̂_{j,EM}^{n+1} is calculated by

x̂_{j,EM}^{n+1} = (x_j^n / A_{.j}) Σ_{i=1}^{M} A_ij y_i / ([Ax^n]_i + s_i)   (15)
It can be verified that the constructed surrogate function Q_L(x|x^n) fulfills the following two conditions:

Q_L(x|x^n) − Q_L(x^n|x^n) ≤ L(y|x) − L(y|x^n)   (16)

∇Q_L(x^n|x^n) = ∇L(y|x^n)   (17)
After getting this surrogate function, subproblem (10) can be optimized pixel by pixel. For pixel j, the surrogate objective function for subproblem (10) can be

P_j(x_j) = A_{.j} ( x̂_{j,EM}^{n+1} log x_j − x_j ) − (ρ/2)( x_j − f(θ^n|z)_j + μ_j^n )²   (18)

The final update equation for pixel j after maximizing (18) can be expressed as:

x_j^{n+1} = (1/2) [ f(θ^n|z)_j − μ_j^n − A_{.j}/ρ + sqrt( ( f(θ^n|z)_j − μ_j^n − A_{.j}/ρ )² + 4 x̂_{j,EM}^{n+1} A_{.j}/ρ ) ]   (19)
Subproblem (11) is a nonlinear least squares problem. Currently, network training is generally based on first-order methods, such as the Adam algorithm and Nesterov's accelerated gradient (NAG) algorithm. The L-BFGS algorithm is a quasi-Newton method that uses a history of updates to approximate the Hessian matrix. It is not widely used in network training because it requires a large batch size to accurately calculate the descent direction, which is less effective than first-order methods for large-scale applications. In this proposed framework, because only the patient's own prior images are employed as the network input, the data size is much smaller than in traditional network training. In this case, the L-BFGS method may be preferred for solving subproblem (11) due to its stability and the better performance observed in experiments.
In some configurations, the updating of the model parameters 428 can include solving subproblem (10), subproblem (11), and subproblem (12). In some configurations, the loss function calculation 420 can include performing two iterations to solve subproblem (10). The updating of the model parameters 428 can include solving subproblem (12) using the solutions to subproblem (10) and subproblem (11), and can include updating the model based on the solution to subproblem (11). Exemplary pseudocode for an algorithm executed at the updating of the model parameters 428 to solve subproblem (10) and subproblem (11) is presented in Algorithm 1 below.
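Algorithm 1 itself is not reproduced here. The following is a minimal NumPy/SciPy sketch, provided only for illustration, of one way the iteration described above could be organized: subproblem (10) is solved with two passes of equations (15) and (19), subproblem (11) is solved with L-BFGS, and subproblem (12) updates the dual variable. The tiny fully connected "network," the random system matrix, and the sizes and iteration counts are assumptions made so that the example runs quickly; they do not correspond to the modified 3D U-Net or scanner geometry described in this disclosure.

# Illustrative sketch of the ADMM iteration for subproblems (10)-(12); not the
# disclosed implementation.  The deep network f(theta|z) is replaced by a tiny
# fully connected model and the PET system model by a small random matrix.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, M, H = 16, 32, 8                 # image size, number of LORs, hidden units (toy values)
A = rng.random((M, N))              # toy detection probability matrix
s = 0.1 * np.ones(M)                # expected scattered/random events
x_true = rng.random(N) + 0.5
y = rng.poisson(A @ x_true + s).astype(float)    # measured coincidences
z = x_true + 0.1 * rng.standard_normal(N)        # "prior image" network input
A_sum = A.sum(axis=0)               # A_{.j} = sum_i A_ij

def unpack(theta):
    W1 = theta[:H * N].reshape(H, N)
    b1 = theta[H * N:H * N + H]
    W2 = theta[H * N + H:H * N + H + N * H].reshape(N, H)
    b2 = theta[-N:]
    return W1, b1, W2, b2

def f_net(theta, z):
    # Toy stand-in for f(theta|z): one hidden layer, non-negative output.
    W1, b1, W2, b2 = unpack(theta)
    h = np.maximum(W1 @ z + b1, 0.0)
    return np.maximum(W2 @ h + b2, 0.0)

theta = 0.01 * rng.standard_normal(H * N + H + N * H + N)
x = np.maximum(z.copy(), 1e-3)
mu = np.zeros(N)
rho = 3e-3                                       # penalty parameter
MaxIt, SubIt1, SubIt2 = 30, 2, 10                # smaller MaxIt than in the text, for speed

for n in range(MaxIt):
    fz = f_net(theta, z)
    # subproblem (10): optimization transfer, two passes of eqs. (15) and (19)
    for _ in range(SubIt1):
        x_em = x / A_sum * (A.T @ (y / (A @ x + s)))               # eq. (15)
        c = fz - mu - A_sum / rho
        x = 0.5 * (c + np.sqrt(c ** 2 + 4.0 * x_em * A_sum / rho)) # eq. (19)
    # subproblem (11): network update with L-BFGS (finite-difference gradients)
    label = x + mu
    obj = lambda th: np.sum((f_net(th, z) - label) ** 2)
    theta = minimize(obj, theta, method="L-BFGS-B",
                     options={"maxiter": SubIt2}).x
    # subproblem (12): dual variable update
    mu = mu + x - f_net(theta, z)

x_hat = f_net(theta, z)                          # final reconstructed image
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))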
Referring now to
In some configurations, the neural network 500 can include certain components of the 3D U-Net along with a number of modifications. In some configurations, the neural network 500 can receive an input image 502. In some configurations, the input image 502 can be a single channel three dimensional image (e.g., an MR image of a patient). In some configurations, the neural network 500 can generate and output a predicted image 504 (e.g., a reconstructed PET image). In some configurations, the predicted image 504 can be a single channel three dimensional image. As will be described below, for certain imaging types (e.g., DECT images), the predicted image 504 can be a multi-channel three dimensional image (e.g., a two channel image). In some configurations, the neural network 500 can include a number of feature maps coupled together by a combination of different layers. In some configurations, the neural network 500 can include a first feature map 506, a second feature map 508, a third feature map 510, a fourth feature map 512, a fifth feature map 514, a sixth feature map 516, a seventh feature map 518, an eighth feature map 520, a ninth feature map 522, a tenth feature map 524, an eleventh feature map 526, a twelfth feature map 528, a thirteenth feature map 530, a fourteenth feature map 532, a fifteenth feature map 534, a sixteenth feature map 536, a seventeenth feature map 538, an eighteenth feature map 540, and a nineteenth feature map 542.
As described above, the feature maps 506-542 can be coupled together using a number of different layers. In some configurations, the layers can include a convolutional layer (e.g., a 3×3×3 3D convolutional layer), a batch normalization (BN) layer, a leaky rectified linear unit (LReLU) layer, a convolutional layer with a stride value greater than one (e.g., a 3×3×3 3D convolutional layer with stride 2×2×2), a bilinear interpolation layer (e.g., a 2×2×2 bilinear interpolation layer), and an identity mapping layer. Each 3×3×3 3D convolutional layer with stride 2×2×2 can be used to down-sample certain feature maps, and the 2×2×2 bilinear interpolation layer can be used to up-sample certain feature maps. Each identity mapping layer can map features in a left-side encoder path 544 of the neural network 500 to features in a right-side decoder path 546 of the neural network 500. In some configurations, the identity mapping layer can include skip connections and can add features on the left-side encoder path 544 to features in the right-side decoder path 546.
Certain layers can provide advantages over a standard 3D U-Net. In particular, convolutional layers with stride two are used to down-sample the input image 502 instead of max pooling in order to construct a fully convolutional neural network. A fully convolutional network may be trained more easily than if max-pooling layers were used in place of the convolutional layers with stride two. Additionally, the fully convolutional network may provide better image generation results (e.g., better contrast, more accurate reconstructions, etc.). The identity mapping layers with skip connections that link the encoder path 544 and the decoder path 546, instead of concatenation, can reduce the number of training parameters, which can improve the training speed of the neural network 500. The bilinear interpolation layer can be used instead of deconvolution upsampling in order to reduce potential checkerboard artifacts. Additionally, leaky ReLU layers can be used instead of ReLU layers. As compared to standard ReLU layers, leaky ReLU layers may provide better information flow and, as a result, better training.
The layers may be arranged into groups of common layer arrangements. For example, a layer group can include a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the neural network 500 can include a first layer group 548, a second layer group 550, a third layer group 552, a fourth layer group 554, a fifth layer group 556, a sixth layer group 558, a seventh layer group 560, an eighth layer group 562, a ninth layer group 564, a tenth layer group 566, an eleventh layer group 568, a twelfth layer group 570, a thirteenth layer group 572, a fourteenth layer group 574, a fifteenth layer group 576, a sixteenth layer group 578, a seventeenth layer group 580, an eighteenth layer group 582, a nineteenth layer group 584, a twentieth layer group 586, a twenty-first layer group 588, a twenty-second layer group 590, and a twenty-third layer group 592.
In some configurations, the input image 502 can be coupled to the first feature map 506 by the first layer group 548 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the first feature map 506 can be coupled to the second feature map 508 by the second layer group 550 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the first feature map 506 and the second feature map 508 can include sixteen channels. In some configurations, the second feature map 508 can be downsampled to the third feature map 510 by the third layer group 552 including a convolutional layer with stride 2×2×2, a BN layer, and an LReLU layer.
In some configurations, the third feature map 510 can be coupled to the fourth feature map 512 by the fourth layer group 554 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the fourth feature map 512 can be coupled to the fifth feature map 514 by the fifth layer group 556 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the fourth feature map 512 and the fifth feature map 514 can include thirty-two channels. In some configurations, the fifth feature map 514 can be downsampled to the sixth feature map 516 by the sixth layer group 558 including a convolutional layer with stride 2×2×2, a BN layer, and an LReLU layer.
In some configurations, the sixth feature map 516 can be coupled to the seventh feature map 518 by the seventh layer group 560 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the seventh feature map 518 can be coupled to the eighth feature map 520 by the eighth layer group 562 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the seventh feature map 518 and the eighth feature map 520 can include sixty-four channels. In some configurations, the eighth feature map 520 can be downsampled to the ninth feature map 522 by the ninth layer group 564 including a convolutional layer with stride 2×2×2, a BN layer, and an LReLU layer.
In some configurations, the ninth feature map 522 can be coupled to the tenth feature map 524 by the tenth layer group 566 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the tenth feature map 524 can be coupled to the eleventh feature map 526 by the eleventh layer group 568 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the tenth feature map 524 and the eleventh feature map 526 can include one hundred and twenty-eight channels. In some configurations, the eleventh feature map 526 can be upsampled to the twelfth feature map 528 by the twelfth layer group 570 including a bilinear interpolation layer (e.g. a 2×2×2 bilinear interpolation layer).
In addition to upsampling the eleventh feature map 526, the twelfth feature map 528 can be coupled to the eighth feature map 520 by the thirteenth layer group 572 including skip connections (e.g., copy and add skip connections) to receive at least some values (e.g., features that are not skipped) from the eighth feature map 520. The values from the eighth feature map 520 can be added to the upsampled values of the eleventh feature map 526 to generate the twelfth feature map 528.
In some configurations, the twelfth feature map 528 can be coupled to the thirteenth feature map 530 by the fourteenth layer group 574 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the thirteenth feature map 530 can be coupled to the fourteenth feature map 532 by the fifteenth layer group 576 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the twelfth feature map 528 and the thirteenth feature map 530 can include sixty-four channels. In some configurations, the fourteenth feature map 532 can be upsampled to the fifteenth feature map 534 by the sixteenth layer group 578 including a bilinear interpolation layer (e.g. a 2×2×2 bilinear interpolation layer).
In addition to upsampling the fourteenth feature map 532, the fifteenth feature map 534 can be coupled to the fifth feature map 514 by the seventeenth layer group 580 including skip connections (e.g., copy and add skip connections) to receive at least some values (e.g., features that are not skipped) from the fifth feature map 514. The values from the fifth feature map 514 can be added to the upsampled values of the fourteenth feature map 532 to generate the fifteenth feature map 534.
In some configurations, the fifteenth feature map 534 can be coupled to the sixteenth feature map 536 by the eighteenth layer group 582 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the sixteenth feature map 536 can be coupled to the seventeenth feature map 538 by the nineteenth layer group 584 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the fifteenth feature map 534 and the sixteenth feature map 536 can include thirty-two channels. In some configurations, the seventeenth feature map 538 can be upsampled to the eighteenth feature map 540 by the twentieth layer group 586 including a bilinear interpolation layer (e.g. a 2×2×2 bilinear interpolation layer).
In addition to upsampling the seventeenth feature map 538, the eighteenth feature map 540 can be coupled to the second feature map 508 by the twenty-first layer group 588 including skip connections (e.g., copy and add skip connections) to receive at least some values (e.g., features that are not skipped) from the second feature map 508. The values from the second feature map 508 can be added to the upsampled values of the seventeenth feature map 538 to generate the eighteenth feature map 540.
In some configurations, the eighteenth feature map 540 can be coupled to the nineteenth feature map 542 by the twenty-second layer group 590 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the nineteenth feature map 542 can be coupled to the predicted image 504 by the twenty-third layer group 592 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the eighteenth feature map 540 and the nineteenth feature map 542 can include sixteen channels. In some configurations, the input image 502 and the predicted image 504 can have the same resolution (e.g., 192×192×128). The resolution of the input image 502 can vary based on the type of imaging device used to generate the input image 502. Through experimentation, it was found that using 3D convolution for 3D image reconstruction performed better than using 2D convolution with multiple axial slices in the input channels. To enable a non-negative constraint on the reconstructed image, a ReLU layer (e.g., LReLU) can be added before the predicted image 504. In addition, it was found that, compared to a network without the encoder and decoder paths, the U-Net structure could save more GPU memory because the spatial size is reduced along the encoder path.
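For readers who prefer code, the following is a condensed sketch of a network with the properties described above: stride-2 convolutions instead of max pooling, trilinear interpolation instead of deconvolution, additive (copy-and-add) skip connections, Conv-BN-LeakyReLU layer groups, and a final ReLU for non-negativity. It is written in PyTorch purely for illustration; the experiments described later used a TensorFlow implementation, and the 1×1×1 channel-reduction convolutions, default sizes, and layer counts here are simplifications rather than the exact layer groups 548-592.

# Illustrative PyTorch sketch of the modified 3D U-Net described above; not the
# disclosed TensorFlow implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_group(cin, cout, stride=1):
    # 3x3x3 convolution + batch normalization + leaky ReLU (one "layer group")
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm3d(cout),
        nn.LeakyReLU(0.1, inplace=True),
    )

class ModifiedUNet3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, feats=(16, 32, 64, 128)):
        super().__init__()
        f1, f2, f3, f4 = feats
        # encoder: two conv groups per level, stride-2 convs instead of max pooling
        self.enc1 = nn.Sequential(conv_group(in_ch, f1), conv_group(f1, f1))
        self.down1 = conv_group(f1, f2, stride=2)
        self.enc2 = nn.Sequential(conv_group(f2, f2), conv_group(f2, f2))
        self.down2 = conv_group(f2, f3, stride=2)
        self.enc3 = nn.Sequential(conv_group(f3, f3), conv_group(f3, f3))
        self.down3 = conv_group(f3, f4, stride=2)
        self.bottom = nn.Sequential(conv_group(f4, f4), conv_group(f4, f4))
        # decoder: 1x1x1 convs reduce channels so skips can be added, not concatenated
        self.reduce3 = nn.Conv3d(f4, f3, kernel_size=1)
        self.dec3 = nn.Sequential(conv_group(f3, f3), conv_group(f3, f3))
        self.reduce2 = nn.Conv3d(f3, f2, kernel_size=1)
        self.dec2 = nn.Sequential(conv_group(f2, f2), conv_group(f2, f2))
        self.reduce1 = nn.Conv3d(f2, f1, kernel_size=1)
        self.dec1 = nn.Sequential(conv_group(f1, f1), conv_group(f1, out_ch))

    def up_add(self, x, skip, reduce):
        # trilinear interpolation upsampling followed by an additive skip connection
        x = F.interpolate(x, size=skip.shape[2:], mode="trilinear", align_corners=False)
        return reduce(x) + skip

    def forward(self, z):
        e1 = self.enc1(z)
        e2 = self.enc2(self.down1(e1))
        e3 = self.enc3(self.down2(e2))
        b = self.bottom(self.down3(e3))
        d3 = self.dec3(self.up_add(b, e3, self.reduce3))
        d2 = self.dec2(self.up_add(d3, e2, self.reduce2))
        d1 = self.dec1(self.up_add(d2, e1, self.reduce1))
        return F.relu(d1)                        # non-negative predicted image

# Example: a single-channel 3D prior image in, a single-channel prediction out.
net = ModifiedUNet3D()
mr_prior = torch.randn(1, 1, 32, 32, 32)         # illustrative size, not 192x192x128
pet_pred = net(mr_prior)
print(pet_pred.shape)                            # torch.Size([1, 1, 32, 32, 32])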
Referring now to
At 604, the process 600 can receive a first patient image. In some configurations, the first patient image can be an MR image associated with the patient. In some configurations, the first patient image can be the input image 404 in
At 608, the process 600 can receive a second patient image. In some configurations, the second patient image can be a raw PET image associated with the patient. In some configurations, the second patient image can be previously acquired concurrently and/or simultaneously along with the first patient image.
At 612, the process 600 can train the model based on the first patient image and the second patient image. In some configurations, the first patient image and the second patient image are formed from image data acquired from different image modalities, such as MRI, CT, ultrasound, or PET. As described above, in some configurations, the model can include the neural network 500 in
In some configurations, the process 600 can pretrain the model for a predetermined number of epochs (e.g., three hundred epochs), but pretraining is not required in order to train the model to sufficiently generate reconstructed PET images and/or other images.
In some configurations, for a certain number of iterations (e.g., MaxIt and/or three hundred), the process 600 can provide the first patient image to the model, receive an output image from the model, and update the model based on the output image and the second patient image. The process 600 can solve the objective function (6) by breaking equation (9) into the three subproblems (10)-(12) using ADMM as described above. The process 600 can iteratively solve subproblems (10)-(12) for a predetermined number of iterations (e.g., MaxIt and/or three hundred) in order to train the network based on the first patient image (e.g., z) and the second patient image (e.g., y). The process 600 can iteratively solve equations (15) and (19) for a predetermined number of iterations (e.g., SubIt1 and/or two) to solve subproblem (10). It is noted that subproblem (10) is solved in part based on the assumption that the photon coincidences in the raw PET image follow the Poisson distribution. The penalty parameter ρ in equations (15) and (19) can be predetermined (e.g., 3×10−3). The process 600 can iteratively solve subproblem (11) for a predetermined number of iterations (e.g., SubIt2 and/or ten) using the L-BFGS algorithm and the resulting x^{n,SubIt1} from the last iteration of equation (19). The process 600 can update the model based on the θ^n obtained when solving subproblem (11).
At 616, the process 600 can provide the first patient image to the model. At 616, the model is trained and can be referred to as a trained model.
At 620, the process 600 can receive a third patient image from the model. In some configurations, the third patient image can be a reconstructed PET image that the model generates based on the first patient image (e.g., an MR image).
At 624, the process 600 can output the third patient image to at least one of a memory or a display. In some configurations, at 624, the process 600 can cause the third patient image to be displayed on a display (e.g., the display 108). In some configurations, at 624, the process 600 can generate a report based on the third patient image. In some configurations, at 624, the process 600 can output the report to at least one of a memory or a display. In some configurations, at 624, the process 600 can cause the report to be displayed on a display (e.g., the display 108).
It is understood that at least some of the steps included in the process 600 can be rearranged and/or executed in parallel.
A number of experiments to determine values of certain operating parameters (e.g., penalty parameter, number of epochs, etc.) are now described. To stabilize the network training, before training, the flow 400 and/or the process 600 can include scaling the intensity of a PET image to the range [0, 1]. Due to the non-convexity of neural network training, it is generally better to assign good initial values to the network parameters before they are trained inside the iterative reconstruction loop. Comparisons between the results with and without pre-training are shown in the supplemental materials. In some configurations, the neural network 500 can be pre-trained by running the ML EM algorithm for 60 iterations, using the result as the label x_label, and training the network based on

θ̂ = arg min_θ Σ_i ‖x_label^i − f(θ|z^i)‖_2²   (20)

using MR images as the network input, where f: ℝ^N → ℝ^N represents the neural network, θ ∈ ℝ^L indicates the trainable variables, z^i ∈ ℝ^N is the network input for the ith training pair, and x_label^i ∈ ℝ^N denotes the ith training label. For a CNN, θ contains the convolutional filters and the bias terms from all layers.
Network pretraining was run for three hundred epochs using the L-BFGS algorithm based on minimizing (20). As the penalty parameter has a large impact on the convergence speed, the convergence of the log-likelihood L(y|f(θ̂|z)) was examined to determine the penalty parameter used in practice. A log-likelihood curve was constructed for different penalty parameter ρ values, and based on the convergence speed and stability of the likelihood, ρ = 3×10−3 was chosen.
The neural network 500 trained using the process 600 was compared with the Gaussian post-filtering method, the penalized reconstruction method based on including a pre-trained neural network in the penalty, and the kernel method. One technique for incorporating a trained network into an iterative reconstruction framework is to include it in the penalty term. The objective function is:

x̂ = arg max_x L(y|x) − (β/2)‖x − f(θ_0|z)‖²   (21)
Here f(θ_0|z) is the pre-trained network using noisy PET images (e.g., raw PET images) as labels, β is the penalty weight, and the pre-trained network parameters θ_0 are fixed during the reconstruction process. This method is denoted as the CNNPenalty method. Interestingly, the objective function shown in (21) becomes the same as subproblem (10) by replacing f(θ_0|z) with f(θ^n|z)+μ^n. Compared with the flow 400 and the process 600, one difference of the CNNPenalty method is that μ is fixed to 0 and the network updating step (e.g., equation (11)) is skipped. By this setting, the effects of updating the network parameters inside the reconstruction loop can be understood. For the kernel method, the (i,j)th element of the kernel matrix K_z is

(K_z)_{ij} = exp( −‖f_i − f_j‖² / (2σ²) )   (22)
where f_i ∈ ℝ^{N_f} is the feature vector extracted from the prior MR image for pixel i and σ controls the kernel width. The unknown image is represented as x = K_z θ, and the kernel coefficients θ̂ are estimated by maximizing the likelihood

θ̂ = arg max_θ L(y | K_z θ)   (23)
and the final output image is x̂ = K_z θ̂. The kernel method is denoted as KMRI in later comparisons.
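As a concrete illustration of the kernel method used for comparison, the following sketch builds a kernel matrix of the form in equation (22) from a prior MR image, using 3×3 intensity patches as the feature vectors f_i, a k-nearest-neighbour sparsification, and row normalization. The patch size, the value of k, the kernel width σ, and the normalization are illustrative assumptions; they are not prescribed by the description above.

# Illustrative construction of the kernel matrix K_z in eq. (22) from a toy prior image.
import numpy as np

rng = np.random.default_rng(2)
mr = rng.random((16, 16))                               # toy prior MR image
pad = np.pad(mr, 1, mode="edge")
# feature vector f_i: the 3x3 neighbourhood around pixel i, flattened
feats = np.stack([pad[i:i + 3, j:j + 3].ravel()
                  for i in range(16) for j in range(16)])        # (256, 9)

k, sigma = 20, 0.5                                      # illustrative choices
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)      # squared feature distances
K = np.exp(-d2 / (2.0 * sigma ** 2))                             # Gaussian kernel, eq. (22)
# keep only the k nearest neighbours of each pixel (including itself)
nearest = np.argsort(d2, axis=1)[:, :k]
mask = np.zeros_like(K, dtype=bool)
np.put_along_axis(mask, nearest, True, axis=1)
K_z = np.where(mask, K, 0.0)
K_z /= K_z.sum(axis=1, keepdims=True)                            # row-normalize

# the unknown image is represented as x = K_z @ theta; after estimating theta,
# the final output image is x_hat = K_z @ theta_hat
theta_hat = rng.random(256)
x_hat = (K_z @ theta_hat).reshape(16, 16)
print(x_hat.shape)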
In testing, the network training was done on a GPU (NVIDIA GTX 1080 Ti) and the image reconstruction was implemented on a CPU with 16 threads (Intel Xeon E5-2630 v3). The neural network was implemented using TensorFlow 1.4. The L-BFGS algorithm was implemented using the TensorFlow "ScipyOptimizerInterface" to call the "L-BFGS-B" method from the SciPy library running in CPU mode. Ten previous iterations were used in L-BFGS to approximate the Hessian matrix.
A 3D brain phantom from BrainWeb was used in the simulation. The corresponding T1-weighted MR image was used as the prior image. The voxel size was 2×2×2 mm3 and the phantom image size was 128×128×105. The input to the network was cropped to 128×128×96 to reduce GPU memory usage. To simulate mismatches between the MR and PET images, twelve hot spheres of diameter 16 mm were inserted into the PET image as tumor regions, which are not visible in the MR image. The time activity curves of blood, gray matter, white matter and tumor were simulated based on a two-tissue-compartment model. The computer simulation modeled the geometry of a Siemens mCT scanner, and the system matrix was generated using the multi-ray tracing method. In this experiment, the last five-minute frame of a one-hour FDG scan was used as the ground-truth image. Noise-free sinogram data were generated by forward-projecting the ground-truth image using the system matrix and the attenuation map. Poisson noise was then introduced to the noise-free data by setting the total count level to be equivalent to a last-5-min scan with a 5 mCi injection. Uniform random events were simulated and accounted for 30 percent of the noise-free data. Scatters were not included. For quantitative comparison, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves were plotted based on reconstructions of twenty independent and identically distributed (i.i.d.) realizations. The CRC was computed between selected gray matter regions and white matter regions as
CRC = (1/R) Σ_{r=1}^{R} ( ā_r/b̄_r − 1 ) / ( a_true/b_true − 1 )   (24)

Here R is the number of realizations and is set to 20, ā_r = (1/K_a) Σ_{k=1}^{K_a} a_{r,k} is the average uptake over the K_a gray matter ROIs in realization r, and b̄_r is the corresponding average over the white matter background ROIs; a_true and b_true denote the ground-truth gray matter and white matter uptakes. The background standard deviation (STD) was computed as

STD = (1/K_b) Σ_{k=1}^{K_b} sqrt( (1/(R−1)) Σ_{r=1}^{R} ( b_{r,k} − b̄_k )² ) / b̄_k   (25)

where b̄_k = (1/R) Σ_{r=1}^{R} b_{r,k} is the average uptake of the kth background ROI over the realizations and K_b is the number of background ROIs.
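A short helper mirroring the CRC and STD definitions in equations (24) and (25) is given below; the array layout (realizations × ROIs) and the toy numbers are illustrative assumptions used only to show how the two figures of merit are computed.

# Illustrative computation of CRC (eq. 24) and background STD (eq. 25).
import numpy as np

def crc_std(gray, white, a_true, b_true):
    # gray: (R, Ka) ROI means in gray matter; white: (R, Kb) ROI means in white matter
    a_bar = gray.mean(axis=1)                        # average gray-matter uptake per realization
    b_bar = white.mean(axis=1)                       # average white-matter uptake per realization
    crc = np.mean((a_bar / b_bar - 1.0) / (a_true / b_true - 1.0))   # eq. (24)
    b_bar_k = white.mean(axis=0)                     # mean of each background ROI over realizations
    std = np.mean(white.std(axis=0, ddof=1) / b_bar_k)               # eq. (25)
    return crc, std

rng = np.random.default_rng(3)
gray = 4.0 + 0.2 * rng.standard_normal((20, 10))     # R = 20 realizations, Ka = 10 ROIs
white = 1.0 + 0.1 * rng.standard_normal((20, 37))    # Kb = 37 background ROIs
print(crc_std(gray, white, a_true=4.0, b_true=1.0))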
The network structure for the simulated data set is the same as the network structure shown in
In this experiment, a 70-minute dynamic PET scan of a human subject acquired on a Siemens Brain MR-PET scanner after a 5 mCi FDG injection was employed in the real data evaluation. The data were reconstructed with an image array of 256×256×153 and a voxel size of 1.25×1.25×1.25 mm3. The input to the network was cropped to 192×192×128 to reduce GPU memory usage. A simultaneously acquired T1-weighted MR image having the same image array and voxel size as the PET image was used as the prior image. Correction factors for randoms and scatters were estimated using the standard software provided by the manufacturer and included during reconstruction. The motion correction was performed in the line-of-response (LOR) space based on the simultaneously acquired MR navigator signal. Attenuation was derived from the T1-weighted MR image using the SPM-based atlas method. To generate multiple realizations for quantitative analysis, the last 40 minutes of PET data were binned together and resampled with a ⅛ ratio to obtain 20 i.i.d. datasets that mimic 5-minute frames. As the ground truth of the regional uptake is unknown, a hot sphere with a diameter of 12.5 mm, mimicking a tumor, was added to the PET data (invisible in the MR image). This addition was intended to simulate the case where MR and PET information does not match. The TAC of the hot sphere was set to the TAC of the gray matter, so the final TAC of the simulated tumor region is higher than that of the gray matter because of the superposition. The simulated tumor image of the last 40 minutes was forward-projected to generate a set of noise-free sinograms, including detector normalization and patient attenuation. Randoms and scatters from the inserted tumor were not simulated as they would be negligible compared with the scattered and random events from the patient background. Poisson noise was then introduced and finally the tumor sinograms were added to the original patient sinograms to generate the hybrid real data sets. For tumor quantification, images with and without the inserted tumor were reconstructed and the difference was taken to obtain the tumor-only image, which was compared with the ground truth. The tumor contrast recovery (CR) was calculated as

CR = (1/R) Σ_{r=1}^{R} l̄_r / l_true   (26)
where l̄_r is the mean uptake in the tumor region of interest for the rth realization and l_true is the ground-truth tumor uptake.
To test the effectiveness of the conditional DIP framework, an experiment was performed using either uniform random noise or the patient's MR prior image as the network input. The ML EM reconstruction of the real brain data at the 60th iteration was treated as the label image. Three hundred epochs were run for network training using the L-BFGS algorithm. Referring now to
When the input is random noise (e.g., in
The behaviors of different optimization algorithms under the conditional DIP framework were also compared. The Adam, NAG, and L-BFGS algorithms were compared regarding the pre-training process using the real brain data. When comparing different algorithms, the normalized cost value was computed, which is defined as

( ϕ(θ^n) − ϕ_Adam^ref ) / ( ϕ_Adam^1 − ϕ_Adam^ref )   (27)

where ϕ_Adam^ref and ϕ_Adam^1 are the cost values defined in (20) after running Adam for seven hundred iterations and one iteration, respectively.
Referring now to
A simulation was performed to reconstruct PET images for three MR images having different orthogonal views. The techniques compared were the EM-plus-filter method, the kernel method, the CNNPenalty method, and the technique of training the neural network 500 using the process 600. The kernel method and the neural network 500 trained using the process 600 both generated images with more cortex structures and lower noise compared to the EM-plus-filter method and the CNNPenalty method. Compared with the kernel method, the neural network 500 trained using the process 600 recovered even more details of the cortices, and the white matter regions were cleaner. Furthermore, compared to the kernel method, the tumor uptake using the neural network 500 trained using the process 600 is higher and the tumor shape is closer to the ground truth. In this simulation set-up, there are no tumor signals in the prior MR image, and the neural network 500 trained using the process 600 can still recover the PET signal, which is a sign of robustness to potential mismatches between PET and prior images. In addition, comparing the CNNPenalty method with the neural network 500 trained using the process 600 shows that updating the network parameters inside the reconstruction loop can recover more brain structures and further reduce the image noise relative to the other reconstruction methods. Referring now to
For the real brain dataset, high-count images were reconstructed from the combined 40-min scanning for reference. Compared to the EM-plus-filter method and the CNNPenalty method, the kernel method and the neural network 500 trained using the process 600 can recover more cortex details. Additionally, image noise in the white matter is reduced. The cortex shape using the neural network 500 trained using the process 600 is clearer than the kernel method. For the tumor region which is unobserved in the MR image, the uptake is higher in the neural network 500 trained using the process 600 as compared with the kernel method.
Referring now to
At 1104, the process 1100 can receive a first patient image. In some configurations, the first patient image can be an MR image associated with the patient. In some configurations, the first patient image can be a T1-weighted MR image. In some configurations, the first patient image can be a CT image associated with the patient. In some configurations, the first patient image can be the input image 404 in
At 1108, the process 1100 can receive a second patient image. In some configurations, the second patient image can be a noisy PET image associated with the patient. In some configurations, the noisy PET image can be previously generated using the maximum likelihood expectation maximization (MLEM) algorithm running a predetermined number of iterations (e.g., forty iterations). In some configurations, the second patient image can be previously acquired concurrently and/or simultaneously along with the first patient image.
At 1112, the process 1100 can train the model based on the first patient image and the second patient image. Again, in some configurations, the first patient image and the second patient image may be images produced by differing imaging modalities. As described above, in some configurations, the model can include the neural network 500 in
At 1116, the process 1100 can provide the first patient image to the model. At 1116, the model is trained and can be referred to as a trained model.
At 1120, the process 1100 can receive a denoised PET image from the model. In some configurations, the model can generate the denoised PET image based on the first patient image (e.g., an MR image).
At 1124, the process 1100 can output the denoised PET image to at least one of a memory or a display. In some configurations, at 1124, the process 1100 can cause the denoised PET image to be displayed on a display (e.g., the display 108). In some configurations, at 1124, the process 1100 can generate a report based on the denoised PET image. In some configurations, at 1124, the process 1100 can output the report to at least one of a memory or a display. In some configurations, at 1124, the process 1100 can cause the report to be displayed on a display (e.g., the display 108).
It is understood that at least some of the steps included in the process 1100 can be rearranged and/or executed in parallel.
In testing, denoised PET images generated in accordance with the process 1100 were compared to a number of other denoising techniques. In a simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the process 1100 achieved the best performance regarding the bias-variance tradeoff. For testing on a clinical PET/CT dataset, the process 1100 achieved the highest CNR improvement ratio (53.35%±21.78%), as compared with the Gaussian (12.64%±6.15%, P=0.002), NLM guided by CT (24.35%±16.30%, P=0.002), BM4D (38.31%±20.26%, P=0.002), and Deep Decoder (41.67%±22.28%, P=0.002) methods. For testing on a clinical PET/MR dataset, the CNR improvement ratio of the proposed method achieved 46.80%±25.23%, higher than the Gaussian (18.16%±10.02%, P<0.0001), NLM guided by MR (25.36%±19.48%, P<0.0001), BM4D (37.02%±21.38%, P<0.0001), and Deep Decoder (30.03%±20.64%, P<0.0001) methods.
Referring now to
The flow 1200 can include determining whether stop criteria have been reached 1216. In some configurations, the flow 1200 can train the model 1208 for a predetermined number of epochs before determining that the stop criteria have been reached. For example, the stop criteria can be three hundred epochs. If the stop criteria have not been reached (e.g., "NO" at 1216), the flow 1200 can proceed to loss function calculation 1220. In some configurations, the loss function can be an L2 loss function calculated between a training label 1224 and the generated image 1212. The flow 1200 can include calculating a loss function value based on a predetermined loss function. The loss function value can be calculated based on the loss function, the generated image 1212, a physical module 1236 (e.g., an identity matrix), and the training label 1224.
The flow 1200 can include updating model parameters 1228 of the model 1208. For example, the flow 1200 can adjust weights of the model 1208. Once the model has been trained using the same input image 1204 and the same training label 1224 until the stop criteria have been reached (e.g., at 1216), the flow 1200 can output the last generated image 1212 generated by the model 1208 as a final image 1232. In some configurations, the final image 1232 can be a reconstructed image and/or a denoised image. Specifics of the flow will be discussed below.
Specifically, an unsupervised deep learning approach for generating and/or reconstructing arterial spin labeling (ASL) MR images is described. To train the network, a T1-weighted anatomical image can be used as the network input, with a noisy ASL image as the training label for the denoising task, or k-space data as the training label for image reconstruction from sparsely sampled data.
For MR inverse problems such as image denoising and reconstruction, the measured data y ∈ ℝ^{M×1} can be written as

y = Ax + w   (28)

where A ∈ ℝ^{M×N} is the transformation matrix, x ∈ ℝ^{N×1} is the unknown image to be estimated, w ∈ ℝ^{M×1} is the noise, M is the size of the measured data, and N is the number of pixels in image space. Supposing y is independent and identically distributed (i.i.d.) and the ith element of y, y_i, follows a distribution p(y_i|x), x can be estimated based on the maximum likelihood framework as

x̂ = arg max_x Σ_{i=1}^{M} log p(y_i|x)   (29)
The maximum-likelihood solution of equation (29) can be prone to noise and/or undersampling artifacts. The image x can be nonlinearly represented by a deep neural network as
x=f(θ|z) (30)
where f represents the neural network, θ denotes the unknown parameters of the neural network, and z denotes the prior image which is the input to the neural network. This representation can exploit multiple-contrast prior images, simply by increasing the number of network-input channels. With x substituted by the neural network representation in equation (30), the original data model in equation (28) becomes
y=Af(θ|z)+w (31)
where θ are the only unknown parameters to be determined. The task of recovering the unknown image x from the measured data y is translated to finding an optimal set of network parameters θ that maximize the likelihood function:
θ̂ = arg max_θ Σ_{i=1}^{M} log p(y_i|f(θ|z)) (32)
Once θ̂ is determined, the reconstructed image x̂ is simply the output of the neural network, x̂ = f(θ̂|z).
For denoising applications, the transformation matrix 1236 becomes an identity matrix and y is the noisy image. The image noise w can be assumed to follow an i.i.d. Gaussian distribution, and the maximum likelihood estimation in equation (32) can be explicitly written as
θ̂ = arg min_θ ||y − f(θ|z)||_2^2 (33)
Optimization of equation (33) is a network training problem with the L2 norm as the loss function. A unique advantage of this framework is that only the measured data from the subject is employed in the training process, with no additional training pairs needed. This can significantly reduce the training data size and thus allows using high-order optimization methods to train the network with better convergence properties. The L-BFGS algorithm, a quasi-Newton method that uses a history of updates to approximate the Hessian matrix, can be used to solve equation (33). Compared to the commonly used first-order stochastic algorithms, L-BFGS is a monotonic algorithm and is preferred for solving equation (33) due to its stability and better performance as described above.
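A minimal sketch of solving equation (33) with L-BFGS in PyTorch might look like the following, assuming real-valued tensors (e.g., a two-channel real/imaginary representation), an arbitrary network net, a prior image z, and a noisy image y; all names are illustrative.

```python
import torch

def dip_denoise_lbfgs(net, z, y, max_iter=200):
    """Minimize ||y - f(theta|z)||_2^2 over the network parameters theta (equation (33))
    using L-BFGS; only the subject's own data are used, with no external training pairs."""
    optimizer = torch.optim.LBFGS(net.parameters(), max_iter=max_iter,
                                  history_size=20, line_search_fn="strong_wolfe")

    def closure():
        optimizer.zero_grad()
        loss = torch.sum((net(z) - y) ** 2)  # L2 data-fidelity loss
        loss.backward()
        return loss

    optimizer.step(closure)                  # L-BFGS iterates inside a single step() call
    with torch.no_grad():
        return net(z)                        # denoised image: the network output at the solution
```

Because the data set here is a single image pair, each L-BFGS iteration sees the full (non-stochastic) gradient, which is what makes a monotonic quasi-Newton method practical in this setting.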
For image reconstruction, y is the measured k-space data and A is the forward model of imaging, which can be written as
A=ΩFS (34)
where Ω is the k-space sampling mask, F denotes the Fourier transform, and S represents the coil sensitivities. Assuming i.i.d. Gaussian noise in the k-space data, the maximum likelihood estimation in equation (32) becomes
θ̂ = arg min_θ ||ΩFS f(θ|z) − y||_2^2 (35)
Directly training the neural network (estimating θ) using the loss function in equation (35) can be time consuming because the forward operator A and its adjoint are often computationally expensive and the neural network training needs more update steps compared to traditional iterative image reconstruction. Instead, the alternating direction method of multipliers (ADMM) can be used to separate the reconstruction and network training steps. Another advantage is that this separation allows direct use of penalized image reconstruction methods at the image reconstruction step. An auxiliary variable x can be introduced to convert (35) to the constrained format below
min ||ΩFSx − y||_2^2  s.t.  x = f(θ|z) (36)
The augmented Lagrangian format for the constrained optimization problem in (36) is:
where γ is the dual variable and ρ is the penalty parameter. It can be solved by the ADMM algorithm iteratively in three steps:
θ^{n+1} = arg min_θ ||f(θ|z) − (x^n + μ^n)||_2^2 (38)
x^{n+1} = arg min_x ½||ΩFSx − y||_2^2 + (ρ/2)||x − f(θ^{n+1}|z) + μ^n||_2^2 (39)
μ^{n+1} = μ^n + x^{n+1} − f(θ^{n+1}|z) (40)
where μ = (1/ρ)γ is the scaled dual variable. With the ADMM algorithm, the original constrained optimization problem is decoupled into a network training problem (38) and a penalized reconstruction problem (39). At each iteration, the parameters of the neural network θ^{n+1} are estimated by minimizing the L2 norm difference between the output of the network f(θ|z) and the training label (x^n + μ^n). The components of the training label, x and μ, are then updated consecutively by solving subproblems (39) and (40), respectively. The L-BFGS algorithm can be used to solve subproblem (38). By setting the first order derivative to zero, the normal equation for subproblem (39) can be expressed as
[(ΩFS)^H ΩFS + ρI] x = (ΩFS)^H y + ρ[f(θ^{n+1}|z) − μ^n] (41)
The preconditioned conjugate gradient (PCG) algorithm can be used to solve the above normal equation (41).
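Under the same assumptions (real-valued tensors; A and AH supplied as callables implementing ΩFS and its adjoint), a sketch of the alternation in equations (38)-(40), with the normal equation (41) solved by a plain conjugate gradient standing in for PCG, might look like the following; all interfaces are illustrative.

```python
import torch

def conjugate_gradient(op, b, x0, n_iter=10):
    """Plain conjugate gradient for the normal equation (no preconditioner)."""
    x = x0.clone()
    r = b - op(x)
    p = r.clone()
    rs_old = torch.sum(r * r)
    for _ in range(n_iter):
        Ap = op(p)
        alpha = rs_old / torch.sum(p * Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.sum(r * r)
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def admm_mr_recon(net, z, y, A, AH, rho=0.05, n_outer=20, cg_iter=10):
    """Alternate the network training step (38), the penalized reconstruction step (39)
    via the normal equation (41), and the dual update (40)."""
    x = AH(y)                                # initial image estimate
    mu = torch.zeros_like(x)                 # scaled dual variable
    for _ in range(n_outer):
        label = (x + mu).detach()            # training label x^n + mu^n
        opt = torch.optim.LBFGS(net.parameters(), max_iter=50)

        def closure():
            opt.zero_grad()
            loss = torch.sum((net(z) - label) ** 2)
            loss.backward()
            return loss

        opt.step(closure)                    # step (38)
        with torch.no_grad():
            f_out = net(z)
        b = AH(y) + rho * (f_out - mu)       # right-hand side of (41)
        x = conjugate_gradient(lambda v: AH(A(v)) + rho * v, b, x, n_iter=cg_iter)  # step (39)
        mu = mu + x - f_out                  # step (40)
    return f_out                             # reconstructed image f(theta|z)
```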
Referring now to
In some configurations, the neural network 1300 can receive an input image 1302. In some configurations, the input image 1302 can be a single channel three dimensional image (e.g., an MR image of a patient). In some configurations, the neural network 1300 can generate and output a predicted image 1304 (e.g., a denoised ASL image). The predicted image 1304 can be a multi-channel three dimensional image (e.g., a two-channel image). For an ASL image, the two channels can represent the real and imaginary parts of the ASL image. In some configurations, the neural network 1300 can include a number of feature maps coupled together by a combination of different layers. In some configurations, the neural network 1300 can include a first feature map 1306, a second feature map 1308, a third feature map 1310, a fourth feature map 1312, a fifth feature map 1314, a sixth feature map 1316, a seventh feature map 1318, an eighth feature map 1320, a ninth feature map 1322, a tenth feature map 1324, an eleventh feature map 1326, a twelfth feature map 1328, a thirteenth feature map 1330, a fourteenth feature map 1332, a fifteenth feature map 1334, a sixteenth feature map 1336, a seventeenth feature map 1338, an eighteenth feature map 1340, and a nineteenth feature map 1342.
In some configurations, the feature maps 1306-1342 can be the same as the feature maps 506-542 in the neural network 500 with a few differences. In some configurations, the first feature map 1306 can include twelve channels, the second feature map 1308 can include twelve channels, the fourth feature map 1312 can include twenty-four channels, the fifth feature map 1314 can include twenty-four channels, the seventh feature map 1318 can include forty-eight channels, the eighth feature map 1320 can include forty-eight channels, the tenth feature map 1324 can include ninety-six channels, the eleventh feature map 1326 can include ninety-six channels, the twelfth feature map 1328 can include forty-eight channels, the thirteenth feature map 1330 can include forty-eight channels, the fifteenth feature map 1334 can include twenty-four channels, the sixteenth feature map 1336 can include twenty-four channels, and the eighteenth feature map 1340 can include twelve channels. In some configurations, the nineteenth feature map 1342 can include twice as many channels as the eighteenth feature map 1340 (e.g., twenty-four channels) to generate the two-channel predicted image 1304.
As described above, the feature maps 1306-1342 can be coupled together using a number of different layers. The layers may be arranged into groups of common layer arrangements. For example, a layer group can include a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the neural network 1300 can include a first layer group 1348, a second layer group 1350, a third layer group 1352, a fourth layer group 1354, a fifth layer group 1356, a sixth layer group 1358, a seventh layer group 1360, an eighth layer group 1362, a ninth layer group 1364, a tenth layer group 1366, an eleventh layer group 1368, a twelfth layer group 1370, a thirteenth layer group 1372, a fourteenth layer group 1374, a fifteenth layer group 1376, a sixteenth layer group 1378, a seventeenth layer group 1380, an eighteenth layer group 1382, a nineteenth layer group 1384, a twentieth layer group 1386, a twenty-first layer group 1388, a twenty-second layer group 1390, and a twenty-third layer group 1392. In some configurations, the layer groups 1348-1392 can include the same layer arrangements as the layer groups 548-592 in the neural network 500.
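As a simplified, self-contained stand-in for the neural network 1300 (no skip connections; the pooling and upsampling choices are assumptions, since the disclosure defers those details to the neural network 500), the channel progression of feature maps 1306-1342 could be sketched as follows.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Layer group: 3D convolution (stride 1), batch normalization, leaky ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class EncoderDecoder3D(nn.Module):
    """Simplified 3D encoder-decoder following the 12-24-48-96-48-24-12 channel
    progression described above, mapping a one-channel prior image to a
    two-channel (real/imaginary) predicted image. The final mapping to the two
    output channels is simplified here."""
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1 = nn.Sequential(conv_block(in_ch, 12), conv_block(12, 12))
        self.enc2 = nn.Sequential(conv_block(12, 24), conv_block(24, 24))
        self.enc3 = nn.Sequential(conv_block(24, 48), conv_block(48, 48))
        self.bottom = nn.Sequential(conv_block(48, 96), conv_block(96, 96))
        self.dec3 = nn.Sequential(conv_block(96, 48), conv_block(48, 48))
        self.dec2 = nn.Sequential(conv_block(48, 24), conv_block(24, 24))
        self.dec1 = conv_block(24, 12)
        self.head = nn.Conv3d(12, out_ch, kernel_size=3, padding=1)
        self.down = nn.MaxPool3d(2)          # assumed downsampling
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)

    def forward(self, x):
        x = self.down(self.enc1(x))
        x = self.down(self.enc2(x))
        x = self.down(self.enc3(x))
        x = self.bottom(x)
        x = self.dec3(self.up(x))
        x = self.dec2(self.up(x))
        x = self.dec1(self.up(x))
        return self.head(x)
```

An instance such as EncoderDecoder3D() could serve as the network in the training sketches above, provided the input volume dimensions are divisible by eight.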
Referring now to
At 1404, the process 1400 can receive a first patient image. In some configurations, the first patient image can be an MR image associated with the patient. In some configurations, the first patient image can be a T1-weighted multi-contrast MR image. In some configurations, the first patient image can be the input image 1204 in
At 1408, the process 1400 can receive a second patient image. In some configurations, the second patient image can be a noisy ASL image associated with the patient. In some configurations, the second patient image can be sparsely sampled data (e.g., k-space data). In some configurations, the second patient image can have been previously acquired concurrently and/or simultaneously with the first patient image.
At 1412, the process 1400 can train the model based on the first patient image and the second patient image. As described above, in some configurations, the model can include the neural network 1300 in
In configurations where the process 1400 trains the model to denoise an image, the second patient image can be a noisy ASL image. In some configurations, the process 1400 can iteratively provide the first patient image to the model, receive an output image from the model, and update the model parameters based on the output image and the noisy ASL image (e.g., the second patient image). In some configurations, the process 1400 can update the model by solving equation (33) using L-BFGS as described above.
In configurations where the process 1400 trains the model to reconstruct an image, the second patient image can be sparsely sampled data (e.g., k-space data). In some configurations, the process 1400 can iteratively provide the first patient image to the model, receive an output image from the model, and update the model parameters based on the output image and the sparsely sampled data (e.g., the second patient image). In some configurations, the process 1400 can update the model by solving equations (38)-(40) as described above.
At 1416, the process 1400 can provide the first patient image to the model. At 1416, the model is trained and can be referred to as a trained model.
At 1420, the process 1400 can receive a third patient image from the model. The third patient image may be an image associated with an imaging modality that differs from that of the first patient image or the second patient image. The third patient image can be a denoised ASL image if the second patient image was a noisy ASL image. The third patient image can be a reconstructed ASL image if the second patient image was sparsely sampled data.
At 1424, the process 1400 can output the third patient image to at least one of a memory or a display. In some configurations, at 1424, the process 1400 can cause the third patient image to be displayed on a display (e.g., the display 108). In some configurations, at 1424, the process 1400 can generate a report based on the third patient image. In some configurations, at 1424, the process 1400 can output the report to at least one of a memory or a display. In some configurations, at 1424, the process 1400 can cause the report to be displayed on a display (e.g., the display 108). It is understood that at least some of the steps included in the process 1400 can be rearranged and/or executed in parallel.
Referring now to
In some configurations, the neural network 1500 can include a number of feature maps coupled together by a combination of different layers. In some configurations, the neural network 1500 can include a first feature map 1506, a second feature map 1508, a third feature map 1510, a fourth feature map 1512, a fifth feature map 1514, a sixth feature map 1516, a seventh feature map 1518, an eighth feature map 1520, a ninth feature map 1522, a tenth feature map 1524, an eleventh feature map 1526, a twelfth feature map 1528, a thirteenth feature map 1530, a fourteenth feature map 1532, a fifteenth feature map 1534, a sixteenth feature map 1536, a seventeenth feature map 1538, an eighteenth feature map 1540, and a nineteenth feature map 1542.
As described above, the feature maps 1506-1542 can be coupled together using a number of different layers. The layers may be arranged into groups of common layer arrangements. For example, a layer group can include a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. In some configurations, the neural network 1500 can include a first layer group 1548, a second layer group 1550, a third layer group 1552, a fourth layer group 1554, a fifth layer group 1556, a sixth layer group 1558, a seventh layer group 1560, an eighth layer group 1562, a ninth layer group 1564, a tenth layer group 1566, an eleventh layer group 1568, a twelfth layer group 1570, a thirteenth layer group 1572, a fourteenth layer group 1574, a fifteenth layer group 1576, a sixteenth layer group 1578, a seventeenth layer group 1580, an eighteenth layer group 1582, a nineteenth layer group 1584, a twentieth layer group 1586, a twenty-first layer group 1588, a twenty-second layer group 1590, and a twenty-third layer group 1592.
In some configurations, the feature maps 1506-1542 can be the same as the feature maps 506-542 in the neural network 500 with a few differences. In some configurations, the neural network 1500 can include a non-local layer 1594 coupled to the eighteenth feature map 1540 by a twenty-fourth layer group 1596 including a convolutional layer (e.g., a convolutional layer with a stride length of 1), a BN layer, and an LReLU layer. The non-local layer 1594 can also be coupled to the nineteenth feature map 1542 by the twenty-third layer group 1592. In some configurations, the nineteenth feature map 1542 can include twice as many channels as the eighteenth feature map 1540 (e.g., twenty-four channels) to generate the two-channel predicted image 1504.
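As described further below, the non-local layer 1594 can be pre-calculated from a prior image (such as the sum of the low- and high-energy CT images) using a Gaussian weighting function. A rough sketch of such a pre-computation is given here; the intensity-based (rather than patch-based) distance, window size, and per-voxel normalization are assumptions not specified in the disclosure.

```python
import numpy as np

def gaussian_nonlocal_weights(prior, sigma=0.1, window=3):
    """Pre-compute Gaussian similarity weights from a 3D prior image.

    Each voxel receives a normalized weight toward every neighbor in a
    (2*window+1)^3 cube, w = exp(-(prior_i - prior_j)^2 / (2*sigma^2))."""
    offsets = [(dz, dy, dx)
               for dz in range(-window, window + 1)
               for dy in range(-window, window + 1)
               for dx in range(-window, window + 1)
               if (dz, dy, dx) != (0, 0, 0)]
    weights = []
    for dz, dy, dx in offsets:
        shifted = np.roll(prior, shift=(dz, dy, dx), axis=(0, 1, 2))
        weights.append(np.exp(-((prior - shifted) ** 2) / (2.0 * sigma ** 2)))
    weights = np.stack(weights, axis=0)
    return weights / weights.sum(axis=0, keepdims=True)   # normalize per voxel
```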
For DECT scans, sinogram data yl from low energy (typically 80 kVp) and yh from high energy (typically 140 kVp) are acquired simultaneously. Traditionally, the low energy image xl and the high energy image xh are reconstructed from yl and yh independently based on the Poisson model
yi ~ Poisson{ bi exp(−[Ax]i) + ri } (42)
where A is the system matrix, bi denotes the blank scan for detector i, and ri is the mean value of background noise for detector i. xl is noisier than xh due to higher attenuation coefficients at 80 kVp compared to 140 kVp. For clinical diagnosis, the difference image xd = xl − xh is important as it contains the injected contrast agent information, such as iodine concentration. Typically, xd is obtained after reconstructing xl and xh independently, and is noisy in low-dose scenarios. Various methods have been proposed for DECT image reconstruction.
Supervised deep learning approaches have been applied to medical image reconstruction, but these approaches require a large number of high-quality training images with k-space data or sinograms also available, which are difficult to acquire or obtain. The deep image prior (DIP) framework shows that convolutional neural networks (CNNs) can learn intrinsic structural information from corrupted images. In the DIP framework, random noise is used as the network input and no high-quality training labels are needed. Furthermore, it has been shown that when the network input is not random noise but a high-quality prior image from the same subject, the denoising results can be further improved. The noisier images, xl and xd, can be represented as the two-channel output of a neural network as
xl = f(θ|α)_1, xd = f(θ|α)_2 (43)
where f represents the neural network (e.g., the neural network 1500), θ stands for the network trainable parameters, α is the network input, and the subscript of f( ) denotes the output channel number. Here xl+xh is employed as the network input α as it has lower noise but preserves all image structures. To further utilize the prior information, the non-local layer 1594 can function as an additional constraint. The non-local layer 1594 can be pre-calculated based on xl+xh using a Gaussian weighting function. After substituting xl and xh = xl − xd with the neural network representation in (43) and expressing the objective function in a constrained format, the original dual-energy CT reconstruction model can be rewritten as
max L(yl|xl) + L(yh|xh)  s.t.  xl = f(θ|α)_1, xh = f(θ|α)_1 − f(θ|α)_2 (44)
which can be solved by the ADMM algorithm iteratively in three steps. The first step is a network training problem, which can be solved using the L-BFGS algorithm due to its monotonic property. The second step is a penalized CT reconstruction, which can be solved using the separable quadratic surrogate (SQS) algorithm. Note that xl and xh can be updated independently in this step. The third step is the update of the dual variables μl and μh.
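A rough sketch of this three-step alternation is shown below. The penalized CT update is abstracted behind a hypothetical penalized_ct_solver callable that is assumed to encapsulate the sinogram data, system model, and SQS iterations; the sketch also uses the relation xh = xl − xd discussed above, and all interfaces are assumptions rather than the disclosed implementation.

```python
import torch

def dect_dip_admm(net, alpha, init_low, init_high, penalized_ct_solver,
                  rho=0.05, n_outer=20):
    """Alternate network training, penalized CT reconstruction, and dual updates
    for DECT, with the network outputting (x_l, x_d) as two channels."""
    x_l, x_h = init_low.clone(), init_high.clone()
    mu_l, mu_h = torch.zeros_like(x_l), torch.zeros_like(x_h)
    for _ in range(n_outer):
        # Step 1: network training (L-BFGS) against both constrained outputs.
        label_l, label_h = (x_l + mu_l).detach(), (x_h + mu_h).detach()
        opt = torch.optim.LBFGS(net.parameters(), max_iter=30)

        def closure():
            opt.zero_grad()
            out = net(alpha)                          # channels: x_l and x_d
            f_l, f_d = out[:, 0:1], out[:, 1:2]
            loss = torch.sum((f_l - label_l) ** 2) \
                 + torch.sum((f_l - f_d - label_h) ** 2)
            loss.backward()
            return loss

        opt.step(closure)
        with torch.no_grad():
            out = net(alpha)
            f_l = out[:, 0:1]
            f_h = out[:, 0:1] - out[:, 1:2]
        # Step 2: penalized CT reconstructions, independent for x_l and x_h.
        x_l = penalized_ct_solver(x_l, prior=f_l - mu_l, rho=rho)
        x_h = penalized_ct_solver(x_h, prior=f_h - mu_h, rho=rho)
        # Step 3: dual updates.
        mu_l = mu_l + x_l - f_l
        mu_h = mu_h + x_h - f_h
    with torch.no_grad():
        return net(alpha)                             # final (x_l, x_d) estimate
```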
Referring now to
At 1604, the process 1600 can receive a first patient image. In some configurations, the first patient image can be a single channel three dimensional CT image. The single channel three dimensional CT image can be a summation of low energy and high energy CT images reconstructed using filtered back projection (FBP). In some configurations, the first patient image can be a single channel image having pixel intensities normalized to a range of [0, 1].
At 1608, the process 1600 can receive a second patient image. In some configurations, the second patient image can be a two channel three dimensional DECT image. The DECT image can include a low energy image (e.g., as a first channel) and a difference image (e.g., as a second channel). In some configurations, the second patient image can have been previously acquired concurrently and/or simultaneously with the first patient image.
At 1612, the process 1600 can train the model based on the first patient image and the second patient image. As described above, in some configurations, the model can include the neural network 1500 in
At 1616, the process 1600 can provide the first patient image to the model. At 1616, the model is trained and can be referred to as a trained model. At 1620, the process 1600 can receive a third patient image from the model. The third patient image can be a reconstructed DECT image. The reconstructed DECT image can include a low energy image (e.g., as a first channel) and a difference image (e.g., as a second channel).
At 1624, the process 1600 can output the third patient image to at least one of a memory or a display. In some configurations, at 1624, the process 1600 can cause the third patient image to be displayed on a display (e.g., the display 108). In some configurations, at 1624, the process 1600 can generate a report based on the third patient image. In some configurations, at 1624, the process 1600 can output the report to at least one of a memory or a display. In some configurations, at 1624, the process 1600 can cause the report to be displayed on a display (e.g., the display 108). It is understood that at least some of the steps included in the process 1600 can be rearranged and/or executed in parallel.
Referring back to
where CROI(t) and Cref(t) are the radioactivity concentrations in the ROI and the reference tissue at time t, respectively. After the equilibrium time t*, the plot becomes linear. The slope of the linear part is the distribution volume ratio (DVR), which is related to the binding potential (BP) by BP = DVR − 1. In testing, the reference region was drawn in the muscle. For convenience, the N×1 vectors ym and xm are used to represent the voxel-wise Logan-plot ordinate and abscissa values in the mth frame in the rest of this description. The optimization problem for parametric image reconstruction can be written as
min_K ½||Y − PK||_2^2 (48)
where Y∈I×1={ym|m>m*}, P∈I×2N={pm|pm=[diag(xm), IN], m>m*}, and K∈2N×1=[DVR′, V′]′. Here m* stands for the equilibrium frame and IN is an N×N identity matrix. Inspired by the deep image prior framework, which shows that a CNN can learn intrinsic structure information from the corrupted image, and its related works, a constraint is introduced by representing the unknown parametric image DVR by
DVR = f(θ|z) (49)
where f represents the neural network, θ are the unknown parameters of the neural network, z denotes the neural network input (e.g., the input image 502) which comes from CT images of the same patient. With this constraint, the optimization problem for parametric image reconstruction can be written as
minimize ½||Y − PK||_2^2
subject to DVR = f(θ|z) (50)
The augmented Lagrangian format of (50) can be represented as
The augmented Lagrangian optimization problem in (51) can be solved by the ADMM algorithm in three steps:
θ^{n+1} = arg min_θ ||f(θ|z) − (DVR^n + μ^n)||_2^2 (52)
K^{n+1} = arg min_K ½||Y − PK||_2^2 + (ρ/2)||DVR − f(θ^{n+1}|z) + μ^n||_2^2 (53)
μ^{n+1} = μ^n + DVR^{n+1} − f(θ^{n+1}|z) (54)
The subproblem for θ in (52) is a network training problem with L2 norm as loss function. The network structure employed in this problem can be the neural network 500 in
Thus, subproblem (53) has the following surrogate function
In the surrogate function (56), the penalty term in the subproblem (53) is written as a function of Kj by using a conditional parameter ρj. f(θ^{n+1}|z) and μ^n are replaced by the 2N×1 vectors F^{n+1} = [f(θ^{n+1}|z)′, 0′]′ and U^n = [μ^{n}′, 0′]′, respectively. The resulting iterative update equation for Kj is
where m = 1, . . . , M+1 indexes the sub-iterations. DVR_j^{n+1} is set to the non-negative part of the final sub-iterate, [K_j^{M+1}]_+ (j = 1, . . . , N). In some configurations, to solve equations (52)-(54), equation (52) can be solved over ten iterations, and equation (53) can be solved over two iterations.
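For orientation, a conventional voxel-wise (indirect) Logan estimate of DVR can be computed as below; this is a simplified least-squares fit rather than the constrained ADMM reconstruction described above, and the trapezoidal integration and indexing conventions are assumptions.

```python
import numpy as np

def logan_variables(roi_tac, ref_tac, frame_times):
    """Voxel-wise Logan reference-plot ordinate and abscissa.

    roi_tac: (M, N) activity per frame per voxel; ref_tac: (M,) reference activity;
    frame_times: (M,) frame mid-times. Returns (M-1, N) arrays y and x."""
    dt = np.diff(frame_times)
    cum_roi = np.cumsum(0.5 * (roi_tac[1:] + roi_tac[:-1]) * dt[:, None], axis=0)
    cum_ref = np.cumsum(0.5 * (ref_tac[1:] + ref_tac[:-1]) * dt)
    y = cum_roi / roi_tac[1:]                 # Logan ordinate per voxel
    x = cum_ref[:, None] / roi_tac[1:]        # Logan abscissa per voxel
    return y, x

def dvr_by_least_squares(y, x, m_star):
    """Slope (DVR) and intercept per voxel from the linear part (frames after m_star)."""
    y, x = y[m_star:], x[m_star:]
    x_mean, y_mean = x.mean(axis=0), y.mean(axis=0)
    slope = ((x - x_mean) * (y - y_mean)).sum(axis=0) / ((x - x_mean) ** 2).sum(axis=0)
    intercept = y_mean - slope * x_mean
    return slope, intercept                   # binding potential BP = DVR - 1
```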
Referring now to
At 1704, the process 1700 can receive a first patient image. In some configurations, the first patient image can be a single channel three dimensional CT image. The single channel three dimensional CT image can be a low-dose CT image. In some configurations, the first patient image can be a single channel image having pixel intensities normalized to a range of [0, 1].
At 1708, the process 1700 can receive a second patient image. In some configurations, the second patient image can be a reconstructed PET image. In some configurations, the reconstructed PET image can be reconstructed using 3-dimensional ordered subset expectation maximization (3D-OSEM). For example, PET images can be acquired with a multi-bed-position dynamic scan and reconstructed using 3D-OSEM with three iterations and twenty-one subsets.
At 1712, the process 1700 can train the model based on the first patient image and the second patient image. As described above, in some configurations, the model can include the neural network 500 in
At 1716, the process 1700 can provide the first patient image to the model. At 1716, the model is trained and can be referred to as a trained model. At 1720, the process 1700 can receive a third patient image from the model. The third patient image can be a parametric PET image. The parametric PET image can be a distribution volume ratio (DVR) image generated by the trained model.
At 1724, the process 1700 can output the third patient image to at least one of a memory or a display. In some configurations, at 1724, the process 1700 can cause the third patient image to be displayed on a display (e.g., the display 108). In some configurations, at 1724, the process 1700 can generate a report based on the third patient image. In some configurations, at 1724, the process 1700 can output the report to at least one of a memory or a display. In some configurations, at 1724, the process 1700 can cause the report to be displayed on a display (e.g., the display 108). It is understood that at least some of the steps included in the process 1700 can be rearranged and/or executed in parallel.
The present invention has been described in terms of one or more preferred configurations, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.
This application is based on, claims the benefit of, and claims priority to, U.S. Provisional Patent Application No. 62/906,597, filed Sep. 26, 2019, which is hereby incorporated herein by reference in its entirety for all purposes.
The invention was made with government support under R01AG052653, P41EB022544, C06CA059267, P41EB022544, T32EB013180, R01HL118261, R01HL137230, and grant R01CA165221 awarded by the National Institutes of Health. The government has certain rights in this invention.
Filing Document: PCT/US2020/053166; Filing Date: 9/28/2020; Country: WO.
Related Provisional Application: No. 62/906,597; Date: Sep. 2019; Country: US.