DEEP-LEARNING BASED SEPARATION METHOD OF A MIXTURE OF DUAL-TRACER SINGLE-ACQUISITION PET SIGNALS WITH EQUAL HALF-LIVES

Information

  • Patent Application Publication
  • Publication Number: 20200037974
  • Date Filed: December 07, 2018
  • Date Published: February 06, 2020
Abstract
The present invention discloses a DBN based separation method of a mixture of dual-tracer single-acquisition PET signals labelled with the same isotope. It predicts the two separate PET signals by establishing a complex mapping relationship between the dynamic mixed concentration distribution of a same-isotope-labeled dual-tracer pair and the two single-radiotracer concentration images. Based on compartment models and Monte Carlo simulation, the present invention selects three sets of tracer pairs labeled with the same radionuclide as study objects and simulates the entire PET process, from injection to scanning, to generate sufficient training and testing sets. When the testing sets are input into the constructed universal deep belief network trained on the training sets, the prediction results show that the two individual PET signals can be reconstructed well, which verifies the effectiveness of using the deep belief network to separate dual-tracer PET signals labelled with the same isotope.
Description
FIELD OF TECHNOLOGY

The present invention belongs to the field of PET (positron emission tomography) imaging, and relates to a dual-tracer PET separation method based on Deep Belief Networks (DBN) for tracer pairs labelled with the same isotope.


BACKGROUND TECHNOLOGY

Positron emission tomography (PET) is a functional medical imaging modality used to detect physiochemical activity in living bodies. The technology is often used to measure physiological indexes (e.g. glucose metabolism, blood flow, and hypoxia) of certain body parts. Imaging different functions in different parts of the living body mainly relies on various radioactive tracers. Researchers replace one or more atoms in a chemical compound (e.g. glucose, protein, nucleic acid) with a radionuclide (most commonly 11C, 13N, 15O, or 18F) to make radioactive tracers. Using bolus or continuous infusion protocols, a tracer concentrates in certain parts of the living body, referred to as Regions of Interest (ROIs), where it decays at a certain rate. Each decay emits a positron, which travels a short distance before annihilating with an electron. This annihilation produces two high-energy (511 keV) γ photons propagating in nearly opposite directions. The two γ photons are detected by detectors outside of the body, and a reconstruction algorithm is then performed to recover a radioactive concentration distribution image.


However, for a specific part of the body, multi-faceted, multi-angle detection can provide more information, and depicting the physiological and functional status of a tumor from different aspects also helps to improve diagnostic accuracy, so dual-tracer PET imaging is very necessary. In order to save scan time, reduce patient suffering and improve diagnostic efficiency, single-scan dual-tracer PET dynamic imaging has become a key technology to be solved. Since every annihilation reaction produces a pair of 511 keV γ photons, the two tracers cannot be distinguished from the energy point of view. At present, there are two main types of mainstream dual-tracer PET separation methods: (1) separation of reconstructed dual-tracer PET images; (2) integration of the separation algorithm into the reconstruction process of dual-tracer PET to directly reconstruct the concentration distributions of the two radiotracers. The first type of algorithm is essentially a signal separation problem, for which solutions are relatively extensive. The second type of algorithm requires complex reconstruction algorithms for support, so its practicality is limited. At present, researchers mainly focus on the first type of algorithm.


Separation based on the mixed dual-tracer PET image mainly depends on differences in tracer half-lives and kinetic parameters, combined with compartment models to solve, and most such methods require a staggered injection of the two tracers to provide partial non-mixed single-tracer information that facilitates modeling and separation, which lengthens the entire PET scanning process.


Dual tracers labeled with the same radioactive isotope play important roles in clinical use. For example, [18F]FDG-[18F]FLT is used for detecting cell proliferation, [11C]FMZ-[11C]DTBZ is used for measuring the neurotransmitter system, and [62Cu]ATSM-[62Cu]PTSM is used for observing blood flow and hypoxia. However, when the dual tracers are labeled with the same isotope, the distinguishability of the two signals is diminished, and clinical requirements call for simultaneous injection of the two tracers to reduce scan time, which renders most algorithms ineffective. The separation of PET signals labeled with the same isotope has therefore become a difficult problem to solve.


SUMMARY OF THE INVENTION

According to all described above, the present invention provides a DBN based separation method of a mixture of dual-tracer single-acquisition PET signals labelled with the same isotope. With the help of the information extraction ability of deep learning, two individual PET volumetric images can be separated from a mixture of dual-tracer PET images.


A DBN based separation method of a mixture of dual-tracer single-acquisition PET signals labelled with the same isotope comprises the following steps:

  • (1) injecting the dual tracers labelled with the same isotope (tracer I and tracer II) into a biological tissue and performing dynamic PET imaging on the biological tissue, obtaining the coincidence counting vectors corresponding to different moments, and then constituting a dynamic coincidence counting sequence Sdual that reflects the mixed dual-tracer distribution;
  • (2) injecting the individual tracer I and tracer II labelled with the same isotope into the biological tissue sequentially and performing two separate dynamic PET imaging acquisitions on them respectively, then obtaining dynamic coincidence counting sequences, denoted as SI and SII, that reflect the tracer I and tracer II distributions, respectively;
  • (3) using a PET reconstruction algorithm to acquire the dynamic PET image sequences Xdual, XI and XII, which correspond to the dynamic coincidence counting sequences Sdual, SI and SII;
  • (4) repeating steps (1)˜(3) multiple times to generate enough dynamic PET image sequences Xdual, XI and XII, and dividing them into training datasets and testing datasets;
  • (5) extracting the time activity curve (TAC) of each pixel from the dynamic PET image sequences Xdual, XI and XII, using the TACs of Xdual as inputs and the TACs of the corresponding XI and XII as ground truth, and training a deep belief network to obtain a dual-tracer PET reconstruction model;
  • (6) inputting every TAC of any Xdual into the PET reconstruction model one by one, outputting the TACs of the corresponding XI and XII, then reshaping the TACs to obtain the dynamic PET image sequences XtestI and XtestII corresponding to tracer I and tracer II.


Furthermore, the method used to extract TACs from dynamic PET image sequences XDual, XI and XII in step (5) can be formulated as:






X_{dual} = [TAC_1^{dual} \; TAC_2^{dual} \; \cdots \; TAC_n^{dual}]^T

X_{I} = [TAC_1^{I} \; TAC_2^{I} \; \cdots \; TAC_n^{I}]^T

X_{II} = [TAC_1^{II} \; TAC_2^{II} \; \cdots \; TAC_n^{II}]^T


wherein TAC_1^{dual}˜TAC_n^{dual} are the TACs of the 1st to n-th pixels in the dynamic PET image sequence Xdual, TAC_1^{I}˜TAC_n^{I} are the TACs of the 1st to n-th pixels in the dynamic PET image sequence XI, TAC_1^{II}˜TAC_n^{II} are the TACs of the 1st to n-th pixels in the dynamic PET image sequence XII, n is the total number of pixels in a PET image, and T represents transposition.
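As an illustrative sketch of this per-pixel TAC extraction (not part of the claimed method; the array names and sizes are assumed for the example), a dynamic image sequence stored as M frames of H×W pixels can be rearranged into the n×M matrix of TACs, and back, as follows:

```python
import numpy as np

# Assumed dimensions: M dynamic frames of H x W pixels.
M, H, W = 18, 64, 64
x_dual_frames = np.random.rand(M, H, W)      # stand-in for the reconstructed sequence Xdual

# Rearrange to (pixel, frame): row j is TAC_j, the activity of pixel j over all M frames.
n = H * W
tacs_dual = x_dual_frames.reshape(M, n).T    # shape (n, M): [TAC_1; TAC_2; ...; TAC_n]

# The inverse reshape recovers the dynamic image sequence from the TAC matrix.
x_dual_back = tacs_dual.T.reshape(M, H, W)
assert np.allclose(x_dual_frames, x_dual_back)
```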


Furthermore, the training process of the DBN in step (5) can be detailed as follows:


5.1 initializing a DBN framework consisting of an input layer, hidden layers and an output layer, wherein the hidden layers are composed of three stacked Restricted Boltzmann Machines (RBMs);


5.2 initializing the parameters in the described DBN, which include the number of nodes in the hidden layers, the offset vectors and weight matrices between layers, the learning rate, the activation function and the maximum number of iterations (an illustrative structural sketch is given after step 5.4);


5.3 pre-training the stacked RBMs in the hidden layers;


5.4 transmitting the parameters obtained from pre-training to the initialized DBN, then inputting the TACs of the dynamic PET image sequence Xdual one by one into the DBN, calculating the error function L between the output results and the corresponding ground truth, and applying the gradient descent method to continuously update the parameters of the whole network until the error function L converges or the maximum number of iterations is reached, at which point a trained PET reconstruction model is acquired.
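A minimal PyTorch sketch of the network initialized in steps 5.1-5.2 is given below. The layer widths, the sigmoid activation, the M-frame TAC input and the 2M-dimensional output are assumptions made for illustration, not values prescribed by the invention.

```python
import torch
import torch.nn as nn

M = 18                                 # assumed number of dynamic frames per TAC
hidden_sizes = [256, 256, 256]         # assumed widths of the three RBM-derived hidden layers

class DualTracerDBN(nn.Module):
    """Feed-forward network with the DBN topology: an input layer, three hidden
    layers (initialized from stacked RBMs after pre-training), and an output
    layer that predicts the two single-tracer TACs concatenated together."""
    def __init__(self):
        super().__init__()
        sizes = [M] + hidden_sizes + [2 * M]
        self.layers = nn.ModuleList(
            [nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]
        )

    def forward(self, tac_dual):
        h = tac_dual
        for layer in self.layers[:-1]:
            h = torch.sigmoid(layer(h))
        out = self.layers[-1](h)       # concatenation of the two predicted TACs
        return out[..., :M], out[..., M:]

model = DualTracerDBN()
tac_I_hat, tac_II_hat = model(torch.rand(4, M))   # a batch of 4 mixed TACs
```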


Furthermore, in step 5.3 the restricted Boltzmann machines in the hidden layers are pre-trained; that is, each restricted Boltzmann machine is composed of a visible layer and a hidden layer, and the weights between them are updated by the contrastive divergence algorithm until the hidden layer can accurately represent the features of the visible layer and can reconstruct the visible layer.

Furthermore, the loss function L described in step 5.4 is as follows:





L = \|\widehat{TAC}_j^{I} - TAC_j^{I}\|_2^2 + \|\widehat{TAC}_j^{II} - TAC_j^{II}\|_2^2 + \zeta \|(\widehat{TAC}_j^{I} + \widehat{TAC}_j^{II}) - (TAC_j^{I} + TAC_j^{II})\|_2^2


wherein TAC_j^{I} is the TAC of the j-th pixel in the dynamic PET image sequence XI; TAC_j^{II} is the TAC of the j-th pixel in the dynamic PET image sequence XII; \widehat{TAC}_j^{I} and \widehat{TAC}_j^{II} are the two outputs corresponding to XI and XII that are obtained after the TAC of the j-th pixel in the dynamic PET image sequence Xdual is substituted into the DBN; j is a natural number with 1≤j≤n, n is the total number of pixels in a PET image, \|\cdot\|_2 denotes the L2 norm, and ζ is a user-defined constant.
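For clarity, the loss above can be computed as in the following sketch (PyTorch tensors; the value of ζ and the tensor names are assumptions made for the example):

```python
import torch

def separation_loss(tac_I_hat, tac_II_hat, tac_I, tac_II, zeta=0.5):
    """L = ||TAC_I_hat - TAC_I||^2 + ||TAC_II_hat - TAC_II||^2
           + zeta * ||(TAC_I_hat + TAC_II_hat) - (TAC_I + TAC_II)||^2"""
    term_I = torch.sum((tac_I_hat - tac_I) ** 2)
    term_II = torch.sum((tac_II_hat - tac_II) ** 2)
    # The third term ties the sum of the predicted single-tracer TACs
    # to the sum of the ground-truth TACs, i.e. to the mixed signal.
    term_sum = torch.sum(((tac_I_hat + tac_II_hat) - (tac_I + tac_II)) ** 2)
    return term_I + term_II + zeta * term_sum
```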


Furthermore, in step (6), the TAC of the j-th pixel of Xdual in the test set is input into the dual-tracer PET reconstruction model, and the output for that pixel is [TAC_j^{I} TAC_j^{II}] with respect to the two individual tracers, wherein 1≤j≤n and n is the total number of pixels in a PET image. Following this procedure, the TACs of all the pixels of Xdual are input into the model, and the dynamic PET image sequences XtestI and XtestII that correspond to tracer I and tracer II are obtained according to the following formulas:






X_{test}^{I} = [TAC_1^{I} \; TAC_2^{I} \; \cdots \; TAC_n^{I}]^T

X_{test}^{II} = [TAC_1^{II} \; TAC_2^{II} \; \cdots \; TAC_n^{II}]^T

    • wherein T represents matrix transposition.
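An illustrative sketch of this inference-and-reshape step is shown below; it reuses the hypothetical array shapes of the earlier sketches, and the dummy model merely stands in for a trained dual-tracer PET reconstruction model.

```python
import numpy as np
import torch

M, H, W = 18, 64, 64                    # assumed frame count and image size
n = H * W
x_test_dual = np.random.rand(M, H, W).astype(np.float32)   # stand-in for a test Xdual

# Stand-in for a trained network mapping one M-frame mixed TAC to two M-frame TACs.
model = lambda t: (0.6 * t, 0.4 * t)

tacs_dual = torch.from_numpy(np.ascontiguousarray(x_test_dual.reshape(M, n).T))  # (n, M)
with torch.no_grad():
    tac_I_hat, tac_II_hat = model(tacs_dual)               # each of shape (n, M)

# Reshape the per-pixel predictions back into dynamic image sequences.
x_test_I = tac_I_hat.numpy().T.reshape(M, H, W)
x_test_II = tac_II_hat.numpy().T.reshape(M, H, W)
```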


The present invention described above can achieve the separation of a mixture of dual-tracer single-acquisition PET signals labelled with the same isotope by the trained DBN framework. A point-to-point mapping relationship between a mixed dual-tracer PET image and two single-tracer PET images was learned by inputting training data and ground truth into the constructed neural network. It is worth mentioning that this network is universal. The training set contains multiple sets of tracer combinations labeled with the same isotope, and the results show the good performance of this model on dual-tracer separation.


In summary, the present invention provides a universal framework constructed with a deep belief network to establish a mapping relationship for the separation of dual tracers. With its powerful feature extraction capability, the signal separation of dual-tracer PET labelled with the same isotope can be achieved well.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is the flow diagram of the separation method of dual-tracer PET signals according to the present invention.



FIG. 2 is the structure of the DBN according to the present invention.



FIG. 3 (a) is the Hoffman brain template.



FIG. 3 (b) is the complex brain template.



FIG. 3 (c) is the Zubal thorax template.



FIG. 4 (a) is the ground truth of 9th frame of [11C]DTBZ.



FIG. 4 (b) is the estimated result of 9th frame of [11C]DTBZ.



FIG. 4 (c) is the ground truth of 9th frame of [11C]FMZ.



FIG. 4 (d) is the estimated result of 9th frame of [11C]FMZ.



FIG. 5 (a) is the ground truth of 9th frame of [62Cu] ATSM.



FIG. 5 (b) is the estimated result of 9th frame of [62Cu] ATSM.



FIG. 5 (c) is the ground truth of 9th frame of [62Cu] PTSM.



FIG. 5 (d) is the estimated result of 9th frame of [62Cu] PTSM.



FIG. 6 (a) is the ground truth of 9th frame of [18F] FDG.



FIG. 6 (b) is the estimated result of 9th frame of [18F] FDG.



FIG. 6 (c) is the ground truth of 9th frame of [18F] FLT.



FIG. 6 (d) is the estimated result of 9th frame of [18F] FLT.





SPECIFIC EMBODIMENTS OF THE INVENTION

In order to more specifically describe the present invention, the detailed instructions are provided in conjunction with the attached figures and following specific embodiments:


As FIG. 1 shows, the DBN based separation method of a mixture of dual-tracer single-acquisition PET signals labelled with the same isotope according to the present invention includes the following steps:


(1) Preparation of Training Data

    • 1.1 Injecting the dual tracers labelled with the same isotope (tracer I and tracer II) into a biological tissue and performing dynamic PET imaging on it. A sequence of sinograms can then be acquired, denoted as Sdual.
    • 1.2 Injecting the individual Tracer I and Tracer II labelled with the same isotope into the biological tissue sequentially and performing two separate dynamic PET imaging acquisitions on them, respectively. Two sequences of sinograms can then be acquired, denoted as SI and SII.
    • 1.3 Using the reconstruction algorithm to reconstruct the sinograms into the concentration distributions of the radioactive tracers in the body, which are denoted as Xdual, YI and YII corresponding to Sdual, SI and SII, respectively.
    • 1.4 Repeating steps 1.1˜1.3 to generate enough dynamic PET image sequences Udual, UI and UII, and dividing them randomly into training datasets Utraindual, UtrainI and UtrainII and testing datasets Utestdual, UtestI and UtestII with a ratio of around 2:1.


Extracting pixel-based time activity curves (TACs) from the reconstructed images Xdual, YI and YII can be described as follows:






X_{dual} = [x_1, x_2, x_3, \ldots, x_N]^T, \quad x_i = [x_{i1}, x_{i2}, \ldots, x_{iM}]^T

Y_{I} = [(y_1)_1, (y_1)_2, \ldots, (y_1)_N]^T, \quad (y_1)_i = [(y_1)_{i1}, (y_1)_{i2}, \ldots, (y_1)_{iM}]^T

Y_{II} = [(y_2)_1, (y_2)_2, \ldots, (y_2)_N]^T, \quad (y_2)_i = [(y_2)_{i1}, (y_2)_{i2}, \ldots, (y_2)_{iM}]^T


wherein x_{ij} represents the radioactive concentration of pixel i at the j-th frame, (y_1)_{ij} and (y_2)_{ij} represent the radiotracer concentrations of pixel i for Tracer I and Tracer II at the j-th frame, respectively, N is the total number of pixels of the resulting PET image, and M is the total number of frames acquired by dynamic PET.


(2) Preparation of Training Sets and Test Set Data.


70% of the TAC data set Xdual is extracted as the training set Xtraindual, and the remaining 30% is used as the testing set Xtestdual; the YI and YII corresponding to the training set and the testing set are respectively concatenated to serve as the labels of the training set and the ground truth of the testing set.


(3) Constructing a deep belief network for the signal separation of the dual-tracer PET with the same isotope; as shown in FIG. 2, this deep belief network consists of an input layer, three hidden layers, and an output layer.


(4) The training set is input into this network for training. The training process is as follows:

    • 4.1 Initializing the network: initializing the deep belief network, including setting the number of nodes in all layers, initializing the offset vectors and weight matrices, and setting the learning rate, activation function and number of iterations.
    • 4.2 Inputting the Xtraindual into the network for training; the training process is divided into two parts: pre-training and fine-tuning.
    • 4.2.1 Pre-training: The contrastive divergence algorithm is used to keep updating the parameters of each restricted Boltzmann machine until all stacked restricted Boltzmann machines are trained well. In more detail, each restricted Boltzmann machine consists of a visible layer and a hidden layer, and the contrastive divergence algorithm updates the weights until the hidden layer accurately expresses the characteristics of the visible layer and can reconstruct it (see the sketch after this list).
    • 4.2.2 Fine-tuning: The parameters obtained by pre-training are copied to a common neural network with the same structure and are used as its initial values in the final fine-tuning process. The error function L between the network output \widehat{Y}_{train} and the label Y_{train} is calculated, and based on L, the gradient descent algorithm is used to update the weight matrices of the entire network until the iteration stops.
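The following sketch illustrates the contrastive-divergence (CD-1) update used to pre-train a single RBM, as described in 4.2.1. The layer sizes, learning rate and Bernoulli sampling of the hidden units are illustrative assumptions; in the full method, the hidden activations of one trained RBM serve as the visible data of the next RBM in the stack.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 18, 64, 0.01        # assumed sizes (e.g. an M-frame TAC input)
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_vis = np.zeros(n_visible)
b_hid = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One contrastive-divergence (CD-1) step for a batch v0 of visible vectors."""
    global W, b_vis, b_hid
    # Positive phase: hidden activations driven by the visible (data) layer.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct the visible layer, then re-infer the hidden layer.
    v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(v1 @ W + b_hid)
    # Update from the difference between data-driven and reconstruction-driven statistics.
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(v0)
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)

batch = rng.random((32, n_visible))           # stand-in for a batch of mixed TACs
for _ in range(10):
    cd1_update(batch)
```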


The error function L is as follows:






L = \|\widehat{Y}_{train}^{I} - Y_{train}^{I}\|_2^2 + \|\widehat{Y}_{train}^{II} - Y_{train}^{II}\|_2^2 - \gamma(\|\widehat{Y}_{train}^{II} - Y_{train}^{I}\|_2^2 + \|\widehat{Y}_{train}^{I} - Y_{train}^{II}\|_2^2)


wherein the first two terms reflect the error between the predicted values and the ground truth, while the latter two terms reflect the difference between the signals of the two tracers; γ is a user-defined constant used to adjust the proportion of the latter two terms in the loss function.

    • 4.3 Adjusting the parameters of the whole network by the backpropagation algorithm. The error function is as follows:





Loss(\widehat{TAC}_j^{I}, \widehat{TAC}_j^{II}) = \|\widehat{TAC}_j^{I} - TAC_j^{I}\|_2^2 + \|\widehat{TAC}_j^{II} - TAC_j^{II}\|_2^2 + \xi \|(\widehat{TAC}_j^{I} + \widehat{TAC}_j^{II}) - (TAC_j^{I} + TAC_j^{II})\|_2^2


wherein \widehat{TAC}_j^{I} and \widehat{TAC}_j^{II} are the predicted values of TAC_j^{I} and TAC_j^{II} output by the DBN, respectively, and ξ is a weight coefficient.
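A sketch of the fine-tuning loop that minimizes this error function by gradient descent is shown below; the network, optimizer settings and synthetic data are assumptions for illustration and do not represent the parameters of the invention.

```python
import torch
import torch.nn as nn

M = 18                                               # assumed number of frames
# Stand-in for the pre-trained DBN copied into a feed-forward network of the same structure.
model = nn.Sequential(nn.Linear(M, 256), nn.Sigmoid(),
                      nn.Linear(256, 256), nn.Sigmoid(),
                      nn.Linear(256, 256), nn.Sigmoid(),
                      nn.Linear(256, 2 * M))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
xi = 0.5                                             # assumed weight coefficient

# Synthetic stand-ins for the mixed training TACs and their single-tracer labels.
tac_dual = torch.rand(128, M)
tac_I, tac_II = torch.rand(128, M), torch.rand(128, M)

for epoch in range(100):                             # stop at max iterations or on convergence
    optimizer.zero_grad()
    out = model(tac_dual)
    tac_I_hat, tac_II_hat = out[:, :M], out[:, M:]
    loss = ((tac_I_hat - tac_I) ** 2).sum() \
         + ((tac_II_hat - tac_II) ** 2).sum() \
         + xi * (((tac_I_hat + tac_II_hat) - (tac_I + tac_II)) ** 2).sum()
    loss.backward()
    optimizer.step()
```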


(5) Inputting the TACs of the test set data Xtestdual into the trained neural network to obtain the separated signals of the dual-tracer PET labeled with the same isotope.


Next, we validate the present invention by simulation experiments.


(1) Phantom Selection


There are three different dual-tracer groups in the training datasets, and each group is paired with a different phantom with different regions of interest (ROIs), representing different biochemical environments. FIG. 3 (a) is the Hoffman brain phantom with 3 ROIs for [18F] FDG+[18F] FLT; FIG. 3 (b) is the complex brain phantom with 4 ROIs for [11C] FMZ+[11C] DTBZ; FIG. 3 (c) is the Zubal thorax phantom with 3 ROIs for [62Cu] ATSM+[62Cu] PTSM.


(2) The Simulation of PET Concentration Distribution


A parallel compartment model was used to simulate the kinetics of the dual tracers, and the stable dual-tracer concentration distribution was then acquired by solving the corresponding dynamic ordinary differential equations (ODEs). A single compartment model based on kinetic parameters was used to simulate the kinetics of each single tracer in vivo, and the stable single-tracer concentration distribution was acquired by solving the corresponding ODEs.
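As a hedged illustration of this step, the sketch below solves a simple one-tissue compartment model for each of two tracers sharing a plasma input function and sums their activities into the mixed signal; the rate constants, input function and time grid are assumptions made for the example, not the kinetic parameters used by the invention.

```python
import numpy as np
from scipy.integrate import odeint

t = np.linspace(0.0, 60.0, 200)                        # minutes, assumed sampling grid
cp = 6.0 * np.exp(-0.1 * t) + 1.0 * np.exp(-0.01 * t)  # assumed plasma input function Cp(t)

def one_tissue(ct, ti, K1, k2):
    """dCt/dt = K1*Cp(t) - k2*Ct for a one-tissue compartment model."""
    cp_t = np.interp(ti, t, cp)
    return K1 * cp_t - k2 * ct

# Assumed rate constants for tracer I and tracer II in one region of interest.
ct_I = odeint(one_tissue, 0.0, t, args=(0.6, 0.25)).ravel()
ct_II = odeint(one_tissue, 0.0, t, args=(0.3, 0.10)).ravel()

# Both tracers carry the same isotope, so the scanner only sees their summed activity.
ct_dual = ct_I + ct_II
```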


(3) The Simulation of PET Scanning Process


Computer Monte Carlo simulations were used to perform the dynamic dual-tracer PET scans with the help of the software GATE. All simulations are based on the geometry of the full 3D whole-body PET scanner SHR74000 designed by Hamamatsu Photonics. The PET scanner has 6 detector rings, and each ring has 48 detector blocks. Each detector block consists of a 16×16 array of lutetium yttrium orthosilicate (LYSO) crystals. The ring diameter of the scanner is 826 mm. When the three groups of dual-tracer and single-tracer concentration distribution maps were input into the Monte Carlo system, the corresponding sinogram data were acquired.


(4) Reconstruction Process


The sinogram was reconstructed using the classical ML-EM reconstruction algorithm to obtain the concentration distribution of the simulated radiotracer pairs.
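For reference, the ML-EM reconstruction uses the standard multiplicative update x ← x · Aᵀ(y / Ax) / (Aᵀ1). The sketch below applies it with a small random system matrix standing in for the true scanner geometry; the matrix, sinogram and iteration count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_pixels = 500, 256                  # assumed sinogram and image sizes
A = rng.random((n_bins, n_pixels))           # stand-in for the system (projection) matrix
x_true = rng.random(n_pixels)
y = rng.poisson(A @ x_true).astype(float)    # noisy sinogram counts

sens = A.T @ np.ones(n_bins)                 # sensitivity image A^T 1
x = np.ones(n_pixels)                        # uniform initial estimate
for _ in range(50):                          # assumed number of ML-EM iterations
    proj = A @ x
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
```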


(5) Acquisition of TAC Curve;


Pixel-based TACs were obtained by reorganizing the concentration distribution matrices of the three groups of mixed radiotracers in the body.


(6) Training Process;


70% of the TACs of the three dual-tracer groups ([18F] FDG+[18F] FLT, [11C] FMZ+[11C] DTBZ and [62Cu] ATSM+[62Cu] PTSM) were input into the DBN as training data for pre-training. The TAC curves of the single tracers serve as labels to provide feedback for fine-tuning the entire network.


(7) Testing


The remaining 30% was used to evaluate the validity of the network.



FIG. 4 (a) and FIG. 4 (b) are the simulated radioactive concentration image of the 9th frame of [11C] DTBZ and the predicted radioactive concentration image obtained by the trained DBN, respectively. FIG. 4 (c) and FIG. 4 (d) are the simulated radioactive concentration image of the 9th frame of [11C] FMZ and the predicted radioactive concentration image obtained by the trained DBN, respectively. FIG. 5 (a) and FIG. 5 (b) are the simulated radioactive concentration image of the 9th frame of [62Cu] ATSM and the predicted radioactive concentration image obtained by the trained DBN, respectively. FIG. 5 (c) and FIG. 5 (d) are the simulated radioactive concentration image of the 9th frame of [62Cu] PTSM and the predicted radioactive concentration image obtained by the trained DBN, respectively. FIG. 6 (a) and FIG. 6 (b) are the simulated radioactive concentration image of the 9th frame of [18F] FDG and the predicted radioactive concentration image obtained by the trained DBN, respectively. FIG. 6 (c) and FIG. 6 (d) are the simulated radioactive concentration image of the 9th frame of [18F] FLT and the predicted radioactive concentration image obtained by the trained DBN, respectively.


Comparing the predicted image with the simulated ground truth, it can be found that the constructed deep belief network can separate the dual-tracer PET signal with the same isotope label well. This confirms the effectiveness of the deep belief network in feature extraction and signal separation, and also demonstrates that the method of the invention is effective in processing PET signals labeled with the same isotope.


The description of the specific instances is provided to facilitate ordinary technicians in the technical field in understanding and applying the present invention. It is obvious that a person familiar with the technology in this field can easily modify the specific implementations mentioned above and apply the general principles described here to other instances without creative labor. Therefore, the present invention is not limited to the above instances. According to the disclosure of the present invention, improvements and modifications of the present invention will all be within the protection scope of the present invention.

Claims
  • 1. A deep belief network based separation method of a mixture of dual-tracer PET signals labelled with the same isotope, which comprises the following steps: (1) injecting the dual tracers labelled with the same isotope (tracer I and tracer II) into a biological tissue and performing dynamic PET imaging on the biological tissue, then obtaining a coincidence counting vector corresponding to different moments, and then forming a dynamic coincidence counting sequence reflecting the concentration distribution of the mixed dual tracers, denoted as Sdual; (2) injecting individual tracer I and tracer II into the biological tissue sequentially and performing two separate dynamic PET imaging acquisitions on the biological tissue respectively, obtaining coincidence counting vectors of the two sets of single tracers corresponding to different moments, and constituting the dynamic coincidence counting sequences that respectively reflect the distribution of tracer I and tracer II, denoted as SI and SII; (3) using the PET reconstruction algorithm to reconstruct the dynamic PET images Xdual, XI and XII corresponding to the dynamic coincidence counting sequences Sdual, SI and SII; (4) repeating steps (1)˜(3) multiple times to generate enough dynamic PET image sequences Xdual, XI and XII and dividing them into training sets and testing sets; (5) extracting the time activity curve (TAC) of each pixel from the dynamic PET image sequences Xdual, XI and XII, taking the TACs of the training set Xdual as the input samples and the TACs of the corresponding XI and XII as the ground truth, and training them by the deep belief network to obtain the dual-tracer PET reconstruction model; and (6) inputting every TAC of Xdual in the test set into the PET reconstruction model one by one, outputting the TACs corresponding to XI and XII, and finally reconstructing the TACs to obtain dynamic PET images XtestI and XtestII corresponding to Tracer I and Tracer II.
  • 2. The separation method described in claim 1, characterized that, in the step (5), the TAC of each pixel is extracted from the dynamic PET image sequences Xdual, XI and XII according to the following formula: Xdual=[TAC1dual TAC2dual . . . TACndual]T XI=[TAC1I TAC2I . . . TACnI]T XII=[TAC1II TAC2II . . . TACnII]T
  • 3. The separation method according to claim 1, characterized in that: specific process of training in the step (5) by the deep belief network is as follows: 5.1 initializing a DBN framework consisting of an input layer, hidden layers and an output layer, wherein the hidden layers are composed of three stacked Restricted Boltzmann Machines (RBMs);5.2 initializing the parameters in the DBN, which include the number of nodes in hidden layers, the offset vector and the weight vector between layers; activation function and the maximum number of iterations;5.3 pre-training the stacked RBMs;5.4 passing the pre-trained parameters to the initialized deep belief network, and then substituting the TAC in the dynamic PET image sequence Xdual into the above-mentioned deep belief network, calculating an error function L between the output result and the corresponding ground truth, continuously updating the parameters of the entire network according to a gradient descent method until the error function L converges or reaches the maximum number of iterations, thus completing the training to obtain a dual tracer PET reconstruction model.
  • 4. The separation method described in claim 3, characterized in that: in step 5.3, the restricted Boltzmann machines in the hidden layers are pre-trained, that is, each restricted Boltzmann machine is composed of one visible layer and one hidden layer, and the weights between the hidden layer and the visible layer are continuously updated by the contrastive divergence algorithm, until the hidden layer can accurately represent the characteristics of the visible layer and reconstruct the visible layer.
  • 5. The separation method described in claim 3, characterized in that: the formula of the error function L in the step 5.4 is as follows: L = \|\widehat{TAC}_j^{I} - TAC_j^{I}\|_2^2 + \|\widehat{TAC}_j^{II} - TAC_j^{II}\|_2^2 + \zeta \|(\widehat{TAC}_j^{I} + \widehat{TAC}_j^{II}) - (TAC_j^{I} + TAC_j^{II})\|_2^2
  • 6. The separation method described in claim 1, characterized in that: in the step (6), the TAC of j-th pixel of Xdual in the test set is input into the dual-tracer PET reconstruction model, and then [TACjI TACjII] is output corresponding to the two separated tracers, wherein 1≤j≤n, n is the total pixel number of the PET images; according to the above procedures, the TAC of all the pixels of Xdual are tested, then the dynamic PET image sequences XtestI and XtestII that correspond to tracer I and tracer II are obtained according to the following formula: XtestI=[TAC1I TAC2I . . . TACnI]T XtestII=[TAC1II TAC2II . . . TACnII]T wherein T represents matrix transposition.
Priority Claims (1)
Number Date Country Kind
201810507876.3 Aug 2018 CN national