The following relates generally to the medical imaging arts, positron emission tomography (PET) arts, image reconstruction arts, and the like.
Image reconstruction is a key component in the development and application of advanced positron-emission tomography (PET) imaging. Some known PET image reconstruction algorithms have been developed based upon expectation-maximization (EM), row-action maximum-likelihood (RAMLA), maximum a posteriori (MAP), and penalized maximum-likelihood (PML) algorithms. The algorithms have also been extended to list-mode, time-of-flight (TOF) PET imaging, and 4D spatial-temporal/parametric image reconstructions. Notwithstanding the foregoing, further improvement in image quality in PET imaging would be advantageous. Such improvements may, for example, enable use of PET detection and electronic technologies with reduced density of detectors without (or with reduced) concomitant loss in image quality. The ability to reduce the density of detectors while (at least substantially) retaining image quality would enable reduction in PET imaging device cost and could also provide benefits such as more efficient data processing due to the reduced data set sizes being reconstructed.
The following discloses new and improved systems and methods that address the above-referenced issues, and others.
In one disclosed aspect, an emission imaging device comprises an emission imaging scanner including radiation detectors for acquiring emission imaging data, an electronic data processing device programmed to reconstruct emission imaging data acquired by the emission imaging scanner to generate a reconstructed image, and a display device connected to display the reconstructed image. The emission imaging data are reconstructed to generate the reconstructed image by executing a constrained optimization program that is constrained by an image variability constraint ∥T(u)∥≤t0 in which t0 is an image variability constraint parameter, u is the reconstructed image at a current iteration of the constrained optimization program, T(u) is a sparsifying image transform, and ∥ . . . ∥ is a norm that outputs a strictly positive scalar value for the transformed image T(u).
In another disclosed aspect, an emission imaging method comprises: acquiring emission imaging data gm for a subject using an emission imaging scanner including radiation detectors; reconstructing the emission imaging data to generate a reconstructed image by executing the optimization program
û=arg minu D(gm,g(u))
where g(u) denotes a data model of the emission imaging scanner that transforms the reconstructed image u at the current iteration of the optimization program into emission imaging data, and D(gm,g(u)) denotes a measure of data fidelity between gm and g(u); during the reconstructing, constraining each iteration of the optimization program by an image variability constraint ∥T(u)∥≤t0 in which t0 is an image variability constraint parameter, T(u) is a sparsifying image transform, and ∥ . . . ∥ is a norm that outputs a strictly positive scalar value for the transformed image T(u); and displaying the reconstructed image on a display device.
In another disclosed aspect, a positron emission tomography (PET) imaging device comprises: a PET scanner including an annular ring of radiation detectors for acquiring PET imaging data; an electronic data processing device (20) programmed to reconstruct PET imaging data acquired by the PET scanner to generate a reconstructed image; and a display device (34) connected to display the reconstructed image. The PET imaging data are reconstructed to generate the reconstructed image by executing a constrained optimization program:
û*=arg minu D(gm,g(u)) s.t. ∥f∥TV≤t0 and fj≥0
where g(u) denotes a data model of the PET scanner that transforms the reconstructed image u at the current iteration of the constrained optimization program into emission imaging data, D(gm,g(u)) denotes a measure of data fidelity between gm and g(u), ∥f∥TV≤t0 is an image total variation constraint in which t0 is a total variation constraint parameter and f is a latent image defined by u=Bf where B is a blurring matrix which is not an identity matrix, and fj≥0 is a positivity constraint.
One advantage resides in providing PET imaging with reduced equipment cost by enabling the use of a reduced number of crystals and associated electronics.
Another advantage resides in providing more efficient PET reconstruction via acquisition of smaller PET imaging data sets.
Another advantage resides in providing either one or both of the foregoing advantages without a concomitant degradation in clinical value of the PET images.
A given embodiment may provide none, one, two, more, or all of the foregoing advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
With reference to
When used for medical imaging, a radiopharmaceutical is administered to a human imaging subject, and the subject is disposed on the support 16 and moved into the PET rings 12. The radiopharmaceutical includes radioisotopes that produce positrons during radioactive decay events, and each positron annihilates with an electron in a positron-electron annihilation event that outputs two oppositely directed 511 keV gamma rays. PET imaging data are acquired by the PET detectors 12 in the form of gamma ray detection events, which may be stored in a list mode format in which each event is time stamped.
In illustrative
In the following, some illustrative embodiments of reconstruction algorithms that may be implemented by the reconstruction processor 30 are described.
With reference to
In the Monte-Carlo-simulation and real-data studies described herein, PET data were collected by use of the full PET detector configuration 12F of
The following notation is used in illustrative examples herein. A discrete image is defined on a three-dimensional (3D) array containing N=Nx×Ny×Nz identical voxels of cubic shape, where Nx, Ny, and Nz denote the numbers of voxels along the Cartesian x-, y-, and z-axes, respectively. The z-axis coincides with the central axis of the PET configuration 12F (or 12S) shown in
g(u)=Hu+gs+gr (1)
where vector g(u) of size M denotes the model data, H is an M×N system matrix in which element hij is the intersection length of LOR i with voxel j, and vectors gs and gr of size M denote scatter and random events, which are assumed to be known in this work. In the following, the notation g is generally used as a shorthand for g(u). We use vector gm of size M to denote the measured data. In this work, gm, gs, and gr are corrected for the effect of photon attenuation. The goal of PET-image reconstruction is to determine (i.e., reconstruct) u from knowledge of gm, H, gs, and gr.
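As a non-limiting illustration of Equation (1), the forward model may be sketched in Python/NumPy as follows, assuming the system matrix H is stored as a SciPy sparse matrix; the function name forward_model and the toy matrix values are illustrative assumptions rather than part of the reported studies.

import numpy as np
from scipy.sparse import csr_matrix

def forward_model(H, u, g_s, g_r):
    # Evaluate the data model of Equation (1): g(u) = H u + gs + gr.
    # H: (M, N) sparse system matrix, entry h_ij being the intersection
    #    length of LOR i with voxel j; u: image vector of size N;
    #    g_s, g_r: known scatter and random components of size M.
    return H @ u + g_s + g_r

# Toy example with 3 LORs and 4 voxels (values illustrative only):
H = csr_matrix(np.array([[1.0, 0.5, 0.0, 0.0],
                         [0.0, 1.0, 1.0, 0.0],
                         [0.0, 0.0, 0.5, 1.0]]))
g = forward_model(H, np.ones(4), np.zeros(3), np.zeros(3))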
Using Equation (1), we form an optimization program in the form of:
û*=arg minu D(gm,g) s.t. ∥f∥TV≤t0 and fj≥0, j=1, 2, . . . , N (2)
where D(gm,g) denotes a measure of data fidelity between measured data gm and model data g, and “s.t.” is standard notation indicating “subject to” the constraint that follows.
One constraint in Equation (2) is a positivity constraint, i.e. fj≥0, which ensures that the voxels of latent image f have positive values. The other constraint is an image total variation (TV) constraint. The image TV norm ∥f∥TV in this image TV constraint may be defined as:
∥f∥TV=Σx,y√[(fx,y−fx−1,y)²+(fx,y−fx,y−1)²]
where x and y denote pixel labels or, for a three-dimensional (3D) image:
∥f∥TV=Σx,y,z√[(fx,y,z−fx−1,y,z)²+(fx,y,z−fx,y−1,z)²+(fx,y,z−fx,y,z−1)²]
It will be appreciated that the image TV norm may be written with other handedness, e.g. the “−1” operations in the subscripts may be replaced by “+1” operations.
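A minimal Python/NumPy sketch of the isotropic 3D image TV norm defined above is given below; the zero-valued handling of differences at the array boundary is an implementation assumption not specified in the text.

import numpy as np

def image_tv(f):
    # Isotropic total variation of a 3D image array f, using backward
    # differences (the "-1" subscripts above); boundary differences are
    # taken as zero.
    dx = np.zeros_like(f); dx[1:, :, :] = f[1:, :, :] - f[:-1, :, :]
    dy = np.zeros_like(f); dy[:, 1:, :] = f[:, 1:, :] - f[:, :-1, :]
    dz = np.zeros_like(f); dz[:, :, 1:] = f[:, :, 1:] - f[:, :, :-1]
    return float(np.sqrt(dx**2 + dy**2 + dz**2).sum())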
The image u to be reconstructed is related to latent image vector f of size N through:
u=Bf (3)
In Equation (3), B is a matrix of size N×N, ∥f∥TV is the image total variation norm of f and is constrained by the total variation (TV) constraint parameter t0, fj is the jth element of vector f, and j=1, 2, . . . , N.
In some tested illustrative reconstruction algorithms, the blurring matrix B was obtained as follows. For a three-dimensional (3D), isotropic Gaussian function centered at a given voxel in the image array, we calculate its values at the center locations of the N voxels within the image array, and use the calculated values in a concatenated form identical to that of vector u to create a vector of size N. Repeating the calculation for each of the N voxels in the image array, N such vectors can be formed; and matrix B of size N×N can subsequently be built in which a row is the transpose of one of the N vectors, and the N rows are in an order consistent with the concatenated order of entries of u. The unit of standard deviation of the Gaussian function is defined in terms of voxel size. With the standard deviation taking a zero value, B reduces to the trivial case of the identity matrix. (That is, in some embodiments it is contemplated to replace the Gaussian blurring matrix or blurring operator B with the identity matrix so that Equation (3) becomes u=f.) Other blurring operators, e.g. with B other than an identity matrix or Gaussian form, are also contemplated.
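Rather than forming the N×N matrix B explicitly, the action of the isotropic Gaussian blurring operator may be sketched as a convolution, for example with SciPy; the boundary handling (mode='nearest') is an assumption, and building the explicit matrix as described above is equally valid.

import numpy as np
from scipy.ndimage import gaussian_filter

def apply_blur(f_volume, sigma_voxels=0.6):
    # Apply u = B f for an isotropic Gaussian blurring operator whose
    # standard deviation is given in units of the voxel size; a zero
    # standard deviation reduces B to the identity.
    if sigma_voxels == 0:
        return f_volume.copy()
    return gaussian_filter(f_volume, sigma=sigma_voxels, mode='nearest')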
In illustrative reconstruction algorithms, three specific implementations of the “generic” divergence D(gm,g) of Equation (2) were tested.
The first tested divergence was the Kullback-Leibler (KL) divergence, for which the program of Equation (2) becomes:
û*=arg minu DKL(gm,g) s.t. ∥f∥TV≤t0 and fj≥0 (4)
where DKL(gm,g) denotes the data-KL divergence given by:
DKL(gm,g)=Σi{[g]i−[gm]i+[gm]i ln([gm]i/[g]i)}
where [•]i denotes the ith element of vector [•]. When computing DKL(gm,g) in the experiments reported herein, the entries in g that are smaller than ε=10−20 are replaced with ε. For the convenience of devising convergence conditions below, a normalized data-KL divergence is defined as D′KL(gm,g)=DKL(gm,g)/DKL(gm,gε), where gε is obtained by replacing all of the entries in g with ε. We refer to the optimization program in Equation (4) as program “DKL-fTV”.
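A sketch of the data-KL divergence and its normalized form, assuming the standard generalized Kullback-Leibler expression reconstructed above and the ε-flooring described in the text, is:

import numpy as np

EPS = 1e-20  # floor applied to small model-data entries, per the text

def data_kl(g_m, g):
    # Generalized KL divergence DKL(gm, g), with 0*ln(0/x) taken as 0
    # for zero-count bins of the measured data.
    g = np.maximum(np.asarray(g, dtype=float), EPS)
    g_m = np.asarray(g_m, dtype=float)
    terms = g - g_m
    nz = g_m > 0
    terms[nz] += g_m[nz] * np.log(g_m[nz] / g[nz])
    return float(terms.sum())

def data_kl_normalized(g_m, g):
    # D'KL(gm, g) = DKL(gm, g) / DKL(gm, g_eps), g_eps having all entries EPS.
    g_eps = np.full_like(np.asarray(g_m, dtype=float), EPS)
    return data_kl(g_m, g) / data_kl(g_m, g_eps)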
The second tested divergence employed the 2-norm. The program of Equation (2) becomes:
û*=arg minu DL2(gm,g) s.t. ∥f∥TV≤t0 and fj≥0 (5)
which is obtained by the replacement of D(gm,g) in Equation (2) with:
DL2(gm,g)=∥gm−g∥2 (5a)
This fidelity metric takes the 2-norm of the difference between the measured data and model data. Also, for the convenience of specifying convergence conditions below, a normalized data-L2 divergence is defined as D′L2(gm,g)=DL2(gm,g)/DL2(gm,0). We refer to the optimization program in Equation (5) as the program “DL2-fTV”.
The third tested divergence employed the 1-norm. The program of Equation (2) becomes:
û*=arg minu DL1(gm,g) s.t. ∥f∥TV≤t0 and fj≥0 (6)
which is obtained by the replacement of D(gm,g) in Equation (2) with:
DL1(gm,g)=∥gm−g∥1 (6a)
denoting the 1-norm of the difference between measured data and model data. Again, for the convenience of describing convergence conditions below, a normalized data-L1 divergence is defined as D′L1(gm,g)=DL1(gm,g)/DL1(gm,0). We refer to the optimization program in Equation (6) as the program “DL1-fTV”.
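For completeness, the 2-norm and 1-norm data fidelities of Equations (5a) and (6a) and their normalized forms may be sketched as follows (function names are illustrative):

import numpy as np

def data_l2(g_m, g):
    # DL2(gm, g) = ||gm - g||_2, Equation (5a).
    return float(np.linalg.norm(np.asarray(g_m, dtype=float) - np.asarray(g, dtype=float), ord=2))

def data_l1(g_m, g):
    # DL1(gm, g) = ||gm - g||_1, Equation (6a).
    return float(np.linalg.norm(np.asarray(g_m, dtype=float) - np.asarray(g, dtype=float), ord=1))

def normalized(divergence, g_m, g):
    # e.g. D'L2(gm, g) = DL2(gm, g) / DL2(gm, 0).
    return divergence(g_m, g) / divergence(g_m, np.zeros_like(np.asarray(g_m, dtype=float)))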
In an optimization-based reconstruction, numerous parameters are employed to specify explicitly the program, i.e., the solution set; these are referred to as “program parameters”. We consider “algorithm parameters” to be those that affect only the algorithm's convergence path leading to the solution set, and thus have no impact on the theoretical specification (or design) of the solution set. Clearly, the solution set depends upon the specific form of the optimization program, and thus the optimization program itself constitutes a program parameter. Moreover, an optimization program of a given form involves additional program parameters. For the optimization programs considered in Equations (4)-(6), additional program parameters include the system matrix H, scatter and random components gs and gr, the voxel size, the TV-constraint parameter t0, and the blurring matrix B.
The optimization algorithms used in experiments disclosed herein can solve the convex optimization programs of Equations (4)-(6), but like any other iterative algorithm, they converge to a solution only in the limit of infinite iterations. Due to the limitations of computer precision and computation time, one can obtain reconstructions only at finite iterations. Therefore, practical convergence conditions are specified under which reconstructions can be achieved within a finite number of iterations; the practical convergence conditions thus play a role in defining the actual solution set achievable within a finite number of iterations. In the experiments reported herein, when the practical conditions are satisfied, the reconstruction stops and is referred to as the “convergent reconstruction” û*; we also denote the corresponding latent image as f̂*, where û*=Bf̂*.
As already described, the system matrix H contains M row vectors of size N in which each entry depicts the intersection of an LOR with a voxel in the image array. In the validation and Monte-Carlo-simulation studies reported herein, scatter and random events gs and gr are not considered; whereas in the phantom and human imaging studies reported herein, the single-scatter simulation method (Watson et al., “A single scatter simulation technique for scatter correction in 3D PET”, in 3D Img. Recon. Radiol. Nucl. Med., Springer, 1996, pp. 255-68) and the delayed coincidence method (Badawi et al., “Randoms variance reduction in 3D PET”, Phys. Med. Biol., vol. 44, no. 4, p. 941, 1999) were employed for estimating gs and gr, respectively. A voxel size of 4 mm was selected for the studies because it is used often in clinical studies. Blurring matrix B, generated by use of a 3D isotropic Gaussian function with a standard deviation of 2.4 mm, which is 0.6 times the image-voxel size, appears to yield appropriate reconstructions for the data conditions considered.
For designing the practical convergence conditions, we introduce two unitless metrics as:
CTV(fn)=|∥fn∥TV−t0|/t0 (7)
and
D′(un)=D(gm,gn)/D(gm,0) (8)
where un and fn denote reconstructions at iteration n, and gn=Hun+gs+gr is the model data estimated at the nth iteration, obtained by replacing u with un in Equation (1). Practical convergence conditions are devised with CTV(fn) and D′(un) for the studies performed herein.
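The two metrics of Equations (7) and (8) may be computed as sketched below; the TV-norm and data-divergence callables are supplied by the caller (for instance the image_tv and data_kl sketches above), and the function name is illustrative.

import numpy as np

def convergence_metrics(f_n, u_n, g_m, g_s, g_r, H, t0, tv_norm, divergence):
    # CTV(fn) = | ||fn||_TV - t0 | / t0, Equation (7);
    # D'(un) = D(gm, gn) / D(gm, 0), Equation (8), with gn from Equation (1).
    g_n = H @ u_n + g_s + g_r
    c_tv = abs(tv_norm(f_n) - t0) / t0
    zeros = np.zeros_like(np.asarray(g_m, dtype=float))
    d_prime = divergence(g_m, g_n) / divergence(g_m, zeros)
    return c_tv, d_prime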
With the program parameters and practical convergence conditions specified above, only image-TV-constraint parameter t0 remains to be determined. We discuss the determination of t0 in each specific study carried out below, because different data conditions in the studies can have considerably different impacts on the appropriate selection of t0.
Optimization programs DKL-fTV, DL2-fTV, and DL1-fTV in Equations (4)-(6) are convex, and can be solved with a number of existing algorithms. Experiments reported herein utilize algorithms belonging to a class of algorithms known as the primal-dual algorithms.
A specific set of first-order, primal-dual algorithms developed by Chambolle and Pock (CP) (Chambolle and Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging”, J. Math. Imag. Vis., vol. 40, pp. 1-26, 2011) has been demonstrated to be an effective investigative tool for solving a variety of convex optimization programs in CT imaging, including the convex programs having the form of Equations (4)-(6). Pseudo-code for the CP algorithm is given below:
τ ← 1/(λL); σ ← λ/L; θ ← 1; n ← 0
f̄0 ← f0
In this algorithm, ∇ depicts a matrix representing a finite-differencing approximation of the image gradient, yielding vectors ∇f, t⃗, and z⃗n (N-element vectors with each entry a vector of size 3); the norm ∥•∥SV of a linear operator computes the largest singular value of that linear operator.
We note that all of the lines in the pseudo-code are identical for the three optimization programs in Equations (4)-(6), except for vector Φ of size M in line 9, which may vary depending upon the specific data divergence considered. For the program in Equation (4):
Φ(yn,
where 1D is a vector of size M filled with 1s. For the optimization program in Equation (5):
Φ(yn,
and for the optimization program in Equation (6):
where max(•) is performed element-wise.
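For orientation only, a generic Chambolle-Pock primal-dual skeleton for a problem of the form minx F(Kx)+G(x) is sketched below; it is not the specific pseudo-code of the text (which additionally handles the latent image, the TV-constraint projection, the positivity constraint, and the divergence-specific vector Φ), and the proximal mappings are supplied by the caller.

import numpy as np

def chambolle_pock(K, Kt, prox_F_conj, prox_G, x0, L, lam=1.0, n_iters=100):
    # Generic first-order primal-dual iteration of Chambolle and Pock.
    # K, Kt: callables applying the linear operator and its adjoint;
    # prox_F_conj(y, sigma), prox_G(x, tau): proximal mappings;
    # L: largest singular value of K (so that tau*sigma*L**2 = 1).
    tau, sigma, theta = 1.0 / (lam * L), lam / L, 1.0
    x = x0.copy()
    x_bar = x0.copy()
    y = np.zeros_like(K(x0))
    for _ in range(n_iters):
        y = prox_F_conj(y + sigma * K(x_bar), sigma)   # dual ascent step
        x_new = prox_G(x - tau * Kt(y), tau)           # primal descent step
        x_bar = x_new + theta * (x_new - x)            # over-relaxation
        x = x_new
    return x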
Reconstruction techniques based upon the row-action maximum-likelihood algorithm (RAMLA) (Browne et al., “A row-action alternative to the EM algorithm for maximizing likelihood in emission tomography”, IEEE Trans. Med. Imag., vol. 15, no. 5, pp. 687-99, 1996) are used frequently in PET image reconstruction. RAMLA can be viewed as a relaxed ordered-subset algorithm in which the step size is subset-independent and gradually decreases to zero. Under certain conditions, the RAMLA algorithm is mathematically equivalent to the expectation-maximization (EM) algorithm, but the two involve different implementation procedures and can lead to different solutions when a finite number of iterations is used, as in all practical reconstructions.
In experiments reported herein, the RAMLA algorithm was applied to reconstruct PET images uref from full data, and the full-data RAMLA reconstructions were used as reference reconstructions for comparative purposes. Specifically, the RAMLA implementation in the study consists of subsets with the number of LORs varying from 291 to 291×40, and yields the reconstruction after two full iterations, as is done typically in practical research and clinical applications. RAMLA reconstructions from sparse data were also carried out in each of the studies described below, and were observed to have quality substantially lower than that of the reference reconstructions. Therefore, the RAMLA reconstructions of sparse data are not illustrated herein.
For the purpose of validation and computation-efficiency consideration, we used the full-scan configuration 12F of
The mathematical convergence conditions for the CP algorithms include CTV(fn)→0, D′(un)→0, and cPD(un)→0 as n→∞, where cPD(un) denotes the conditional primal-dual (cPD) gap. They are unachievable, however, due to the limited computer precision and computation time involved in any practical, numerical study. Therefore, for the inverse-crime study considered, we designed practical convergence conditions, namely:
CTV(fn)<10−5
D′(un)<10−5 (12)
cPD(un)<10−3
and require that the convergence metrics maintain their decaying trends as n increases even after the conditions are satisfied, where D′(un)=D′KL(un), D′L2(un), and D′L1(un), respectively, for the optimization programs in Equations (4), (5), and (6). Practical convergence conditions that are tighter or looser than those in Equation (12) can readily be designed, depending upon the amount of computational resources to be invested.
In the following, we report inverse-crime studies performed on reconstructions based upon the three optimization programs in Equations (4)-(6), in which t0=∥ftrue∥TV is computed from the truth latent image ftrue. For brevity, we show results obtained only for program DKL-fTV in Equation (4), as similar results were also obtained for the programs in Equations (5) and (6). It can be observed in
and display it also in
In the studies reported in the following, we designed practical convergence conditions:
CTV(fn)<10−5
D′(un)→plateau (13)
as n increases. The convergence conditions of Equation (13) are less restrictive than those in Equation (12) for the inverse-crime study, and they are designed based upon practical considerations: in a real-data study, because inconsistency exists between measured data gm and model data g, D′(u) is generally non-zero; and because full knowledge of the model data g is unavailable, the value of D′(u) is generally unknown. Therefore, the condition D′(un)→plateau, instead of D′(un)→0, is considered. Unlike metrics CTV(f) and D′(u), which provide directly meaningful measures of physical properties of reconstructions in a practical study, metric cPD(un) yields a mathematical check on the correctness of the algorithm implementation. Consequently, once the implementation correctness is verified in the inverse-crime study, metric cPD(un) is not used in the real-data studies, in which the practical convergence conditions in Equation (13) appear to yield reconstructions of practical relevance, as the study results below show.
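A possible implementation of the practical stopping rule in the spirit of Equation (13) is sketched below; the plateau test (relative change of D′(un) over a trailing window) is an assumption, since the text does not prescribe a specific numerical plateau criterion.

def practically_converged(c_tv_n, d_prime_history, tol_ctv=1e-5,
                          window=20, plateau_rtol=1e-4):
    # Stop when CTV(fn) < tol_ctv and D'(un) has stopped decreasing
    # appreciably over the last `window` iterations.
    if c_tv_n >= tol_ctv or len(d_prime_history) < window:
        return False
    recent = d_prime_history[-window:]
    return abs(recent[-1] - recent[0]) <= plateau_rtol * abs(recent[0])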
Prior to physical-phantom and human studies, we conducted a Monte-Carlo-simulation study in which full data of ˜200 million total counts were generated from the digital Jaszczak phantom by using the GATE simulation package for the full-scan configuration 12F (
In a first aspect of the Monte-Carlo simulations, the TV-constraint parameter t0 was determined. More particularly, in the study, given the convergence conditions in Equation (13), all of the program parameters are determined, except for TV-constraint parameter t0, which is determined by use of the root-mean-square error (RMSE):
RMSE=√(Σj([û*]j−[utrue]j)²/N)
between truth image utrue and convergent reconstruction û*. For each of a set of t0 values, we solve program DKL-fTV in Equation (4) to obtain the convergent reconstruction û* from full data and calculate the RMSE. Repeating the reconstruction and calculation for all values of t0, we obtain the RMSE as a function of t0.
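The t0 sweep described above may be sketched as follows, assuming a caller-supplied solver (e.g. the CP algorithm applied to program DKL-fTV) that returns the convergent reconstruction for a given t0; the normalization of the RMSE by the number of voxels matches the expression reconstructed above.

import numpy as np

def select_t0_by_rmse(t0_values, reconstruct, u_true):
    # reconstruct: callable t0 -> convergent reconstruction u_hat (vector).
    rmses = []
    for t0 in t0_values:
        u_hat = reconstruct(t0)
        rmses.append(float(np.sqrt(np.sum((u_hat - u_true) ** 2) / u_true.size)))
    best = int(np.argmin(rmses))
    return t0_values[best], rmses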
Next, the different optimization programs were compared via Monte-Carlo simulations. Using the selected t0, we obtained reconstructions from full data by solving programs DKL-fTV, DL2-fTV, and DL1-fTV in Equations (4)-(6), and these reconstructions are shown in
In an effort to elicit the artifact source, we define:
g′m=gm−gs−gr and g′=Hû* (15)
where g′m denotes the measured data with scatter/randoms corrected for, and g′ the model data estimated from the convergent reconstruction û*, also with scatter/randoms corrected for. In particular, we use g′KL, g′L2, and g′L1 to specify explicitly the model data estimated, respectively, by use of programs DKL-fTV, DL2-fTV, and DL1-fTV.
With reference to
The foregoing results indicate that for the subject data sets the program DKL-fTV produces reconstructions of reasonable visual textures for both cold- and hot-rod sections. Thus, optimization program DKL-fTV was chosen as the optimization program for further investigations. (As can be seen from the example in
As shown in the pseudo-code (Algorithm) for the CP algorithm, latent image f defined in Equation (3) can also be reconstructed.
Reconstruction as a function of iterations is next considered.
The reconstructions above were obtained when the convergence conditions in (Equation (13)) were satisfied. To inspect how reconstructions evolve as a function of iterations,
In a next set of experiments, physical-phantom data studies were performed using an IEC phantom. We collected full data of ∼100 million total counts from the phantom by using a full-scan configuration in a digital prototype PET/CT system. From the full data, we extracted sparse data to mimic data collected with the sparse-scan configuration. The IEC phantom is composed of 6 fillable spheres of diameters 10, 13, 17, 22, 28, and 37 mm, respectively, in which the two largest spheres have zero activity, while the other four spheres are filled with positron-emitter activity at a concentration level four times the background-activity level. Scatter and random events were measured, and used as known components in the study. Images were reconstructed on a 3D array of 100×100×41 identical cubic voxels of size 4 mm.
A first task was determination of the image-constraint parameter t0. In the study, given the practical convergence conditions in Equation (13), all program parameters except the image-constraint parameter t0 were determined as described. Percent contrasts of hot and cold spheres and percent background variability, described in the Appendix, are standard metrics designed for evaluation of the reconstruction quality of the IEC phantom. In this study, full knowledge of the truth image is unavailable. Therefore, combining the metrics, we form in the Appendix a single quality metric, referred to as the QNR, for determination of t0. For a set of t0 values, convergent reconstructions from full data of the IEC phantom were obtained by use of the CP algorithm to solve program DKL-fTV in Equation (4).
The QNRs calculated from the reconstructions are plotted in
Reconstructions based upon different optimization programs were also investigated.
In view of the foregoing, reconstructions using optimization program DKL-fTV were chosen for further investigation.
We computed from the reconstructions quantitative metrics described in the Appendix, and display them in
Reconstruction as a function of iterations was next investigated. Convergent reconstructions û* of the IEC phantom were again obtained when the convergence conditions in Equation (13) were satisfied.
In addition to the phantom studies reported above with reference to
Determination of the image-constraint parameter t0 was first considered. Again, given the practical convergence conditions in Equation (13), all of the program parameters except image-constraint parameter t0 were determined as previously described. Unlike the phantom studies, in which quantitative metrics were used for selecting t0, for the human study t0 was selected based upon qualitative visual inspection.
With reference to
In view of the foregoing, the optimization program DKL-fTV was selected for use in further investigations. For a single bed position, using t0 selected as described with reference to
With reference to
With reference to
Reconstruction of the human images as a function of the number of iterations was next considered. Reconstructions of the human images were again obtained when the practical convergence conditions in Equation (13) were satisfied. We also investigated how the summed reconstruction of the human subject evolves as a function of the iteration number. To illustrate,
In a human subject, the scatter component gs, with attenuation-effect correction, is a program parameter that can be estimated from experimental measurements. The degree of estimation variability of gs can impact the reconstruction. To inspect the impact, we repeated the DKL-fTV reconstruction from full data of the human subject at bed position 3. Using gs obtained experimentally at bed position 3, we created hypothetically under- and over-estimated scatter events by scaling gs with a factor ranging from 0 to 2.
Data collected in PET imaging generally have signal-to-noise ratio (SNR) considerably lower than that of data in typical computed tomography (CT) imaging. This is a consequence of the low radioactivity of a radiopharmaceutical administered to a patient for PET imaging, compared with the much higher permissible x-ray beam flux commonly used in clinical CT imaging. Furthermore, transitions among different uptake regions or other clinically salient features in a PET-activity map are commonly observed to be generally not as sharp as transitions among anatomic regions in a CT image. This is a consequence of the typical spread distribution of radiopharmaceutical in organs and tissue, i.e. the radiopharmaceutical is not entirely contained within the organ or tissue of interest but rather its concentration is higher (by design of the radiopharmaceutical) in the organ or tissue of interest compared with surrounding tissue.
To accommodate these significant differences as compared with CT, it is disclosed herein to formulate the optimization program (Equation (2)) with the PET image u represented as a product of a latent image f and a Gaussian blurring matrix or blurring operator B, as shown in Equation (3). This formulation allows for a latent image with a sparser gradient-magnitude image than the desired image, and avoids yielding an image with significant patchy textures for PET data with low SNR. Further, as seen in the optimization program of Equation (2), a limit on the image total variability is enforced as a constraint, rather than as a term of an objective function that is optimized, thus separating enforcement of the total variability limit from the optimization objective function.
As shown herein, the form of the optimization program itself can also significantly affect PET-image reconstruction. The studies reported here indicate that the optimization program DKL-fTV employing the Kullback-Leibler divergence (Equation (4)) yields reconstructions superior to those obtained with the other two programs investigated (Equations (5) and (6)). In addition to the program form, numerous parameters used for specification of a program can have a significant impact on the final reconstruction. Among the parameters of the optimization program, image-TV-constraint parameter t0 was observed to strongly affect reconstruction properties.
Image reconstructions have been carried out in different studies involving objects with considerably distinct activity-uptake distributions of practical relevance and data with different quality/quantity conditions of interest. The results show that the reconstruction based upon program DKL-fTV appears to be robust for the different activity uptakes and data sets under consideration. Moreover, a study was conducted for image reconstruction from data collected (or simulated to be collected via extraction from the full data set) with a PET configuration containing only half of the detectors in a digital prototype PET/CT scanner (the sparse configuration of
The use of image total variation (image TV) as a constraint has numerous advantages. As a constraint, the impact of the image TV on the image is readily understood: it enforces an upper limit on the permissible total image variability. This can be seen, for example, by considering the limiting cases. If t0 approaches zero then no image variability is permitted, resulting in a flat (i.e. perfectly smooth) image. By contrast, if t0 becomes sufficiently large then the image variability constraint is effectively removed, as the constraint does not impact the image no matter how much image variability is present. In general, a smaller value of t0 biases toward a smoother image, albeit at a possible loss of some detail; whereas a larger value of t0 biases toward improved image sharpness, albeit at a possible increase in overall image noise. However, unlike the case of applying a post-reconstruction image smoothing filter, applying the image TV constraint during image reconstruction generally does not adversely impact overall image contrast.
Another advantage of using image TV as a constraint is that it is generally operable with other constraints, and the various constraints can be considered to be operating (at least approximately) independently. For example, in the optimization program examples of Equations (2) and (4)-(6), in addition to the TV constraint an additional positivity constraint is applied (the constraint fj≥0).
While the illustrative image TV constraint is shown by experiments reported herein to provide enhanced image quality for both full and sparse data sets, in other embodiments another constraint performing an analogous function could be used. For example, more generally a norm of a sparse gradient-magnitude image could be used as a constraint in place of the illustrative image TV constraint. Even more generally, a norm of another sparsifying image transform such as a wavelet, curvelet, or Fourier transform could be used as the constraint. The generalized form of the image variability constraint may be written as ∥T(u)∥≤t0, where u is the reconstructed image at a current iteration of the constrained optimization program, T(u) is the sparsifying image transform (e.g. wavelet, curvelet, Fourier transform, et cetera) and outputs a transformed version of the image u, and ∥ . . . ∥ is a norm, i.e. a function that outputs a strictly positive scalar value for the transformed image T(u). In the example of Equations (2) and (3), T(u)=f=B−1u and the norm ∥ . . . ∥ is the image total variation norm, i.e. ∥ . . . ∥TV. (In the limiting case where B is the identity matrix, T(u)=u.) These constraints generally serve to constrain the maximum permissible image variability, albeit less directly than the illustrative image TV constraint. Note that in this generalized framework the positivity constraint fj≥0 becomes [T(u)]j≥0.
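The generalized constraint may be expressed abstractly as sketched below; the Fourier-transform instantiation is merely one illustrative choice, and a full reconstruction would enforce the constraint by a projection step within the iterative algorithm rather than by a simple check.

import numpy as np

def variability_constraint_satisfied(u, transform, norm, t0):
    # Generalized image-variability constraint ||T(u)|| <= t0, where
    # transform implements T and norm returns a strictly positive scalar.
    return norm(transform(u)) <= t0

# Illustrative instantiation with T a Fourier transform and an l1 norm:
ok = variability_constraint_satisfied(
    np.random.rand(32, 32, 16),
    transform=np.fft.fftn,
    norm=lambda c: float(np.abs(c).sum()),
    t0=1.0e6,
)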
While the illustrative embodiments are directed to PET imaging, it will be appreciated that the disclosed image reconstruction techniques are readily applied to other types of emission imaging data, such as time-of-flight (TOF) PET imaging data or single photon emission computed tomography (SPECT) imaging data, with suitable adjustment of the model of Equation (1). In designing the reconstruction for such imaging data, the disclosed approaches for selecting and/or optimizing parameters of the optimization program (parameters and form, e.g. choice of divergence) are suitably performed to tailor the reconstruction.
The benefits of the disclosed image reconstruction with image variability constraint demonstrated for full and sparse PET data sets as per
In general, it is expected that performing an integrated design of both the sparse detector configuration (e.g., selecting the pattern, or randomness, of omitted detector elements) and the image reconstruction with image total variability constraint (selecting/optimizing the optimization program form and parameters) should yield improved performance for the overall system combination including the PET scanner (or SPECT gamma camera) with sparse detector array(s) and the image reconstruction algorithm. In this regard, it should be understood that the sparse detector array configuration 12S of
Yet another advantage of the disclosed approach employing iterative reconstruction with an image variability constraint is that the constraint can be tuned, or “dialed in” to accommodate run-to-run differences in imaging data, and/or to accommodate different clinical tasks that may benefit from different trade-offs between, on the one hand, highly smooth image texture (enforced with a lower maximum allowable image variability, e.g. lower t0 in the illustrative optimization programs employing image TV); and, on the other hand, high image sharpness (obtained by a higher maximum allowable image variability that effectively relaxes the image variability constraint, e.g. higher t0 in the illustrative optimization programs employing image TV).
With reference to
The illustrative t0 dial-in tables and/or formulas 54 include a task-t0 look-up table 56 that includes (task, t0) pairs. This look-up table 56 is suitably constructed by a skilled radiologist, for example, using the visual inspection approach described with reference to
The illustrative t0 dial-in tables and/or formulas 54 further include a data quantity t0 adjustment 56 that adjusts t0 based on the quantity of acquired PET data. In general, if a large quantity of data is available (for example, due to a long imaging data acquisition time and/or high radiopharmaceutical dosage used in the operation 50) then the image TV constraint parameter t0 is adjusted upward to impose a less aggressive constraint on the permissible image variability, in the expectation that the large data set should produce a correspondingly high quality reconstructed image without aggressive variability constraint. By contrast, if a small quantity of data is available (for example, due to a short imaging data acquisition time and/or low radiopharmaceutical dosage used in the operation 50) then the image TV constraint parameter t0 is adjusted downward to impose a more aggressive constraint on the permissible image variability, in the expectation that the small data set may lead to large artifacts that should be countered by more aggressive constraint on the image total variability. The data quantity adjustment 56 may be implemented as a look-up table (e.g. assigning various t0 values to different quantity range bins) or as an empirical formula, e.g. of the form t0=f(N) where N is a metric of the quantity of acquired PET data. In some embodiments the quantity adjustment 56 may be tied to the clinical task by integrating the adjustment 56 into the task look-up table 54 (e.g., having different data quantity adjustment formulas defined for different clinical tasks). While a quantity adjustment 56 is illustrated, other data set-specific t0 adjustments may be similarly made, e.g. based on PET scanner configuration, the particular PET scanner that acquired the data set (in radiology laboratories having multiple PET scanners connected to a common reconstruction system) or so forth.
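By way of a hypothetical sketch, the look-up-table and data-quantity adjustment described above might be combined as follows; the table contents, bin boundaries, and multipliers shown are placeholders and are not values taken from the disclosure.

def dial_in_t0(task, data_quantity, task_table, quantity_bins):
    # task_table: dict mapping clinical task -> baseline t0 (from a
    # radiologist-constructed (task, t0) table); quantity_bins: ascending
    # (min_counts, multiplier) pairs, larger data sets relaxing the constraint.
    t0 = task_table[task]
    multiplier = 1.0
    for min_counts, m in quantity_bins:
        if data_quantity >= min_counts:
            multiplier = m
    return t0 * multiplier

# Hypothetical example values (placeholders only):
task_table = {'oncologic_staging': 0.8, 'lesion_detection': 1.2}
quantity_bins = [(0, 0.8), (50e6, 1.0), (150e6, 1.2)]
t0_selected = dial_in_t0('lesion_detection', 100e6, task_table, quantity_bins)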
With continuing reference to
Using CB,t,k to denote the average background activity within sub-ROI t of the size of sphere k, we define an average background activity corresponding to sphere k as CB,k=(1/T)Σt=1,…,T CB,t,k. With this, percent contrasts QH,j and QC,i for hot sphere j, where j=1, 2, 3, and 4, and cold sphere i, where i=5 and 6, and background variability Nk for sphere k, where k=1, 2, 3, 4, 5, and 6, defined in NEMA NU 2-2012, are calculated as:
where aH and aB denote truth activity concentrations in a hot sphere and background, aH=4×aB, and CH,j and CC,i are the average activities within hot sphere j and cold sphere i.
We define a metric, which takes into account the trade-off between contrast and background noise, as:
where QH,2 and QC,5 denote percent contrasts for hot sphere s2 of diameter 13 mm and for cold sphere s5 of diameter 28 mm, and N2 and N5 denote the percent background variabilities corresponding to the two spheres.
The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
This application is the U.S. National Phase application under 35 U.S.C. § 371 of International Application No. PCT/IB2017/050776 filed Feb. 13, 2017, published as WO 2017/149399 on Sep. 8, 2017, which claims the benefit of U.S. Provisional Patent Application No. 62/301,187 filed Feb. 29, 2016. These applications are hereby incorporated by reference herein.
This invention was made with Government support under grants CA158446, CA182264, and EB018102 awarded by the National Institutes of Health (NIH). The Government has certain rights in this invention.
Liang et al., “Implementation of non-linear filters for iterative penalized maximum likelihood image reconstruction”, Nuclear Science Symposium Conference, 1990.
Watson et al., “A single scatter simulation technique for scatter correction in 3D PET”, in 3D Img. Recon. Radiol. Nucl. Med., Springer, 1996, pp. 255-268.
Badawi et al., “Randoms variance reduction in 3D PET”, Phys. Med. Biol., vol. 44, no. 4, p. 941, 1999.
Chambolle and Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging”, J. Math. Imag. Vis., vol. 40, pp. 1-26, 2011.
Browne et al., “A row-action alternative to the EM algorithm for maximizing likelihood in emission tomography”, IEEE Trans. Med. Imag., vol. 15, no. 5, pp. 687-699, 1996.
Zhang et al., “Investigation of optimization-based reconstruction with an image-total-variation constraint in PET”, Physics in Medicine & Biology, vol. 61, no. 16, 2016.
Burger et al., “A Nonlinear Variational Approach to Motion-Corrected Reconstruction of Density Images”, 2015.
Sidky et al., “Convex optimization problem prototyping with the Chambolle-Pock algorithm for image reconstruction in computed tomography”, Phys. Med. Biol., vol. 57, no. 10, pp. 3065-3091, 2012.
Kaganovski et al., “Alternating minimization algorithm with automatic relevance determination for transmission tomography under Poisson noise”, 2015.
Bian et al., “Optimization-based Image Reconstruction from Sparse-view Data in Offset-Detector CBCT”, Phys. Med. Biol., vol. 58, no. 2, pp. 205-230, 2013.