This invention relates generally to estimating myocardial blood flow (MBF) and more specifically to a positron emission tomography (PET) based method for estimating myocardial blood flow (MBF).
Imaging of blood flow through the heart and associated veins can improve diagnosis and treatment of cardiac diseases. In particular, estimation of myocardial blood flow or blood flow through heart muscular tissue can be useful as described below.
By one approach, nuclear based medicine can be used to produce useful medical images. In such an approach, radioactive elements are introduced into the bloodstream such that when the radioactive elements experience a radioactive decay, the byproducts of that decay (positrons, which annihilate with nearby electrons to produce detectable photons) can be sensed to produce an image of the area where the radioactive elements are placed. An example approach to this kind of imaging is called positron emission tomography (PET). Several radioactive elements, called positron emitting tracers, are available for these studies with the most common being 82Rb and 13N-Ammonia.
Currently, PET is a primary method for noninvasively determining coronary flow reserve. Coronary flow reserve can be defined as the ratio of maximum hyperemic flow to baseline flow. In normal patients this ratio typically ranges between 3-5; it is essentially a measure of the function of coronary circulation and is particularly useful in the detection of early abnormalities due to coronary artery disease. Because the coronary flow reserve determination is a ratio, it is unaffected by a uniform reduction in both baseline and maximal flow.
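For illustration only (the flow values here are assumed, not taken from any particular study): a patient with a baseline flow of 0.8 mL/min/g and a maximum hyperemic flow of 2.4 mL/min/g has a coronary flow reserve of 2.4/0.8=3, within the normal range. If both flows were uniformly halved to 0.4 and 1.2 mL/min/g, the ratio would still be 3, which is why the ratio by itself cannot reveal a uniform, diffuse reduction in flow.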
Unfortunately, coronary flow reserve does not reflect true vasodilation capacity. A reduction in coronary flow reserve could be caused either by increased flow in the baseline state or by reduced maximum hyperemic flow. Factors that increase myocardial oxygen demand, for example hypertension, increased left ventricular wall stress, increases in inotropic state, and tachycardia, can lead to an increased basal flow. Differentiating between this case and the reduced maximal hyperemic flow due to significant coronary stenosis is difficult without absolute myocardial blood flow measurements. Measurements of hyperemic blood flow in absolute units provide a more direct estimate of vasodilation capacity. Accordingly, only by accurate determination of absolute myocardial blood flow can the existence of uniform diffuse disease be determined.
Since the early 1990s there have been validated techniques for estimating absolute myocardial blood flow. Nevertheless, absolute myocardial flow estimation has not been adopted for routine use in a clinical setting because of technical limitations. These limitations can include lack of technical expertise in a clinical setting, time taken to perform the calculations, and the lack of widely available commercial products to perform the calculations and display the results. On the other hand, numerous reports indicate the effect of various interventions or conditions on absolute myocardial blood flow. Yet, calculating absolute blood flow for clinical studies remains rare. The result is that diagnostic decisions are usually based on relative myocardial blood flow or relative changes in myocardial blood flow between rest and stress, often aided by a software tool that compares images to a normal database.
There are at least three different kinetic models that have been used to understand the distribution over time of flow tracers in myocardial tissue. These works include spillover correction because of a finite resolution of the scanner and because the myocardium is moving during the scan. In one known approach, factor analysis was used to obtain spillover independent time activity curves of the right ventricle (RV) and left ventricle (LV) and myocardial blood tissue. By using curves generated from factor analysis, the spillover component in the model can be eliminated in theory; however, factor analysis does not correct for the under measurement due to the partial volume effect. Correction for this would require the use of a contrast recovery coefficient. Methods for addressing the non-uniqueness problem of kinetic modeling have been proposed. Also, kinetic modeling directly from sinograms from a dynamic sequence has been suggested.
In the following, let aij denote the activity in voxel i of frame j. In factor analysis, it is assumed that the activity is a linear combination of K primary factor curves, where the summation coefficients are the factor weights {cik}; that is, aij is modeled as Σk=1K cikƒkj.
The primary factor curves for this application are the right ventricular blood pool, the left ventricular blood pool, and the myocardial tissue curve. The mathematical task is to find both the factors and coefficients so that the linear combination of factor curves for every pixel in the image matches the measured curve as closely as possible. This problem is constrained by requiring that the tissue curves and the linear coefficients are all positive.
Ammonia or Rubidium uptake is generally analyzed with a two or three compartment model. The models all have a blood compartment in contact with an extracellular free distribution compartment, which is in turn in contact with a metabolically trapped compartment. These models are much easier to calculate if it is assumed that the clearance from the metabolically trapped compartment is zero (or near zero) over the duration of the experiment. As a result, for accurate myocardial blood flow modeling, several authors recommend collecting and analyzing only two minutes of data. When using smooth data generated by averaging all pixels within a large region of interest, this is a reasonable approach.
While there have been significant advances in the art, further advances are possible. For example, it is desirable to have a myocardial blood flow analysis with greater accuracy than is presently known in the art.
Generally speaking, pursuant to these various embodiments, techniques are described for estimating myocardial blood flow (MBF) in each voxel in the myocardium, and more specifically a method for estimating myocardial blood flow in each voxel in the myocardium using a pharmacological kinetics based factor analysis of dynamic structures (K-FADS) model, using a discretization that transforms the continuous-time K-FADS model into a discrete-time K-FADS model, and then applying an iterative algorithm, such as a Voxel-Resolution myocardial blood flow (V-MBF) algorithm.
In one approach, a myocardial blood flow analysis includes a processing device applying a pharmacological kinetic model to a data set stored in a storage device. The data set may be compiled from a PET scan or other imaging approach that can monitor fluid flow in a voxel set. For example, the data set may be derived from an imaging technique based on monitoring fluid based tracers in a left ventricle, a right ventricle, and myocardium of a patient or animal subject. By one aspect, the pharmacological kinetic model includes incorporating a model of changing concentrations of bound fluid based tracers, unbound fluid based tracers, and blood plasma fluid based tracers into a standard factor analysis of dynamic structures model combined with a model of fluid based tracer activity in the left ventricle as a time shifted and dispersed function of blood flow from the right ventricle. In another approach, the tracer activity is modeled without assumption that a right ventricle tissue curve and a left ventricle tissue curve obey a particular mathematical relationship. The processing device is configured to output a processed data set based on the application of the pharmacological kinetic model to the data set for providing a representation of blood flow in the myocardium. The processed data set may be usable to create a visual representation, an audio representation, or a textual description of the myocardial blood flow using known methods for conveying such information.
In other aspects, the processing device optionally estimates parameters of the standard factor analysis of dynamic structures model. The estimating may be done by estimating maximum values of fluid based tracer activity in one or both of the right ventricle or the left ventricle and modifying a corresponding signal vector value for the one of the right ventricle or the left ventricle using the estimated maximum values of fluid based tracer activity. In still another approach, the estimating may be done by estimating a left ventricle time activity curve, a right ventricle time activity curve, and a time activity curve, wherein the left ventricle time activity curve is assumed to be approximately equal to a response of an mth-order, all-pole filter applied to the right ventricle time activity curve, and determining a set of parameters that produce the smallest least-squares error for the pharmacological kinetic model. This estimation may include, for a given initial estimated right ventricle time activity curve and a given initial estimated left ventricle time activity curve, determining initial estimates for parameters of the pharmacological kinetic model.
Where the model is applied without assumption that a right ventricle tissue curve and a left ventricle tissue curve obey a particular mathematical relationship, a least squares objective function can be applied to obtain estimates for parameters of the pharmacological kinetic model. In one such approach, the least squares objective function is minimized by applying a majorize-minimize optimization technique to iteratively estimate the right ventricle tissue curve and the left ventricle tissue curve. At initialization, an initial estimated right ventricle time activity curve and an initial estimated left ventricle time activity curve are determined, and these initial estimates are used to determine initial parameters of the pharmacological kinetic model. Where the data is noisy, the processing device smooths estimates for blood flow for a given voxel by applying a limiting factor or penalty factor on data from voxels located within a given distance from the given voxel. Optionally, the pharmacological kinetic model includes a semi-parametric model configured to drive time activity curves for the right ventricle imaging activity from the fluid based tracers and the left ventricle imaging activity from the fluid based tracers to zero over time.
So configured, a more accurate derivation of myocardial blood flow is possible through the application of these modeling techniques. The advantages include increased resolution and incorporation of kinetics. Further, unlike most known iterative algorithms for MBF estimation, such approaches explicitly describe a general procedure for initializing such algorithms. Other features will become more apparent to persons having ordinary skill in the art from the following description and claims.
The foregoing features, as well as other features, will become apparent with reference to the description and figures below, in which like numerals represent like elements, and in which:
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used herein have the ordinary technical meaning as is accorded to such terms and expressions by persons skilled in the technical field as set forth above except where different specific meanings have otherwise been set forth herein.
A method and apparatus for estimating myocardial blood flow (MBF) in each voxel in the myocardium is described. The algorithm is based on a factor analysis of dynamic structures (FADS) model that has been enhanced to constrain the factor analysis curves to be physiologically appropriate.
Turning now to the figures,
I. A First Approach
The kinetic behavior of ammonia in the myocardium is modeled with the compartment model shown in
where Cp is the ammonia concentration in blood plasma, Cu is the concentration of the free (i.e., unbound) ammonia, and Cb is the concentration of the trapped (i.e., bound) ammonia.
In one approach, a model is applied that incorporates a pharmacological kinetic model with the standard FADS model, which is a model where each time activity curve is assumed to be a linear combination of factor curves. The resulting model, which can be called the pharmacological kinetics based factor analysis of dynamic structures (K-FADS) model, provides a means for estimating factor curves in the myocardium that are physiologically meaningful. Further, a discretization is performed to transform continuous-time K-FADS model into a discrete-time K-FADS model. It should be noted that there is a simple relationship between the discrete-time and continuous-time K-FADS parameters.
Next, an iterative algorithm can be applied, such as the Voxel-Resolution myocardial blood flow (V-MBF) algorithm. This algorithm can iteratively estimate the MBF for each voxel in the myocardium. The V-MBF algorithm is reliably initialized using an input-output system identification method described by Steiglitz-McBride. This method can be applicable to a class of discrete-time systems that includes the discrete K-FADS model. This V-MBF algorithm was evaluated subjectively and objectively through experiments conducted using synthetic data and found to be reliable in considered test scenarios. Accordingly, the method disclosed herein is feasible for determining physiologically meaningful estimates of absolute MBF.
The initial model is a continuous-time K-FADS model from which a discrete-time K-FADS model can be obtained by applying a bilinear transform to the continuous-time K-FADS model. It should be noted that there is a simple relationship between the discrete-time and continuous-time K-FADS parameters. Using a systems theory framework, the problem of estimating the discrete-time K-FADS parameters is related to the problem of identifying a discrete-time system from given input and output data (i.e., input-output system identification). The V-MBF algorithm iteratively estimates the MBF for each voxel in the myocardium. The V-MBF algorithm can be initialized using an input-output system identification method, such as one described by Steiglitz-McBride, which is applicable to a class of discrete-time systems that includes the discrete K-FADS model. Each of these aspects will be described in turn with respect to a first approach to estimating the MBF for voxels of the cardiac system.
Continuous-Time K-FADS Model
For convenience, the RV factor, LV factor (i.e., Cp), Cu, and Cb can be denoted by the continuous-time functions f1(t), f2(t), fi,3(t), and fi,4(t), respectively, where i is the voxel index. With this notation it follows that f1 and f2 represent the activity concentration of the ammonia in the right ventricle and left ventricle, respectively. Additionally, fi,3(t) and fi,4(t) represent the activity concentration of free and trapped ammonia in myocardial tissue, respectively. It can be seen that it is assumed that the RV (right ventricle) and LV (left ventricle) factors are spatially constant (i.e., the RV and LV factors are voxel independent). In the continuous-time K-FADS model the LV factor is modeled as a time shifted and dispersed function of the RV factor
ƒ2(t)=γƒ1(t)*exp(−β(t−τ))u(t−τ) (4)
where γ, β, and τ (tau) are the unknown gain, time constant, and delay of the LV, respectively. Specifically, τ (tau) accounts for the fact that the ammonia activity first appears in the right ventricle and then, after a period of time, appears in the LV. The function u is the unit step function, and the notation * denotes the continuous-time convolution operator. The model for the LV factor in (4) can be motivated by observations of “isolated” RV and LV factors obtained from dynamic PET sequences and, in part, by the need for mathematical tractability. For example, consider the case where the parameters k1, k2, and k3 are pixel dependent and voxel i lies in the myocardium; then, applying the Laplace transform to (3) leads to the following expressions for the activity concentration of the free and trapped ammonia in voxel i
ƒi,3(t)=ki,1ƒ2(t)*exp(−(ki,2+ki,3)t)u(t) (5)
ƒi,4(t)=ki,3ƒi,3(t)*u(t). (6)
It is noted that k1,1, k2,1, . . . , kI,1 are the preferred MBF parameters. In keeping with the assumptions behind (1), the activity for the ith pixel can be expressed as
ai(t)=ci,1ƒ1(t)+ci,2ƒ2(t)+ci,3ƒi,3(t)+ci,4ƒi,4(t). (7)
The first term in (7) can be identified as the amount of spillover from the right ventricle, and the second term can be a combination of the ammonia activity in blood vessels within the myocardium and spillover from the left ventricle. More specifically, the constant ci,1 accounts for the amount of the measured radioactivity in voxel i in the case of a PET scan that is due to the blood plasma in the RV. Further, the constant ci,2 accounts for the amount of the measured radioactivity in voxel i that is due to blood plasma in the LV (i.e., LV spill over) and blood plasma in the blood vessels of the myocardium. For this approach, it is assumed that ci,2=0.05. The third and fourth terms in (7) are the activity of the free and trapped ammonia in the myocardial tissue, respectively. The coefficients ci,3 and ci,4 represent the fractional volume of voxel i that can be occupied by the radiotracer in either the free or trapped states. Given the free space for water in myocardial tissue is approximately 80 percent, it is assumed that ci,3=ci,4=0.8.
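As a concrete illustration of equations (4) through (7), the following Python sketch synthesizes a single-voxel TAC numerically from an assumed RV factor. The parameter values mirror the simulation study described later in this section, the rate constant ki,1 is an assumed value for one voxel, and the continuous-time convolutions are approximated on a fine time grid; this is a minimal sketch of the model, not the estimation method itself.

import numpy as np

# Illustrative sketch of the continuous-time K-FADS model, equations (4)-(7).
# Numerical values are assumptions chosen only for demonstration.
dt = 0.05                                    # time step (seconds) of the grid
t = np.arange(0.0, 300.0, dt)                # five-minute time grid
u = lambda x: (x >= 0).astype(float)         # unit step function

# Assumed RV factor f1(t); the curve of equation (49) is reused here.
A, t0, alpha = 2700.0, 14.0, 0.12
f1 = A * (t - t0) * np.exp(-alpha * (t - t0)) * u(t - t0)

# Equation (4): LV factor as a time-shifted, dispersed version of the RV factor.
gamma, beta, tau = 0.2, 0.1, 13.0
f2 = gamma * np.convolve(f1, np.exp(-beta * (t - tau)) * u(t - tau))[:len(t)] * dt

# Equations (5) and (6): free and trapped ammonia concentrations in voxel i.
k1, k2, k3 = 0.03, 0.001, 0.01               # assumed rate constants (1/s)
fi3 = k1 * np.convolve(f2, np.exp(-(k2 + k3) * t))[:len(t)] * dt
fi4 = k3 * np.convolve(fi3, np.ones_like(t))[:len(t)] * dt   # convolution with u(t)

# Equation (7): the voxel TAC as a weighted sum of the four factors.
ci1, ci2, ci3, ci4 = 0.03, 0.05, 0.8, 0.8    # spillover and volume fractions
ai = ci1 * f1 + ci2 * f2 + ci3 * fi3 + ci4 * fi4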
Within the descriptions herein, it can be convenient to let f(t)=f1(t) and ci=ci,1. From equations (4), (5), (6), and straightforward calculations, the functions fi,3(t) and fi,4(t) can be written as:
By substituting these equations into (7) it can be shown that
It can also be shown from straightforward calculations that
Thus, given estimates for the parameters β, {ki}, {ci}, {bi,1}, {bi,2}, and {bi,3}, and using the assumed values for {ci,2} and {ci,3}, the MBF parameters {ki,1} can be estimated from equation (15). As stated above, it is assumed that ci,2=0.05 and ci,3=0.8 for all i.
Discrete-Time K-FADS Model
The next aspect of this approach is to address the problem of estimating the parameters β, {ki}, {ci}, {bi,1}, {bi,2}, and {bi,3} from discrete-time Time Activity Curve (TAC) data because continuous-time TAC data is not available in practice.
In practice, only samples of the TACs are available so let yi[n], i=1, 2, . . . I and x[n] denote the discrete-time signals obtained by sampling the ith TAC ai(t) and RV factor respectively
yi[n]=ai(nT), n=0,1, . . . ,N−1 (16)
x[n]=ƒ(nT), n=0,1, . . . ,N−1, (17)
where ƒs=1/T is the sampling rate and T is the sampling interval. It is noted that in applications where scan durations for dynamic sequence PET protocols are not uniform, the assumption that the TACs are sampled uniformly is inappropriate. However, uniform samples of the TACs can be obtained from non-uniform samples of the TACs via a suitable interpolation. It follows that, in order to estimate the MBF parameters, the parameters β, {ki}, {ci}, {bi,1}, {bi,2}, and {bi,3} are estimated from the data yi[n], n=0, 1, . . . , N−1. It can be observed that the parameters {ci} are really nuisance parameters because they do not show up in the expression for ki,1 (see equation (15)).
It is of interest to determine a discrete-time system with the property that its response to the discrete-time RV factor x[n] closely approximates the ith discrete-time TAC yi[n]. The bilinear transformation is a way to transform a linear time-invariant continuous-time system into a linear time-invariant discrete-time system. A limitation of the bilinear transform is that a delay in a continuous-time system must be an integer multiple of the sampling interval. Taking the Laplace transform of (11), we get the following relationship
As a result, it follows that the system function of the overall continuous-time system is given by
Assuming the delay τ is a multiple of the sampling interval T, the system function of the desired discrete-time system, Hi(z), can be obtained by applying the bilinear transformation to the overall continuous-time system Hi,tot(s)
where τ=dT for some integer d, r1=1, r2=(2/T+β)−1(2/T−β), ri,3=(2/T+ki)−1(2/T−ki), b′i,1=bi,1(2/T)−1, b′i,2=bi,2(2/T+β)−1, and b′i,3=bi,3(2/T+ki)−1.
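For clarity, a small sketch of the continuous-to-discrete parameter mapping written out above follows; the continuous-time values used below are assumptions for illustration.

def bilinear_pole_map(a, T):
    """Map a continuous-time pole at s = -a to its discrete-time location under
    the bilinear transform: r = (2/T + a)^-1 (2/T - a)."""
    return (2.0 / T - a) / (2.0 / T + a)

T = 0.05       # sampling interval in seconds, as in the simulation study below
beta = 0.1     # assumed LV dispersion constant
ki = 0.011     # assumed k_{i,2} + k_{i,3} for one voxel

r2 = bilinear_pole_map(beta, T)     # r2 = (2/T + beta)^-1 (2/T - beta)
ri3 = bilinear_pole_map(ki, T)      # r_{i,3} = (2/T + ki)^-1 (2/T - ki)
# The gain terms transform analogously, e.g. b'_{i,2} = b_{i,2} (2/T + beta)^-1.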
It is noted that for this approach, the z-transform is used. Here let g[n] be an arbitrary discrete-time sequence. The z-transform of g[n] can be defined as
It should be noted that the delay term (i.e., z−d term) in equation (21) follows from the τ second delay in the continuous-time K-FADS model, which is the delay between activity appearing in the right and left ventricles.
As known in the art, the bilinear transformation has the property that it maps a stable continuous-time system into a stable discrete-time system. Moreover, the bilinear transformation avoids the problem of aliasing by mapping the jΩ axis into the unit circle of the complex plane. However, “frequency warping” can occur as a result of mapping the entire jΩ axis into the unit circle. Note, the frequency warping problem can be ameliorated by choosing a sufficiently high sampling rate fs.
It follows from the definition in equation (21) that the discrete-time K-FADS model for the ith TAC can be represented by the following input-output relationship
where θi=[b′i,1, b′i,2, b′i,3, ci] (recall r1=1). The notation Hi(z; d, r2, ri,3, θi) explicitly illustrates the dependence of the ith system function on the unknown parameters. For the discussion below, it is beneficial to define the following notation:
θr=[r1,3,r2,3, . . . ,rI,3]
θb′=[b′1,1,b′2,1, . . . ,b′I,1,b′1,2,b′2,2, . . . ,b′I,2,b′1,3,b′2,3, . . . ,b′I,3]
c=[c1,c2, . . . ,cI]. (23)
The problem of interest is to estimate parameters of the discrete-time K-FADS model from the sampled TACs yi[n], i=1, 2, . . . , I. Therefore, the V-MBF algorithm of this approach solves the following least-squares problem
Additionally, Sθ is a feasible set of the parameters θ and hi[n; d, r2, ri,3, θi] is the inverse Z-transform of Hi(z; d, r2, ri,3, θi).
In the development of the V-MBF algorithm, the problem (P) is simplified by assuming that the discrete-time delay d is known. Next, this assumption is removed by estimating the parameter d along with the other parameters of the discrete-time K-FADS model. An initialization method is applied that exploits the well known Steiglitz-McBride algorithm.
A. Discrete-Time Delay Known
To minimize the objective function φ in (24), an algorithm is developed based on the group coordinate-descent method. By use of the term group coordinate-descent method it is understood that, in a cyclic fashion, the objective function φ is minimized with respect to a set of parameters while the other parameters are fixed.
Let d0 denote the known discrete-time delay. Given initial estimates, x(0), r2(0), θr
Solution to Step 1 of the V-MBF Algorithm with Known Delay
In the solution to Step 1 of the V-MBF algorithm with known delay d, it is convenient to denote the next estimate for the RV factor as
Step 1.2 Let zj+1=zj+λjej. If the jth component of zj+1 is negative, then this value is set to zero. Note, this operation accounts for the nonnegativity constraint of the discrete-time RV factor. If j<N, then increment j and repeat Step 1.1. Otherwise, if j=N, then go to Step 1.3.
Step 1.3 Let
Solution to Step 2 of the V-MBF Algorithm with Known Delay
Again, the simplicity of the coordinate descent method is exploited to compute a solution to the problem in Step 2. However, the coordinate descent method is expressed in a manner that is more convenient for this problem:
It should be observed that in Step 2.2 advantage is taken of the fact that the objective function φ is de-coupled in terms of the parameters r1,3, r2,3, . . . , rI,3. Also, a 1-D line search algorithm such as the golden section method can be used to solve the 1D minimization problems in Steps 2.1 and 2.2.
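Since Steps 2.1 and 2.2 each reduce to a 1-D minimization, a minimal golden-section search sketch is given below; the bracketing interval and the objective wrapper are assumptions made only for the example.

import math

def golden_section_minimize(phi, lo, hi, tol=1e-6):
    """Minimize a unimodal 1-D function phi on the interval [lo, hi]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # inverse golden ratio
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Example: minimize the objective of (24) over r2 with the other parameters fixed,
# where phi_of_r2 is an assumed wrapper around that objective.
# r2_new = golden_section_minimize(phi_of_r2, -0.999, 0.999)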
Solution to Step 3 of the V-MBF Algorithm with Known Delay
Referring to equation (25), it follows that the problem in equation (28) is equivalent to the following problems, i=1,2, . . . , I,
where, wp(m+1)[n], p=1, 2, is the inverse Z-transform of
and wi,3(m+1)[n] is the inverse Z-transform of
The optimization problem in equation (30) is a linear least-squares problem under the constraint 0≦ci≦1. If the constraint on ci is ignored, then the update θi(m+1) could be computed by solving the normal equations associated with the least-squares objective function in (30). Thus, if the solution to an unconstrained version of the least-squares problem in (30) is such that 0≦ci(m+1)≦1 for all i, then no other steps should be necessary. Alternatively, if, without loss of generality, the constraint is not satisfied for i=i0, then additional steps should be taken. A straightforward strategy could be to first compute the updates b′i
The update ci
Iterating between equations (33) and (34) may lead to improved estimates for ci
Discrete-Time Delay Unknown
In the example above, the discrete-time delay d was assumed as known. Nevertheless, in practice it must be estimated. Let the integers dmin and dmax be the assumed minimum and maximum values for d. Then, the complete V-MBF algorithm follows for d=dmin, . . . , dmax.
1. minimize φ(x, d, r2, θr
2. Store parameter estimates and value of least-squares objective function end
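A minimal sketch of this outer loop over the unknown discrete-time delay follows; fit_for_fixed_delay is an assumed routine that runs the known-delay V-MBF iterations and returns the parameter estimates together with the value of the least-squares objective.

# Hypothetical outer loop over the candidate delays dmin, ..., dmax.
# fit_for_fixed_delay(d) is an assumed routine that runs Steps 1-3 above for a
# fixed delay d and returns (parameter_estimates, objective_value).
def estimate_over_delays(fit_for_fixed_delay, d_min, d_max):
    best = None
    for d in range(d_min, d_max + 1):
        estimates, objective_value = fit_for_fixed_delay(d)
        if best is None or objective_value < best[2]:
            best = (d, estimates, objective_value)
    return best    # delay, parameter estimates, and smallest least-squares error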
The preferred estimates for the K-FADS parameters in this approach produce the smallest least-squares error. The estimates for the MBF parameters are obtained using equation (15) and the estimates:
and the assumed values ci,2=0.05 and ci,3=0.8 for all i.
Initialization Procedure
To start the V-MBF algorithm, initial estimates for the RV factor x, r2, and θr
To develop a method for computing an initial estimate, first observe from equation (22) that, for some 3rd-order polynomial Qi(z) and 2nd-order polynomial P′i(z), the z-transform of the ith TAC is given by
where the roots of Qi(z) are r1, r2, and ri,3. Alternatively, for some polynomial P′i(z), equation (40) can be written as
where the unknown numerator polynomial is of the form Pi(z)=pi,0+pi,1z−1+pi,2z−2+pi,3z−3+pi,dz−d+pi,d+1z−(d+1)+pi,d+2z−(d+2)+pi,d+3z−(d+3) because Pi(z)=ciQi(z)+P′i(z)(1+z−1)z−d. In other words, each TAC is the output of an autoregressive moving-average (ARMA) model known in the art that is driven by the RV factor x[n].
Given an input-output pair for a linear, time-invariant system modeled as an ARMA model, the Steiglitz-McBride algorithm can provide estimates for ARMA parameters. Thus, given a TAC, yi[n], and initial RV factor x(0), the Steiglitz-McBride algorithm can be used, which is an iterative algorithm, to estimate Pi(z) and Qi(z). The Steiglitz-McBride algorithm can be summarized below
where the discrete-time Fourier transform is used to obtain:
Also Pi(m)(ejω), Qi(m)(ejω), pi(m+1), and qi(m+1) can be similarly defined, and Qi(0)(ejω)=1 is the chosen initial estimate for Qi(ejω). Note, from Parseval's theorem, the objective function in (42) can be equivalent to a linear least-squares objective function. Thus, at the mth iteration, the Steiglitz-McBride algorithm entails a filtering step (i.e., initial RV factor x(0)[n] and ith TAC, yi[n], are filtered by 1/Qi(m)(z)) and minimization of a linear least-squares objective function.
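For reference, a compact sketch of a generic Steiglitz-McBride iteration is given below. This is a textbook-style implementation under assumed numerator and denominator orders, not the exact formulation above (which uses the specific numerator structure of equation (41) and the delay d); at each iteration the input and output are prefiltered by 1/Q(m)(z) and a linear least-squares problem is solved.

import numpy as np
from scipy.signal import lfilter

def steiglitz_mcbride(x, y, nb, na, n_iter=10):
    """Generic Steiglitz-McBride iteration: estimate P(z) (order nb) and Q(z) (order na)
    so that Q(z)Y(z) approximately equals P(z)X(z), by prefiltering with 1/Q(z) and
    solving a linear least-squares problem at each iteration."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    N = len(y)
    q = np.zeros(na + 1)
    q[0] = 1.0                                  # initial estimate Q(z) = 1
    for _ in range(n_iter):
        xf = lfilter([1.0], q, x)               # prefilter the input by 1/Q(z)
        yf = lfilter([1.0], q, y)               # prefilter the output by 1/Q(z)
        rows, start = [], max(na, nb)
        for n in range(start, N):
            rows.append(np.concatenate((-yf[n - na:n][::-1], xf[n - nb:n + 1][::-1])))
        coef, *_ = np.linalg.lstsq(np.asarray(rows), yf[start:], rcond=None)
        q = np.concatenate(([1.0], coef[:na]))  # updated denominator coefficients
        p = coef[na:]                           # updated numerator coefficients
    return p, q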
Let {circumflex over (Q)}i(z) denote the resulting estimate for Qi(z) using the Steiglitz-McBride algorithm, and let zi,1≧zi,2≧zi,3 denote the roots of {circumflex over (Q)}i(z). The estimates for r2 and ri,3 are obtained from the roots of {circumflex over (Q)}i(z), i=1, 2, . . . , I, in the following manner. Because β<1, β is greater than ki by an order of magnitude, and one of the roots of Qi(z) equals one, in theory the initial estimates for the parameters r2 and ri,3 can be r2(0)=avg{z1,2, z2,2, . . . , zI,2} and ri,3(0)=zi,3, respectively.
The Steiglitz-McBride algorithm is not used instead of the V-MBF algorithm to estimate the discrete-time K-FADS parameters when given an estimate for the RV factor because, for one reason, the roots of {circumflex over (Q)}i(z) are not guaranteed to be real with one root constrained to equal one, as required by the discrete K-FADS model. Another reason is that simulations showed that estimating c and θb′ from the estimates {circumflex over (P)}i(z), {circumflex over (Q)}i(z), i=1, 2, . . . , I, was not reliable.
Simulation Studies
To assess the potential of the V-MBF algorithm, simulated data was applied that modeled patient data used for cardiac health assessments. The model for the RV curve was
ƒ(t)=A(t−t0)exp(−α(t−t0))u(t−t0), (49)
where A=2700, t0=14 seconds, and α=0.12. Referring to equations (4), (5), (6), and (7), the parameters of the simulated data were γ=0.2, β=0.1, τ=13 seconds, and for all i, ki,2=0.001 s−1, ki,3=0.01 s−1, ci=0.03, ci,2=0.05, ci,3=0.8, ci,4=0.8. Note, the V-MBF algorithm does not require that the parameters ci, ki,2, and ki,3 be voxel independent. The values for these parameters were simply chosen for exemplary purposes. The MBF parameters {ki,1} (units s−1) for the 20 voxel scenario considered were
k1=[0.0075,0.0462,0.0371,0.0233,0.0534,0.0281,0.0320,0.0336,0.0433,0.0036,0.0083,0.0034,0.0021,0.0103,0.0096,0.0345,0.0316,0.0031,0.0257,0.0346]. (50)
The simulated TACs ai(t), i=1, 2, . . . , I were computed using equation (11) and the above parameter values.
To model the integration characteristic of the scanner, the simulated TACs were integrated using a time-varying window. The resulting integrated data simulated the activity data that would actually be available in practice. A typical protocol used at an exemplary PET Center could lead to the following specification for the simulated integrated TACs (I-TACs)
where T1=3 seconds, T2=12 seconds, and T3=30 seconds.
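For illustration, the sketch below generates the RV curve of equation (49) and the assumed frame-duration protocol, and approximates the scanner's windowed integration. The exact definition of the integrated values gk (equation (51)) is not reproduced above, so the integration helper here only illustrates the idea.

import numpy as np

# Simulated RV factor of equation (49): f(t) = A (t - t0) exp(-alpha (t - t0)) u(t - t0).
A, t0, alpha = 2700.0, 14.0, 0.12
f = lambda tt: A * (tt - t0) * np.exp(-alpha * (tt - t0)) * (tt >= t0)

# Frame-duration protocol for the simulated I-TACs: 20 frames of T1 = 3 s,
# 5 frames of T2 = 12 s, and 6 frames of T3 = 30 s (31 frames in total).
durations = [3.0] * 20 + [12.0] * 5 + [30.0] * 6
edges = np.concatenate(([0.0], np.cumsum(durations)))

def frame_value(func, a, b, n=500):
    """Approximate the scanner's integration of a curve over the window [a, b]."""
    ts = np.linspace(a, b, n, endpoint=False)
    return np.sum(func(ts)) * (b - a) / n

g_rv = [frame_value(f, edges[k], edges[k + 1]) for k in range(len(durations))]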
The V-MBF algorithm can be based on standard TAC data (i.e., ai(nT)). Consequently, the I-TAC data is preferably pre-processed. The I-TAC data is assumed to be nearly piece-wise linear. It follows using a known method from Kuhle that the standard TAC data at the midpoints of the windows is approximately:
ai(0.5(kT1+(k−1)T1))≈gk, k=1,2, . . . ,20 (52)
ai(0.5(kT2+(k−1)T2))≈gk, k=21,22, . . . ,25 (53)
ai(0.5(kT3+(k−1)T3))≈gk, k=26,27, . . . ,31 (54)
Now, standard sampled TAC data can be estimated from the measured activity data {gk} using interpolation. Specifically, the “known” values for ai(t), {ai(0.5(kT1+(k−1)T1))}k=1, . . . ,20, {ai(0.5(kT2+(k−1)T2))}k=21, . . . ,25, and {ai(0.5(kT3+(k−1)T3))}k=26, . . . ,31 can be used (see equations (52), (53), and (54)), together with linear interpolation, to obtain estimates for yi[n]=ai(nT), n=0, 1, . . . , N−1. In the simulations, a preferred sampling interval is T=0.05 sec. It is noted that the approach described above for obtaining standard sampled TAC data would also be used to generate an initial RV factor from I-TAC data located in the RV.
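A sketch of this pre-processing step follows: the integrated frame values are treated as approximate samples of the TAC at the frame midpoints (cf. equations (52)-(54)), and linear interpolation then provides uniformly sampled TAC values at T=0.05 seconds. The gk values below are placeholders.

import numpy as np

durations = [3.0] * 20 + [12.0] * 5 + [30.0] * 6        # frame durations in seconds
edges = np.concatenate(([0.0], np.cumsum(durations)))
midpoints = 0.5 * (edges[:-1] + edges[1:])              # midpoint of each frame window

g = np.ones(len(durations))                             # placeholder I-TAC values g_k
T = 0.05                                                # uniform sampling interval (s)
t_uniform = np.arange(0.0, edges[-1], T)
y = np.interp(t_uniform, midpoints, g)                  # estimates of y_i[n] = a_i(nT)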
The V-MBF algorithm described above was applied for 5000 iterations where one sub-iteration was used to update the estimate for the RV curve. In this simulation, the maximum error in the MBF estimates was 1.5 percent. A typical result of a V-MBF algorithm is summarized in the
Thus, the V-MBF algorithm can be based on a model that accounts for the fact that the shape of TACs due to ischemic and normal tissue are different. In fact, the model can allow for the factors that represent free and trapped ammonia to be voxel dependent and physiologically appropriate. By contrast, in a standard FADS model, it is assumed that TACs in ischemic and normal tissue can be modeled as a linear combination of the same three factors. The present methods and systems represent a significant improvement in the art as a more appropriate model to provide more accurate MBF estimates than available methods.
The V-MBF algorithm presented herein performs well in simulation studies where unknown MBF parameters varied by an order of magnitude. This suggests that the V-MBF algorithm is robust and would perform well in practice, where MBF values due to ischemic and normal tissue can vary over a wide range. Although random noise was not added to the simulated TAC data, interpolation noise and noise due to the discretization of the continuous-time K-FADS model were present. Also, because only the integrated RV factor is available, the first time point where the RV factor is nonzero can never be known with certainty. With these three sources of noise, the maximum error of the MBF parameters estimates was 1.5 percent. It should be noted that including more data points should lead to improved MBF estimates because the parameter β is voxel independent.
II. A Second Approach
A second approach to MBF analysis will be described as follows. For clarity, equations will be renumbered to start with equation (1) within the discussion of this other approach.
As discussed above, the problem of estimating the weights and signal vectors of the above described models is a blind inverse problem. In this case, the data are nonnegative J×1 vectors {a1, a2, . . . , aJ} that are modeled as a weighted sum of signal vectors plus noise
where the J×1 signal vectors, {fk}, and signal weights, {ci,k}, are nonnegative and unknown, and {ei} is the noise. In our discussion of this approach, it will be convenient to use an equivalent expression for the model
where aij and eij are the jth components of ai and ei, respectively, and fjk is the jth value of the kth signal vector. An example where this model is used is cardiac imaging using dynamic positron emission tomography (PET), where aij is a measure of the radiopharmaceutical concentration in the ith voxel at the jth time point. Another example is multispectral imaging, where aij represents the value of the ith pixel in the jth spectral plane.
Given the data {aij}, the problem is to estimate the weights {cik} and signal vector values {fjk}. The least-squares estimates of the weights and signal vector values are obtained by minimizing the least-squares objective function L subject to a non-negativity constraint.
The vectors c and f contain the signal weights and signal vector values, respectively:
c=[c11,c21, . . . ,cI1,c12,c22, . . . ,cI2, . . . ,c1K,c2K, . . . ,cIK] (5)
ƒ=[ƒ11,ƒ21, . . . ,ƒJ1,ƒ12,ƒ22, . . . ,ƒJ2, . . . ,ƒ1K,ƒ2K, . . . ,ƒJK] (6)
The least squares estimation problem is ill-posed, so the results are highly dependent on the initial estimates.
Estimation of Signal Vectors and Weights
A known standard least-squares algorithm by Lee and Seung monotonically decreases the least-squares objective function, L, and produces nonnegative estimates. In this section, this “standard least-squares algorithm” is re-derived using a known technique called the majorize-minimize (MM) method. This derivation will place the proposed least-squares extensions in context so that their advantages over the standard least-squares algorithm are clear.
An approach for minimizing the least-squares objective function, L, would be, in an alternating fashion, to minimize L with respect to the signal weights while holding the signal vectors fixed to their current value, and then minimize L with respect to the signal vectors while holding the signal weights fixed to their current value. Given initial estimates {cik(0)} and {ƒjk(0)}, this algorithm can be expressed mathematically as follows:
Here, Steps 1′ and 2′ imply that the resulting algorithm monotonically decreases the least-squares objective function
L(c(n+1),ƒ(n+1))≦L(c(n),ƒ(n)) for all n=0,1,2, . . . (7)
Introduction to the MM Method
The minimization problems in Steps 1′ and 2′ are difficult, so an alternative approach is needed. In this section, the MM method is introduced; then, the following section demonstrates how the MM method can be used to develop an algorithm that, by construction, monotonically decreases the least-squares objective function, produces nonnegative estimates, and is straightforward to implement.
Consider a real valued function f with domain DεRn that is to be minimized. A real valued function g with domain {(x1, x2): x1, x2εD} is said to majorize the function f if the following conditions hold for all x, yεD:
g(x,y)≧f(x) and (C1′)
g(x,x)=f(x). (C2′)
MM algorithms are a viable approach provided a majorizing function can be found that is easier to minimize than the original objective function. Assuming that g is a suitable majorizing function for f, the corresponding MM algorithm is
It follows from (C1′) and (C2′) that the MM algorithm defined by (8) is monotonic
f(x(k+1))≦g(x(k+1),x(k))≦g(x(k),x(k))=f(x(k)). (9)
Standard Least-Squares Algorithm
The following describes an MM algorithm for estimating signal vectors and signal weights that is equivalent to the least-squares algorithm of Lee and Seung.
In the discussion that follows, it will be convenient to define sets Dc and Df, where Dc and Df are the set of non-negative vectors of dimension IK×1 and JK×1, respectively. Let c(n) and f(n) be the current estimates for the signal weights and signal vectors, respectively. Further, let q and r be certain majorizing functions that satisfy the following conditions for x, y, cεDc and s, t, fεDf
q(x,y,f)≧L(x,f) (C1)
q(x,x,f)=L(x,f) (C2)
r(s,t,c)≧L(c,s) (C3)
r(x,x,c)=L(c,x). (C4)
Using the idea behind equation (8), we put forth the following MM algorithm for minimizing the least-squares objective function L
It is straightforward to show that the least-squares objective function monotonically decreases with increasing iterations. First, from (C1) and equation (10) it follows, respectively, that
L(c(n+1),ƒ(n))≦q(c(n+1),c(n),ƒ(n)) (12)
q(c(n+1),c(n),ƒ(n))≦q(c(n),c(n),ƒ(n)). (13)
Similarly, from (C3) and equation (11) it follows, respectively, that
L(c(n+1),ƒ(n+1))≦r(ƒ(n+1),ƒ(n),c(n+1)) (14)
r(ƒ(n+1),ƒ(n),c(n+1))≦r(ƒ(n),ƒ(n),c(n+1)). (15)
Now, from (C2) and (C4) it follows, respectively, that
q(c(n),c(n),ƒ(n))=L(c(n),ƒ(n)) (16)
r(ƒ(n),ƒ(n),c(n+1))=L(c(n+1),ƒ(n)). (17)
Consequently, we can conclude from equations (12)-(17) that
L(c(n+1),ƒ(n+1))≦L(c(n),ƒ(n)). (18)
At this point, all that remains is to determine the majorizing functions q and r. From equation (4), the least-squares objective function can be written as
Using a known approach, the convexity of the square function is exploited to obtain the following inequality:
where c(n)εDc, ƒεDƒ, and
It should be noted that (20) is a convex combination with weights wijk=cik(n)ƒjk[Σk′=1Kcik′(n)ƒjk′]−1, where wijk≧0 and Σk=1K wijk=1 for all i, j. Replacing the (Σk=1K cikƒjk)2 term in (19) by the right hand side of (22), we get the following majorizing function for L
where, by construction, q(c, c(n), ƒ)≧L(c, ƒ) and q(c, c, ƒ)=L(c, ƒ) for all c, c(n)εDc and ƒεDƒ.
By repeating the steps used to derive equation (24) with the roles of c and ƒ switched, the majorizing function
which satisfies the properties r(ƒ, ƒ(n), c)≧L(c, ƒ) and r(ƒ, ƒ, c)=L(c, ƒ) for all cεDc and ƒ, ƒ(n)εDƒ.
To determine updates defined in equations (10) and (11), the partial derivatives of q(c, c(n), f(n)) and r(f, f(n), c(n+1)) are computed with respect to c and f, respectively, with the corresponding equations set to zero. It is straightforward to show that the derivatives are given by
Setting the derivatives in equations (26) and (27) to zero leads to the following least-squares algorithm for estimating the signal weights and signal vectors:
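The explicit update formulas (28) and (29) are not reproduced above. For illustration, the sketch below implements the updates in the familiar Lee and Seung multiplicative form for the model aij≈Σk cikƒjk; the matrix notation, the small constant added to the denominators to avoid division by zero, and the random placeholder data are conveniences introduced here.

import numpy as np

def multiplicative_ls_updates(A, C, F, n_iter=100, eps=1e-12):
    """Nonnegative least-squares factorization of the I x J data array A as C F^T,
    where C (I x K) holds the signal weights and F (J x K) the signal vectors."""
    C, F = C.copy(), F.copy()
    for _ in range(n_iter):
        # Weights updated with the signal vectors held fixed (analogue of (28)).
        C *= (A @ F) / (C @ (F.T @ F) + eps)
        # Signal vectors updated with the new weights held fixed (analogue of (29)).
        F *= (A.T @ C) / (F @ (C.T @ C) + eps)
    return C, F

# Example usage with random nonnegative data and random initial estimates.
rng = np.random.default_rng(0)
I_vox, J_frames, K = 50, 40, 3
A = rng.random((I_vox, J_frames))
C_hat, F_hat = multiplicative_ls_updates(A, rng.random((I_vox, K)), rng.random((J_frames, K)))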
As shown above, the starting point for developing the proposed MM algorithm was equations (10) and (11). Alternatively, an MM algorithm can be developed by updating the signal vectors first and then the signal weights:
In this case, the resulting MM algorithm is
Additive Least-Squares Algorithm
The standard least-squares algorithm defined by equations (28) and (29) (see also (32) and (33)) is referred to as a multiplicative algorithm. In this section, we derive an additive algorithm by developing an alternative majorizing function for the least-squares objective function L.
Exploiting the convexity of the square function we obtain the following inequality
where
where, by construction, qA(c, c(n), ƒ)≧L(c, ƒ) and qA(c, c, ƒ)=L(c, ƒ) for all c, c(n)εDc and ƒεDƒ.
When the steps used to derive equation (24) are repeated with the roles of c and ƒ switched, we get the following majorizing function:
which satisfies the properties, rA(ƒ, ƒ(n), c)≧L(c, ƒ) and rA(ƒ, ƒ, c)=L (c, ƒ) for all cεDc and ƒ, ƒ(n)εDƒ.
The updates defined by equations (10) and (11) are determined with q and r replaced by qA and rA, respectively. Specifically, the partial derivatives of qA(c, c(n), f(n)) and rA(f, f(n), c(n+1)) are computed with respect to c and ƒ, respectively, and then the resulting equations are set to zero. The desired derivatives can be given by
Setting the derivatives in equations (38) and (39) to zero leads to the following additive least-squares algorithm for estimating the signal weights and signal vectors
where [x]+=max(0, x). Now, when the signal vectors are updated first and then the signal weights, the corresponding additive least-squares algorithm is given by
Extensions of Standard Least-Squares Algorithm: Application to Myocardial Blood Flow Estimation Using Positron Emission Tomography
In this section, the application of the standard least-squares algorithm to the problem of estimating absolute myocardial blood flow (MBF) noninvasively using positron emission tomography is addressed. Then, extensions of the standard least-squares algorithm are presented that lead to improved MBF estimates. These extensions are also applicable to the additive least-squares algorithm.
To assess the heart of a patient, it is desirable to estimate the patient's MBF noninvasively. One way to obtain this information is to first perform a dynamic PET scan of the patient's heart and then apply estimation algorithms that are based on equation (2) and other available models for dynamic cardiac PET data. Note, in the PET literature, the terms factor curves and factor weights are used for the terms signal vectors and signal weights. For PET based MBF estimation, the least-squares algorithm in equations (28) and (29) could first be used to estimate the factor curves and weights for a given dynamic PET data set. Then, using the resulting estimates, standard methods could be used to estimate the absolute myocardial blood flow of the patient. The accuracy of the MBF estimates would greatly depend on the performance of the standard least-squares algorithm, which, as mentioned previously, is highly dependent on the initial estimates. Therefore, we develop extensions of the least-squares method that greatly reduce the parameter space by incorporating a priori information. In practice, the proposed algorithms are expected to be more stable and produce more accurate estimates of the factor curves and weights than the standard least-squares algorithm. Therefore, improved MBF estimation is anticipated when the proposed algorithms are used instead of the standard least-squares algorithm.
Model
Let ai(t) denote the continuous-time activity in voxel i at time t, where tε[0, T] and T is the duration of the scan. In practice, only samples of the data are available so aij denotes the activity in voxel i at time t=jTs(i.e., time frame j)
aij=ai(jTs), i=1,2, . . . ,I, j=0,1,2, . . . ,J−1 (44)
where Ts is the sampling interval, I is the number of voxels, and J is the number of time frames. Referring to equation (2), it is typically assumed that the activity aij is a linear combination of K=3 unknown factor curves representing the sampled right ventricular blood pool, left ventricular blood pool, and myocardial tissue curves, respectively. Note, the factor weights for a particular time frame can be viewed as an image and therefore are collectively referred to as a factor image.
Physiologically Based Constraints: Right and Left Ventricle Tissue Curves go to Zero
Due to the physiology of the heart and the half-life of the radiopharmaceuticals used in nuclear cardiology, the factor curves for the right and left ventricles go to zero as t becomes large, and this a priori information can be incorporated into the estimation problem. Thus, an alternative to the least-squares formulation is to add a penalty term to the least-squares objective function that “forces” the factor curves for the right ventricle f1=[f11, f21, . . . , fJ1] and left ventricle f2=[f12, f22, . . . , fJ2] to go to zero. Given its potential advantage over the standard least-squares method, we propose the following penalized least-squares method
where the penalty parameters β1 and β2 control the degree of influence of the penalty terms Λ(f1) and Λ(f2), respectively. Although there are many possible choices for the penalty function Λ, in this approach the following function is used:
where g is a real J×1 vector. It can be seen that Λ(g) is the energy of g after the user chosen time frame j0.
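In other words, one expression consistent with this description is Λ(g)=Σj=j0+1J gj2, i.e., the sum of the squared entries of g after the user-chosen frame j0; this explicit form is supplied here only for clarity.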
The desired MM algorithm is obtained by setting the partial derivatives of r(f, f(n), c(n+1))+β1Λ(f1)+β2 Λ(f2) with respect to f to zero. For k=1, 2, the partial derivatives of the penalty function Λ are
Using this result and equation (27), the following iterative algorithm is derived for estimating the right and left ventricle tissue curves
where IS is the indicator function of the set S={j0+1, j0+2, . . . , J} (i.e., IS(j)=1 for jεS and IS(j)=0 for j∉S). The updates for the factor weights and tissue curve (i.e., f3) are given by equations (28) and (29), respectively. The proposed algorithm, referred to herein as the PLS algorithm, monotonically decreases the penalized least-squares (PLS) objective function in equation (45) and is guaranteed to produce nonnegative estimates for the values of the factor curves, f, and factor weights, c. It should be noted that the PLS algorithm provides least-squares estimates when β1=β2=0. Also, the update for the factor weights is the same for both the LS and PLS algorithms.
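As an illustration only, a sketch of what such a penalized multiplicative update for the RV and LV curves can look like is given below. The numerator follows the standard (unpenalized) least-squares form, which is an assumption here, while the indicator-weighted denominator term follows the form noted later in connection with equations (51) and (52).

import numpy as np

def pls_curve_update(A, C, F, beta, j0, eps=1e-12):
    """Sketch of a penalized multiplicative update for the RV and LV curves (k = 0, 1).
    A is the I x J data array, C the I x K weights, F the J x K curves, beta = (beta1, beta2);
    the penalty acts only on the time frames after j0 (the set S in the text)."""
    J = A.shape[1]
    Ahat = C @ F.T                                   # current model fit of the data
    in_S = np.zeros(J)
    in_S[j0 + 1:] = 1.0                              # indicator of the frames in S
    for k in (0, 1):                                 # right and left ventricle curves
        num = A.T @ C[:, k]                          # sum over i of c_ik a_ij
        den = Ahat.T @ C[:, k] + beta[k] * F[:, k] * in_S + eps
        F[:, k] = F[:, k] * num / den
    return F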
Due to the nonnegativity of the factor curves for the right and left ventricles, another penalty function that optionally can be used to account for the fact that they decrease for t sufficiently large is
Also, with a small modification, the PLS framework can incorporate other known penalty functions.
Exploit Fact that Maximum of Right and Left Ventricle Tissue Curves can be Reliably Estimated
When the radiopharmaceutical is administered to a patient, the physiology of the body is such that the activity shows up in the right ventricle first, then the left ventricle, and finally the myocardium. These delays are such that the maximum values of the right and left ventricles are essentially free of activity from the myocardium, despite the motion of the heart and point spread function of the PET system. Thus, the maximum value of the right ventricle can be estimated by averaging the maximum values of TACs that lie in the central region of the right ventricle. In a similar way, an estimate of the maximum value of the left ventricle can be obtained. Methods are available that identify the voxels that lie in the right and left ventricles, and myocardium.
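A minimal sketch of this estimation step follows; the region masks are assumed to be available from a segmentation or from the identification methods referred to above.

import numpy as np

def estimate_peak(tacs, region_mask):
    """Estimate the peak value of a blood-pool curve by averaging the maxima of the
    TACs whose voxels lie in the central region of that blood pool.
    tacs: I x J array of voxel TACs; region_mask: boolean array of length I."""
    peaks = tacs[region_mask].max(axis=1)    # per-voxel maximum within the region
    return peaks.mean()

# mu1_hat = estimate_peak(tacs, rv_central_mask)   # right ventricle (mask assumed given)
# mu2_hat = estimate_peak(tacs, lv_central_mask)   # left ventricle (mask assumed given)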
Let μ1 and μ2 represent the unknown maximum values of the right and left ventricles, respectively, and j1 and j2 denote the locations of the maximum values of the right and left ventricles. To incorporate knowledge of μ1 and μ2, we estimate the tissue factors using
ƒ(n+1)=arg min r(ƒ,ƒ(n),c(n)) subject to ƒ≧0, |ƒj1,1−{circumflex over (μ)}1|≦ε, |ƒj2,2−{circumflex over (μ)}2|≦ε (50)
where ε is a tolerance parameter chosen by the user, and {circumflex over (μ)}1 and {circumflex over (μ)}2 are estimates of the maximum values μ1 and μ2.
The function r(f, f(n), c(n)) is decoupled in the sense that there are no terms of the form fjkfj′k′, except when j=j′ and k=k′. Thus, the solution to the optimization problem in equation (50) is straightforward and leads to the following update for the right and left ventricle tissue factors (i.e., k=1, 2):
The updates for the factor weights and tissue curve are given by equations (28) and (29), respectively. Also, the denominator terms in equations (51) and (52) would be Σi=1Icik(n+1)âij(c(n+1), ƒ(n))+βkƒjk(n)IS(j) if we included the penalties on the right and left ventricle tissue curves that were discussed in the previous section.
Reduce Unknown Parameters Via a Suitable Model for Left Ventricle Tissue Curve
It has been postulated that the left ventricle tissue curve can be modeled as the convolution of the right ventricle tissue curve with a gamma function. Described below is an exploitation of this idea and development of an extension of the standard least-squares algorithm that has significantly fewer unknowns.
1) Model: Let r(t) and l(t) denote the unknown continuous-time tissue curves for the right and left ventricles, respectively. The left ventricle tissue curve is modeled as the convolution of the right ventricle tissue curve with a delayed, gamma function
l(t)=r(t)*
where “*” denotes the convolution operator
u(t) is the unit step function, and τ>0 is the delay. It should be noted that the delay is due to the fact that the radiopharmaceutical activity first appears in the right ventricle and then the left ventricle. From (54) it follows that L(s)=R(s)
As noted previously, we let fj1 and fj2, j=0, 1, 2, . . . , J−1 denote the sampled right and left ventricle tissue curves. Where the scan durations for dynamic sequence PET protocols are not uniform, the assumption that the TACs are sampled uniformly is inappropriate. However, uniform samples of the TACs can be obtained from non-uniform samples of the TACs via a suitable interpolation. It is of interest to determine a discrete-time system with the property that its response to the right ventricle tissue curve f1 closely approximates the left ventricle tissue curve f2. The bilinear transformation is a popular way to transform a linear time-invariant continuous-time system into a linear time-invariant discrete-time system. A limitation of the bilinear transform is that a delay in a continuous-time system must be an integer multiple of the sampling interval. Assuming the delay τ is a multiple of the sampling interval Ts, the system function of the desired discrete time system, H(z), can be obtained by applying the bilinear transformation to the continuous-time system Hc(s)=
where τ=dTs for some integer d, g=a(2/Ts+b)−(m+1), and p=(2/Ts+b)−1(2/Ts−b) (note: hc(t)
Let hj(θ) denote the inverse z-transform of H(z), using a notation that illustrates its dependence on the parameters θ=(g, p, m, d). Then, the assumed relationship between the sampled right ventricle and left ventricle tissue curves can be written as
ƒj2=ƒj1*hj(θ)=Σs=0J−1ƒs1hj−s(θ) (59)
where, for simplicity, our notation for fj2 does not account for its dependence on θ. Moreover, the corresponding least-squares objective function, Lθ, has the same form as L (see equation (4))
except it depends on θ because fj2=fj1*hj(θ). To ensure that hj(θ) is a nonnegative function, the following feasible set for θ is chosen:
Dθ={g,p,m,d: g≧0, 0≦p≦1, m=0,1,2, . . . , d=0,1,2, . . . }. (61)
Hence, for θεDθ, ƒj2 is a nonnegative function provided ƒj1 is a nonnegative function.
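To illustrate equation (59) and the nonnegativity remark above, the sketch below generates a left-ventricle curve as the response of a delayed all-pole filter to a right-ventricle curve. The exact impulse response hj(θ) follows from equation (58), which is not reproduced above; the all-pole form used here (as mentioned later in connection with initialization), and all numerical values, are assumptions for the example.

import numpy as np
from scipy.signal import lfilter

def lv_from_rv(f1, g, p, m, d):
    """Sketch of f_j2 = f_j1 * h_j(theta): drive the RV curve through a delayed
    all-pole filter with gain g, pole p (0 <= p <= 1), order m, and a d-sample delay."""
    a = np.array([1.0])
    for _ in range(m + 1):                      # build (1 - p z^-1)^(m+1)
        a = np.convolve(a, [1.0, -p])
    out = g * lfilter([1.0], a, f1)             # all-pole filtering
    return np.concatenate((np.zeros(d), out))[:len(f1)]    # apply the delay

# Example with an assumed RV curve and assumed filter parameters (g, p, m, d).
t = np.arange(0.0, 120.0, 1.0)
f1 = (t - 10.0) * np.exp(-0.1 * (t - 10.0)) * (t >= 10.0)
f2 = lv_from_rv(f1, g=0.05, p=0.9, m=2, d=5)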
Convolution Least-Squares Algorithm
Given the similarity of Lθ and L, it follows that the corresponding majorizing functions for Lθ are
where, again, fj2=fj1*hj(θ). By construction, for all c, c(n)εDc, ƒ1, ƒ3≧0, and θεDθ,
q
θ(c,c(n),ƒ1,ƒ3,θ)≧Lθ(c,ƒ1,ƒ3,θ) (64)
q
θ(c,c,ƒ1,ƒ3,θ)=Lθ(c,ƒ1,ƒ3,θ) (65)
r
θ(ƒ1,ƒ3,ƒ1(n),ƒ3(n),c,θ)≧Lθ(c,ƒ1,ƒ3,θ) (66)
r
θ(ƒ1,ƒ3,ƒ1,ƒ3,c,θ)=Lθ(c,ƒ1,ƒ3,θ) (67)
Using the MM methodology, the updates for estimating the factor weights and tissue curves, given the current parameters estimates, are given by
The optimization problem in equation (69) is not straightforward because the objective function in (69) has “cross terms” of the form fj1fj′1, j≠j′. These cross terms are due to the ƒj22 term in equation (63). Thus, it will be beneficial to construct a majorizing function for ƒj22. It is straightforward to show that the steps used to derive the inequality in equation (22) can be repeated to get the following inequality
which has the property that υ(f1, f1, θ)=ƒj22 for all f1≧0 and θεDθ. Now, equation (71) is substituted into equation (63) to get a majorizing function that satisfies equations (66) and (67) and can be easily minimized with respect to f1
Comparing equations (24) with (62), and (25) with (63), it is evident that the update for the factor weights c (see equation (68)) and the third factor curve f3 (see equation (69)) are given by equations (28) and (29), respectively, except that now ƒj2(n) is given by equation (74). In order to get the update for the right ventricle f1 (see equation (69)), we take the partial derivative of
Now, equation (76) is set to zero to get the desired update for the right ventricle tissue curve, which is
When knowledge about the maximum value of the right ventricle tissue curve is incorporated, as well as the penalty function Λ, then the update for the right ventricle tissue curve is
With the update for the right ventricle curve in hand (i.e., the update in equations (77) or (78)), the algorithm for minimizing the least-squares function Lθ is proposed, which is referred to herein as the convolution least-squares (CLA) algorithm. First, let dmin and dmax denote the minimum and maximum delay considered, respectively, and mmax denote the maximum value considered for the parameter m. The CLA algorithm follows:
The desired parameter estimates are the estimates from the CLA algorithm that produce the smallest least-squares error. It should be noted that the objective function Lθ decreases monotonically with increasing iterations.
The problem in equation (79) can be solved using the following iterative coordinate descent method. It will be convenient to express hj(θ) as hj(θ)=g
With this notation, the objective function in (79) can be written as
Thus, the unconstrained minimizer of Lθ(c(n+1), ƒ1(n+1), ƒ3(n+1), g, {circumflex over (p)}, {circumflex over (m)}, {circumflex over (d)}) with respect to g is simply
where {circumflex over (p)} denotes the current estimate for the filter parameter p. With the above result, the steps for solving the optimization problem of equation (79) are given using the coordinate descent method: Let pold=p(n) and [x]+=max(0, x)
So configured, the CLA algorithm is monotonic and guaranteed to produce nonnegative estimates.
Initialization
User-dependent and user-independent methods are available for determining initial estimates for the right and left ventricle tissue curves, and myocardial tissue curve. The initial factor weights are typically chosen to be uniform in the sense that cik=⅓ for all i, k. Regarding the range of values for the delay, suitable values for the minimum and maximum delay between the right and left ventricle tissue curves are available from the literature in human physiology.
It is expected that a suitable maximum value for the filter order, mmax, can be determined through experiments. Consequently, the following addresses computing initial estimates for the filter parameters g and p.
Because F2(z)=H(z)F1(z) from equation (59), it follows that F2(0)(z)≈H(z)F1(0)(z), where ƒ1j(0) and ƒ2j(0) denote the initial right and left ventricle tissue curves, and H(z) is given by equation (58). Therefore,
Thus, the initial left ventricle tissue curve is assumed to be approximately equal to the response of a certain mth-order, all-pole filter to the initial right ventricle tissue curve. This observation and equation (91) forms the basis of the following method for obtaining initial estimates for g and p:
The set of parameters that produce the smallest least-squares error are the desired initial estimates for the parameters g, p, m, and d, which we denote respectively as g(0), p(0), m(0), and d(0).
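A sketch of one way to carry out this initialization is given below, assuming the all-pole relationship noted above: for each candidate (m, d), the pole p is scanned over a grid on [0, 1), the gain g is obtained in closed form by least squares, and the combination with the smallest error is kept. The helper lv_from_rv is the assumed all-pole filter sketch given earlier; the grid resolution is also an assumption.

import numpy as np

def initialize_filter_parameters(f1_0, f2_0, m_max, d_min, d_max, lv_from_rv):
    """Grid search for initial (g, p, m, d): for each candidate order m and delay d,
    scan the pole p, compute the least-squares gain g in closed form, and keep the
    combination with the smallest error."""
    best = None
    for m in range(m_max + 1):
        for d in range(d_min, d_max + 1):
            for p in np.linspace(0.0, 0.999, 100):
                h = lv_from_rv(f1_0, 1.0, p, m, d)                   # unit-gain response
                g = max((h @ f2_0) / max(h @ h, 1e-12), 0.0)         # closed-form LS gain
                err = np.sum((f2_0 - g * h) ** 2)
                if best is None or err < best[0]:
                    best = (err, g, p, m, d)
    return best[1:]     # g(0), p(0), m(0), d(0)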
The initial estimates for the parameters m and d are not explicitly used in the CLA algorithm. However, they could be used to reduce the range of values considered for the parameters m and d. For example, the “for loop” for the delay in the CLA algorithm could instead be: for {circumflex over (d)}=d(0)−Δ: d(0)+Δ, where Δ>0 is an integer chosen by the user.
III. An Approach without Assumption of a Mathematical Relationship Between Right Ventricle and Left Ventricle Curves
In a further approach to estimating MBF, we first incorporate the Hutchins pharmacological kinetic model into the standard FADS model in a way that, unlike the methods described above, does not assume that the RV and LV tissue curves obey any particular mathematical relationship. The improved model, which we refer to as the second pharmacological kinetics based FADS (K-FADS-II) model, provides another way of estimating voxel-dependent myocardium tissue curves that are physiologically meaningful. In our next step, we perform a discretization that transforms the continuous-time K-FADS-II model into a discrete-time K-FADS-II model. It should be noted that there is a simple relationship between the discrete-time and continuous-time K-FADS-II parameters. Lastly, we develop an algorithm, which we call the Improved Voxel-Resolution Myocardial Blood Flow (IV-MBF) algorithm, that estimates the parameters of the discrete-time K-FADS-II model by iteratively minimizing a certain least-squares (LS) objective function. The desired MBF estimates are computed in a straightforward manner from the estimated discrete-time K-FADS-II model parameters. The IV-MBF algorithm was evaluated subjectively and objectively through experiments conducted using synthetic data and patient data, and was found to be accurate and stable in the test scenarios we considered. Hence, we believe the proposed method is feasible for determining physiologically meaningful estimates of MBF.
To avoid confusion in view of the discussion above regarding other approaches, we start again from the fundamentals and restart the numbering of equations. The kinetic behavior of ammonia in the myocardium is modeled by the compartment model shown in
where CP is the ammonia concentration in blood plasma, CF is the concentration of the free (i.e., unbound) ammonia, and CT is the concentration of the trapped (i.e., bound) ammonia.
In the standard FADS model, it is assumed that aij, the activity in voxel i of frame j, is a linear combination of K primary factor curves
where {ƒkj} are the values of the factor curves and the coefficients {cik} are the factor weights. The primary factor curves for conventional MBF estimation applications are the right ventricle (RV), left ventricle (LV), and myocardial tissue curves, which model the ammonia concentration as a function of time in the RV, LV, and myocardium, respectively. The mathematical task is to find both the factor curves and weights so that the linear combination of factor curves for every voxel in the myocardium matches the corresponding measured time activity curve (TAC) as close as possible. This problem is constrained by requiring that the factor curves and weights all be nonnegative.
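The linear-combination structure of the standard FADS model can be illustrated with a small numerical sketch. This is only the forward model, not the document's estimation procedure; the array sizes and random values are purely illustrative.

```python
import numpy as np

# Forward FADS model a_ij = sum_k c_ik * f_kj: every voxel TAC is a
# nonnegative linear combination of K primary factor curves.
I, J, K = 500, 31, 3                 # voxels, time frames, factors (illustrative sizes)
rng = np.random.default_rng(0)

F = rng.random((K, J))               # nonnegative factor curves (RV, LV, myocardium)
C = rng.random((I, K))               # nonnegative factor weights
C /= C.sum(axis=1, keepdims=True)    # normalize weights per voxel (illustrative choice)

A = C @ F                            # A[i, j] plays the role of a_ij
```

The estimation task described in the text is the inverse of this sketch: recover both C and F from the measured TACs under nonnegativity constraints.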
Other approaches include spillover correction because of the finite resolution of the scanner and because the myocardium is moving during the scan. In one known approach, factor analysis was used to obtain, in theory, spillover independent time activity curves of the RV, LV, and myocardial blood tissue. By using curves generated from factor analysis, the spillover component in the model can be eliminated in theory. However, it is noted that factor analysis does not correct for the under measurement due to the partial volume effect. Correcting errors due to the partial volume effect would require the use of a contrast recovery coefficient.
A. K-Fads-II Model
In this section, we first present a model that combines Hutchins' pharmacological kinetic model with the standard factor analysis model (see equation (3)). The resulting model is an improvement over an earlier version discussed above, so we call it the second pharmacological kinetics based FADS model (K-FADS-II). Next, we discuss a discretization that transforms the continuous-time K-FADS-II model into a discrete-time K-FADS-II model. Specifically, the discrete-time K-FADS-II model is obtained by applying the bilinear transform to the continuous-time K-FADS-II model. It should be noted that there is a simple relationship between the discrete-time and continuous-time K-FADS-II parameters.
1. Continuous-Time Model for Activity in the Myocardium
The unknown ammonia concentrations in the RV and LV (i.e., CP) are assumed to be spatially independent, whereas the unknown concentrations of the free ammonia (i.e., CF) and trapped ammonia (i.e., CT) are assumed to be spatially dependent. With these assumptions in mind, we let the RV and LV tissue curves be denoted by the continuous-time functions ƒ(t) and g(t), respectively, and the free and trapped ammonia concentrations in the ith voxel within the myocardium (i.e., the ith myocardium voxel) be denoted by gi,F(t) and gi,T(t), respectively.
Under the above assumptions it follows that the rate constants in Hutchins' model are spatially dependent. After applying the Laplace transform to voxel dependent versions of equations (1) and (2), we get the following expressions for the concentrations of the free and trapped ammonia in the ith voxel within the myocardium
gi,F(t)=ki,1g(t)*exp(−(ki,2+ki,3)t)u(t) (4)
gi,T(t)=ki,3gi,F(t)*u(t) (5)
where u(t) is the unit step function. The activity in the ith voxel is then modeled as
ai(t)=ci,1ƒ(t)+ci,2g(t)+ci,3gi,F(t)+ci,4gi,T(t), i=1,2, . . . ,I. (6)
The first term in (6) is identified as the amount of spillover from the RV, and the second term is a combination of the ammonia activity in blood vessels within the myocardium and spillover from the LV. More specifically, the constant ci,1 accounts for the amount of the ammonia activity in voxel i that is due to the blood plasma in the RV. Further, the constant ci,2 accounts for the amount of the ammonia activity in voxel i that is due to the blood plasma in the LV (i.e., LV spill over) and blood plasma in the blood vessels of the myocardium. The third and fourth terms in (6) model the free and trapped ammonia activity in the myocardium, respectively. The coefficients ci,3 and ci,4 represent the fractional volume of voxel i that can be occupied by the ammonia activity in either the free or trapped states, respectively. Given that the free space for water in myocardial tissue is approximately 80%, we assume that ci,3=ci,4=0.8.
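As a rough illustration of how equations (4) through (6) generate a voxel activity curve from the RV and LV tissue curves, the following sketch discretizes the convolutions on a uniform grid. Since equations (1) and (2) are not reproduced here, the sketch assumes the usual Hutchins clearance rate (ki,2+ki,3) for the free compartment; the function and variable names are illustrative only.

```python
import numpy as np

def myocardium_activity(f, g, k1, k2, k3, c1, c2, Ts, c3=0.8, c4=0.8):
    """Sketch of equations (4)-(6) on a uniform grid of spacing Ts.

    f, g : sampled RV and LV tissue curves (1-D arrays).
    Assumes the usual Hutchins clearance rate (k2 + k3) for the free compartment.
    """
    n = np.arange(len(g))
    kernel = np.exp(-(k2 + k3) * n * Ts)                    # causal exponential kernel
    g_free = k1 * np.convolve(g, kernel)[:len(g)] * Ts      # eq. (4): discretized convolution
    g_trap = k3 * np.cumsum(g_free) * Ts                    # eq. (5): convolution with the unit step
    return c1 * f + c2 * g + c3 * g_free + c4 * g_trap      # eq. (6)
```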
2. Discrete-Time Model for Activity in Myocardium
When a dynamic PET scan of the heart is taken, the images are separated into three sets of voxels that lie in the RV, LV, and myocardium such as that described by V. Appia, B. Ganapathy, A. Yezzi, and T. Faber in “Localized principal component analysis based curve evolution: A divide and conquer approach,” 2011 International Conference on Computer Vision, pp. 1981-1986, 2011, which is incorporated by reference. Let the measured TACs associated with the set of voxels in the RV, LV, and myocardium be referred to as the RV TACs, LV TACs, and myocardium TACs, respectively. To model the underlying sampling process of dynamic PET scanning protocols, we let di[n], r[n], and l[n] represent the discrete-time signals obtained by sampling ai(t), ƒ(t), and g(t), respectively
di[n] ≜ ai(nTs), n=0,1, . . . ,N−1, i=1,2, . . . ,I (7)
r[n] ≜ ƒ(nTs), n=0,1, . . . ,N−1, (8)
l[n] ≜ g(nTs), n=0,1, . . . ,N−1, (9)
where ƒs=1/Ts is the sampling rate and Ts is the sampling interval. We will refer to the quantities di[n] as the ith sampled myocardium TAC, and to the discrete-time signals r[n] and l[n], which are unknown, as the sampled RV tissue curve and sampled LV tissue curve, respectively. In applications where the scan durations for dynamic sequence PET protocols are not uniform, the assumption that the TACs are sampled uniformly is inappropriate. However, uniform samples of the TACs can be obtained from non-uniform samples of the TACs via a suitable interpolation using known methods.
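A minimal sketch of the kind of interpolation referred to here, using simple linear interpolation (the text does not commit to a particular interpolation method, so this choice and the function name are illustrative):

```python
import numpy as np

def resample_uniform(t_measured, tac, Ts):
    """Resample a TAC measured at (possibly non-uniform) times onto a uniform grid of spacing Ts."""
    t_uniform = np.arange(0.0, t_measured[-1] + Ts / 2, Ts)
    return t_uniform, np.interp(t_uniform, t_measured, tac)
```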
It is of interest to determine a discrete-time system with the property that its response to the sampled RV and LV tissue curves closely approximates the ith sampled myocardium TAC. The bilinear transformation is a popular way to transform a linear time-invariant continuous-time system into a linear time-invariant discrete-time system. Specifically, the system function of a continuous-time system, H(s), can be converted into the system function of a discrete-time system Hd(z) using
In our analysis, we make use of the z-transform. Let x[n] be an arbitrary discrete-time sequence. The z-transform of x[n] is defined to be X(z)=Σ_{n=−∞}^{∞} x[n]z^{−n}.
Taking the Laplace transform of (6), we get the following relationship
After sampling ai(t), and applying the bilinear transformation to the system functions Hi,F(s) and Hi,T(s), we get the desired discrete-time model
The bilinear transformation has the property that it maps a stable continuous-time system into a stable discrete-time system. Moreover, the bilinear transformation avoids the problem of aliasing by mapping the jΩ axis into the unit circle of the complex plane. However, frequency warping occurs as a result of mapping the entire jΩ axis into the unit circle. Note, the frequency warping problem can be ameliorated by choosing a sufficiently high sampling rate ƒs.
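For illustration, the bilinear transformation of a first-order all-pole system of the type that arises for the free-ammonia compartment can be computed as follows. The rate constants below are illustrative only and this is not the document's exact Hi,F(s).

```python
from scipy import signal

# First-order all-pole system H(s) = k1 / (s + k); the constants are illustrative.
k1, k = 0.7, 0.25        # rate constants (1/sec)
Ts = 0.5                 # sampling interval (sec)

b_d, a_d = signal.bilinear([k1], [1.0, k], fs=1.0 / Ts)
# After normalization the digital gain contains the factor (2/Ts + k)**-1,
# the same structure as the coefficients appearing in equation (15).
print(b_d, a_d)
```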
From equation (14), it follows that the ith myocardium TAC, which is noisy in practice, can be expressed as
di[n]=ci,1r[n]+ci,2l[n]+ki,1(2/Ts+ki)^−1 l[n]*hF[n;pi]+ki,1ki,3(2/Ts+ki)^−1(2/Ts)^−1 l[n]*hT[n;pi]+e[n] (15)
where e[n] is the noise and * denotes discrete-time convolution. Also,
where Z^−1 denotes the inverse z-transform. Given the data {di[n]}, the problem is to estimate the K-FADS-II model parameters:
r ≜ [r[0],r[1], . . . ,r[N−1]] (18)
l ≜ [l[0],l[1], . . . ,l[N−1]] (19)
c ≜ {c1,1,c2,1, . . . ,cI,1,c1,2,c2,2, . . . ,cI,2} (20)
k1 ≜ [k1,1,k2,1, . . . ,kI,1] (21)
k2 ≜ [k1,2,k2,2, . . . ,kI,2] (22)
k3 ≜ [k1,3,k2,3, . . . ,kI,3] (23)
Recall that the parameters k1 are the desired MBF parameters. Thus, the parameters r, l, c, k2, and k3 can be viewed as nuisance parameters because they must be accounted for in the analysis even though they are not of direct interest.
In the next section, we develop an algorithm for estimating the sampled RV and LV tissue curves and K-FADS-II parameters that we call the Improved Voxel-Resolution MBF (IV-MBF) algorithm. The IV-MBF algorithm is based on a model that accounts for the fact that the shapes of TACs due to ischemic and normal tissue are different. In fact, the model allows the tissue curves that represent free and trapped ammonia to be voxel dependent and physiologically appropriate. By contrast, in the standard FADS model, it is assumed that TACs in ischemic and normal tissue can be modeled as a linear combination of the same three factors. We believe the use of a more appropriate model offers the possibility of more accurate MBF estimates than are available with existing methods.
The IV-MBF algorithm performed well in a limited study using real patient data. The results of the study suggest that the IV-MBF algorithm is robust and would perform well in practice, where MBF values due to ischemic and normal tissue can vary over a wide range.
B. IV-MBF Algorithm
Consider the problem of minimizing a real-valued function ƒ with domain D⊆Rn. The majorize-minimize (MM) technique can be used to develop an algorithm that produces a sequence of iterates {x(m)} such that {ƒ(x(m))} is monotonically decreasing, such as those described by D. Hunter and K. Lange, "A Tutorial on MM Algorithms," The American Statistician, vol. 58, pp. 30-37, 2004, which is incorporated by reference. In this section, we first briefly introduce the MM technique and then we develop the IV-MBF algorithm by applying this technique to a certain LS objective function.
1. Review of the Majorize-Minimize Optimization Technique
A real-valued function g with domain {(x1, x2): x1, x2∈D} is said to majorize the function ƒ if the following conditions hold for all x, y∈D:
g(x,y)≧ƒ(x) (C1)
g(x,x)=ƒ(x). (C2)
MM algorithms are a viable approach provided a majorizing function can be found that is easier to minimize than the original objective function. Assuming that g is a suitable majorizing function for ƒ, the corresponding MM algorithm is
where x(k) is the current estimate for the minimizer of ƒ. It follows from (C1) and (C2) that the MM algorithm defined by equation (24) satisfies the monotonicity property [20]
ƒ(x(k+1))≦g(x(k+1),x(k))≦g(x(k),x(k))=ƒ(x(k)) (25)
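A small self-contained example of the MM recipe, taken from the classic least-absolute-deviations problem rather than from the objective used in this document, shows the monotone behavior guaranteed by equation (25). The data values and iteration count are illustrative.

```python
import numpy as np

# Classic MM example: minimize f(x) = sum_i |x - y_i| (whose minimizer is the median)
# using the quadratic majorizer |u| <= u**2 / (2|u_k|) + |u_k| / 2, which satisfies (C1)-(C2).
y = np.array([1.0, 2.0, 7.0, 9.0, 3.0])
x = float(y.mean())                              # initial estimate
for _ in range(100):
    w = 1.0 / np.maximum(np.abs(x - y), 1e-12)   # weights built from the current iterate
    x = float(np.sum(w * y) / np.sum(w))         # exact minimizer of the majorizer
    # as in equation (25), f(x) never increases from one iteration to the next
print(x, np.median(y))
```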
2. IV-MBF Algorithm: LS Estimation of K-FADS-II Model Parameters
In our discussion we assume that suitable initial estimates are available. A procedure for determining initial estimates for the K-FADS-II model parameters is presented in the Initialization Procedure section below.
a) Construction of an LS Objective Function L:
Because no statistical information about the noise is available, we propose to estimate the K-FADS-II parameters using the LS estimation method. From equation (15), the LS objective function is given by
where h[n; θi,h] ≜ ci,2δ[n]+ki,1(2/Ts+ki)^−1hF[n; pi]+ki,1ki,3(2/Ts)^−1(2/Ts+ki)^−1hT[n; pi] and θi,h ≜ [ci,2, ki,1, ki,2, ki,3]. It follows that an LS estimate of the K-FADS-II parameters is a solution to the following constrained optimization problem
({circumflex over (r)},{circumflex over (l)},ĉ,{circumflex over (k)}1,{circumflex over (k)}2,{circumflex over (k)}3)=arg min L(r,l,c,k1,k2,k3) subject to r,l,c,k1,k2,k3≧0 (28)
b) Minimization of the Least-Squares Objective Function L:
We propose to minimize the LS objective function L by using the block coordinate descent method. In this method, the coordinates are partitioned into a fixed number of blocks and, at each iteration, the objective function is minimized with respect to one of the coordinate blocks while the remaining coordinates are fixed to their current estimates. Let r(m), l(m), c(m), k1(m), k2(m), and k3(m) denote the current estimates for r, l, c, k1, k2, and k3, respectively. An outline of an algorithm that, in principle, could be used for minimizing L follows
A simpler alternative to Step 1 is to use the MM technique described above to obtain iterates r(m+1) and l(m+1) that decrease the least-squares objective L in the sense that, for all m=0, 1, 2, . . . ,
L(r(m+1),l(m+1),c(m),k1(m),k2(m),k3(m))≦L(r(m),l(m),c(m),k1(m),k2(m),k3(m)) (33)
Applying the same reasoning, we determine iterates c(m+1) and k1(m+1) such that
L(r(m+1),l(m+1),c(m+1),k1(m+1),k2(m+1),k3(m+1))≦L(r(m+1),l(m+1),c(m),k1(m),k2(m+1),k3(m+1)) (34)
Taking this approach, steps 1 and 3 of the proposed algorithm are now
[Step 1a] Find iterates r(m+1) and l(m+1) such that
L(r(m+1),l(m+1),c(m),k1(m),k2(m),k3(m))≦L(r(m),l(m),c(m),k1(m),k2(m),k3(m)) (35)
[Step 3a] Find iterates c(m+1) and k1(m+1) such that
L(r(m+1),l(m+1),c(m+1),k1(m+1),k2(m+1),k3(m+1))≦L(r(m+1),l(m+1),c(m),k1(m),k2(m+1),k3(m+1)) (36)
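Structurally, Steps 1a, 2, and 3a amount to the outer loop sketched below; the three update callables are placeholders standing in for the MM updates and line searches developed in the text, not implementations of them.

```python
def block_coordinate_outline(r, l, c, k1, k2, k3, data, n_iter,
                             update_rl, update_k2k3, update_ck1):
    """Outer loop implied by Steps 1a, 2, and 3a: each block is updated so the LS
    objective L does not increase while the other blocks are held fixed.
    The update_* arguments are placeholders, not the document's actual updates."""
    for _ in range(n_iter):
        r, l = update_rl(r, l, c, k1, k2, k3, data)        # Step 1a (MM updates)
        k2, k3 = update_k2k3(r, l, c, k1, k2, k3, data)    # Step 2 (1-D line searches)
        c, k1 = update_ck1(r, l, c, k1, k2, k3, data)      # Step 3a (ISRA-type updates)
    return r, l, c, k1, k2, k3
```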
Solution to Step 1a:
To construct a majorizing function for L at (rold, lold), where rold and lold are the current estimates for the sampled RV and LV tissue curves, respectively, we first construct a majorizing function for (ci,1r[n]+l[n]*h[n; θi,h])2 by exploiting the following result, which is proven in the Proof of Result 1 section below:
Result 1:
Let z1[n], z2[n], and w[n] be causal positive sequences (a sequence w[n] is said to be causal if w[n]=0 for n<0), where z1[n] and z2[n] are of length N. A majorizing function for (z1[n]+z2[n]*w[n])2 is
where z1old[n] and z2old[n] are causal positive sequences of length N. From Result 1, it follows that a majorizing function for L at (rold, lold) is
It should be observed that, as expected, qL(rold, lold, c, k1, k2, k3; rold, lold)=L(rold, lold, c, k1, k2, k3).
As discussed above, MM algorithms are attractive because they have the property that their respective objective functions are monotonically decreased with increasing iterations. Thus, it follows from this property that Step 1a can be obtained by solving the following optimization problem:
Before solving equation (39), we make the following definition
{circumflex over (d)}i[n;θi,d] ≜ ci,1r[n]+l[n]*h[n;θi,h] (40)
where θi,d ≜ [r, l, ci,1, ci,2, ki,1, ki,2, ki,3]. Now, to determine an update for r and l, we take the partial derivative of the objective function on the right hand side of equation (39) with respect to r and l, respectively, and set the results to zero. For s=0, 1, . . . , N−1 the desired partial derivatives are
where θi,d(m) ≜ [r(m), l(m), ci,1(m), ci,2(m), ki,1(m), ki,2(m), ki,3(m)] and θi,h(m) ≜ [ci,2(m), ki,1(m), ki,2(m), ki,3(m)], and {circumflex over (d)}i[n; θi,d(m)] is the current estimate of the data point di[n]. Setting equations (41) and (44) to zero leads to the desired updates for the sampled RV and LV tissue curves:
The iterates defined by equations (45) and (46) have two important properties: (i) they satisfy the desired monotonicity property given in equation (35), and (ii) they are nonnegative provided the initial estimates, r(m), l(m), c(m), ki,1(m), ki,2(m), and ki,3(m), are nonnegative. In preliminary simulation studies, we have found that the iteration in equation (45) has not consistently resulted in accurate estimates for the sampled RV tissue curve. We believe the inaccuracy may be due to the fact that the myocardium TACs contain limited information about the RV tissue curve. Therefore, although the update in equation (45) is theoretically meaningful, it may be advisable to use the estimate for the RV tissue curve obtained from the algorithm described below based on a semi-parametric model.
Solution to Step 2:
The minimization in Step 2 is equivalent to the following I minimization problems:
Each of the above one-dimensional optimization problems can be solved using a line search algorithm such as the golden section method [21].
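A generic golden section line search of the kind referenced here might look as follows; the bracketing interval and the objective passed in are up to the caller, and this is only a sketch of the standard method, not the document's particular implementation.

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on the interval [a, b] by the golden section method."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0       # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                              # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                    # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# e.g., a one-dimensional search over k_i,2 (other parameters held fixed) could call
# golden_section_min(lambda k2: objective(k2), 0.0, k2_max) for some chosen bound k2_max.
```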
Solution to Step 3a:
The minimization in Step 3 is equivalent to the following I one-dimensional minimization problems:
It follows that equation (51) can be conveniently expressed as:
Observe that Step 3 requires the solution to equation (56) for all i=1,2, . . . , I. However, since we propose to solve Step 3a instead of Step 3, we use a majorizing function for a certain class of linear LS objective functions that was put forth by De Pierro in “On the relation between the ISRA and the EM algorithms for positron emission tomography,” IEEE Transactions on Medical Imaging, vol. 12, pp. 328-333, 1993, which is incorporated by reference. Given that θi, di, and the matrix Ai(m+1) are nonnegative, De Pierro's result can be used to obtain the following majorizing function for ∥di−Ai(m+1)θi∥22 about the point θi(m)
where Ai,j,k(m+1) is the element in the jth row and kth column of the matrix Ai(m+1), and θi,k is the kth element of the vector θi. Additionally, for a vector ν the quantity [ν]k is defined to be the kth element of ν. Now, the partial derivative of the majorizing function q1 with respect to θi,k equals
Setting this partial derivative to zero yields the desired update, which is also known as the image space reconstruction algorithm (ISRA), as described in M. E. Daube-Witherspoon and G. Muehllehner, "Treatment of axial data in three-dimensional PET," Journal of Nuclear Medicine, vol. 28, pp. 1717-1724, 1987, which is incorporated by reference.
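For reference, a generic ISRA iteration for a nonnegative linear least-squares problem of the form min ||d − Aθ||² with θ ≥ 0 is sketched below; the specific matrices Ai(m+1) used in the text are not reproduced here, so the sketch is illustrative only.

```python
import numpy as np

def isra(A, d, theta0, n_iter=200):
    """Image space reconstruction algorithm (ISRA): multiplicative updates for
    min ||d - A @ theta||**2 subject to theta >= 0, with A, d, theta0 nonnegative."""
    theta = np.array(theta0, dtype=float)
    At_d = A.T @ d
    for _ in range(n_iter):
        denom = A.T @ (A @ theta)
        theta = theta * At_d / np.maximum(denom, 1e-12)  # stays nonnegative; objective non-increasing
    return theta
```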
From equation (60), the updates for the parameters ci,1, ci,2, and ki,1 can be equivalently written as
C. Penalized MBF Algorithm
When the myocardium TACs are noisy, the MBF estimates generated by the iteration in equation (62) may be so noisy that some smoothing is necessary. Our approach for addressing the noise is to add a limiting factor, or penalty function, that penalizes solutions in which the MBF estimates for adjacent voxels differ significantly. Specifically, we develop the following penalized least-squares (PLS) method:
where P(k1) is a penalty function, Ni is a neighborhood about the ith voxel, and the penalty parameter, λ, determines the level of influence of the penalty function. For the penalty function, we choose the following quadratic penalty function:
A choice for Ni is the intersection of the set of eight nearest voxels to the ith voxel and the set of voxels that lie in the myocardium. Note, penalty functions could also be used to enforce smoothness on the other parameters c, k2, and k3.
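A sketch of a quadratic neighborhood penalty of this kind is given below; any weighting or constant factor present in equation (64) is omitted, and the eight-neighbor myocardium-mask logic is an illustrative choice.

```python
import numpy as np

def quadratic_penalty(k1_map, myo_mask):
    """Quadratic smoothness penalty: sum over myocardium voxels of squared differences
    between each voxel's k1 value and its in-myocardium 8-neighbors (illustrative form)."""
    P = 0.0
    rows, cols = np.nonzero(myo_mask)
    n_rows, n_cols = myo_mask.shape
    for i, j in zip(rows, cols):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < n_rows and 0 <= nj < n_cols and myo_mask[ni, nj]:
                    P += (k1_map[i, j] - k1_map[ni, nj]) ** 2
    return P
```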
The following majorizing function for the penalty function P about the point ki,1(m) was provided in G. Wang and J. Qi, “Penalized likelihood PET image reconstruction using patch-based edge-preserving regularization,” IEEE Transactions on Medical Imaging, vol. 31, pp. 2194-2204, 2012, which is incorporated herein by reference:
Because q1 is a majorizing function for the LS objective function ∥di−Ai(m+1) θi∥22, it follows that
q(θi,θi(m)) ≜ q1(θi,θi(m))+λq2(k1,k1(m)) (66)
is a majorizing function for the PLS objective function in equation (63) about the point θi(m). Consequently, an MM algorithm for obtaining PLS estimates of the MBF parameters can be expressed as
To get the update for ki,1 using the PLS formulation, we first take the derivative of q with respect to ki,1:
where |Ni| is the number of elements in the set Ni. Now, equating this result to zero leads to the desired result:
The penalty function does not depend on the parameters ci,1 and ci,2, so the updates ci,1(m+1) and ci,2(m+1) are given by equation (61).
In the PLS method, a choice is made for the penalty parameter λ. In our experiments, we chose the penalty parameter experimentally. However, the popular L-curve method could be used instead to determine the penalty parameter, as discussed in P. C. Hansen, "Analysis of discrete ill-posed problems by means of the L-curve," SIAM Review, vol. 34, pp. 561-580, 1992; P. C. Hansen and D. P. O'Leary, "The use of the L-curve in the regularization of discrete ill-posed problems," SIAM Journal on Scientific Computing, vol. 14, pp. 1487-1503, 1993; and T. Reginska, "A regularization parameter in discrete ill-posed problems," SIAM Journal on Scientific Computing, vol. 17, pp. 740-749, 1996, each of which is incorporated herein by reference in its entirety.
1. Semi-Parametric Model for Sampled RV and LV Tissue Curves
From an understanding of human physiology, it is well known that the RV and LV tissue curves will decay to zero. With the expectation of more accurate MBF estimation, we incorporate this knowledge by modeling the LV tissue curve as
where n0,l and β are unknown constants. We refer to this model as a semi-parametric model because only the sampled LV tissue curve values for n≧n0,l are described by a parametric model. In our semi-parametric model based method, we account for the fact that the myocardium TACs contain little information about the RV tissue curve by combining the initial RV tissue curve estimate, r(0), which is estimated in a non-parametric fashion, with the semi-parametric model for the LV tissue curve in the following manner:
where the parameter n0,r is unknown. For the estimated sampled RV and LV tissue curves to be nonnegative and decay to zero, the parameter β must satisfy the constraint 0<β<1.
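The semi-parametric construction can be sketched as follows: nonparametric samples are kept up to index n0, followed by a geometric tail scaled from the value at n0, with 0 < β < 1. The function name and argument handling are illustrative.

```python
import numpy as np

def semi_parametric_curve(samples, n0, beta):
    """Semi-parametric tissue curve: keep samples[0..n0] as is, then decay
    geometrically as samples[n0] * beta**(n - n0) for n > n0 (0 < beta < 1)."""
    N = len(samples)
    curve = np.array(samples, dtype=float)
    tail_exponents = np.arange(1, N - n0)        # exponents 1, 2, ..., N-1-n0
    curve[n0 + 1:] = samples[n0] * beta ** tail_exponents
    return curve
```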
2. LS Estimation of Sampled RV and LV Tissue Curves for n0,r and n0,l Known
For the moment, we will assume that the parameters n0,r and n0,l are known. Using the models for the sampled RV and LV tissue curves expressed by equations (70) and (71), the associated LS objective function is given by
where ln
Model Based Step 1a: Find an iterate ln
L
Model Based Step 1b
where ε is a small positive constant. Note replacing the constraint 0<β<1 with ε≦β≦1−ε makes the problem in Model Based Step 1b well defined.
First, we address the problem in Model Based Step 1a. Let the N×1 vectors srold and slold denote an estimate of the semi-parametric RV and LV tissue curves, respectively:
srold ≜ [srold[0; n0,r, βold], srold[1; n0,r, βold], . . . , srold[N−1; n0,r, βold]] (75)
=[r(0)[0], r(0)[1], . . . , r(0)[n0,r], r(0)[n0,r](βold)^1, . . . , r(0)[n0,r](βold)^(N−1−n0,r)] (76)
slold ≜ [slold[0; n0,l, βold], slold[1; n0,l, βold], . . . , slold[N−1; n0,l, βold]] (77)
=[lold[0], lold[1], . . . , lold[n0,l], lold[n0,l](βold)^1, . . . , lold[n0,l](βold)^(N−1−n0,l)] (78)
Mimicking the steps used to derive equation (38), a majorizing function for
For ν=0, 1, . . . , n0,l, the derivative of q
where sr(m) and sl(m) are derived in a similar way as sr(old) and sl(old), respectively, and
{circumflex over (d)}i[n; sr(m), sl(m), ci,1(m), θi,h(m)] ≜ ci,1(m)sr(m)[n; n0,r, β(m)]+sl(m)[n; n0,l, β(m)]*h[n; θi,h(m)] (82)
is an estimate for the data point di[n] using the semi-parametric model for the sampled RV and LV tissue curves in equations (70) and (71). From the definition in equation (70), for ν=0, 1, . . . , n0,l, it follows that sl[ν; n0,l, β]=l[ν] and sl(m)[ν; n0,l, β(m)]=l(m)[ν]. Thus, for ν=0, 1, . . . , n0,l, the partial derivative in equation (82) becomes
Setting equation (85) to zero leads to the desired update for the sampled LV tissue curve for n=0, 1, . . . , n0,l
The next update for β is obtained by solving the minimization problem in Model Based Step 1b by applying a one-dimensional line search such as described by M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming Theory and Algorithms. New York, N.Y.: John Wiley & Sons, Second Edition, Chapter 8, 1993, which is incorporated by reference. Finally, given β(m+1), the next estimates for the RV and LV tissue curves at the sample points n=n0,r+1, n0,r+2, . . . ,N−1 and n=n0,l+1, n0,l+2, . . . ,N−1, respectively, are given by:
r(m+1)[n]=r(0)[n0,r](β(m+1))^(n−n0,r)
l(m+1)[n]=l(m+1)[n0,l](β(m+1))^(n−n0,l)
Summarizing, the updates for the sampled RV and LV tissue curves when the semi-parametric model is assumed are given by
3. Estimation of the Parameters: n0,r and n0,l
We define davg,r[n] and davg,l[n] to be the average of the RV and LV TACs, respectively. Let nr,max and nl,max equal the time points where davg,r[n] and davg,l[n] equal their maximum values, respectively. Additionally, let nr,half represent the time point after nr,max where davg,r[n] is halfway down from its maximum value (note: nl,half is defined similarly). A reasonable assumption is that the time points where the sampled RV and LV tissues curves begin to decay exponentially are nr,half and nl,half, respectively.
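A simple way to locate such a half-maximum time point from an averaged TAC is sketched below; tie-breaking and any smoothing of the averaged curve are left out, and the function name is illustrative.

```python
import numpy as np

def half_decay_index(d_avg):
    """First time index after the peak where the averaged TAC drops to half its maximum."""
    d_avg = np.asarray(d_avg, dtype=float)
    n_max = int(np.argmax(d_avg))                               # peak location
    below = np.nonzero(d_avg[n_max:] <= d_avg[n_max] / 2.0)[0]  # half-max crossings after the peak
    return n_max + int(below[0]) if below.size else len(d_avg) - 1
```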
Another approach would be to first compute the K-FADS-II parameters for each (n0,r, n0,l) pair that comes from a set of candidate pairs, such as the set
{(nr,half,nl,half),(nr,half+1,nl,half+1), . . . ,(nr,half+Tmax,nl,half+Tmax)} (90)
where Tmax would be specified by the user. Then, an error criterion known as the minimum description length criterion, such as that described in B. Porat, Digital Processing of Random Signals: Theory and Methods. New York, N.Y.: Prentice Hall, 1994, which is incorporated by reference, would be used to determine the optimal estimates for n0,r and n0,l, and then the K-FADS-II parameters would be estimated using the proposed algorithm.
E. Initialization Procedure
In this section, we first discuss a straightforward way to obtain initial estimates of the RV and LV tissue curves. Then, given these initial estimates, we present a reliable way to generate initial estimates for c, k1, k2, and k3.
1. Initial RV and LV Tissue Curves
Due to the motion of the heart and the finite resolution of PET imaging, the RV and LV TACs are corrupted by activity in the myocardium so they do not decay to zero. Keeping in mind the reasoning behind the semi-parametric model for the RV and LV tissue curves, the initial RV and LV tissue curves are chosen to be
where β(0) is the initial estimate of the parameter β.
An extension of this initialization procedure is to first determine the maximum value of each RV TAC and then scale r(0)[n] in equation (91) so that its maximum value equals the average of the Q largest maximum values. The corresponding scale factor would be applied to l(0)[n] in equation (92). The underlying assumption behind this extension is that the RV and LV TACs with the largest maximum values have the least amount of noise due to activity in the myocardium. Note, in the experiments discussed below we used β(0)=0.95 and Q=3.
2. Initial Estimates for the Parameters: k2, and k3
Now, we will describe a method for finding initial estimates for k2 and k3 given the initial estimates r(0) and l(0). The model in equation (14) can be written as:
where
Comparing equations (93) and (94) it can be seen that ai[1]=pi. Given the myocardium TACs {di[n]} and initial estimates for the sampled RV and LV tissue curves, the problem is to estimate the parameter vector φ ≜ [ai[1], bi[0], bi[1], bi[2], mi[0], mi[1]].
To estimate φ, we propose an equation-error method that is a modification of the well-known Steiglitz-McBride algorithm and based on
Di(z)Ai(z)=Mi(z)R(z)+Bi(z)U(z)L(z) (95)
which follows from equation (94). The advantage of using equation (95) instead of equation (94) is that the former equation leads to a linear LS method while the latter results in a nonlinear LS formulation. The corresponding time-domain expression for equation (96) is
di[n]*ai[n]=r[n]*mi[n]+l[n]*u[n]*bi[n] (97)
where u[n] is the discrete-time unit step function. It follows from equation (97) that the parameter φ could be estimated by minimizing Li,init(φ) where
where
Algorithm 1 below is a summary of the method proposed for determining initial values for the parameters p ≜ [p1, p2, . . . , pI], k2, and k3. The reader should keep in mind the following relationships from above:
The optimization problems in (99) and (100) are straightforward because they are in the form of an unconstrained, linear LS estimation problem, which has a known solution. Specifically, the problems are of the form:
where y is the data and A is a known matrix. The unique minimum L2-norm solution is given by {circumflex over (x)}=A+y, where A+ is the pseudoinverse of A, per known applied linear algebra techniques.
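For completeness, a numerical sketch of this minimum-norm solution (with illustrative matrix sizes and random data) is:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((31, 6))               # illustrative sizes only
y = rng.standard_normal(31)

x_hat = np.linalg.pinv(A) @ y                  # x_hat = A^+ y
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)   # equivalent here, usually preferred numerically
assert np.allclose(x_hat, x_ls)
```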
It can be shown using K. Steiglitz and L. McBride, "A technique for the identification of linear systems," IEEE Transactions on Automatic Control, vol. 10, pp. 461-464, 1965, which is incorporated herein by reference, that the objective function Li,init(φ) can be viewed as a suitable approximation to the nonlinear LS objective function that follows from the relation in equation (94). Also, it should be noted that ki,2(0)+ki,3(0)=
3. Initial Estimates for the Parameters: c, k1
Given the initial estimates r(0), l(0), ki,2(0), ki,3(0), we generate initial estimates for c and k1 by solving the following minimization problem
As discussed above, this problem can be solved using the ISRA algorithm as shown below in Algorithm 2 (see also equations (61) and (62)).
In our experiments, we used J2=500.
F. Experimental Results
We tested the IV-MBF algorithm, which is summarized in Algorithm 3 below, using real patient data provided by Dr. John Votaw, Professor and Vice Chair for Research, Director of Physics and Computing in Radiology, and Professor of Radiology and Physics at the Emory University School of Medicine.
An alternative implementation simply uses the initial estimates r(0), l(0) after subtracting off activity due to the myocardium, as the final RV and LV tissue curves, respectively.
1. Description of Experimental Results Analysis
The dynamic PET data set comes from scanning an unhealthy patient at rest using a standard protocol consisting of 20 scans of duration 3 sec, followed by 5 scans of duration 12 sec, and ending with 6 scans of duration 30 sec (i.e., 31 sub-scans in total). The scanner used in the study has 22 planes and the reconstructed cardiac images are of size 42×30. Therefore, the data set consists of 31 images of size 42×30 per plane.
Let ai,j denote the measured activity in the ith myocardium voxel during the jth sub-scan. Then, {ai,1, ai,2, . . . , ai,J} for J=31 is the ith myocardium TAC in the dynamic PET data set. The measured {ai,j} can be modeled as
where ai(t) is the ith ideal continuous-time myocardium TAC (see (6)), T1=3 sec, T2=12 sec, and T3=30 sec are the sub-scan durations, and uj,1 ≜ jT1, uj,2 ≜ 20T1+(j−20)T2, and uj,3 ≜ 20T1+5T2+(j−25)T3. The IV-MBF algorithm is based on regularly sampled myocardium TAC data (i.e., di[n]=ai(nTs)). Consequently, the measured myocardium data, {ai,j}, must be pre-processed in order to estimate the regularly sampled myocardium TAC data {di[n]}. A popular approach is to first assume the measured myocardium TAC data is nearly piece-wise linear. Under this assumption, and as discussed in W. G. Kuhle, G. Porenta, S. C. Huang, D. Buxton, S. S. Gambhir, H. Hansen, M. E. Phelps, and H. R. Schelbert, "Quantification of regional myocardial blood flow using 13N-ammonia and reoriented dynamic positron emission tomographic imaging," Circulation, vol. 86, pp. 1004-17, 1992, which is incorporated by reference, it follows that the values of ai(t) at the midpoints of the sub-scan windows are approximately equal to
ai(uj,1−T1/2)≈ai,j, j=1,2, . . . ,20 (104)
ai(uj,2−T2/2)≈ai,j, j=21,22, . . . ,25 (105)
ai(uj,3−T3/2)≈ai,j, j=26,27, . . . ,31 (106)
Thus, the regularly sampled myocardium TAC data can be estimated from the measured myocardium TAC data using interpolation. Specifically, we use the "known" values for ai(t), which are {ai(uj,1−T1/2)} for j=1, . . . ,20, {ai(uj,2−T2/2)} for j=21, . . . ,25, and {ai(uj,3−T3/2)} for j=26, . . . ,31, and linear interpolation to obtain estimates for the regularly sampled myocardium TACs. Note, in our experiment we used Ts=0.5 sec for the sampling interval. It should be mentioned that the approach described above for generating regularly sampled myocardium TACs from the measured myocardium TAC data would also be used to generate the regularly sampled RV and LV TACs.
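Putting the protocol and the midpoint approximation together, the regular sampling can be sketched as follows; the helper name and the handling of the interval before the first midpoint are illustrative choices, not part of the described method.

```python
import numpy as np

# Sub-scan end times and midpoints for the 20 x 3 s, 5 x 12 s, 6 x 30 s protocol,
# per equations (104)-(106), followed by interpolation to a Ts = 0.5 s grid.
T1, T2, T3, Ts = 3.0, 12.0, 30.0, 0.5
ends = np.concatenate([np.arange(1, 21) * T1,
                       20 * T1 + np.arange(1, 6) * T2,
                       20 * T1 + 5 * T2 + np.arange(1, 7) * T3])
widths = np.concatenate([np.full(20, T1), np.full(5, T2), np.full(6, T3)])
midpoints = ends - widths / 2.0                      # u_{j,.} - T./2, one per sub-scan

def regular_tac(a_measured):
    """a_measured: the 31 measured sub-scan values a_{i,j} for a single voxel."""
    t_uniform = np.arange(0.0, ends[-1] + Ts / 2, Ts)
    return t_uniform, np.interp(t_uniform, midpoints, a_measured)
```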
In the initialization procedure, the initial sampled RV and LV tissue curves were obtained using equations (91) and (92), and the modified Steiglitz-McBride algorithm was run for J1=20 iterations to get initial estimates for k2 and k3 (see Algorithm 1). Also, J2=500 iterations of the ISRA algorithm were used to determine initial estimates for c and k1 (see Algorithm 2). The IV-MBF algorithm was run for J3=300 iterations and all one-dimensional line searches were performed using a variant of the golden section method. Finally, the penalty parameter was chosen experimentally to be λ=100.
2. Discussion of FIGS.
Some results for the IV-MBF algorithm are shown in
For a specific myocardium voxel,
G. Proof of Result 1
Let z1[n], z2[n], and w[n] be positive, causal sequences of length K. The term (z1[n]+z2[n]*w[n])2 can be expressed as
where Δold[n] ≜ z1old[n]+z2old[n]*w[n], and z1old[n] and z2old[n] are nonnegative, causal sequences. Due to the convex combination on the right hand side of equation (107) and the convexity of the square function, it follows that
Now, we determine a majorizing function for the sequence (z2[n]*w[n])2.
The square of the convolution of causal, nonnegative sequences z2[n] and w[n] can be written as
where zold[n] is a causal, nonnegative sequence and
Because γnk≧0 for all n, k, and Σk=0K γnk=1 for all n, the convexity property of the square function can be exploited to yield the desired result
Finally, Result 1 is obtained by replacing (z[n]*w[n])2 in (109) with the majorizing function defined by (114).
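Writing out the convexity step explicitly, with γnk taken to be the usual normalized weights w[k]zold[n−k]/(zold*w)[n] (an assumption, since the displayed equations are not reproduced here), one obtains:

```latex
\left((z*w)[n]\right)^2
  = \Bigl(\sum_{k=0}^{K}\gamma_{nk}\,
      \frac{(z^{\mathrm{old}}*w)[n]\; z[n-k]}{z^{\mathrm{old}}[n-k]}\Bigr)^2
  \le \sum_{k=0}^{K}\gamma_{nk}
      \Bigl(\frac{(z^{\mathrm{old}}*w)[n]\; z[n-k]}{z^{\mathrm{old}}[n-k]}\Bigr)^2
  = (z^{\mathrm{old}}*w)[n]\sum_{k=0}^{K}\frac{w[k]\, z[n-k]^2}{z^{\mathrm{old}}[n-k]},
\qquad
\gamma_{nk} \triangleq \frac{w[k]\, z^{\mathrm{old}}[n-k]}{(z^{\mathrm{old}}*w)[n]}.
```

Equality holds when z[n]=zold[n] for all n, which is exactly the majorizing-function conditions (C1) and (C2).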
VII. Implementation of the Algorithms
The methods and techniques described in the various approaches above may be utilized, implemented, and/or run on many different types of systems, including for example computers, game consoles, entertainment systems, etc. Referring to
By way of example, the system 400 may include, but is not required to include, a central processing unit (CPU) 410, a random access memory (RAM) 420, and a mass storage unit 430, such as a disk drive. The system 400 may be coupled to, or integrated with, any of the other components described herein, such as an input device 450, 460 and other input device 470. The system 400 comprises an example of a processor based system. The CPU 410 may be used to execute or assist in executing the steps of the methods and techniques described herein. In one approach, the system 400 may further comprise a graphics processing unit to execute or assist in executing the steps of the methods and techniques described herein. In some embodiments, the input device 450 may comprise a first touch sensitive panel and the input device 460 may comprise a second touch sensitive panel. Furthermore, in another aspect, the system 400 comprises another input device 470 that may comprise other user input means such as buttons, keyboard, mouse, joystick, and the like. In another aspect, other input device 470 may further comprise output means, such as displays, sound emitters, light emitters, and the like configured to provide feedback or output to a user. In one embodiment one or more of the input device 450, input device 460 and other input device 470 comprise display functionality. In one embodiment various program content, images, shadows, lighting, and the like may be rendered on one or more of the input device 450, 460 and other input device 470.
The mass storage unit 430 may include or comprise any type of computer readable storage or recording medium or media. The computer readable storage or recording medium or media may be fixed in the mass storage unit 430, or the mass storage unit 430 may optionally include external memory 470, such as a digital video disk (DVD), Blu-ray disc, compact disk (CD), USB storage device, floppy disk, or other media. By way of example, the mass storage unit 430 may comprise a disk drive, a hard disk drive, flash memory device, USB storage device, Blu-ray disc drive, DVD drive, CD drive, floppy disk drive, and the like. The mass storage unit 430 or external memory 470 may be used for storing program code or macros that implement the methods and techniques described herein.
Thus, external memory 470 may optionally be used with the mass storage unit 430, which may be used for storing program code that implements the methods and techniques described herein. However, any of the storage devices, such as the RAM 420 or mass storage unit 430, may be used for storing such program code. For example, any of such storage devices may serve as a tangible computer readable storage medium for storing or embodying a computer program for causing a console, system, computer, or other processor based system to execute or perform the steps of any of the methods, code, and/or techniques described herein. Furthermore, any of the storage devices, such as the RAM 420 or mass storage unit 430, may be used for storing any needed database(s), gestures, lists, macros, etc.
In some embodiments, one or more of the embodiments, methods, approaches, and/or techniques described above may be implemented in a computer program executable by a processor based system. By way of example, such processor based system may comprise the processor based system 400, or a computer, console, graphics workstation, and the like. Such computer program may be used for executing various steps and/or features of the above-described methods and/or techniques. That is, the computer program may be adapted to cause or configure a processor based system to execute and achieve the functions described above. For example, such computer program may be used for implementing any embodiment of the above-described steps or techniques for performing a task at the handheld device. As another example, such computer program may be used for implementing any type of tool or similar utility that uses any one or more of the above described embodiments, methods, approaches, and/or techniques. In some embodiments, the computer program may comprise a computer simulation, or system software such as an operating system, BIOS, macro, or other utility. In some embodiments, program code macros, modules, loops, subroutines, etc., within the computer program may be used for executing various steps and/or features of the above-described methods and/or techniques. In some embodiments, the computer program may be stored or embodied on a computer readable storage or recording medium or media, such as any of the computer readable storage or recording medium or media described herein.
Therefore, in some embodiments there is provided a computer program product comprising a non-transitory medium for embodying a computer program for input to a computer and a computer program embodied in the medium for causing the computer to perform or execute steps comprising any one or more of the steps involved in any one or more of the embodiments, methods, approaches, and/or techniques described herein.
While the methods and systems have been described in conjunction with specific embodiments, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description.
This application is a continuation in part of U.S. application Ser. No. 14/008,021 filed Sep. 27, 2013, which is the National Stage of International Application No. PCT/US2012/031263, filed Mar. 29, 2012, which claims the benefit of U.S. Provisional application No. 61/468,765, filed Mar. 29, 2011, and this application also claims the benefit of U.S. Provisional application No. 61/887,290, filed Oct. 4, 2013, each of which is incorporated by reference in their entireties herein.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US14/59062 | 10/3/2014 | WO | 00 |
Number | Date | Country | |
---|---|---|---|
61468765 | Mar 2011 | US | |
61887290 | Oct 2013 | US | |
61468765 | Mar 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14008021 | Sep 2013 | US |
Child | 15026697 | US |