1. Field of the Invention
The present invention relates to the field of medical imaging. More particularly, embodiments of the invention relate to methods, systems, and devices for imaging, including tomography-based and MRI-based applications.
2. Description of the Related Art
Recent advancements in the computed tomography (CT) field have increased its applicability significantly. The ionizing nature of x-rays has become a major concern, especially for high-dose procedures like cardiac CT examinations. The latest estimate suggests that CT radiation may account for 1.5-2% of all cancers in the US. Current CT scanners require that an entire cross-section of the patient body be covered by the system field of view, even if only a small interior region of interest (ROI), such as the heart or the cochlea, is of interest.
Modern biomedical imaging demands innovative solutions to inverse problems, since the key is to noninvasively obtain information about internal biomedical features. Imaging should avoid harm to the subject and keep the biological processes under investigation intact. A wide portion of the electromagnetic spectrum is utilized by various biomedical imaging modalities. Among them are fairly direct means such as the ultrasound B scan, but in most cases inverse problems must be solved, where all that can be acquired are projected, diffused or otherwise convolved signals, with x-ray and optical data being two well-known examples. X-ray computed tomography (CT) and optical molecular tomography (bioluminescence tomography and fluorescence tomography) are respectively classical and emerging modalities utilizing x-ray and infrared photons. They have important applications and share challenges in deriving innovative solutions to a set of linear inverse problems which are highly ill-posed due to incomplete, inconsistent and intricate measurements. These modalities would be enhanced by cutting-edge inverse technologies, leading to faster data acquisition, lower radiation dose, higher image quality, greater flexibility, lower system cost, and better and broader applications.
Clinical X-ray CT. X-ray CT has a central role in clinical imaging, often as the first and only imaging study before definitive intervention for a wide variety of conditions in the head, chest, abdomen, pelvis and extremities. The acceptance of x-ray CT in clinical practice is a result of numerous technical innovations including faster data acquisition, higher image quality and lower radiation dose. In 1991, a spiral/helical cone-beam scanning mode was conceptualized to solve the long-object problem [1]A. In the 1990s, single-slice spiral CT became the standard mode [2]A. In 1998, multi-slice spiral CT was introduced [3, 4]A. Despite the importance of CT and the efforts devoted so far, there are well-recognized challenges [3]A.
First, scanning speed should be increased for cardiac imaging, dynamic contrast-enhanced studies, and image-guided interventions. Even with the state-of-the-art Siemens dual-source CT scanner, a temporal resolution of 83 ms is only achievable with the aid of ECG gating around stable phases (around 30% or 70% of the R-R period) [4-8]A. Since the mean velocity of the right coronary artery in a healthy adult is 69.5 mm/s, and higher/irregular heart rates are often encountered in pediatric/injured patients, a greater temporal resolution of <20-80 ms is needed. Cardiac imaging of animals with rapid heart rates, which serve as human disease models, also requires much higher temporal resolution. Second, radiation dose must be minimized, especially for children. CT is a radiation-intensive procedure, yet cone-beam technology continues to increase the usage of CT. A British study quantified the cancer risk from diagnostic x-rays, attributing ˜700 cases of cancer per year in Britain and >5,600 cases in the US to such exposure [9]A. In the case of cardiac CT, the clinical dose benchmark is 5 mSv. With other conditions fixed, to cut image noise by half, the dose must be increased fourfold; to improve spatial resolution by 50% in 3D, the dose must be boosted 16-fold; and for a cardiac CT perfusion study, the dose is proportional to the imaging time. Therefore, dynamic volumetric cardiac CT appears impossible at <5 mSv.
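As a hedged sketch of the dose scaling quoted above (the symbols D for dose, σ for image noise, and Δx for voxel size are illustrative; the 16-fold figure corresponds to halving the voxel size under the commonly used inverse-fourth-power rule):

```latex
% Photon statistics: noise scales as \sigma \propto 1/\sqrt{D}, so halving
% the noise at otherwise fixed conditions requires
D' \;=\; \Bigl(\tfrac{\sigma}{\sigma/2}\Bigr)^{2} D \;=\; 4D .
% At fixed noise, dose scales roughly as the inverse fourth power of the
% isotropic voxel size \Delta x (three spatial dimensions plus per-voxel
% statistics), so halving the voxel size requires
D' \;=\; \Bigl(\tfrac{\Delta x}{\Delta x/2}\Bigr)^{4} D \;=\; 16D .
```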
Optical Molecular Tomography. Molecular imaging is being rapidly developed for diagnostics, drug discovery, and individualized adaptive therapies [10-13]A as part of the NIH roadmap. Small animal molecular imaging was developed to investigate inflammation processes, cardiac diseases, cancer, cystic fibrosis, and many other biological questions. The use of bioluminescent and fluorescent proteins as reporters, in particular, has enabled powerful and flexible strategies to understand biological building blocks and pathways at the cellular and sub-cellular levels. The 2008 Nobel Prize for the discovery of green fluorescent protein (GFP) [14-18]A underlines the critical role of optical molecular imaging using fluorescence and bioluminescence. Among optical molecular imaging techniques, optical molecular tomography (OMT) (fluorescence molecular tomography (FMT) and bioluminescence tomography (BLT)) is most promising [19]A.
The introduction of OMT relative to popular planar fluorescent and bioluminescent imaging can be likened to the development of x-ray CT from radiography. Without OMT, in vivo fluorescent or bioluminescent imaging is primarily qualitative. With OMT, quantitative and localized analyses become feasible on fluorescent or bioluminescent source/probe distributions.
BLT and FMT are highly ill-posed inverse problems. Although FMT of small animals has seen significant progress over the past years [20]A, BLT is only now emerging. We rigorously showed the ill-posedness of BLT in the very first paper on BLT and in subsequent analyses [21]A. Specifically, there are two issues behind this ill-posedness.
First, the diffusion approximation (DA), which is almost exclusively used, is not sufficiently accurate for optical imaging over the whole spectrum of bioluminescent (and fluorescent) probes. Since 2D multi-probe fluorescent imaging was reported [22-24]A, it would be invaluable to distinguish multiple probes tomographically in living mice for visualization of complex cellular processes [25]A. Second, diffuse optical tomography techniques can often fail to produce accurate maps of optical attenuation coefficients. Without such in vivo knowledge of optical properties, an individualized mouse model cannot be established to compensate for optical heterogeneity, and the imaging process would be too uncertain to be inverted. Clearly, innovative solutions to these two issues will significantly improve not only BLT but also FMT.
Multi-scale/parameter X-ray CT. Biology and medicine are intrinsically multi-scale and multi-dimensional scientific endeavors that require systematic approaches. In this context, multi-scale x-ray CT may serve as a common platform allowing compatible staining and imaging techniques from the cm to the nm scale, which cannot be done by any other imaging modality. However, dissection into ever smaller samples can change morphology, hydration and other physiological/pathological statuses. Fortunately, interior tomography, which the PI's group has pioneered [26-31]A, allows a sample to be much larger than the field of view of a CT system. Earlier this year, we received an NIH grant and acquired a high-end micro-CT scanner. Last month, we received an NSF grant and will develop a state-of-the-art nano-CT scanner. Coupled with our existing CT scanners, we will have one of the most comprehensive multi-scale x-ray CT facilities, seamlessly covering over six orders of magnitude in length scale and sample size.
Current biomedical x-ray imaging is restricted by the attenuation contrast mechanism. On the other hand, phase shift and dark field x-ray imaging promise 1-3 orders of magnitude improvement in soft tissue contrast or reduction in radiation dose [32]A. X-ray phase-contrast imaging can be implemented using interferometry, diffractometry or in-line holography. Because interferometry and diffractometry require monochromatic x-rays and high precision crystals, they are limited to synchrotron radiation. Alternatively, the in-line method is feasible with a polychromatic micro-focus x-ray tube, but such a source is expensive and rather weak in photon flux [33]A. Since 2006, Pfeiffer et al. have been developing grating-based imaging methods [34-36]A, capturing both refractive (phase-contrast) and Rayleigh scattering (dark-field) signals at cellular levels.
Although grating-based phase-contrast and dark-field imaging open new doors to physiological and pathological information, there is no major program in the USA, and the field faces two obstacles to biomedical applications. First, data collection for grating-based tomography is time consuming. The working principle is to utilize an interferometer based on the Talbot effect, a periodic self-imaging phenomenon, to extract phase shift and Rayleigh scattering information via a time-consuming correlation process. The data are then preprocessed into line integrals for tomography. The higher the resolution, the longer the imaging time, which is usually on the order of hours or days. Second, gratings of large sizes and high slit aspect ratios are difficult to fabricate and model. According to the published procedure [37]A, the gratings can be made from photolithography masks using a nano-fabrication facility. For example, the analyzer grating consists of Au pillars encased in epoxy and bounded by a frame. This process is prone to errors that are hard to control in the case of gratings of large sizes and high aspect ratios. Even if large gratings are made to the specifications, the resultant imaging model can be challenging for a point x-ray source.
Thus, there exists an immediate need for innovative solutions to inverse problems in x-ray computed tomography (CT) and optical molecular tomography (bioluminescence and fluorescence tomography). These imaging modalities are limited by highly ill-posed linear inverse problems due to incomplete, inconsistent and intricate measurements. The biomedical value and impact of the modalities would be enhanced by cutting-edge inverse technologies, leading to faster data acquisition, higher image quality, lower radiation dose, greater flexibility, lower system cost, and improved, more widespread application in the life sciences.
For example, while the interior problem (to reconstruct an internal region of interest (ROI) only with x-rays through the ROI) has been considered for decades to have no unique solution, interior tomography offers an exact and stable solution, given practical prior knowledge. While the Nyquist/Shannon sampling rate imposes a fundamental limit on resolution, compressive sensing unlocks hidden signal sparsity and requires much less data for recovery of an underlying signal. Over the past years, our team and collaborators have made significant theoretical and technical progress in spiral cone-beam CT, exact cone-beam reconstruction, interior tomography, compressive sensing, multi-scale CT, dynamic spatial reconstruction, carbon-nanotube-based distributed x-ray source arrays, bioluminescence tomography and fluorescence tomography. Building on the emerging theories, the inventors have created x-ray and optical imaging tools which provide practical solutions to fundamental linear inverse problems for biomedical imaging in vivo and in vitro.
Further, for example, the dynamic cardiac elastography (DCE) approach is based on the fact that impaired myocardium undergoes changes in its passive properties, typically becoming stiffer, and/or decreased active contractility, and thus changes the deformation characteristics of the heart. In other words, material parameters that describe the passive properties and active contractility can be extracted from time-varying cardiac image sequences of the heart. Previous parameter extraction methods make approximations including small strain, quasi-static deformation and linear elastic properties of myocardium, which lead to inaccurate and sometimes misleading descriptions of myocardium, and make it impossible to address its highly nonlinear and dynamic properties, in particular the active contraction of myocardium. This invention provides a highly efficient and robust DCE approach and system, based on a dynamic nonlinear adjoint gradient, for accurate quantification of the nonlinear viscoelastic passive properties and active contractility of myocardium in health and disease. For direct clinical applications, the output of DCE, including passive/active material parameters of healthy and impaired myocardium and dynamic stress and strain maps, can serve as novel quantitative indices for detecting and monitoring heart diseases. The output can also have important applications for patient-specific design and application of cardiac equipment, implants and surgeries like coronary artery bypass grafting, surgical ventricular restoration, and cardiac resynchronization therapy (CRT) devices.
Dynamic Cardiac Elastography. Heart disease is increasingly prevalent, with high morbidity and mortality at tremendous social and healthcare costs. 70% of heart failure is due to ischemia, in which myocardium is damaged. Current diagnostic methods include blood tests, chest x-rays, electrocardiograms, echocardiograms, cardiac catheterization, CT and MRI imaging, etc. However, the obtained global/geometric measurements are not adequate for quantitative detection and evaluation of regional myocardial abnormalities, or for predicting progression, especially at the early stage when the size of the impaired regions is small. Surgical treatments, like coronary artery bypass grafting and surgical ventricular restoration, have proven to be effective therapies for heart failure. It is well recognized that optimal designs of surgical equipment, implants and surgical procedures depend highly on knowledge of the mechanical properties of the patient's heart, i.e., the myocardial passive viscoelastic properties and active contractility. The current diagnostic methods and image processing algorithms do not provide quantitative knowledge of such mechanical properties. Thus, what is needed is a novel cardiac image processing method, such as tomography-based dynamic cardiac elastography (DCE), for in vivo identification of the passive nonlinear viscoelastic properties and active contractility of the myocardium of an individual heart.
The ability to evaluate dynamic elastic properties of myocardial tissue is clinically important for assessing the risk of heart diseases and monitoring their treatment. To date, this line of investigation has been greatly limited by the lack of non-invasive imaging methods. The inventors have developed image-based cardiac elastodynamic biomarkers for prevention and treatment of human heart diseases. Further, the inventors have developed and applied a novel imaging modality to extract cardiac elastodynamic characteristics tomographically. Myocardial elastodynamic properties are the phenomenological manifestation of underlying cellular activity and connectivity. This information is expressed through myocardial deformation and can reveal the pathological status of the heart. In the absence of such information, false treatment may be initiated for suspected patients with one or more symptoms, while treatment may be withheld from patients with developing heart disease who exhibit no symptoms, likely leading to sudden death or heart failure. Elastodynamic biomarkers of myocardium have the potential to enable better prevention and treatment of heart diseases.
The techniques developed by the inventors use state-of-the-art methods to solve a highly ill-posed, complicated inverse problem. Embodiments of the invention: 1) may be used to image the heart within one breath-hold, which will overturn the common perception that cardiac MRI requires long examination times; and 2) can extract elastodynamic features from myocardium, which enables new understanding of the progressive deterioration of heart muscle and improves the diagnosis and treatment of cardiac diseases.
Exact and Stable Interior ROI Reconstruction for Radial MRI. A number of magnetic resonance imaging (MRI) applications require high spatial and/or temporal resolution. With technological advances in MRI hardware systems and the development of multiple-echo data acquisition [1-4]D and image reconstruction [5-11]D strategies, fast MRI has increasingly become feasible. However, the currently achievable temporal resolution may still not satisfy the requirements of many specific applications, such as MR-guided biopsies/intravascular procedures, visualization of cardiac motion, study of brain function, cancer metastases, MRI perfusion, etc. In addition, the existing techniques to improve temporal resolution cannot generally be used to simultaneously improve spatial resolution, due to k-space sampling that obeys the Nyquist theorem. This limits MRI applications in certain areas, such as functional MRI, which demands both high temporal and spatial resolution.
In vivo tomography is limited by incomplete, inconsistent and intricate measurements, which require the solution of inverse problems. The new strategies disclosed in this application are capable of providing faster data acquisition, higher image quality, lower radiation dose, greater flexibility, and lower system cost. Such benefits can be used to advance research in cardiovascular diseases, regenerative medicine, inflammation, and nanotechnology.
The inventors have defined and solved some of the ill-posed inverse problems presented in the fields of x-ray CT and optical molecular tomography, and have validated and optimized the resultant systems, and have established and supported state-of-the-art infrastructure for their fast, routine and widespread applications in clinical and preclinical settings. The inventors have developed an interior tomosynthesis system, an integrated optical molecular tomography platform, and a multi-scale/parameter CT facility in the interior tomography and compressive sensing framework for diagnosis and intervention of cardiovascular diseases, regenerative medicine, and inflammation research.
In embodiments of the invention, the inventors use a system referred to as interior tomography, which can accurately reconstruct a ROI only from the x-rays passing through it when precise knowledge of a sub-region in the ROI is known. As illustrated in
Reconstruction algorithms are provided for compressive sampling based tomosynthesis which involve chord averaging, a SART-based integral constraint, and total variation regularization, which are sequentially repeated. The inventors have shown that interior tomography is a very effective way of reducing x-ray dose for local imaging applications like cardiac CT. It is complementary to other dose reduction techniques such as prospective gating, tube-current modulation, reduced scanner geometry, etc. Existing interior tomography reconstruction algorithms suffer from image artifacts like a drop in image intensity.
The inventive projection-onto-convex-sets based interior tomography reconstruction algorithm improves over existing methods. The exemplary algorithm described herein can provide comparable image quality even in the absence of a known sub-region and using only one filtering direction. These methods were applied to real projection data, thus demonstrating their suitability for practical applications.
Objects of embodiments of the invention further provide for interior MR imaging of the heart with significantly improved temporal, spatial and contrast resolution. One solution is to provide interior MR imaging of the heart in the compressed sensing framework. Specifically, embodiments of the invention involve combining a newly developed 3D radial MRI method (called UTE-GRE) [2-3], compressed sensing techniques [4-5], and interior tomography principles [6-7]. Using 2D radial MR data acquisition with the aid of a digital filter, a ROI-focused MR image of a phantom has already been successfully reconstructed using the interior tomography approach by our team [8], and can be enhanced using 3D radial data acquisition and the compressed sensing approach described in this disclosure. It is expected that interior MRI of the heart will decrease examination time and increase spatial resolution by a factor of at least 2, with enhanced contrast resolution to describe the deformation of the myocardium, compared to state-of-the-art 2D multi-slice techniques for cine MRI such as radial k-t FOCUSS [9]E. Such an improvement is significant for extraction of cardiac elastodynamic biomarkers as well as for the clinical applications of cardiac MRI.
The present invention further provides a tomography-based nonlinear dynamic cardiac elastography (DCE) framework for characterization of the mechanical properties of myocardium based on blood pressure and heart deformation measured with biomedical imaging techniques. The nonlinear DCE estimates the mechanical parameters by optimally minimizing the difference between the computed dynamic epi- and endocardial displacements and the corresponding experimental measurements. A dynamic nonlinear adjoint method can be used to calculate the gradients of the objective function. The nonlinear DCE may prove indispensable for identification of the viscoelastic properties of myocardium.
Additionally, both temporal and spatial resolution can be significantly improved using a region of interest (ROI)-focused MRI data acquisition scheme. In many practical MRI applications, e.g., cardiac imaging, only a small portion of the field of view (FOV) may be of clinical interest, and imaging only that ROI may still provide sufficient clinical information. However, in radial MRI, no such acquisition-based solution is available. The present invention provides an interior MRI methodology to perform ROI reconstruction without artifacts, or with a reduced amount of artifacts.
Interior tomography has recently been proposed as an exact and stable solution to this long-standing problem. Embodiments of the invention include interior MRI methodologies to perform ROI reconstruction without creating any artifacts, or with a reduced number of artifacts. This method is based on radial projection data acquisition and an interior tomography reconstruction technique. The interior tomography reconstruction technique was recently developed by Wang's group [12D]. This technique provides an exact and stable solution to the interior problem (reconstruction of an interior ROI from line integrals associated with only paths through the ROI), which, in conventional wisdom, does not have a unique solution. Though it was originally proposed for image reconstruction in computed tomography (CT), the technique can be directly extended to other tomographic modalities, such as MRI, PET, SPECT, and so on.
The interior MRI methodology provides a general MRI ROI reconstruction approach using a truly truncated Hilbert transform [12D] along with a digital filter [13D]. Aided by partial knowledge on a neighboring interval of a line segment inside the ROI, the ROI image was successfully reconstructed.
Specific embodiments of the invention include methods, systems, and devices for interior tomography reconstruction comprising performing projection onto one or more convex sets (POCS) according to:
f_n = T^n f,
where n indicates the iteration number;
T = P_1 P_2 . . . P_M;
P_1, . . . , P_M are operators that project onto the respective convex sets C_1, . . . , C_M; and
wherein the convex sets are chosen from Hilbert transform, integral constraint, minimum value, maximum value, and known-information convex sets.
Such methods, systems, and devices can be configured such that the convex sets are chosen from Hilbert transform, integral constraint, and minimum value. Preferred are embodiments further comprising performing a simultaneous algebraic reconstruction technique based (SART-based) integral constraint. Even further preferred are embodiments further comprising performing total variation regularization. Embodiments can optionally further comprise sequentially repeating one or more method steps or functions of the processing module.
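The POCS iteration described above can be sketched as follows; the three projectors shown (known sub-region, minimum value, maximum value) are three of the named convex sets, with the Hilbert-transform and integral-constraint projectors omitted for brevity, and all values illustrative:

```python
import numpy as np

# Sketch of f_n = T^n f with T = P1 P2 P3: repeated projection onto convex
# sets. Each projector maps the estimate to the nearest point of its set.
f_true = np.array([0.0, 0.3, 0.8, 1.0, 0.8, 0.3, 0.0])
known = {2: 0.8, 3: 1.0}            # assumed known sub-region values

def p_known(f):                      # project onto the known-information set
    g = f.copy()
    for i, v in known.items():
        g[i] = v
    return g

def p_min(f):                        # project onto {f : f >= 0}
    return np.maximum(f, 0.0)

def p_max(f):                        # project onto {f : f <= 1}
    return np.minimum(f, 1.0)

f = np.full_like(f_true, 2.0)        # arbitrary starting estimate
for _ in range(10):                  # apply T = P_known P_min P_max repeatedly
    f = p_known(p_min(p_max(f)))

print(f)  # every constraint (known values, bounds) is now satisfied
```

Because each set is convex and each projector is non-expansive, composing them and iterating drives the estimate toward the intersection of all constraints, the logic behind the claimed reconstruction.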
Other specific embodiments comprise a system for interior tomography reconstruction comprising: a processing module operably configured for: receiving information relating to a region of interest (ROI) of a scanned object; reconstructing the ROI into an image by performing projection onto one or more convex sets (POCS) to generate image data according to: f_n = T^n f, where n indicates the iteration number; T = P_1 P_2 . . . P_M; P_1, . . . , P_M are operators that project onto the respective convex sets C_1, . . . , C_M; and wherein the convex sets are chosen from Hilbert transform, integral constraint, minimum value, maximum value, and known-information convex sets; and comprising a processor for executing the processing module.
A multi-source interior tomography system is also included within the scope of the invention, comprising: 1) a processing module operably configured for: a) analyzing an individualized prior CT scan of a subject; and b) defining a corresponding high-order TV (HOT) objective using a set of projection data measured in the CT scan and expressed as a 1D vector Y with elements Yimeas, i=1, . . . , NY, where NY is the product of the number of detector elements and the number of projection views; wherein each measurement is a realization of a random variable:
wherein: μ(x,E) is the energy-dependent linear attenuation coefficient; sm(i) is the number of scattered photons; and the ith measurement reflects not only the signal from the path Li but also other paths Lj, j≠i, with weights bij to take off-focal radiation into account, to reconstruct μ(x,Er) at a reference energy Er; c) reducing to a discrete model:
where an object consists of Np pixels with attenuation coefficients μ=[μ1, μ2, . . . , μNp]T, aij denotes the length of a ray path within a pixel, and ri is a known constant which compensates for measurement noise and may be ignored, especially in initial analyses; d) converting to an optimization problem:
Further included in embodiments of the invention is an ultra-fast tomography system comprising: 1) a processing module operably configured for aligning and averaging multiple independently acquired datasets of the same ROI, after image registration, in a common coordinate system; and 2) a processor for executing the processing module.
Such systems are also encompassed wherein the processing module is further operably configured for processing datasets acquired by way of symmetric data acquisition. Additionally, systems of the invention can be configured wherein the processing module is further operably configured for processing datasets acquired using source-multiplexing.
A tomography-based dynamic cardiac elastography system is included in embodiments, which comprises: 1) a processing module operably configured for performing tomography-based DCE reconstruction of a ROI of an object by: extracting material parameters of heart tissues (set p) from a measurement of endo-/epicardial displacement UIm, chamber pressures, and an ECG record; minimizing the overall difference between UIm and the computed displacement uI(p); calculating the complete dynamic strain/stress fields of the heart wall; computing endo-/epicardial surfaces at all time points; and calculating the overall distance from the N sets of guide points to the computed endo-/epicardial surfaces and, if the distance is too large, updating UIm according to the currently computed endo-/epicardial surfaces until a desired criterion is met; and 2) a processor for executing the processing module.
Other embodiments may include methods for extracting cardiac elastodynamic biomarkers comprising: reconstructing passive stiffness parameters C in W (nonlinear potential energy) of myocardium from biomedical imaging measurements by: performing an optimization-based method combined with compressed sensing; finding a distribution C, T such that the computed displacement uP(X,t;C,T) best matches the displacement measurement UM(X,t) taken with MR imaging techniques, subject to the minimization of the weighted TV [w·TV(C)] in the parameter distribution; minimizing the objective function: Φ(C)=∫0τ(∫Vh·χ(X,t)·hdV)dt+w·TV(C); h=h(X,t;C)=uP(X,t;C)−UM(X,t), where the time interval [0,τ] represents one cardiac cycle, and the weight function χ(X,t) equals zero if the displacement is not measured at point X and time t; once measurements are obtained on the displacement UM(X,t) in part of the myocardial ROI, and on chamber blood pressure pM(X,t), performing inverse computation to reconstruct the passive stiffness C and active contractility T of myocardium; and optionally solving the governing PDEs and adjoint PDEs by way of a finite element (FE) format.
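A minimal discretized sketch of the objective Φ(C) above, assuming synthetic measurement data, a 1D parameter vector, and a precomputed predicted displacement; the governing PDE solve that would actually produce uP is outside this sketch:

```python
import numpy as np

# Discrete stand-in for Phi(C) = integral of chi * |u_P - U_M|^2 + w * TV(C):
# chi masks out space-time points where no displacement was measured.
rng = np.random.default_rng(2)
nx, nt = 16, 8                                    # space and time samples
u_meas = rng.standard_normal((nx, nt))            # measured displacement U^M
chi = (rng.random((nx, nt)) < 0.5).astype(float)  # measurement mask chi(X,t)
w = 0.1                                           # TV weight

def tv(c):
    """Total variation of the 1D parameter vector C."""
    return np.abs(np.diff(c)).sum()

def phi(c, u_pred):
    """Discrete objective: masked squared mismatch plus weighted TV."""
    h = u_pred - u_meas                           # h = u^P - U^M
    return (chi * h * h).sum() + w * tv(c)

c = np.ones(nx)                                   # uniform stiffness guess
print(phi(c, u_meas))  # perfect displacement match and flat C give 0.0
```

A nonuniform C raises the TV term and a mismatched predicted displacement raises the data term, so minimizing phi trades data fidelity against parameter smoothness exactly as the weighted-TV formulation above prescribes.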
A system for exact/quasi-exact interior reconstruction of radial MRI comprising: 1) a processing module operably configured for performing a projection onto convex sets (POCS) algorithm or a singular value decomposition (SVD) technique from constraints of the truncated Hilbert transform:
and 2) a processor for executing the processing module, are encompassed by the invention as well.
Embodiments of the present invention further include: an imaging system comprising: 1) a processing module operably configured for performing Gaussian-based computations using bioluminescence tomography type data and for reconstructing into image form a region of interest (ROI) of an object from which the data was acquired; and 2) a processor for executing the processing module.
It is noted that although only specific embodiments of methods, systems, and devices are listed in this summary, these embodiments can be expanded to cover methods, systems, and/or devices regardless of which type of embodiment is listed. For example, when referring to only a method, such disclosure should be construed to include devices and systems comprising the same elements. Further, these specific embodiments can be altered or modified by omitting one or more elements specifically listed and/or by combining elements of another listed embodiment therewith. For example, if a method embodiment refers to having two method steps, that embodiment can be construed as a system capable of performing only one of those functions and/or as a system capable of performing both of the listed functions and any other function listed for another embodiment. It is within the capabilities of those of ordinary skill in the art to modify this disclosure in this way.
FIGS. 29A1, 29A2, and 29B are schematics illustrating multi-source interior tomography systems in accordance with the invention.
This compressive sampling (CS) based interior reconstruction technique can be directly applied to tomosynthesis. To demonstrate its feasibility, a mouse study was performed using a carbon nano-tube based technique.
For the study, an x-ray tube was used that is a compact and portable version of the x-ray source reported in Cao, G. H., et al. Respiratory-gated micro-CT using a carbon nanotube based micro focus field emission x-ray source—art. no. 691304, in Medical Imaging 2008 Conference, 2008, San Diego, Calif.: Spie-Int Soc Optical Engineering. The x-ray tube was operated at 1 mA peak tube current, 45 kV anode voltage, 112×152 μm effective focal spot size, and 20 msec pulse width. The x-ray exit window consists of 0.2 mm Be. No additional filtration was used in this experiment.
The x-ray detector was a CMOS flat-panel sensor with a CsI scintillator plate directly deposited on a photo diode array. It has 2400×2400 pixels with a nominal pixel size of 50 μm, giving an active area of 120×120 mm2. It has a thin carbon fiber cover and allows up to 4×4 on-chip binning and 9 frames/sec high-speed imaging.
The camera was configured to run at 1 frame/sec and utilize only about ¼ of its full field of view (FOV) at the central region without binning, producing two-dimensional 1024×1024 projection images at 50 μm×50 μm pixel resolution. This detector configuration, together with the scanner geometry, gives an effective FOV of 39×39 mm2 and an effective digital sampling of 38.5 μm at the object plane. The gantry is driven by a digital stepping motor (Velmex, Inc., Bloomfield, N.Y.) and is able to travel a full circle with a resolution of 0.001 degree.
In vivo mouse imaging experiments were performed following the protocols approved by the University of North Carolina at Chapel Hill. Several mice with an average weight of 25 grams were imaged. Animals were anesthetized with 1-2% isoflurane at a flow rate of 1.5-2 L/min from a vaporizer. The animal breathing rates after anesthetization were typically in the range of 80-100 breaths/min. A physiological monitoring system (BioVet, Spin Systems (QLD) Pty Ltd, Brisbane, Australia) was used to obtain the cardiac and respiration signals. The anesthetized animals were placed on top of the pressure sensor in the mouse sample holder and secured with adhesive restraints. The animals were put in the prone position such that the respiration sensor was approximately in the position of the abdomen to achieve maximum sensor coupling. To obtain the cardiac signals, ECG electrodes were taped to the footpads. Prior to imaging, iodinated contrast agent (Fenestra V C, Advanced Research Technologies Inc., Montreal, Canada) was infused via tail vein injection (0.02 mL/gram of mouse).
Projection images were acquired using an inventive gating approach, in which the gating electronics are the same as those described in Cao, referenced above, except that in this study the physiological triggers were generated based on both respiration and cardiac gating. Physiological triggers were generated if and only if the first QRS complex of the ECG occurred within an acquisition window defined at a specific phase of the respiration cycle. For the CT scans carried out in this study, 400 projections were acquired over a circular orbit of 199.5 degrees with a stepping angle of 0.5 degree at single frame acquisition. By running the detector at 1 frame/sec (camera integration time=500 ms), the scan time was 15-30 min, depending on the mouse's respiration and heart rates. The scan time depends critically on the temporal coincidence between the physiological triggers and the exposure windows, in addition to the temporal coincidence between the QRS complex of the ECG and the acquisition window of the respiration signal.
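The dual-gating rule can be sketched as a simple check; the helper name and window bounds below are illustrative assumptions, not the actual gating electronics logic:

```python
def gated_trigger(qrs_times, window_start, window_end):
    """Return the first QRS time that falls inside [window_start, window_end),
    emitting a trigger only when cardiac and respiration phases coincide."""
    for t in sorted(qrs_times):
        if t >= window_end:          # window closed before a QRS arrived
            return None
        if t >= window_start:        # first QRS inside the respiration window
            return t
    return None

# times in seconds within one respiration cycle (illustrative values)
trigger = gated_trigger([0.12, 0.45, 0.80], window_start=0.40, window_end=0.60)
```

A trigger fires here because a QRS complex (at 0.45 s) falls inside the respiration-defined window; with no QRS in the window, no exposure would occur, which is why scan time varies with the animal's respiration and heart rates.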
For the above acquired in vivo mouse cardiac projection datasets, we performed a CS-based interior reconstruction. Using the generalized Feldkamp algorithm (Feldkamp, L. A., L. C. Davis, and J. W. Kress, Practical cone-beam algorithm. J. Opt. Soc. Am., 1984. 1(A): p. 612-619; Wang, G., et al., A General Cone-Beam Reconstruction Algorithm. IEEE Transactions on Medical Imaging, 1993. 12(3): p. 486-496), a volumetric image was first reconstructed to serve as a global standard for the interior reconstruction. From this image volume, a circular ROI was specified on a transverse slice to cover the contrast-enhanced beating heart. A mask image was then created for the ROI, and a forward projection was performed to generate a mask projection. The mask projection was then binarized to extract only the projection data through the ROI as the interior scan dataset. The interior reconstruction was performed using a CS-based algorithm. Yu, H. and G. Wang, Compressed sensing based interior tomography. Phys Med Biol, 2009. 54(9): p. 2791-2805, the disclosure of which is hereby incorporated by reference herein in its entirety. For different scanning ranges of tomosynthesis, the final reconstruction results are shown in
More specifically,
This new method has great potential to reduce the x-ray radiation dose. The key step in the interior reconstruction is to invert a truncated Hilbert transform. Here, an improved projection onto convex sets technique is presented for interior reconstruction. To construct convex sets, several constraints are used, which include knowledge of a subregion in the ROI, Hilbert transform data within the ROI, image bounds including positivity, interior projection data through the ROI, cross-section boundary, total-variation regularization, line integral of x-rays through the ROI and scout views covering the cross-section.
Constraints for Interior Tomography in a POCS Framework. Numerical studies are performed to compare the effects of different constraints. These constraints are imposed simultaneously, selectively and individually to obtain corresponding reconstructions and evaluate the relative strengths of typical constraint combinations.
Recently, x-ray interior tomography was proposed for local applications such as cardiac imaging [1], which is a promising way to significantly reduce x-ray dose. Using such methods and systems, the scan field of view (FOV) can be as small as the ROI, and the projection data can be truncated at both ends beyond the ROI. The intensity of a sub-region within the ROI must, however, be known. Previous efforts to reduce the projection data for reconstruction permitted using shorter x-ray source angular scan ranges [2-4] and truncating projection data at one end beyond the ROI [5], but none of these allowed use of a FOV completely within the object support. Interior tomography results have also been independently reported by other research groups [6, 7].
Along the direction of interior tomography, we extended its conditions to permit 3-D ROI reconstruction [8], and tested the feasibility of using blood in aorta and bone in vertebra as known sub-regions for cardiac imaging [9]. When exact information is not available, we also proposed compressive sensing based interior tomography [10] to reconstruct piece-wise continuous imaging objects. In the framework of backprojection filtration, by backprojecting the differentiated projection data we can obtain the Hilbert transform image which must be inverted to obtain the original image. However, as shown in
where PV represents the principal value and [−1,1] is the normalized compact support. Eq. (1) can be discretized as
Gl=HFl, (2)
where H is the Hilbert transform matrix whose coefficients can be calculated as described in [11, 12], Fl=[f1, f2, . . . , fn, . . . , fN]T contains the intensity values of the pixels on the compact support [−1,1], and Gl=[g1, g2, . . . , gm, . . . , gM]T is a vector of the known Hilbert transform values corresponding to sample points on the interval [p, r].
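As a rough illustration of the discretization above, a naive finite Hilbert transform matrix can be sketched as follows. The exact coefficients of [11, 12] are not reproduced; the sample points, the interval bounds, and the crude principal-value handling here are illustrative assumptions:

```python
import numpy as np

def hilbert_matrix(xs, ys, dx):
    """Naive discrete finite Hilbert transform matrix with entries ~ dx/(pi*(x_n - y_m)).

    xs: sample points of f on [-1, 1]; ys: points on [p, r] where g is known.
    The principal value is approximated crudely by zeroing near-singular entries.
    """
    H = np.zeros((len(ys), len(xs)))
    for m, y in enumerate(ys):
        for n, x in enumerate(xs):
            if abs(x - y) > dx / 2:      # skip the singular neighborhood
                H[m, n] = dx / (np.pi * (x - y))
    return H

xs = np.linspace(-1.0, 1.0, 64)          # pixels on the compact support
ys = np.linspace(-0.3, 0.5, 16)          # hypothetical interval [p, r]
H = hilbert_matrix(xs, ys, xs[1] - xs[0])
```

With such an H, the forward relation Gl=HFl becomes an ordinary matrix-vector product, and inverting the truncated transform reduces to solving this (ill-conditioned) linear system.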
Note that the values of Fl are known on [p,q] and reconstructible on [q,r]. It is a non-trivial problem to invert the truncated Hilbert transform. The method of projection onto convex sets (POCS) [13] is normally employed to perform the inversion. The inversion can also be carried out using our recently proposed truncated singular value decomposition based method [11]. The known sub-region is generally small and permits reconstruction of a strip (group of lines). This reconstructed strip can then be used as the known sub-region to reconstruct along the orthogonal direction and thereby enable entire-ROI reconstruction [9]. For this approach, the final image quality depends on the inversion errors of the first pass. To improve the image quality, we developed a chord averaging method in which image strips are reconstructed along various directions.
Inversion with POCS involves repetitive projection of the desired variables from one convex set to another. The solution lies in the intersection of all the convex sets. The convex sets/constraints commonly employed for interior tomography applications are i) the Hilbert transform of the image along the π-lines in the FOV is equal to the backprojection of the differentiated projection data, ii) the integral of the image along the π-lines is equal to the projection along that line, iii) knowledge of a known sub-region within the ROI, iv) the maximum intensity in the image, and v) image positivity. The solution set corresponding to these convex sets is broad, and the reconstructed image quality has been found to be limited [5, 9]. The performance of the POCS method can be improved by providing some additional information outside the ROI in the form of scout views covering the entire object support and by introducing additional convex sets to more accurately implement the line integral constraint for all the rays.
The Hilbert transform constraint operates on one line segment (Fl) in the image at a time, while the SART-based integral constraint operates on the 2-D image obtained as a union of these 1-D images. These constraints provide the following least-squares solutions:
Minimize ∥Fl∥2 subject to HFl=Gl, l∈L, (3)
Minimize ∥F∥2 subject to AF=P, (4)
where L is the union of the sets of lines in all filtering directions, F is the vector form of the image, A is the system matrix, and P is a vector corresponding to the projections.
L2 norm minimization makes the inherent assumption that the image is smooth and cannot provide information about sharp features like edges in an image [14, 15]. Recently, there has rightly been increased interest in total variation regularization. This involves minimization of a total variation function, which can be written as:
Minimize ∥∇F∥1 subject to HFl=Gl, l∈L, AF=P, and F∈C1∩C2∩ . . . ∩CN, (5)
where ∇ is the gradient operator and C1, C2, . . . , CN are N additional constraints.
The seminal work of Rudin et al. [15] demonstrated the edge-preserving properties of this method. They also pointed out the capability of total variation methods to project the solution back onto the constraint set. Due to their nonlinearity, these methods are computationally expensive. Vogel et al. further demonstrated their advantages and developed an efficient iterative algorithm [16] to improve computational efficiency. In recent years, compressed sensing theory [17, 18] highlighted previously unknown capabilities of total variation methods. The theory states that sparse or approximately sparse signals can be exactly recovered from severely underdetermined systems even if the measurements are inaccurate. Medical images themselves are not sparse, but their gradients generally are, with uniformity within an organ and variations at its boundary [19]. Total variation has already been successfully applied to diffraction tomography [20], SPECT [21], and CT [19, 22].
This invention provides an improved projection onto convex sets method and evaluates the contributions of these constraints on reconstruction image quality.
METHODS. An improved POCS method is presented for interior tomography reconstruction. POCS involves repetitive projection onto various convex sets. If P1, . . . , PM are the operators that project onto the respective convex sets C1, . . . , CM, then the composite operator can be written as T=P1P2 . . . PM and the POCS procedure can be written as fn=Tnf, where n indicates the iteration number. The convex sets used and how to impose them are presented next, followed by the reconstruction algorithm for interior tomography.
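The POCS iteration fn=Tnf can be sketched minimally as below. The two stand-in projectors (positivity and a maximum bound) are only illustrative; they are not the full constraint set C1-C9 of the method:

```python
import numpy as np

def project_positive(f):
    return np.maximum(f, 0.0)          # projection onto the convex set {f : f >= 0}

def project_max(f, fmax=2.5):
    return np.minimum(f, fmax)         # projection onto the convex set {f : f <= fmax}

def pocs(f, projectors, n_iter=50):
    """Apply the composite operator T = P1 P2 ... PM for n_iter iterations."""
    for _ in range(n_iter):
        for P in projectors:
            f = P(f)
    return f

f0 = np.array([-1.0, 0.5, 3.0])
f = pocs(f0, [project_positive, project_max])
```

Because each projector maps onto a convex set, the composite iteration converges toward a point in the intersection of the sets, which is the principle exploited by the full reconstruction algorithm.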
Convex Sets. Hilbert Transform Constraint: Backprojection of the differentiated projection data gives the Hilbert transform of the image in the ROI as in Eq. (1). In every iteration, this constraint can be imposed by equating the Hilbert transform within the ROI with its known values and then inverting to obtain the image intensities that satisfy the constraint. For a line in the image, this constraint can be mathematically written as
C1={f∈Fl|HFl=gl(x), x∈(p,r)} (6).
Integral Constraint for x-rays in the ROI: Since the projections for all the x-rays passing through the ROI are known, the integral of the intensities along each such line is known. This can be mathematically written as
C2={f∈Fl|Cf=∫−11fl(x)dx} (7).
To implement this constraint, the intensity of each sample point along the line is multiplied by the ratio of the real integral Cf to the integral estimated along the same line.
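A minimal sketch of this rescaling is given below. The scaling factor, the known integral divided by the current estimate, is our reading of the constraint, and the sample values and step size are illustrative:

```python
import numpy as np

def enforce_integral(f_line, c_f, dx):
    """Rescale samples along a line so their integral matches the known value c_f."""
    est = f_line.sum() * dx              # current estimate of the line integral
    if est != 0:
        f_line = f_line * (c_f / est)    # multiply by known/estimated ratio
    return f_line

f = np.array([1.0, 2.0, 1.0])
f = enforce_integral(f, c_f=3.0, dx=0.5)  # estimated integral was 2.0
```

After the update, the discrete integral of the line equals the known value exactly, which is what projection onto the convex set C2 requires.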
Image Positivity: In the CT field, the image intensities represent x-ray attenuation coefficients, which are always positive. This constraint can be imposed by setting all image pixels with negative intensity to zero. It can be mathematically written as
C3={f∈Fl|fl(x)≧0, x∈[−1,1]} (8).
Maximum Value: Generally, there is some idea of what the maximum intensity within the image should be. In non-contrast imaging of humans, this may correspond to the maximum attenuation coefficient of bone. A suitable value above the largest possible intensity can be chosen as fmax. This constraint can be imposed by clamping all image pixels with intensity above this value to it. It can be mathematically written as
C4={f∈Fl|fl(x)≦fmax, x∈[−1,1]} (9).
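Constraints C3 and C4 together amount to a clamp, sketched below; fmax=2.5 follows the value used later in the simulations, and the sample image is illustrative:

```python
import numpy as np

def clamp_image(f, f_max=2.5):
    """Impose positivity (C3) and the maximum-value bound (C4) in one step:
    negative pixels go to zero, pixels above f_max are clamped to f_max."""
    return np.clip(f, 0.0, f_max)

img = np.array([[-0.2, 1.0],
                [ 3.1, 2.0]])
img = clamp_image(img)
```

Both operations are projections onto convex sets (half-spaces per pixel), so they slot directly into the POCS iteration.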
Known Sub-region: This constraint can be imposed by equating the pixels whose values are known beforehand to the corresponding values. It can be written as
C5={f∈Fl|fl(x)=fknown(x), x∈(p,q)} (10).
Knowledge of Compact Support: Knowledge of the compact support may be obtained prior to performing a scan. A light source can be rotated around the object like the x-ray source, and whether the light arrives at the detectors can be sensed. Sinogram-like data can then be generated by assigning a small constant value at the locations where the light does not reach the detector and a value of zero at locations where it does. These data can be differentiated and backprojected in the horizontal and vertical filtering directions to obtain the respective Hilbert transform images. The sum of the absolute values of these two images provides an image with values close to zero within the boundary and high values at the boundary. From this, the compact support of the object can be detected in various ways. For the numerical experiments here, the compact support was obtained as the region with values below a small threshold. The compact support so determined was broadened slightly by first adding 10 pixels to its boundaries along the row and then adding 10 pixels to those along the column. Calling the set of pixels within the compact support Ω, this convex set can be written as
C6={f∈F|f∈Ω}∪{0|f∉Ω} (11).
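A rough sketch of this support-detection step follows, assuming a synthetic summed Hilbert-magnitude image and a simple roll-based dilation for the 10-pixel broadening; the threshold and array sizes are illustrative:

```python
import numpy as np

def compact_support(hsum, thresh, pad=10):
    """Threshold the summed |Hilbert| image (low values lie inside the boundary),
    then broaden the support by `pad` pixels along rows, then along columns."""
    support = hsum < thresh
    for axis in (1, 0):                      # rows first, then columns
        for _ in range(pad):
            support = support | np.roll(support, 1, axis=axis) \
                              | np.roll(support, -1, axis=axis)
    return support

# synthetic summed Hilbert-magnitude image: low inside a square "object"
hsum = np.ones((64, 64))
hsum[20:40, 20:40] = 0.01
mask = compact_support(hsum, thresh=0.5)
```

The roll-based dilation wraps at the array edges, which is harmless as long as the object sits away from the borders; a morphological dilation would be the more careful choice.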
SART-based integral constraint for x-rays passing through ROI: The SART algorithm can be used to impose the integral constraint for all the x-rays passing through the ROI. This constraint can be written as [23, 24]:
where iter indicates the iteration number, Fiter is the vector of image pixel values at the current iterate, i=1, . . . , I indexes the rows of the system matrix A, each of which corresponds to an x-ray projection, M is the set of x-rays passing through the ROI, j=1, . . . , J indexes the columns of A and the image pixels, ai is the ith row of A, aij=wijΔx with wij being the weight contribution of fj on the image grid to the projection pi, Li is the length of the intersection between the ith x-ray path and the boundary of the region being reconstructed with SART, wi=Σj=1Jwij, and pi−aiFiter is the ith difference between the real line integral and the line integral estimated from the current image Fiter.
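Since the exact update formula is not reproduced above, the following is a hedged sketch of one SART pass for the rays in M, using the standard SART weighting [23, 24]; the original's normalization via Li and wi may differ in detail, and the tiny system matrix is illustrative:

```python
import numpy as np

def sart_pass(F, A, p, rays, relax=1.0):
    """One SART pass. F: image vector, A: system matrix (rows = rays),
    p: measured projections, rays: indices of the rays in the set M."""
    row_sums = A.sum(axis=1)                 # sum_j a_ij for each ray
    col_sums = A[rays].sum(axis=0)           # sum_{i in M} a_ij for each pixel
    update = np.zeros_like(F)
    for i in rays:
        if row_sums[i] > 0:
            # residual between measured and estimated line integrals,
            # backdistributed along the ray with row normalization
            update += A[i] * (p[i] - A[i] @ F) / row_sums[i]
    F = F.copy()
    nz = col_sums > 0
    F[nz] += relax * update[nz] / col_sums[nz]
    return F

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])                   # two rays over a two-pixel image
p = np.array([3.0, 1.0])
F = sart_pass(np.zeros(2), A, p, rays=[0, 1])
```

Restricting `rays` to M implements constraint C7; passing the scout-view rays instead implements constraint C8 with the same machinery.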
SART-based integral constraint for x-rays in scout views: A scout view contains x-rays through the entire compact support. In a clinical setting, two orthogonal scout views are normally acquired to determine the start and end points of the scan in the longitudinal direction. This constraint can be written as:
where N is the set of x-rays in the scout views and the description of all other parameters is the same as for Eq. (13).
Total variation regularization: The problem can be defined as:
C9={f∈F|min ∥∇F∥1 subject to C1 to C8} (15).
In the image processing literature, it is well known that the gradient magnitude of an image can be written as
|∇F(x,y)|=√[(∂F/∂x)²+(∂F/∂y)²] (16).
Equation (16) can be approximated in terms of image pixels as [19]:
|∇F(x,y)|≈√([f(x,y)−f(x−1,y)]²+[f(x,y)−f(x,y−1)]²) (17).
The TV function ∥∇F∥1 is then obtained as
U=∥∇F∥1=ΣxΣy|∇F(x,y)| (18).
Reconstruction Algorithm. The algorithm consists of three major steps: chord averaging, the SART-based integral constraints, and total variation regularization, which are sequentially repeated. The compact support constraint (C6) is imposed after each of the three steps. During the chord averaging step, in each filtering direction, the current image intensities are projected onto the convex sets (C1-C5) of the Hilbert transform, integral, positivity, maximum value, and known sub-region constraints. In the next step, an iteration of the SART-based integral constraint (C7) is performed using the x-rays passing through the ROI, followed by an iteration of the SART-based integral constraint (C8) for the x-rays corresponding to the two scout views. A gradient descent algorithm, as in references [19, 22], is used to perform the total variation regularization. To use the gradient descent algorithm, the gradient of the total variation function (Eq. 18) must be available with respect to the image pixels f(x,y). This gradient can be approximated as in [19, 21]:
where V is the gradient of the total variation function U, and δ is a small positive number necessary to prevent the denominator from going to zero. The pseudo code of the inventive algorithm is as follows:
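The TV function (Eqs. 17-18) and a smoothed gradient in the spirit of Eq. (19) can be sketched as follows; the δ-regularized magnitude and the adjoint-of-differences form of the gradient are our approximations, not the exact expression of [19, 21]:

```python
import numpy as np

def tv(F):
    """Total variation U of Eq. (18) using the discrete differences of Eq. (17)."""
    dx = np.diff(F, axis=0, prepend=F[:1, :])
    dy = np.diff(F, axis=1, prepend=F[:, :1])
    return np.sqrt(dx**2 + dy**2).sum()

def tv_grad(F, delta=1e-8):
    """Smoothed gradient V of U; delta keeps the denominator away from zero."""
    dx = np.diff(F, axis=0, prepend=F[:1, :])
    dy = np.diff(F, axis=1, prepend=F[:, :1])
    mag = np.sqrt(dx**2 + dy**2 + delta)
    gx, gy = dx / mag, dy / mag
    # adjoint of the difference operators (wraps at edges; fine away from borders)
    return (gx - np.roll(gx, -1, axis=0)) + (gy - np.roll(gy, -1, axis=1))

F = np.zeros((8, 8))
F[4, 4] = 1.0                                  # single-pixel "spike"
g = tv_grad(F)
F_new = F - 0.1 * g / np.linalg.norm(g)        # one normalized descent step
```

A single normalized descent step already lowers the TV of the spike image, which is the behavior the gradient-descent stage of the algorithm relies on.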
1) α=0.2, stepred=0.95, NTV=20, flag=0,
2) F=0
3) repeat main loop
4) Fo=F
5) Project F onto convex set C6. Impose the compact support constraint
6) For k=1, K
7) Project F onto convex sets C1 through C5, i.e., impose the Hilbert transform, integral, image positivity, image maximum and known sub-region constraints
8) End for
9) Project F onto convex set C6. Impose the compact support constraint
10) Fres=F
11) PM=AMF. Estimate the forward projections for the x-rays passing through the ROI.
12) Project F onto convex set C7. Impose the SART-based integral constraint for the x-rays passing through the ROI
13) PN=ANF. Estimate forward projections for x-rays in the scout views.
14) Project F onto convex set C3. Impose the image positivity constraint
15) Project F onto convex set C6. Impose the compact-support constraint.
16) Project F onto convex set C8. Impose the SART-based integral constraint for x-rays corresponding to scout views
17) Project F onto convex set C3. Impose image positivity constraint
18) Project F onto convex set C6. Impose the compact support constraint
19) NonTVupdate=∥F−Fo∥2
20) If (flag==0), Estimate the initial step-size
21) step=α×NonTVupdate
22) flag=1
23) End if
24) Fo=F
25) For out=1,NTV
26) Calculate the gradient of TV function using Eq. 19 and obtain normalized descent directions D
27) F=F−step×D
28) End for
29) TVupdate=∥F−Fo∥2
30) If TVupdate<stepred×NonTVupdate then step=step×stepred.
31) Until stopping criteria are met.
NonTVupdate represents the image update due to imposition of constraints C1 through C8, and TVupdate represents the corresponding update due to the gradient-descent step that imposes constraint C9. K is the number of filtering directions used for the chord averaging step.
The number of gradient-descent steps taken in each iteration is defined by the parameter NTV. The initial step-size (step) is determined by the parameter α as a small fraction of the first non-TV update due to constraints C1-C8. To ensure that the changes in the image due to the non-TV constraints and the TV constraint are similar, the gradient-descent step-size is reduced by the factor stepred when warranted by the ratio of the gradient-descent update to the POCS update. We used a fixed number of iterations, 1000 for this study, as the stopping criterion. Fres is the resultant image obtained after the chord averaging step and imposition of the compact support constraint.
Numerical Results. The low-contrast Shepp-Logan phantom was used for rigorous testing of the reconstruction algorithm. In particular, 1200 views of non-truncated projection data were acquired. However, only the x-rays passing through the ROI were used for the reconstructions. Two scout views covering the entire compact support were also utilized. The source to iso-center distance was 75 cm. A linear virtual detector 84 cm in length with 1050 detector pixels was used. The reconstructed image size is 50×50 cm2 with 513×513 pixels and radius of ROI was 8 cm.
Various simulation cases are described in Table 1.
The purpose of these simulations was to demonstrate the merits of the algorithm and understand the importance and contributions of various constraints. These cases are divided into three types to present the results in a simple and meaningful manner.
For type 1 reconstructions (simulation cases 1 through 11 and case 18), 60 filtering directions over 180° and a combination of constraints are utilized for the chord averaging step. In each filtering direction, only pixels on the lines passing through the defined known sub-region are projected onto the convex sets used during the chord averaging step, regardless of whether known-information is utilized or not.
For type 2 reconstructions, cases 12 and 13, only one constraint is utilized for reconstruction. Reconstruction for case 12 is carried out similarly to the type 1 cases except that only the Hilbert transform constraint is enforced. In case 13, only SART is used and hence there is no filtering involved.
When no known information is utilized, pixels on all the lines within the ROI can be projected onto the convex sets during the chord averaging step. We call these type 3 reconstructions. Accordingly, cases 14 through 17 and case 19 are reconstructed using one or more filtering directions.
Original images, with the known information, ROI, known sub-region, and image profile locations, are shown in
Convex sets C2 and C7 both impose integral constraints. The importance of, or need for, a particular integral constraint can be understood by comparing the results for cases 1, 2 and 3. The most accurate image, in terms of both subjective and quantitative image quality, is obtained for case 2. Thus, the simple integral constraint by itself, as in case 1, or even in combination with the SART-based integral constraint, as in case 3, degrades the image quality. Hence, in all the numerical simulation cases after case 3, the simple integral constraint was not utilized.
From the results for case 4, it can be seen that known-information makes only a small contribution to improving the image quality, and hence the unavailability of known-information does not have much impact on the reconstructed image quality. This is a very desirable result since known-information may not be available in some cases. Case 5 results demonstrate that the positivity constraint (C3) helps to improve both the subjective and quantitative image quality, while its absence does not significantly worsen it. Results for case 6 show that the image maximum constraint (C4) helps in reducing the maximum error but does not make a significant contribution to the final reconstructed image quality. A maximum value of 2.5 was used for our simulations, which is greater than the maximum intensity of 2 in the Shepp-Logan phantom. From the results for cases 7 and 8, it can be seen that the SART-based integral constraints for x-rays passing through the ROI (C7) and for x-rays in the scout views (C8) are important, and the absence of either results in poor reconstructed image quality. Case 9 results show that the TV-regularization constraint (C9) makes a significant contribution to the final reconstructed image and its absence leads to increased errors, but it does not seem to be absolutely necessary. From the results for cases 10 and 11, it can be seen that the Hilbert transform constraint (C1) and the compact support constraint (C6) are important, and their absence results in poor reconstructed image quality. Though the Hilbert transform constraint (C1) and the SART-based integral constraint (C7) are important constraints, the results for cases 12 and 13 show that neither of them is sufficient to reconstruct the image by itself.
As explained above, type 3 reconstruction becomes possible when knowledge of a known sub-region is not necessary. Case 14, a type 3 case, is very similar to case 4, a type 1 case, except that only one filtering direction was utilized for the former. The image quality for case 14 is comparable to that for case 4, and hence, when known information is not utilized, the entire ROI can be reconstructed using only one filtering direction. The results for case 14, which utilizes only the vertical filtering direction, indicate that more accurate reconstruction can be obtained if a filtering direction is chosen in which there is less truncation within the compact support. The results for cases 16 and 17, in which 2 and all 60 filtering directions are respectively utilized, show that use of more filtering directions decreases the error and non-uniformity but also reduces the resolution.
From our numerical simulations for cases 1 through 13, we observed that the four convex sets of Hilbert transform, compact support, SART-based integral constraint, and SART-based scout constraint are essential. Results for case 18, a type 1 reconstruction using only these constraints, are somewhat inferior to those for case 2, a type 1 reconstruction in which all other constraints except the simple integral constraint (C2) are included, but the image seems to contain the essential features. So, whenever possible, the other constraints should be used. Results for case 19, a type 3 reconstruction, are inferior to those for case 14, which is similar except that it uses more constraints. Note that both case 19 and case 14 do not use known information, which is good news, since that is the only information that may not be available in some cases.
The inventive method was also applied to real projection data of an anthropomorphic cardiac phantom. The FBP reconstructed image from non-truncated data, the ROI, the known sub-region and the compact support used are shown in
The good reconstructed image quality, closely matching profiles and good image quality parameters demonstrate the effectiveness of these methods. The high maximum errors are due to differences at the edges of the high contrast structures within the ROI. The low mean error and noise demonstrate the accuracy of the interior tomography reconstructions.
New convex sets of compact support, SART-based integral constraint for x-rays passing through the ROI, SART-based integral constraint for x-rays in scout views and total variation algorithm were introduced. Evaluation of the exemplary algorithm showed that the first three of these constraints are essential for accurate interior tomography reconstruction. While total variation regularization constraint is not necessarily required for reconstruction, its use improved image quality significantly.
As shown, the algorithm is capable of accurately reconstructing the interior ROI. An important finding is that the simple integral constraint (C2) used in the existing algorithm introduces image artifacts and degrades the image quality. Additionally, the contribution made by known-information is very small, and hence the image can be accurately reconstructed even in the absence of known-information. Using this insight, reconstruction using only one filtering direction produced an image comparable to one reconstructed using several filtering directions and known information.
Some very minor artifacts exist due to the fixed number of iterations and limitations of the algorithm itself. The simplest way to overcome these artifacts would be to add a few views which cover the entire compact support [27]. From our experience, approximately 10-15 such views will enable exact reconstruction. Acquiring these views in the form of scout views may not be acceptable, but they can be acquired as part of the scan. This may be incorporated into current CT scanners by simple acquisition software changes. It may also be possible to improve reconstruction image quality by using estimated projection data for a few views. These projection data can be estimated using existing methods [28, 29]. Better processing of the edges, as proposed in our compressive sensing based interior tomography article [10], may also improve image quality. These edges can be pre-determined using lambda-tomography. Additional convex sets which establish relationships between neighboring pixels can also be introduced. More accurate forward projection models for SART will also reduce reconstruction errors.
Biomedical x-ray and optical tomography explicitly or implicitly result in challenging linear inverse problems of a highly ill-posed nature, since x-ray imaging (attenuation, refraction and Rayleigh scattering) and optical molecular imaging (fluorescence and bioluminescence) can be well approximated as linear systems.
An ideal solution to these linear inverse problems would be to collect the most essential data in the shortest possible time and yet produce the best possible images. To capitalize on these potential benefits, we need to overcome the inherent ill-posedness due to a limited amount of measured data. Powerful imaging models and reconstruction methods are needed that reflect fundamental relationships and constraints and utilize all prior knowledge, new insights and unconventional tools.
Fortunately, recent theories of interior tomography and compressive sensing effectively challenge the conventional wisdom in tomographic imaging and signal processing [29, 38, 39]A. While the interior problem (to reconstruct an internal region of interest (ROI) only with x-rays through the ROI) has no unique solution in general [40, 41]A, interior tomography offers an exact and stable solution, given practical prior knowledge. While the Nyquist/Shannon sampling rate has been used since the digital era, compressive sensing unlocks hidden signal sparsity and requires much less data for recovery of an underlying signal.
Also, the phase approximation (PA) model outperforms the diffusion approximation (DA) over a broad spectrum of fluorescent and bioluminescent probes with a computational complexity similar to DA. This gives us the opportunity to develop multi-spectral/probe optical molecular tomography with satisfactory performance.
Furthermore, recent technologies of carbon nanotube (CNT) based x-ray source arrays and photoacoustic tomographic (PAT) systems improve data acquisition capabilities [42-45]A. The CNT-based scheme makes it possible for the first time that many x-ray sources can be distributed and controlled arbitrarily and cost-effectively, suggesting novel x-ray imaging architectures [46]A. PAT combines high optical contrast and deep ultrasound penetration, collecting information inaccessible by purely optical means. Over the years, our team and collaborators have made significant theoretical and technical progress in spiral cone-beam CT, exact cone-beam reconstruction, interior tomography, compressive sensing, multi-scale CT, dynamic spatial reconstruction, carbon nanotube based distributed x-ray source arrays, bioluminescence tomography and fluorescence tomography. It is all of these theories, technologies, insights and expertise that give us confidence to address these key challenges.
Developments in X-ray Computed Tomography. Spiral/helical cone-beam scanning and approximate algorithms have been developed to solve the long object problem since 1991[1, 47, 48]A, enabling practical implementations of area detector tomography. As remarked by Defrise et al., “To solve the long-object problem, a first level of improvement with respect to the 2D FBP algorithms was obtained by backprojecting the data in 3D along the actual measurement rays. The prototype of this approach is the algorithm of Wang et al.” [49]A. “Many advances in CB reconstruction have been made recently thanks to the quest for an attractive reconstruction method in helical CB tomography.”[50]A. This group has made numerous contributions to allow exact CB reconstruction with general trajectories (
Interior tomography was recently developed by this group [26-28, 31, 52, 53]A, (which are hereby incorporated by reference in their entireties), and others [54, 55]A for exact and stable ROI reconstruction only from projection data along lines through the ROI. Our initial theory assumed exact knowledge of a subregion in a ROI, such as air in the lungs or blood in the heart as shown in
In 2008, based on compressive sensing (CS) theory [38]A, we proved that an ROI can be exactly reconstructed via the total variation (TV) minimization if the ROI is piecewise constant [29, 56, 57]A, without exact subregion knowledge. Then, we have extended our theory to allow a piecewise polynomial ROI [58]A. These breakthroughs have important applications including cardiac CT, lung CT, temporal bone imaging, and intraoperative guidance to handle large objects, minimize radiation dose, suppress scattering artifacts, enhance temporal resolution, reduce system cost, and increase scanner throughput.
Example—Interior Tomography. A CT experiment was performed with a living sheep to demonstrate the feasibility of interior tomography [46]A. First, an entire cross-section was reconstructed using the popular filtered backprojection (FBP) method from complete projections. Then, the trachea (in which we had the CT number of air) was selected. Around the trachea, we specified a circular ROI and kept only the projection data through it. As expected, interior tomography produced an excellent ROI reconstruction. For comparison, we adapted the FBP algorithm and the simultaneous algebraic reconstruction technique (SART) [59]A for ROI reconstruction with the optimal shift to match the air value.
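For illustration, the SART update mentioned above can be sketched in Python; the toy system matrix and phantom below are hypothetical and serve only to show the normalized backproject-and-update step, not the actual sheep dataset:

```python
import numpy as np

def sart(A, y, n_iter=2000, lam=0.5):
    """Simultaneous algebraic reconstruction technique (SART).

    A : (M, N) system matrix of ray/pixel intersection lengths
    y : (M,) measured line integrals
    Returns the reconstructed attenuation vector of length N.
    """
    row_sums = A.sum(axis=1)  # sum_j a_ij over each ray i
    col_sums = A.sum(axis=0)  # sum_i a_ij over each pixel j
    mu = np.zeros(A.shape[1])
    for _ in range(n_iter):
        resid = (y - A @ mu) / row_sums       # ray-normalized residuals
        mu += lam * (A.T @ resid) / col_sums  # normalized backprojection
        np.clip(mu, 0.0, None, out=mu)        # attenuation is non-negative
    return mu

# Toy consistent system: 6 rays through a 4-pixel "phantom"
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(6, 4))
mu_true = np.array([0.2, 0.5, 0.0, 0.3])
mu_rec = sart(A, A @ mu_true)
```

For a consistent, fully sampled system such as this toy example, the iteration converges to the true attenuation values; in the interior (truncated) setting, the same update is used but requires the prior knowledge discussed in the text.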
An algorithm comparison for interior reconstruction in terms of the mean error
Since 2008, interior tomography has been applied on the carbon nanotube (CNT) based x-ray imaging systems [60]A. Additionally, cone-beam micro-CT systems using a single source-detector pair have been developed. Recently, a stationary digital breast tomosynthesis (DBT) scanner has been developed using a CNT-based 25-pixel source array [61, 62]A. Using the micro-CT scanner of Charybdis [45, 63]A, an in vivo mouse CT scan was performed to obtain 400 projections of 1024×1024 elements of 50 μm aperture using an inventive physiological gating technique over an angular range of 200°. This dataset was used to evaluate the CS-based interior reconstruction technique. As shown in
Next, let us consider an extreme case of multi-source interior tomography based on the aforementioned dataset. Suppose that the x-ray tubes take up no room on the gantry and that local projections correspond to seamlessly connected detector array segments; then a centralized ROI of radius 15.4 mm (the gantry is assumed to be of radius 570 mm) can be illuminated in parallel from 58 evenly distributed source positions. These projections were equi-angularly selected from the 1160 views by discarding 19 of every 20 projections. With the same pixel size as in the above reconstruction, the ROI was 54 pixels in radius. Then, interior tomography of the ROI was performed on 58 groups of parallel lines, each group including 22 uniformly distributed parallel lines. Results are shown in
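The equi-angular view selection just described (keeping one of every 20 of the 1160 views to obtain 58 source positions) can be sketched as follows; the starting index is arbitrary:

```python
# 1160 full-scan views thinned to 58 equi-angular views for the
# 58-source parallel interior tomography experiment described above.
n_views, keep_every = 1160, 20
selected = list(range(0, n_views, keep_every))  # keep 1, discard 19
```
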
Grating-based X-ray Tomography. The invention provides a 1D grating-based x-ray imaging platform as shown in
Most tissue components give rise to significantly distinct refractive and scattering characteristics, which can be extracted using the 1D grating-based x-ray imaging system. For example, the refraction and Rayleigh scattering behaviors of tumors differ significantly from those of healthy tissue [74]A. Recently, Franz Pfeiffer et al. demonstrated that 1D grating-based x-ray imaging is informative in revealing subtle structural variations in an object, as shown in
The images of
Limited-angle Interior Tomography. Dynamic contrast enhancement can be studied with CT in a region-of-interest (ROI) for quantification of microvasculature function. Radiation exposure cannot be decreased with a smaller number of scans because the contrast dilution curve must be determined over time to describe the toe, peak and decay of the dilution curve. Consequently, the options are to reduce the number of projections per scan, delimit the x-ray beam to the ROI for each projection view, and develop algorithms for optimal image quality. A system is provided that is sufficiently compact and accessible to allow a broad spectrum of applications and interventions. In particular, an interior tomosynthesis system using the carbon nano-tube (CNT) based x-ray source array to quantify ROI perfusion is provided.
Interior Tomosynthesis System Prototyping. While tomosynthesis has been studied for decades, interior tomosynthesis is a new concept motivated by our recent success with interior tomography and compressive sensing. Regularized by an individualized prior CT scan, interior tomosynthesis not only reduces radiation dose, data flow and system cost simultaneously but also improves temporal resolution effectively, suggesting numerous biomedical applications. For interior tomosynthesis, a standard medical CT protocol is first executed as routinely done for a patient, which results in a volumetric image to serve as prior knowledge for interior tomosynthesis. At the same time, an optical surface scan is performed of the patient in the same body posture. Subsequently, a tomosynthesis scan and a similar optical surface scan are conducted of the same patient. The patient position can be adjusted according to the surface models so that the ROI center coincides with the iso-center of the interior tomosynthesis system. Also, the surface models enable the registration between CT and tomosynthetic scans.
To demonstrate the merits of interior tomosynthesis, an interior tomosynthesis system using a cutting-edge CNT x-ray source array (XinRay, Research Triangle Park, N.C.) [79]A and an area detector is provided. As shown in
Exemplary specifications of an interior tomosynthesis system in accordance with embodiments of the invention are provided in Table 6.
Due to the detector size limitation, the targeted ROI at the iso-center is currently 8×4 cm2. With a larger detector, the targeted ROI can be increased. Given the source array, detector, and imaging geometry, four projections can be acquired simultaneously by turning on and collimating four x-ray beams to the ROI. Such an acquisition scheme will not only efficiently utilize the detector area but also proportionally reduce the image acquisition time.
According to an exemplary method of the invention, four of the 52 x-ray pixels can be turned on each time in 13 detector integration cycles. The ith detector integration cycle will capture projections from the corresponding x-ray pixels (L1Pi, L2Pi, L3Pi and L4Pi, where LjPi stands for the ith x-ray pixel at the jth source line, as shown in Fig. C.1.1.1(c)). The detector can be operably configured to achieve the specified spatial resolution and noise level. The pre-source collimation is needed to restrict x-rays to the ROI with high mechanical precision. The collimators can be machined out of metal blocks. Finally, the source array, collimators, and detectors can be integrated into the inventive system. In addition to the aforementioned data acquisition parts, key system components can include a high-precision mechanical framework, a geometrical calibration setup, and a host computer to control x-ray sources and detectors and manage data and images.
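The 13-cycle multiplexing scheme can be sketched as a simple firing schedule; the LjPi labels follow the text (4 source lines × 13 pixels per line = 52 pixels), and the list encoding itself is hypothetical:

```python
# Cycle i fires the four pixels L1Pi, L2Pi, L3Pi, L4Pi in parallel,
# one pixel per source line, so all 52 pixels fire in 13 cycles.
n_lines, n_cycles = 4, 13
schedule = [[f"L{j}P{i}" for j in range(1, n_lines + 1)]
            for i in range(1, n_cycles + 1)]
```
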
HOV-based Interior Algorithm Development. Based on prior results, such as that relating to prior CT regularized breast tomosynthesis [80, 81]A and lung perfusion CT [82]A, CS-based interior tomosynthesis can use an individualized prior CT scan and define an appropriate objective function. Since a smooth surface can be well approximated by a polynomial function, embodiments of the invention are capable of providing an interior tomosynthesis approach based on a piecewise polynomial image model [29, 56, 58]A. Accordingly, a high order TV (HOT) objective can be used for interior tomosynthesis, along with other constraints from the prior scan and practical knowledge.
Assume that an interior tomosynthesis scan measures a set of projection data expressed as a 1D vector Y with elements Yimeas, i=1, . . . , NY, where NY is the product of the number of detector elements and the number of projection views. As formulated in [83]A, each measurement is a realization of a random variable
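The displayed formula referenced here as Eq. (C.1.1.1) appears to have been lost in extraction. A plausible reconstruction, consistent with the symbols defined in the following sentence and with the polyenergetic measurement model of [83]A, is (the incident spectrum Ij(E) is an assumed symbol):

```latex
Y_i^{\mathrm{meas}} \sim \mathrm{Poisson}\!\left\{ \sum_{j} b_{ij}
\int I_j(E)\, \exp\!\left(-\int_{L_j} \mu(x,E)\,\mathrm{d}\ell\right)\mathrm{d}E
\; + \; s_m(i) \right\}, \quad i = 1, \ldots, N_Y
\tag{C.1.1.1}
```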
where μ(x,E) is the energy-dependent linear attenuation coefficient, sm(i) the number of scattered photons, and the ith measurement reflects not only the signal from the path Li but also other paths Lj, j≠i, with weights bij to take off-focal radiation into account. Although it would be optimal to reconstruct μ(x,E) at every energy level of interest, this is generally impossible without strong assumptions on the material composition. In CT applications, the goal is typically to reconstruct μ(x,Er) at a reference energy Er.
Based on reasonable assumptions, Eq. (C.1.1.1) can be reduced to the discrete model (http://www.eecs.umich.edu/˜fessler/):
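The displayed discrete model, Eq. (C.1.1.2), also appears to have been lost in extraction. A plausible reconstruction, consistent with the symbols defined in the following sentence (the blank-scan factor bi is an assumed symbol absorbing the incident intensity), is:

```latex
\bar{Y}_i = b_i \exp\!\left(-\sum_{j=1}^{N_p} a_{ij}\,\mu_j\right) + r_i,
\quad i = 1, \ldots, N_Y
\tag{C.1.1.2}
```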
where an object consists of Np pixels with attenuation coefficients μ=[μ1, μ2, . . . , μNp]T, aij denotes the length of a ray path in a pixel, and ri is a known constant which compensates for measurement noise and may be ignored, especially in initial analyses. Preferably, the analysis is performed using Eq. (C.1.1.2), but dual-energy cases can be implemented as well. Guided by the CS theory, interior tomosynthesis can be converted to the following optimization problem:
where HOT(μ) is an objective function including the HOT and other constraints such as the prior CT scan and tomosynthetic data. The POCS technique [84]A is effective, flexible and promising for our CS-based interior tomosynthesis: it rapidly improves the objective function and, under certain conditions, achieves global convergence [85]A. POCS can be adapted to solve Eq. (C.1.1.3). Let us assume that an original image f belongs to the intersection C0 of J closed convex sets C1, . . . , Cj, . . . , CJ, i.e., f∈C0=C1∩ . . . ∩CJ. Let the projection operators onto these J convex sets be denoted as P1, . . . , Pj, . . . , PJ, and the composite operator as T=P1P2 . . . PJ. The POCS iteration can be defined as μk=Tkμ0, where μ0 is an initial image and k indicates the number of iterations. Since the function μ represents the attenuation coefficients, it must be non-negative, bounded and compactly supported. Also, high-performance computing techniques can be used, if desired, to reduce the computational cost.
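As an illustration of the POCS iteration described above, the following Python sketch alternates projections onto measurement hyperplanes {μ : ai·μ = yi} and onto a non-negativity/bound box; these two convex sets are a simplified stand-in for the full constraint set (HOT, prior scan), and the toy system is hypothetical:

```python
import numpy as np

def project_box(mu, lo=0.0, hi=10.0):
    """Projection onto the convex set of non-negative, bounded images."""
    return np.clip(mu, lo, hi)

def project_hyperplane(mu, a, y_i):
    """Projection onto the convex set {mu : a . mu = y_i}."""
    return mu + (y_i - a @ mu) / (a @ a) * a

def pocs(A, y, n_sweeps=500):
    """Apply the composite operator T = P_box P_1 ... P_M repeatedly."""
    mu = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a, y_i in zip(A, y):          # one hyperplane per measurement
            mu = project_hyperplane(mu, a, y_i)
        mu = project_box(mu)              # enforce physical bounds
    return mu

# Toy consistent system whose unique solution lies inside the box
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(8, 5))
mu_true = np.array([0.1, 0.4, 0.0, 0.7, 0.2])
mu_rec = pocs(A, A @ mu_true)
```

When the intersection of the convex sets is nonempty, the iterates converge to a point in that intersection, which is the behavior the text relies on.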
Validation and Optimization. An interior tomography simulator has been implemented in MATLAB and C++. Several physical aspects were simplified in the simulator; specifically, a point source and a monochromatic x-ray spectrum were assumed. It is possible to enhance the simulator with all major physical features. Using numerically synthesized phantoms with known geometric and radiographic features, image quality in representative imaging geometries and dose settings can be analyzed. Also, truncated CT scan datasets can be generated from whole scan sets and made available for interior tomosynthesis. A few complete scans can be used to train the algorithm and protocols. Then, 30 test datasets containing only truncated projections can be prepared for a blinded study. The three types of truncated scan datasets are (1) a centered chest region containing the heart in all views, (2) a cardiac region just containing the whole heart in all views, and (3) a partial cardiac region that includes just the LV wall or a coronary artery. Myocardial perfusion curves can then be calculated from interior tomosynthesis images to provide the “ultimate” test of utility for measuring functional data with the minimum exposure. These results can then be systematically compared to the full reconstruction data.
The interior tomosynthesis algorithms can be validated by a physical phantom, e.g., standard CT performance phantoms and/or a dynamic heart phantom (Shelley Medical Imaging Technologies, Ontario, Canada; www.simutec.com) as shown in
User-friendly software, precise dual servomotors and an actuator are connected to the heart phantom allowing for independent control of compression, stretching and torsion motion of tissues. The software allows users to execute sophisticated cardiac motion events, including RR intervals and arrhythmia. This phantom can be scanned at representative cardiac phases using a state-of-the-art CT scanner to establish the gold standard. Then, the phantom can be scanned using the inventive interior tomosynthesis system for comparison. Radiation dose associated with interior tomosynthesis scans can be measured with a thermo-luminescent dosimeter (TLD). All the results will be systematically compared and statistically analyzed. Target criteria of interior tomosynthesis systems in accordance with embodiments of the invention include: ˜50 ms temporal resolution, <0.5 mm spatial resolution, and ˜2% contrast resolution at <2.5 mSv dose.
Optical Molecular Tomography. Optical molecular probes, both fluorescent and bioluminescent, are essential tools for in vivo biological studies [15, 19, 86, 87]A. Fluorescence imaging uses active illumination and has high spatial resolution [88, 89]A. Bioluminescence imaging requires no exciting light and offers superior signal-to-noise ratio [90, 91]A. More importantly, optical probes have unique labeling capabilities [15, 19, 86, 87]A. Regenerative medicine utilizes principles of cell biology and biomaterials to develop and transplant substitute tissues and organs [92, 93]A. Since engineered tissues and organs resemble their natural counterparts, optical molecular tomography (OMT) methods are required for development of regenerative medicine.
An integrated optical molecular tomography platform for mouse studies in regenerative medicine is provided by embodiments of the invention. As shown in
Innovative features of the platform include (1) novel optical imaging system designs, (2) a novel phase-approximation model for photon propagation in tissues, (3) a novel PAT-based DOT method, and (4) compressive sensing (CS) based FMT and BLT algorithms. These techniques can provide solutions to current challenges in OMT, yielding an excellent OMT performance in localizing and quantifying fluorescent and bioluminescent probes in living mice of multiple tissue engineering models.
Exemplary OMT Platform.
[Surface Scanning System] An optical surface scanner (
[DOT System] To determine optical parameter distributions within a living mouse in a spectrally resolving manner, a time-resolved DOT system can be used [95, 96, 108]A (
[BLT System] An advanced BLT system can be used to perform a multi-view multi-spectral data acquisition (
[FMT System] In the FMT system (
[PAT System] As an enhancing feature of the inventive platform, PAT capabilities can be developed for possible improvement in DOT and FMT. First, a PAT-based DOT approach can be developed to refine optical parameter estimation. Such an approach can combine photoacoustic and optical constraints, converting the traditionally ill-posed DOT problem into a well-posed setting [101-103]A. Second, the recently proposed multispectral optoacoustic tomography (MSOT) approach [18]A can be improved using the PA model in the CS and interior tomography framework.
MSOT is based on detection of ultrasonic waves induced by characteristic absorption of pulsed light specific to various fluorophores. In this PAT/MSOT system, the body posture of a mouse is fixed in a cylindrical container. The transducer array (5 MHz central frequency) and illumination fibers are placed along a full ring of radius 25 mm around the container. A sequence of high-energy laser pulses of 8 ns width and 30 Hz repetition rate can be delivered towards the mouse body by the fibers. The induced ultrasound signals can be intercepted by the transducer array and collected by a high-speed digitizer. The imaging ring can be horizontally scanned to cover the mouse longitudinally.
[Platform Integration] Pertinent photogrammetric and geometric calibration procedures can be performed for each system component and their intended combinations. A user-friendly interface can be developed along with standardization and optimization of software and phantoms. The mouse can be maintained in the same holder while being moved between different imaging modes. Images from various sources will be registered.
PA-based Reconstruction. [PA Model] An appropriate light propagation model is typically desirable for tomography with visible and near-infrared light [69, 111]A. Although the radiative transfer equation (RTE) well describes photon propagation, its high computational complexity makes it impractical for various applications. As mentioned above, the diffusion approximation (DA) to the RTE is the most widely used photon propagation model, but it performs poorly in highly absorbing tissues, near sources, and across boundaries [69, 111, 112]A. As an alternative, a phase approximation (PA) model [71, 72]A has recently been developed, which is highly accurate over the spectral range of fluorescent and bioluminescent probes. Accordingly, the PA model can be used for tomographic imaging for superior performance.
[Optical Characterization] It is well known that steady-state DOT is highly ill-posed [113]A. Here, the PA-based time-resolved DOT techniques can be used to estimate in vivo optical parameters of a mouse. The characteristics of detected light signals can reveal absorption and scattering coefficients more accurately, which may be further refined using CS techniques. As an alternative technique for optical characterization, photoacoustic tomography (PAT) was recently applied for visualizing light-absorbing structures in a highly scattering medium [114-116]A. Using PAT, the initially generated acoustic pressure field, which is the product of the absorption coefficient and the fluence rate at every location, can be analytically reconstructed in 3D from externally recorded acoustic signals. Based on the PA model and this 3D pressure field constraint, the PAT-based DOT approach can reconstruct the involved optical parameters of mouse tissues in a unified fashion.
[Fluorescent/Bioluminescent Source Reconstruction] Given the structural and optical properties of a mouse, the PA model can be discretized into a linear system via finite element analysis to link an unknown source distribution inside the mouse and measured photon fluence rate data on the mouse body surface in the case of either FMT or BLT. While these inverse source problems are underdetermined, experimental data show that both fluorescent and bioluminescent probes distribute quite locally, smoothly, or in a somewhat patterned manner in the mouse body, with substantial measurement noise.
Inspired by the CS theory, CS-based FMT and BLT algorithms can be used to utilize the inherent sparsity in fluorescent and bioluminescent source representations for improved FMT and BLT reconstructions. In this CS-based FMT and BLT scheme, the source reconstruction is subject to an ℓ1-norm optimization, which can be equivalent to the total variation (TV) minimization [117, 118]A or the HOT minimization as discussed above. The TV minimization is a convex optimization problem and can be solved in polynomial time, most likely allowing better outcomes than the least-squares reconstruction. The HOT minimization might be more time-consuming, motivating more efficient algorithms and/or high-performance computing techniques.
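A minimal sketch of the ℓ1-norm optimization can be given with the iterative shrinkage-thresholding algorithm (ISTA), a standard ℓ1 solver used here as a stand-in for the reconstruction algorithms discussed above; the toy sensing matrix and sparse source are hypothetical and do not model photon propagation:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=3000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L  # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Underdetermined system: 40 measurements, 100 unknowns, 3-sparse source,
# mimicking the sparse-source setting described in the text.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -0.8, 0.6]
x_rec = ista(A, A @ x_true)
```

Despite far fewer measurements than unknowns, the sparse support is recovered, which is the property the CS-based FMT/BLT scheme exploits.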
Validation and Optimization. The current Monte Carlo simulation software can be enhanced to simulate photon propagation in biological tissues in several ways. First, the photons can be made to carry multi-spectral and time-resolved information. Second, the Monte Carlo simulator can be extended to deal with complex biological tissue structures defined in finite element geometry. Third, the computational efficiency can be improved using a multi-core workstation and GPU programming. Extensive numerical simulation can be performed to optimize the system designs and test reconstruction algorithms. Numerical phantoms can be designed to mimic mice in terms of major organ regions and soft tissues. Bioluminescent and fluorescent sources can be specified to reflect typical locations, intensities, shapes, and spectral characteristics. A perturbation analysis can be conducted with respect to geometrical model mismatches, optical parameter errors, and measurement noise to validate the OMT techniques. Statistical analysis will assess the accuracy and stability in terms of source position, power and shape. To validate the inventive OMT techniques physically, a number of heterogeneous physical mouse phantoms can be fabricated.
Such platforms can be made from various materials such as candle gel, high-density polystyrene and white nylon. White-sea foam powder and a mix of red and blue dyes can be used to vary the absorption and scattering properties in the gel. In the case of BLT, artificial diffusive light sources can be embedded in the phantoms to simulate the bioluminescent light emission in a mouse phantom. Multiple datasets can be acquired to reflect phantom geometry, source locations, intensities and spectral characteristics. These datasets can be combined or modified to test the BLT algorithms. The results can be analyzed in each simulation scenario to characterize the performance of the reconstruction techniques with a high statistical significance. The cases of FMT, DOT and PAT can be similarly handled.
Target Parameters. The OMT platform can be configured to meet one or more of the following criteria: (1) in the FMT case, ˜1 mm source localization accuracy and <15% source power estimation error; (2) in the MSOT case, improving the key parameters of FMT by ˜50%; (3) in the BLT case, ˜1.5 mm source localization accuracy, <20% source power estimation error, and >4-fold data acquisition speedup compared to the current sequential acquisition process; (4) in all the above cases, outperforming the diffusion approximation (DA) by >30% in terms of source localization and power estimation with computational time comparable to that required by the DA.
X-ray nano-CT is a relatively new technique that offers unique capabilities stemming from its large penetration length in biological materials. To date, up to 25 nm spatial resolution has been demonstrated with sub-keV energy “soft” x-rays, and 50 nm resolution has been achieved with multi-keV “hard” x-rays. Multi-keV x-rays are used for nano-CT with a 1/e attenuation length of >0.25 mm in soft tissues and >100 μm in hard tissues such as bone. As a consequence, thick tissue or bone sections can be used with little sample preparation. Furthermore, x-ray absorption edges and characteristic emissions can be used to map the elemental composition of a sample.
While X-ray imaging is of paramount importance for clinical and pre-clinical applications, it is fundamentally restricted by the attenuation contrast mechanism. Since 2006, based on the Talbot effect our Consultant Pfeiffer et al. performed groundbreaking work on x-ray phase-contrast and dark-field tomography using the grating interferometer technology with a hospital-grade x-ray tube [34, 36, 77]A.
However, the required gratings are difficult to make and analyze, especially when the size and aspect ratio of the gratings are large. Also, grating-based imaging takes a long time, particularly when the resolution requirement is high. Supplementing attenuation-oriented images from the current multi-scale CT facility with images of refractive index and Rayleigh scattering coefficient can provide an interior x-ray grating-based tomographic imaging approach in the CS framework to enhance soft-tissue contrast resolution for inflammation and tissue imaging.
Grating-based System Prototyping.
Grating-based Image Reconstruction. It has been reported that underlying distributions of refractive index and Rayleigh scattering coefficient can be reconstructed using appropriate reconstruction methods [121-125]A. Grating-based image reconstruction algorithms can be used in the interior tomography and CS framework [126, 127]A. Non-paraxial formulation for spherical and Gaussian incident x-ray beams [128]A can also be considered. The modeling and analyses in these more realistic cases are complex, but feasible based on the cited references.
Multi-scale/parameter Image Registration. Interpretation, quantitative analysis, and visualization of multi-scale and multi-parameter images require that the same physical location be represented in all images, i.e., that images have spatial correspondence. Since these images may be taken using different imaging devices, at various resolutions, and under diverse preparations, identifying such spatial correspondence requires image registration. The registration task is inter-modality and requires identification of both affine and small nonlinear deformations.
To achieve this goal, a set of sample holders with fiducials visible in all imaging modalities can be used, made from polyacrylic materials with embedded beads of contrast-inducing materials. This enables identification of a common coordinate system relative to the sample as it moves between the imaging modalities. ROI prescription for the interior tomography methods can be placed relative to a scout scan in which these fiducials are visible. This will allow automated detection of the fiducials and initial alignment of the sample to its other images.
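The fiducial-driven initial alignment can be illustrated by a least-squares affine fit between corresponding bead coordinates in two scans; the bead positions and the transform below are hypothetical:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ≈ src @ A.T + t from point pairs.

    src, dst : (N, 3) corresponding fiducial coordinates (N >= 4).
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (4, 3) solution
    return M[:3].T, M[3]

# Hypothetical bead fiducials as seen in a CT scan and a second modality
rng = np.random.default_rng(3)
src = rng.uniform(0.0, 40.0, size=(6, 3))             # mm, inside the holder
A_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])                 # 90-degree rotation
t_true = np.array([5.0, -2.0, 1.5])                   # mm translation
A_est, t_est = fit_affine(src, src @ A_true.T + t_true)
```

This recovers the affine component exactly from noise-free fiducials; the small nonlinear deformations mentioned in the text would be refined afterward by the intensity-based objective functions discussed next.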
To refine the registration of sample images, representative objective functions based on information theoretic measures can be evaluated and new ones defined if needed. Such measures can be combined in the group framework [129, 130]A to model image artifacts and reduce their influence on the registration quality.
Validation and Optimization. To validate the performance of the inventive grating-based imaging system, four physical phantoms can be constructed. The first phantom is for spatial resolution and made of polystyrene [131]A. The line pairs can be made of epoxy resin as a bone-equivalent material, with frequencies around 5 line pairs per mm, reflecting the intended spatial resolution of 100 μm. The second phantom is for contrast resolution and made of polystyrene [131]A. A series of holes of varying depths and diameters can be engraved into the phantom surface; along each row, a range of contrast values can be implemented. The third phantom is for linearity and uniformity and made of 50% glandular and 50% adipose tissue-equivalent material [132]A. One half of this phantom can be for uniformity, while the other half can contain a step wedge for linearity; each step is 2.5 mm wide, with thickness decreasing from 40 mm to 5 mm in 5 mm steps. The fourth phantom is for assessment of 3D reconstruction and can be configured to contain spheres of 1.5 mm diameter made of PTFE and PMMA (polymethylmethacrylate) [133]A. All the phantoms can be calibrated for geometrical accuracy using a micro-CT scanner and for diffraction properties. Each phantom can be restricted within a 40 mm cube to fit in the field of view of the system. The system can be validated in the projection domain using the first three phantoms, and also in the image domain using the fourth phantom, with edge responses for spatial resolution and differences between foreground and background, as well as associated noise levels, for contrast resolution. The image quality in relation to the hardware and software components can be quantified.
Target Criteria. Exemplary target criteria can include, e.g.: (1) the x-ray phase-contrast/scattering imaging system can be prototyped to yield high-quality phase-contrast and dark-field data with no significant difference from previously reported results; (2) the CS-based interior reconstruction algorithms can be developed to reconstruct refractive index and Rayleigh scattering images of comparable image quality from <50% of the projection data; and (3) allowed sample sizes can be increased by a factor of 2 using our interior tomography approach relative to the published imaging protocols.
Example I. Ultra-Fast Tomography. For a general CT scan of the heart and the coronary arteries, a field of view (FOV) of ˜15 cm in diameter is preferred (apart from some very special cases where, e.g., only the middle part of the right coronary artery would be of interest). Such an FOV can be achieved using multi-source interior tomography by utilizing a limited number of sources to increase the ROI coverage and scanning the source-detector chains in a circular, saddle-curve or another trajectory to avoid under-sampling, cross-scattering, and other issues. In the case of 2N+1 sources, the radius r of the field of view (FOV) can be maximized by arranging the source-detector pairs equi-angularly to reach:
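The displayed Eq. (3) appears to have been lost in extraction. A plausible reconstruction, which reproduces the quoted ˜16.2 cm FOV diameter for 11 sources (N=5) with R=57 cm, is:

```latex
r = R \sin\!\left(\frac{\pi}{2(2N+1)}\right)
\tag{3}
```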
where R is the radius of the scanning circle.
According to Eq. (3), the radius of the FOV decreases as the number of sources increases. Specifically, we can cover an FOV of 16.2 cm or 13.4 cm in diameter with 11-source or 13-source interior tomography architectures respectively, assuming the imaging geometry R=57 cm of a Siemens CT scanner (SOMATOM Volume Zoom, Siemens Medical Systems) used in our laboratory. In these two cases, the temporal resolution will be improved by >10 and >12 folds respectively (taking into account the effect of the fan-angle, which is small in the interior tomography mode), while the image quality remains the same as that shown in
Currently, x-ray computed tomography (CT) requires source scanning so that projections can be collected from various orientations for black/white image reconstruction. Limited by the scanning time, the temporal resolution of CT is often inadequate when rapid dynamics are involved in an object to be reconstructed. Meanwhile, the color (energy-discriminating) detector has been invented in recent years. Embodiments of the invention provide a system comprising electronically-controlled-source multi-detector-segment cone-beam color CT architectures and methods for better temporal resolution and contrast resolution. Specifically, a region-of-interest is irradiated in parallel with narrow x-ray beams defined by many source-color-detector pairs for data acquisition. This ROI can then be reconstructed using the interior tomography described above.
Current color-detector-based systems are very expensive. Thus, hybrid detector systems are provided in which a color detector segment is embedded in a regular detector assembly. As shown in
Concerning the exemplary system provided in
A limited amount of projection data is an inherent problem in achieving ultrafast tomography performance. Nevertheless, unlike conventional tomosynthesis techniques, the inventive multi-source interior scan allows symmetric data acquisition, avoiding biases from primary viewing directions of a tomosynthetic scan [31]B. Most importantly, next-generation reconstruction algorithms may produce much better image quality from a limited amount of projection data than is currently possible, as shown in
Compressive sensing/sampling theory (http://www.dsp.ece.rice.edu/cs) suggests, for example, that reconstruction quality may be maintained after a dramatic reduction in the number of projections [32-34]B. Interestingly, we recently proved that interior tomography can be done exactly without knowledge of a sub-region in an ROI if the ROI is piece-wise constant [35]B. This finding may be extended under wider conditions.
Exact knowledge of a sub-region can often be assumed because in many cases we already know the x-ray linear attenuation coefficients of air gaps or voids, water, blood or other liquid, or other calibrated structures such as implants and substrates; more generally, a pre-scan may often acquire such prior knowledge. It is recognized that the inflowing contrast agent changes the CT-numbers not only in the vessels but also in the myocardium and cardiac chambers. In this case, we may have to use spinal and lung features to help interior cardiac CT [36]B. Alternatively, a compressed sensing approach can be used to perform interior tomography based on some weak image model [35]B, as discussed above.
Ultrafast tomography is highly desirable in a number of disciplines. First, ultrafast tomography may improve cardiac CT. Cardiac CT has emerged as a promising tool for noninvasive coronary angiography [37]B. Electron-beam CT was the first dedicated cardiac CT modality with temporal resolution as low as 50 ms, but it is obsolete because of its major restrictions in spatial and contrast resolution. State-of-the-art medical CT scanners can achieve temporal resolution of 100 ms. However, given the rate and magnitude of the cardiac motion, temporal resolution should ideally be <10 ms for humans and <2 ms for small animals such as rats and mice [16]B. While achieving these resolutions has been extremely challenging, our ultrafast interior tomography approach can help improve temporal resolution to assess the calcium burden and vascular stenoses, to identify positive remodeling and plaque consistency, etc., especially in the cases of high/irregular heart rates.
Second, ultrafast tomography may enable clinical micro-CT such as for inner ear imaging [15]B. Two major obstacles to achieving such fine spatial resolution in vivo are physiological-motion-induced blurring (the finer the image resolution, the more significant the physiological motion, which is on the order of a millimeter, becomes) and radiation dose concerns (the finer the resolution, the greater the radiation dose delivered to the patient). When the x-ray beam targets a small ROI and projections are acquired in parallel, the above two issues are effectively addressed at the same time, which may help derive ultrafine features of interest (image-based biomarkers) for many applications of diagnosis and therapy.
Third, ultrafast interior tomography is the ideal modality for so-called process tomography, which for example might be used for multiphase imaging of the contrast dynamics in the body [38]B, or mass flow of oil and water that is pumped from oil reservoirs [39]B.
A promising way to improve the image quality of ultrafast tomography is to average multiple instant snapshots of the same ROI after image registration (note that
Scatter in this multi-source system is not a significant issue. Briefly speaking, the larger the number of sources (which is typically odd) we want to use, the narrower each x-ray beam (π/N) we should make, and the smaller the field of view we will have. Clearly, the scatter in the data is roughly proportional to the product of the number of simultaneously active sources and the beam width (which is proportional to the size of the field of view). Thus, the scatter-to-primary ratio can be kept at a practical level. Also, detector collimation can be used to reject scattered photons.
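The beam-width argument above can be made concrete with a one-line back-of-envelope model (proportionality only; the constants below are illustrative, not measured data):

```python
import math

# Hedged scaling model from the text: with N simultaneously active sources,
# each fan beam is narrowed to pi/N radians, and the scattered signal scales
# roughly with (number of sources) x (beam width).
def relative_scatter(n_sources: int) -> float:
    beam_width = math.pi / n_sources          # radians per source
    return n_sources * beam_width             # dimensionless proportionality

# The product stays constant as sources are added, so increasing N need not
# raise the scatter-to-primary ratio.
print(relative_scatter(3), relative_scatter(9))
```

This is why narrowing each beam in step with the source count keeps scatter under control.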
Interior tomography enables ultrafast imaging performance utilizing (1) prior knowledge of a small spot in the ROI, or a weak image model such as the ROI being piece-wise constant, and (2) multiple narrow-beam-oriented source-detector pairs. Interior tomography is advantageous for reducing radiation dose (no x-rays go outside an ROI), suppressing scattering artifacts (no cross-talk from radiation outside the ROI), refining image quality (with the new reconstruction approach and prior knowledge), and handling large objects (measurement can be localized in any direction, assuming a centralized ROI or dynamic x-ray collimation).
Mechanical properties of myocardium are closely related to the histological structures that vary with age, sex, and, in particular, physiological and pathological conditions. For example, myocardium in a region of ischemia becomes stiffer due to loss of blood supply with the development of fibrosis. [32,45,54]C Characterization of the changes in myocardial mechanical properties may help diagnose heart diseases and guide therapies. An accurate constitutive model of myocardium is essential for cardiovascular biomechanics research [3,6,15,17,19,35,52,53,55,61]C and applications including digital heart models for cardiac surgical training, design of patient-specific cardiac equipment, artificial valves, and so on.
There have been in vitro experiments to characterize the passive elastic properties of myocardium. [5,18,22,38,39,46,48,59]C While providing a wealth of knowledge about the mechanical properties, these experiments use dissected myocardium specimens, and thus are not able to address the in vivo dynamic properties of myocardium, which are clinically more important.
Cardiac elastography is an emerging imaging modality to estimate the mechanical properties of myocardium based on in vivo measurements of the blood pressure and transmural displacement or strain. [5,14,16,33,42,43]C Han et al.16C and Gotteiner et al.14C modeled the myocardium as a homogeneous and isotropic nonlinear material, and described the heart motion as quasi-static. Creswell et al.5C and Moulton et al.33C further took into account the material anisotropy in their two-dimensional models. Limited by the cardiac imaging techniques and computational power, these studies were not able to address the dynamic nature of the heart motion and the material heterogeneity due to impairment of myocardium. As a result, the effects of viscosity and inertia of myocardium received no attention in these works. Over the past years, significant advances have been made in cardiac imaging techniques, including the MRI dynamic imager, [11,24]C MRI displacement encoding with stimulated echoes (DENSE) method [57]C and ultrasonic phased-tracking method [20]C and so on. Techniques have also been developed to quantify the three-dimensional anatomic structures [8,9,11,20,24]C and myofiber architecture. [23,47,60]C Incorporating new MRI measurements, Augenstein et al.1C recently conducted a study to identify the nonlinear elastic parameters of a beating physical heart phantom.
While the spatial resolution of MRI has been significantly improved and is comparable with that of CT, MRI typically suffers from low temporal resolution and MR-incompatibility issues. Correspondingly, the above mentioned cardiac elastography works may not be suitable for patients with implants or when high image resolution with respect to time is required. Cardiac CT, which does not disrupt medical devices, has been developed as a high-resolution and high-speed imaging modality. However, CT has not become the imaging modality associated with cardiac elastography because it does not provide displacement in the tissue unless natural or implanted markers are used. With the development of image segmentation and non-rigid registration techniques, it is now possible to obtain the displacement of some material points on the epi- and endocardial surfaces. Noting that the above mentioned cardiac elastography works assumed known displacement throughout the heart wall, it remains to be determined whether surface displacement is a sufficient measurement.
Another challenge remains for cardiac elastography regarding the algorithm for parametric identification. Because of the inevitable errors in the measurements of displacement, blood pressure, anatomic structures, and myofiber architecture, practical algorithms are based on minimization methods. [1,5,14,16,33,42,43]C It is well accepted for minimization methods that the numerical efficiency and robustness can be significantly enhanced with user-supplied gradients of the objective function.44C The previous cardiac elastography algorithms, [1,5,14,16,33,42,43]C however, used the time-consuming direct method, or the inaccurate finite-difference method, which is even more time-consuming, for calculating the gradients of objective functions.
Thus, the inventors have provided a tomography-based nonlinear dynamic cardiac elastography (DCE) framework. In the context of the description of this technique, ‘tomography-based’ refers to: A) the volumetric image of the heart has been segmented such that the geometry of the impaired myocardium region is known, and the material parameters in the normal and impaired myocardial regions are approximated as uniform, respectively; B) only epi- and endocardial displacement is employed as measurement, unlike the previous works in which the whole displacement field in the heart wall is used. It is noted that conditions A) and B) are within the capability of cardiac CT, and reflect the fact that impaired myocardium behaves significantly differently from normal tissue. This minimization-based framework was formulated with a nonlinear viscoelastic finite-element method. A nonlinear dynamic adjoint method was derived for efficient and accurate computation of the user-supplied gradients of the objective function. Phantom simulations were conducted to identify the nonlinear viscoelastic material parameters of normal and impaired myocardium based on epi- and endocardial displacements. The influences of experimental errors were investigated, as well as the necessity of utilizing dynamic information for cardiac elastography.
The rationale is that the impaired myocardium undergoes changes in the passive properties, typically becoming stiffer, and/or decrease in active contractility, and thus changes the deformation characteristics of the heart (
One aspect of this invention is that the parameters that describe the passive properties and active contractility can be extracted from the recorded blood pressure and the dynamic deformation of the heart, which are measured in vivo and noninvasively using well-developed cardiac imaging techniques such as ultra fast cardiac CT and MRI. For example, optimization algorithms as outlined in
Elements 1˜4 describe quantitative assessment of heart motion. The method can be implemented for any 4D data set (x, y, z, t) provided by various medical imaging scanners. We will focus on CT in this description. This involves cardiac image acquisition, segmentation of cardiac chambers throughout a cardiac cycle, reconstruction of geometric models, meshing of myocardial walls, and tracing of the material displacement of the surface nodal (vertex) points at all the time points. Element 5 is the computational kernel of this invention, where the parameters that describe the passive and active properties of myocardium are optimally extracted from the obtained displacement. More details are as follows:
1. Image acquisition. 3D reconstructions over time (4D) will be obtained with a CT scanner. A total of N+1 (N=10 (every 10% phase) to 20 (every 5% phase)) time points throughout one complete cardiac cycle will be reconstructed. Each set of volume data includes approximately 200-300 images with better than 1 mm 3D isotropic resolution, covering the heart region of interest.
2. Image segmentation. Segmentation of cardiac chambers is an invariable prerequisite for measuring heart motion, and can be performed manually or using commercial software, e.g., AnalyzeAVW. Segmentation can be further improved using contrast enhancement in CT and MRI. In practice, many clinical studies still rely on manual delineation. Dynamic geometric models are then reconstructed from the segmented data at each time point (1). The result will be recorded as N+1 sets of guide points of the endo/epi-cardial surfaces.
3. Volumetric mesh of myocardial walls. The first set of guide points of the initial state (typically selected as the end-systolic state) will be used to reconstruct an anatomically accurate finite element (FEM) model of the heart region. We will first generate a ‘standard mesh’, and then adjust the coordinates of the nodes by optimally chosen virtual deformation such that all guide points sit right on the surfaces of the FEM mesh. We prefer quadrilateral elements for their higher order of continuity, and can also generate tetrahedron elements if needed.
4. Tracing of surface nodal displacements. The surface nodes of the initial FEM mesh displace to fit the successive N sets of guide points of the endo/epicardial surfaces throughout the cardiac cycle, and yield the surface nodal displacements UIm (I=1, 2, . . . , N). Based on expert knowledge, this surface non-rigid registration can be performed manually, semi-automatically, or automatically, and will be updated according to the results of element 5 below. The method can be improved using markers in CT, and will be validated by MR tagging, which is able to create and track material points (points attached to a fixed location of the myocardium) during the cardiac cycle. While this procedure is feasible, many implementation possibilities exist as reported in the literature, and may be further improved if needed.
5. Tomography-based DCE reconstruction. It has previously been demonstrated that the material parameters of heart tissues, denoted as the set p, can be extracted from the measurements of endo/epicardial displacement UIm, chamber pressures, and the ECG record, by minimizing the overall difference between UIm and the computed displacement uI(p) (which depends on p).
Once p is obtained, the complete dynamic strain/stress fields in the heart wall will be calculated, and the endo/epicardial surfaces at all the time points are computed. Then, the overall distance from the N sets of guide points to the computed endo/epicardial surfaces is calculated. If the distance is too large, UIm will be updated according to the currently computed endo/epicardial surfaces, until the criterion is met. This iterative procedure ensures that tracing of surface nodal displacements, or image registration, possesses full physical meaning.
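The identify-then-re-register loop of element 5 can be sketched as runnable pseudocode; a hypothetical scalar forward model u(p) = p·base stands in for the nonlinear FE heart model, so only the control flow, not the physics, is represented:

```python
import numpy as np

# Control-flow sketch of element 5 (toy stand-in, not the cardiac FE code):
# identify p from registered surface displacements, recompute the surfaces,
# and re-register until the guide-point distance criterion is met.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0.0, np.pi, 30))      # stand-in surface mode shape
p_true = 2.5
guide_points = p_true * base                     # segmented "guide points"
u_meas = guide_points + 0.1 * rng.standard_normal(base.size)  # noisy registration

tol, p = 0.05, 0.0
for _ in range(20):
    p = float(base @ u_meas / (base @ base))     # least-squares identification
    surface = p * base                           # computed endo/epicardial surface
    if np.linalg.norm(guide_points - surface) < tol:
        break                                    # distance criterion met
    u_meas = 0.5 * (u_meas + surface)            # re-register toward physics

print(p)
```

In the invention the identification step is the full minimization of element 5 and the surfaces come from the FE solution; the loop structure is the same.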
The inventive tomography-based DCE has features that distinguish it from the previous works of cardiac elastography. First, this invention employs a newly developed efficient and robust algorithm for identification of the passive and active material parameters of myocardium. With this algorithm, incorporating ventricular pressure and image sequences of heart motion (obtained by CT, MRI, etc.), our DCE has made the following improvements:
1. The dynamic deformation of the heart is fully taken into account. The previous works adopted only a quasi-static approach, and sometimes an overly simplified static approximation.
2. The geometric nonlinearity is fully taken into account, i.e., a finite-strain description is employed. Most previous works employed an infinitesimal-strain description, which leads to unrealistic results.
3. The passive properties of myocardium are described with an anisotropic nonlinear finite-strain viscoelastic constitutive (stress-strain) relation. Most previous works reduced the degree of material anisotropy, and none has addressed viscosity, which is essential for biological tissues.
4. Parameters that describe the active contraction of the myocardium, healthy and in disease, will be extracted. Active contraction is an important index of myocardium pathology. None of the previous works is capable of considering the active parameters. With these advances, this invention promises a complete set of parameters that accurately describe the passive and active behaviors of myocardium, healthy and in disease.
Second, this invention is tomography-based, i.e., it processes the displacement information on the endo/epicardial surfaces and outputs the material parameters. While the displacement inside the heart wall (volumetric displacement) should certainly benefit the inventive method, it is not essential. Unlike the previous works that rely on MRI tagging for volumetric displacement, this feature enables the use of high-resolution high-speed CT images, and thus overcomes the undesired limitations and potential risks of MRI.
Furthermore, for the first time, this invention gives image registration, namely the matching of physical (material) points on the endo/epicardial surfaces, rigorous mechanical significance. More specifically, the image registration reflects the actual mechanical deformation and material properties of the organ (heart), instead of the ad hoc assumptions used in rigid and non-rigid image registration methods.
Significant and Potential Applications. This invention processes measurements, ventricular pressure and dynamic cardiac image sequences, which are obtained in vivo and noninvasively with today's biomedical imaging techniques; that is, no additional hardware cost is needed. The main products of DCE are accurate material parameters describing the passive nonlinear viscoelastic properties and active contraction of myocardium in healthy and impaired heart regions. It is noted that no other methods can produce such a complete set of parameters. The potential applications include but are not limited to:
The material parameters serve as quantitative indices of the abnormality of myocardium, in addition to the traditional cardiac indices. This improves diagnosis and monitoring of heart diseases.
Incorporating the material parameters, four-dimensional dynamic maps of the strain and stress in myocardium will be obtained. It is noted that without accurate material parameters obtained by this invention, the previous works can only give approximate and sometimes erroneous stress in the heart. These maps provide deep insight into the local physiological state of myocardium during the heartbeat. They reflect major physiological factors, such as the ventricular pumping performance, the energy consumption, the coronary blood flow, the vulnerability and expansion tendency of regions to infarction, remodeling of myocardium, and risk of arrhythmias.
Accurate material parameters are the basis of patient-specific optimal design of surgical equipment and implants, such as pacemakers, bypasses, and artificial valves.
Accurate knowledge of the behavior of myocardium is the basis of reliable patient-specific virtual surgical training and pre-surgery trials.
Accurate knowledge of the behavior of myocardium is the basis of reliable patient-specific design of surgical procedures such as ventricle restoration and pacemaker implantation. For example, with cardiac mechanics simulation, a well-designed surgery can not only restore the shape of the ventricle, but also restore the local physiological state (strain/stress) of myocardium.
Finite-element Equation of Motion. Consider a biological object undergoing dynamic viscoelastic deformation. The displacement at location X and time t is denoted as u(X,t), with corresponding velocity u̇ and acceleration ü. The Green strain tensor is E=(∇u+(∇u)T+(∇u)T·∇u)/2, with strain rate Ė. The 2nd-PK stress tensor, Σ(X,t), is assumed to be nonlinearly dependent on E and Ė, as Equation (11):

Σ = ∂W/∂E + Svis(E, Ė) + T(Ca2+, λF) M⊗M, (11)
in which W is the passive nonlinear strain energy function, Svis describes the viscous stress that depends on E and Ė, and T gives the active contraction stress in the myofiber direction M due to local calcium kinetics Ca2+ and fiber stretch λF=(M·(2E+I)·M)1/2.
In the last term of (11), we collect all the material parameters and denote them as p=(CP,Cvis,TMax). On a part of the boundary Γu0, u and u̇ are prescribed. On the rest of the boundary ΓT0, a surface force (traction) is applied. For a heart, the surface force is approximated as zero on the epicardial surface, and is −P(X,t)N(X) on the endocardial surface, with P the blood pressure and N the unit outer normal vector.
Following the standard finite-element (FE) method,2C the displacement field u(X,t) is discretized as the nodal displacement vector {u(t)}={u1(t), u2(t)}T, where u2(t) is prescribed on Γu0, and u1(t) is to be solved from the motion equation:

M11ü1 + M12ü2 + s1(u1, u̇1; p) = f1, (21)
in which the internal nodal force {s}={s1,s2}T corresponds to the stress Σ; i.e., it depends on u1, {dot over (u)}1, and material parameters p. The external nodal force f1 corresponds to the surface force on ΓT0, and f2 is the unknown constraint force on Γu0. The mass matrices MIJ (I, J=1, 2) are constant.
A classic Newmark-β time integration scheme12C is employed to solve Eq. (21) for u1; the same scheme is later applied to the adjoint displacement w1. Denote all quantities at time t with superscript “(t)” and those at time τ=t−h with “(τ)”. Let u1(τ), u̇1(τ) and ü1(τ) be previously obtained; u1(t) and u̇1(t) are approximated as:
u1(t) = u1(τ) + h u̇1(τ) + (½−β)h²ü1(τ) + βh²ü1(t),  u̇1(t) = u̇1(τ) + (1−γ)h ü1(τ) + γh ü1(t), (A1)
where γ=½ and β=1/12,12C and the acceleration ü1(t) is the unknown.
By substituting Eq. (A1) into Eq. (21), a nonlinear equation is obtained for ü1(t), denoted as ψ(ü1(t))=0. An iterative quasi-Newton method44C was employed to solve this nonlinear equation for ü1(t), as:

Keff δa1 = −ψ((ü1(t))n),  (ü1(t))n+1 = (ü1(t))n + δa1,  Keff = M11 + γhC11 + βh²K11, (A2)
where superscript “n” denotes the nth iteration step, and (u1(t))n and (u̇1(t))n are calculated from (ü1(t))n by Eq. (A1).
In the last relation, the stiffness matrix K11=∂s1/∂u1 corresponds to the tangent elastic modulus ∂σ/∂ε, and the viscosity matrix C11=∂s1/∂u̇1 corresponds to ∂σ/∂ε̇. A Cholesky factorization of Keff is computed for solving the linear equation for δa1. The process is iterated until the residual ∥ψ((ü1(t))n+1)∥ is small enough.
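The Newmark-β update can be sketched for a linear single-degree-of-freedom system m·u″ + c·u′ + k·u = f(t), where the effective equation for the new acceleration is linear and needs no iteration. Here γ=1/2 with the standard average-acceleration choice β=1/4 is used for illustration, whereas the text selects β=1/12:

```python
import numpy as np

# Newmark-beta time stepping for a linear 1-DOF system, solving for the
# unknown acceleration at each step as in Eqs. (A1)/(A2).
def newmark(m, c, k, f, u0, v0, h, n_steps, gamma=0.5, beta=0.25):
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m            # consistent initial acceleration
    us = [u]
    for i in range(1, n_steps + 1):
        t = i * h
        # predictors from Eq. (A1) with the new acceleration still unknown
        u_pred = u + h * v + (0.5 - beta) * h * h * a
        v_pred = v + (1.0 - gamma) * h * a
        # effective linear equation for the new acceleration (cf. K_eff)
        a_new = (f(t) - c * v_pred - k * u_pred) / (m + gamma * h * c + beta * h * h * k)
        u = u_pred + beta * h * h * a_new
        v = v_pred + gamma * h * a_new
        a = a_new
        us.append(u)
    return np.array(us)

# Verification: free vibration of an undamped oscillator, exact u = cos(w t).
m, k = 1.0, (2.0 * np.pi) ** 2
h, n = 1e-3, 1000
u_num = newmark(m, 0.0, k, lambda t: 0.0, 1.0, 0.0, h, n)
u_exact = np.cos(2.0 * np.pi * np.arange(n + 1) * h)
print(float(np.max(np.abs(u_num - u_exact))))
```

For the nonlinear FE system of Eq. (21), the denominator becomes the matrix Keff and a quasi-Newton iteration replaces the single linear solve.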
DCE Objective Function and Gradients. In experiment, the displacement is measured at times t=tI (I=1, 2, . . . , M) on a portion of the nodes that correspond to u1, and is denoted as UIm(tI). Given trial material parameters p, u1(tI;p) is solved from Eq. (21). The overall difference between u1 and UIm is then quantified with an objective function:

Φ(p) = ΣI [u1(tI;p) − UIm(tI)]T XI [u1(tI;p) − UIm(tI)], (31)
in which the weight matrix XI=diag(aI1, aI2, . . . , aIj, . . . ), with aIj=1 when the j-th component of UIm(tI) is measured, and aIj=0 otherwise. DCE thus searches for the material parameters p such that Φ(p) is minimized.
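Evaluating Φ(p) with the diagonal weight matrices XI amounts to a masked sum of squared differences; a minimal sketch with hypothetical toy arrays (two measurement times, two degrees of freedom) is:

```python
import numpy as np

# Sketch of the objective function Phi(p): weighted squared mismatch between
# computed displacements u1(t_I; p) and measurements U_I^m, counting only
# the measured components (a_Ij = 1 where measured, 0 otherwise).
def objective(u_computed, u_measured, measured_mask):
    # all arguments: (M, n_dof) arrays; the mask holds the diagonals of X_I
    diff = (u_computed - u_measured) * measured_mask
    return float(np.sum(diff * diff))

u_c = np.array([[1.0, 2.0], [3.0, 4.0]])       # computed u1(t_I; p)
u_m = np.array([[1.5, 0.0], [2.0, 0.0]])       # measurements (2nd DOF unmeasured)
mask = np.array([[1.0, 0.0], [1.0, 0.0]])      # diagonals of X_I
print(objective(u_c, u_m, mask))               # 0.5^2 + 1.0^2 = 1.25
```

The mask makes the unmeasured transmural degrees of freedom drop out of Φ, which is exactly what restricts DCE to epi- and endocardial data.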
Efficient and robust minimization algorithms request user-supplied gradients ∂Φ/∂p. The previous algorithms for cardiac elastography [1,5,14,16,33,42,43]C used the finite-difference method or the direct method for the gradients. The direct method incurs a high computational expense that increases proportionally with the number of material parameters, and becomes unaffordable for large-scale cardiac elastography involving dynamic deformation. The finite-difference method is inaccurate in nature, and is even more time-consuming. Recently, an adjoint method was introduced for computing the gradients analytically. [27,28,40,41,51]C A continuum derivation of the adjoint method for nonlinear dynamic elastography was detailed by Liu et al.28C A more general FE version is derived in APPENDIX B. It is shown that the gradients can be calculated conveniently with:

∂Φ/∂pn = ∫0T (w1)T (∂s1/∂pn) dt, (41)
where w1(t) is an adjoint displacement solved from the following linear differential equation:

M11ẅ1 − C11ẇ1 + (K11 − D11)w1 = −2 ΣI δ(t−tI) XI HI, (51)

in which K11=∂s1/∂u1 corresponds to ∂σ/∂ε, C11=∂s1/∂u̇1 corresponds to ∂σ/∂ε̇, and D11=dC11/dt corresponds to ∂²σ/∂ε̇∂t; HI=u1(tI;p)−UIm(tI) is the difference between the computed and measured displacements at t=tI.
End conditions are prescribed for w1, as w1=0 and {dot over (w)}1=0 at t=T.
A backward Newmark-β scheme can be derived for solving Eq. (51) as follows.
DYNAMIC ADJOINT METHOD. Equation (41) for the gradients ∂Φ/∂p is derived here. FE motion equation Eq. (21) has an equivalent weak form, i.e., equality
∫0T(w1)T(s1+M11ü1+M12ü2−f1)dt=0 (A3)
holds for arbitrary displacement w1(t). By introducing Eq. (A3) into the objective function Φ(p) (Eq. 31), a Lagrangian is obtained as:
L(u1,w1,p)=∫0T HT×H dt+∫0T(w1)T(s1+M11ü1+M12ü2−f1)dt, (A4)
where H(t) is the weighted difference between the computed and measured displacements at the measurement times tI, so that the first term equals Φ(p) in Eq. (31).
In the functional L, the relation between u1 and p is decoupled. It can be shown that L=Φ and δL=δΦ for arbitrary p, u1 and w1. The variation δL can be expressed as Equation (A5):
for which δM11=0, δu2=0, and equality Eq. (A3) have been used, noting that both w1 and δw1 are arbitrary. By integrating Eq. (A5) by parts, we have Equation (A6):
Now, let the arbitrary displacement w1 satisfy differential equation:
then, the second integral in Eq. (A6) becomes zero. Noting that u1=0 and u̇1=0 at t=0, the boundary terms in Eq. (A6) can be cancelled out by specifying the end conditions w1=0 and ẇ1=0 at t=T. Now, only the first integral remains in Eq. (A6), i.e.,
In the present simulations, f1 does not depend on the deformation, i.e., ∂f1/∂u1=0 and ∂f1/∂u̇1=0, so that Eq. (A7) reduces to Eq. (51), since the matrices ∂s1/∂u1 and ∂s1/∂u̇1 are symmetric.
In comparison, the direct method calculates the gradients with Equation (A7a):

∂Φ/∂pn = ΣI 2(HI)T XI v(n)(tI), (A7a)

where v(n)=∂u1/∂pn is the solution of

M11v̈(n) + C11v̇(n) + K11v(n) = −∂s1/∂pn

with initial conditions v(n)=v̇(n)=0 at t=0. The finite-difference method is straightforward, as:

∂Φ/∂pn ≈ [Φ(p+Δpn)−Φ(p)]/Δpn. (A7b)
The numerical scheme for solving Eq. (51) for the adjoint displacement w1 includes two phases:
(i) Backward Newmark-β scheme. This is for the case that no displacement measurement is taken in the time interval between τ and t (=τ+h). Let w1(t), ẇ1(t), and ẅ1(t) be known at time t; w1(τ) and ẇ1(τ) are approximated as:

w1(τ) = w1(t) − hẇ1(t) + (½−β)h²ẅ1(t) + βh²ẅ1(τ),  ẇ1(τ) = ẇ1(t) − (1−γ)hẅ1(t) − γhẅ1(τ), (A8)

in which ẅ1(τ) is the solution of the linear equation (A9):

(Keff − βh²D11)ẅ1(τ) = −(K11 − D11)(w1(t) − hẇ1(t) + (½−β)h²ẅ1(t)) − C11((1−γ)hẅ1(t) − ẇ1(t)), (A9)
where Keff, D11, K11 and C11 are evaluated with u1(t) and u̇1(t) as in Eq. (A2). For materials with ∂²σ/∂ε̇∂t≡0, as in our simulations, D11=0, and the solution of Eq. (A9) reuses the already computed Keff and its Cholesky factorization.
(ii) Response to the impulsive force −2XIHI at t=tI when measurement UIm is taken. As in Eq. (51), the difference between the computed and measured displacements at time t=tI, HI, acts as an impulsive force on the adjoint system. Let w1, ẇ1 and ẅ1 be known at time tI+ (right after tI); the quantities at tI− (right before tI) are calculated with
for which the continuity of C11 at time tI has been used.
The most significant advantage of the inventive dynamic adjoint method is its minimal computational expense. The previous finite-difference method (Eq. A7b) requires solving the nonlinear differential equation Eq. (21) once for every ∂Φ/∂pn (n=1, 2, . . . , N); the computational cost thus increases proportionally with the number of parameters. The direct method involves solving differential equation Eq. (A7a) the same number of times; however, it is more efficient since the equation is linear. In comparison, the inventive dynamic adjoint method needs to solve the linear differential equation Eq. (51) only once, regardless of the number of unknown parameters. Therefore, it is particularly favorable for large-scale elastography. [27,28,41]C However, there is a tradeoff: the backward adjoint differential equation Eq. (51) requires more complex programming and additional computer memory to store the matrices K11, D11, and C11 at all the time integration steps when solving Eq. (21) for u1, as described above, for example, in the Newmark-β scheme and the backward Newmark-β scheme.
Minimization-based DCE Reconstruction Procedure. FORTRAN source codes have been developed for solving the FE motion equation and for DCE material parametric identification. The FE codes were comprehensively tested against the commercial software ANSYS. The minimization-based DCE procedure is as follows: First, given trial material parameters p, the displacement u1(t;p) is solved from the forward problem Eq. (21). The objective function Φ(p) is calculated to quantify the difference between u1 and UIm (Eq. 31). The adjoint displacement w1(t) is then solved (Eq. 51), and the gradients ∂Φ/∂p are computed analytically (Eq. 41). The objective function and gradients are passed into an L-BFGS minimization subroutine,25C where the trial material parameters p are updated. The procedure iterates until the convergence criterion ∥∂Φ/∂p∥≦δ is met, where δ is a tolerance.
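The outer minimization loop can be sketched as follows; plain gradient descent with the analytic gradient stands in for the L-BFGS subroutine, and the scalar model and data are synthetic stand-ins for the FE problem:

```python
import numpy as np

# Stand-in for the minimization loop: iterate until ||dPhi/dp|| <= delta.
# (The invention uses an L-BFGS routine; plain gradient descent is shown
# here only to keep the sketch short and self-contained.)
x = np.array([1.0, 2.0, 3.0, 4.0])   # stand-in forward operator
y = 2.0 * x                           # synthetic "measurement", p_true = 2
p, delta, step = 0.0, 1e-8, 0.02

while True:
    grad = float(np.sum(2.0 * (p * x - y) * x))   # user-supplied gradient
    if abs(grad) <= delta:
        break                                      # convergence criterion met
    p -= step * grad                               # parameter update

print(round(p, 6))    # prints 2.0
```

In the full framework each gradient evaluation costs one forward FE solve (Eq. 21) plus one backward adjoint solve (Eq. 51), independent of the number of parameters.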
LV Phantom. A thick-wall hollow ellipsoid was used for modeling a left ventricle (LV).21C Shown with
Myofiber Architecture. The myofiber architecture [23,47,60]C was described on all the integration points by the relationship between the local geometric coordinates (eh, eη, eξ) and myofiber architectural axes (nf,ns,nn) (
where λ=((x/a)2+(y/b)2+(z/c)2)1/2, λ′=(λ−λepi)/(λendo−λepi) is the relative wall depth, and z′=(zbase−z)/(zbase−zapex) is the relative height. The angles α and β vary linearly from the basal site to the apex. In the transmural direction, α changes linearly from endocardium to epicardium, and β changes quadratically.
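The transmural interpolation of the two angles can be sketched as follows; the endpoint angle values are illustrative assumptions in the range typical of myofiber data, not the values used in the invention:

```python
# Hedged sketch of the transmural angle interpolation: alpha varies linearly
# and beta quadratically with the relative wall depth lam' (lam' = 1 at the
# endocardium, 0 at the epicardium). Endpoint angles are hypothetical.
ALPHA_ENDO, ALPHA_EPI = 60.0, -60.0      # degrees (illustrative assumption)
BETA_ENDO, BETA_EPI = 0.0, 20.0          # degrees (illustrative assumption)

def fiber_angle(lam: float) -> float:
    # linear transmural variation of the myofiber angle alpha
    return ALPHA_EPI + (ALPHA_ENDO - ALPHA_EPI) * lam

def sheet_angle(lam: float) -> float:
    # quadratic transmural variation of the sheet angle beta
    return BETA_EPI + (BETA_ENDO - BETA_EPI) * lam * lam

print(fiber_angle(1.0), fiber_angle(0.0), fiber_angle(0.5))
```

A base-to-apex linear variation would add a second interpolation in z′ of the same form.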
Constitutive Model. The passive constitutive model of myocardium was assumed to consist of an elastic stress derived from an anisotropic exponential strain energy WE [15,52]C and an isotropic Voigt-type viscous stress, as Equation (71)
where εff=nf·ε·nf, εfs=nf·ε·ns, and so on. For the normal tissue, the material parameters (γ,Eff,Ess,Enn,Ens,Efn,Efs,c) were pnorm=(0.44, 37, 7.16, 7.16, 5.6, 5.6, 5.6, 0.5), where γ is in kPa, the E parameters are dimensionless, and c is in kPa·second. The values of γ and the E parameters are consistent with the previous studies. [15,52]C The viscous coefficient c was selected such that the computed velocity field in the LV is low at the end-diastolic state. The ischemic myocardium was modeled as three times stiffer32C but with the same material anisotropy as the normal tissue, i.e., pisch=(1.32, 37, 7.16, 7.16, 5.6, 5.6, 5.6, 0.5). The mass density of myocardium was set at ρ0=1 g/cm3.
Dynamic Deformation of LV. The basal site was assumed fixed, simulating the high rigidity of the fibrous rings to which the mitral and aortic valves anchor. The LV filling pressure P(t) was modeled as in
The deformed LV at t=0.4 is plotted in
Ideal DCE Reconstruction. The epi- and endocardial displacement at t=0.1, 0.2, 0.3, 0.4
The reconstruction converged after about 150 iteration steps, and yielded exact pnorm and highly accurate pisch, as the row “Ideal” in Table 11.
The convergent loci of γ, Eff, Ess, Efn and c are plotted in
DCE reconstruction was also conducted using displacement at t=0.05, 0.1, 0.15, . . . , 0.45 second (
Error of Myofiber Architecture. It is a challenge to measure the in vivo myofiber architecture. Recently, methods such as diffusion-tensor MRI [47,61]C have been developed to estimate the myofiber orientation in beating hearts. Alternatively, the myofiber architecture can be approximated from available database, [22,29,37,49,50]C based on the understanding that variation of myofiber architecture in individual hearts is relatively small after some anatomy-based mapping.
To investigate the effects of myofiber architectural error on DCE reconstruction, randomly picked values between −5° and +5° were added to the “exact” myofiber angle α and sheet angle β (Eq. 61) at the FE integration points. The results are shown as “Error-F” in Table 11. The convergent loci are very similar to the “Ideal” case, except for larger fluctuations at the first few steps. It took about 150 steps for convergence. The overall errors for pnorm and pisch were slightly higher as compared to the “Ideal” results. Similar reconstruction results were obtained using ten other sets of myofiber architecture data with random error between −5° and +5°. Among these eleven sets of reconstruction results, the relative error was found to be highest for the ischemic viscosity coefficient, at 1.47%. When averaged over the sets, the parameters pnorm and pisch reached almost the exact values. It is concluded that DCE is not sensitive to random error of the myofiber architecture. This is explained by the concept of homogenization,26C i.e., the random error at a material point is largely balanced by the random error in its neighborhood. In a solid material, the perturbation of displacement by material variation at a single point is highly localized. The deformation depends more on the overall or effective material properties. Of course, this observation and explanation are based on the current FE mesh. With a coarse FE mesh, where one integration point represents a large material volume, the effect of homogenization will become less significant, and the influence of random error of the myofiber architecture may become profound.
Error of Displacement Measurement. The effects of displacement error were investigated next. Randomly generated −5% to +5% relative error was added to the epi- and endocardial displacement measurement UIm. The results are listed as “Error-D” in Table 11, and the convergent loci are plotted in
Reconstruction was also conducted with random ±5° myofiber architectural error and ±5% displacement error (“Error-FD” in Table 11). The convergent loci are almost identical to those with displacement error only, and so are the resulting material parameters. This suggests that the effects of displacement error are much more significant than those of myofiber architectural error.
Segmentation Error. The above DCE simulations assumed that the geometry of the impaired region was known, in order to compensate for the lack of transmural displacement measurement. Clinically, the impaired region can be delineated using an imaging technique such as PET. Image segmentation is often subjective when the boundary is fuzzy. To test DCE reconstruction with inaccurate geometric data, we used two distorted ischemic regions to represent extreme segmentation errors. The distortions were generated by randomly changing the mesh attributes. For Type-I distortion (
With Type-I segmentation error, pnorm and pisch reached convergent values after about 100 steps (
We then considered all the errors in DCE reconstructions, i.e., ±5° myofiber architectural error, ±5% displacement error, and Type I (“All-I” in Table 11) or Type II (“All-II” in Table 11) segmentation error, respectively. The convergent loci are very similar to “Geo-I” and “Geo-II”, respectively. It was found that the errors of displacement and myofiber architecture only led to slightly larger reconstruction errors than the segmentation error alone, with the parameters pisch being more sensitive to the errors than pnorm. This suggests that accurate depiction of the geometry of the ischemic region is most critical for DCE reconstruction, followed by displacement measurement.
Myocardial infarction typically spreads over a large region. We used a small ischemic region on purpose to test the feasibility of DCE. It is noted that the smaller the ischemic region, the more difficult it is to identify the material parameters, consistent with the “size effect” discussed above.
On the Quasi-static Approximation. The previous cardiac elastography studies [5,14,16,33]C made the simplification that the LV filling deformation was quasi-static, and omitted inertia and viscosity. The displacement, however, was measured on the beating heart, typically near the highest filling blood pressure, i.e., at around t=0.4 or 0.45 second in the present pressure profile (
As shown by “Static A” and “Static B” in Table 11, γ and Es were not correctly identified for normal and ischemic myocardium, nor was the material anisotropy (the ratio between the Es). For instance, the parameters (γ, Eff, Ess, Enn) for normal tissue were estimated as (9.553, 4.515, 2.417, 6.274) with the displacement at t=0.4 second as the measurement, while the exact values were (0.44, 37.0, 37.0, 7.16). It is also noted that the results of “Static A” and “Static B” were very different from each other. Quasi-static reconstruction was also conducted using the exact material parameters as the initial guess, and yielded results similar to “Static A” and “Static B”. This is not surprising, since the dynamic and static epicardial displacements predicted using identical material parameters and pressure are indeed very different.
In a real heart, the left ventricle dilates to the maximum chamber volume near the end-filling stage. The corresponding velocity in the myocardium is overall very low, and the effect of viscosity is considered minimal. From this point of view, quasi-static elastography is probably a good approach for the elasticity parameters γ and Es, and the above unsuccessful quasi-static reconstructions occurred because the “real” material parameters we used did not predict a nearly static state at the highest filling blood pressure. On the other hand, while the velocity may indeed be very low at the highest pressure, the dynamic displacement may be qualitatively different from the prediction of a quasi-static approach, because of the effect of inertia (considering the high strain rate in the heart). To see this, consider a one-dimensional spring-mass system as a simple example. When the system is loaded with a periodic external force, the maximum displacement (at zero velocity) is higher than in the case of a static force of the same magnitude. Therefore, quasi-static cardiac elastography needs careful evaluation when dynamic in vivo measurements are used.
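The spring-mass argument above can be checked numerically. The following sketch (illustrative parameter values, velocity-Verlet integration; not part of the DCE algorithm itself) compares the peak displacement under a sub-resonant periodic load with the static deflection under a load of the same magnitude:

```python
import math

def max_dynamic_displacement(m=1.0, k=1.0, F0=1.0, omega=0.5,
                             dt=1e-3, t_end=100.0):
    """Peak |x(t)| for m*x'' + k*x = F0*sin(omega*t), starting from rest.
    Integrated with the velocity-Verlet scheme."""
    x, v = 0.0, 0.0
    a = (F0 * math.sin(0.0) - k * x) / m
    x_max = 0.0
    for n in range(int(t_end / dt)):
        t = n * dt
        x += v * dt + 0.5 * a * dt * dt
        a_new = (F0 * math.sin(omega * (t + dt)) - k * x) / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
        x_max = max(x_max, abs(x))
    return x_max

x_static = 1.0 / 1.0                   # static deflection F0/k
x_dynamic = max_dynamic_displacement() # same load magnitude, applied periodically
print(x_dynamic > x_static)            # True: inertia amplifies the peak
```

For these particular values the analytic transient peak is about 1.73, well above the static deflection of 1.0, which mirrors the point made in the text: inertia can make the dynamic displacement qualitatively different even when the instantaneous velocity is low.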
Limitations and Future Studies. The present DCE simulations focused on the passive myocardial properties. The constitutive model was assumed to be nonlinear viscoelastic as in Eq. (71), with a linear displacement-strain relationship. While this constitutive model is not exact for myocardium, the DCE framework itself is flexible enough to accommodate any model, since the algorithm is independent of the material model (see APPENDIX B). On the other hand, an appropriate constitutive model for myocardium is still an open research topic. [3,13,36]C Recently, Schmid et al.46C evaluated five major constitutive models by fitting ex vivo experimental data of myocardium specimens under shear deformations. DCE provides a new platform for testing constitutive models with in vivo measurements of the dynamic displacement in the heart. DCE can also quantify the contractility of myocardium28C with an appropriate model of the active stress. The parameters describing the active stress can be added directly into p (Eq. 11), without affecting the algorithm or reconstruction procedure of DCE.
DCE belongs to the class of biomedical inverse problems, whose applications depend highly on the availability and accuracy of the measurements. As we have observed, measurement errors in displacement, myofiber architecture, and geometry affect the reconstruction results to different degrees. Comprehensive studies are still needed on other types of error, such as non-uniform blood pressure, inaccurate displacement boundary conditions, and systematic rather than random errors of displacement and myofiber architecture.
Uniqueness of the DCE reconstruction result remains an open topic. Mathematical theories have been developed to define the sufficient measurements that ensure a unique result for various types of linear inverse problems. [30,31,34,56]C However, a theory for nonlinear inverse problems such as DCE has not been found in the literature. As a general principle for inverse problems, the sufficient measurement depends on the number of unknown parameters; i.e., the fewer the unknown parameters, the less measurement is needed. In this study, the epi- and endocardial displacement measured at four time steps was sufficient for identification of the sixteen unknowns in pnorm and pisch. When the material parameters are allowed to vary arbitrarily in the entire myocardium, it becomes necessary to measure the displacement field in the LV wall with higher spatial and temporal resolution, as for example in a recent study of Augenstein et al.1C
A tomography-based nonlinear dynamic cardiac elastography (DCE) framework was developed to identify the nonlinear viscoelastic parameters of myocardium, using dynamic endo-/epicardial displacement extracted from cardiac images. The scheme takes into account the myofiber architecture-induced anisotropy and heterogeneity of the material properties. The reconstruction algorithm employs a minimization procedure, with user-supplied gradients of the objective function. A nonlinear dynamic adjoint method was derived for efficient calculation of the gradients. DCE simulations were conducted to estimate the nonlinear viscoelastic parameters of the normal and ischemic myocardium in a left ventricle phantom. With exact measurements, the material parameters of the normal and ischemic tissues were accurately identified using epi- and endocardial displacement at four instants during the LV filling cycle. The effects of measurement errors were investigated. The influences of segmentation error are the most significant, followed by displacement error. Random errors of the myofiber architectural angles showed insignificant influence, probably due to homogenization effects. A detailed comparison was made between DCE and the previous quasi-static cardiac elastography. The parameters estimated with the quasi-static approach were qualitatively incorrect, and showed large discrepancies when displacement at different time steps was used for the reconstruction. This indicates that consideration of dynamic deformation is indispensable for identification of the myocardial mechanical properties in vivo. The results support that epi- and endocardial displacement, together with a geometric description of the impaired myocardium region, are sufficient for identification of the material parameters of normal and impaired heart tissues.
This suggests that cardiac CT, incorporating the inventive tomography-based DCE, is a promising imaging modality for assessing the mechanical properties of myocardium.
Throughout this application, various publications are referenced. The disclosures of each of these publications in their entireties are hereby incorporated by reference into this application in order to more fully describe the features of the invention and/or the state of the art to which this pertains. The references disclosed are also individually and specifically incorporated by reference herein for the material contained in them that is discussed in the portion of this disclosure in which the reference is relied upon.
Myocardial elastodynamic properties are the phenomenological manifestation of underlying cellular activity and connectivity. Clinical evaluation of these elastodynamic properties may offer important insight into risk assessment of heart diseases, and therefore serve as biomarkers for screening, diagnosis and treatment. To date, this line of investigation has been limited by the lack of noninvasive methods.
Stiffness of the small arteries has been identified as an early marker of vascular disease [1]E. The method of arterial waveform analysis has been approved by the FDA for marketing for early detection and prevention of vascular diseases such as hypertension. The mechanical and functional behaviors of diseased myocardial tissue differ from those of normal tissue. Adult human cardiac myocytes are terminally differentiated cells with little or no capacity to proliferate. The inability to generate functional myocytes means that an overloaded heart can only enlarge existing myocytes. Each myocyte contains myofibrils, which are long chains of sarcomeres, the contractile units of the cell. Elastodynamic properties of myocardial tissue are the phenomenological manifestation of the underlying cellular activity and connectivity. Thus, disease-related changes at the cellular level can be assessed by detecting and analyzing myocardial elastodynamic characteristics.
Imaging techniques play a significant role in the prevention, diagnosis, and treatment of cardiac diseases. Several imaging modalities such as CT, nuclear cardiology, echocardiography, and MRI are currently used for evaluation of cardiac anatomy and function, including ejection fraction, myocardial strain and stress, and myocardial perfusion. However, echocardiography suffers from sub-optimal visualization of cardiac segments and operator dependency in the interpretation of artifact-rich images. CT and nuclear cardiology use ionizing radiation. The long scan time and inferior spatial resolution of MRI limit its applications. Furthermore, none of them is capable of mapping elastodynamic features of myocardium, which, due to the complex heart anatomy, large deformation, and active force during heart contraction, are extremely challenging to detect. In the absence of elastodynamic characteristics of myocardium, false treatment may be initiated for suspected patients with one or more symptoms, while treatment may be withheld from some patients with developing heart disease who exhibit no symptoms, likely leading to sudden death or heart failure. What is needed is an imaging modality that extracts cardiac elastodynamic characteristics for critical diagnostic information previously unavailable.
As a screening tool for prevention of cardiac disease, an elastodynamic imaging modality in accordance with embodiments of the invention can rely on MRI, because it has been well accepted as a safe, non-invasive, and high tissue contrast imaging technique, with the potential to become a one-stop cardiac imaging modality. To extract elastodynamic biomarkers using MRI, an interior MR imaging based elastodynamic reconstruction framework can be used. Compared to existing cardiac MR imaging sequences, interior MRI requires much less scan time while significantly improving spatial and contrast resolution. Also, this approach is based on a dynamic nonlinear model for accurate quantification of the nonlinear elastic passive stiffness and dynamic active contractility of myocardium. In contrast, the conventional parametric extraction methods make strong approximations, including small deformation, linear strain, quasi-static deformation, and isotropic properties of myocardium, which may well lead to an inaccurate and even misleading description of myocardium, because the physiological and pathological processes are highly nonlinear, anisotropic, and dynamic, involving active contraction of the myocardium.
Methods, systems, and devices of the invention can potentially have a major impact on cardiac imaging as well as on prevention and treatment of cardiac diseases. The impact is broad in studies of cardiovascular disease as well as in cancer studies and functional MRI, since interior MR imaging can be directly extended to those areas and beyond.
Interior MR Imaging of the Heart. The invention provides for interior MR imaging of the heart with significantly improved temporal, spatial, and contrast resolution. MRI is more appropriate than other imaging modalities such as CT, in which radiation is a major concern. MRI has been gaining recognition as a safe, non-invasive, and superior imaging technique, which has the potential to become a versatile cardiac imaging modality. However, the application of MRI to imaging the heart is significantly limited by its long examination time and inferior spatial resolution compared to those of cardiac CT. This makes MRI unsuitable for unstable patients and for the evaluation of small structures. Extraction of elastodynamic biomarkers of the heart relies on significantly higher temporal and spatial resolution than currently available. Therefore, what is needed is the development of powerful imaging methods that reflect fundamental relationships and constraints, utilizing all prior knowledge, new insights, and unconventional tools. One solution is to provide interior MR imaging of the heart in the compressed sensing framework.
Interior MR imaging of the heart is a linear inverse problem of a highly ill-posed nature. An ideal solution to such a linear inverse problem would be to collect the most essential data in the shortest possible time, yet produce the best possible images. To capitalize on these potential benefits, we need to overcome the inherent ill-posedness due to the limited amount of measured data.
Recent theories of interior tomography and compressed sensing effectively challenge the conventional wisdom in tomographic imaging and signal processing. While the interior problem has no unique solution in general, interior tomography offers an exact and stable solution, given practical prior knowledge [6-7]E. While the Nyquist/Shannon sampling rate has been used since the first day of the digital era, compressed sensing unlocks hidden signal sparsity and requires much less data for recovery of an underlying signal [4-5]E. Recently our group successfully tested the feasibility of compressive-sampling-based interior tomography in micro-CT [10]E. The results show that the number of projections for image reconstruction can be significantly reduced (4-8 times) while keeping comparable image quality (
To accomplish interior MRI, time-domain signals (k-space) are preferably acquired in a radial manner. The UTE-GRE is ideal for development of interior MRI. In addition, appealing features of UTE-GRE include its ease of implementation (e.g., without hardware modification) and its apparent tolerance to experimental imperfections (e.g., gradient delays and eddy currents). In UTE-GRE, the MRI signal from water molecules is produced by a short radiofrequency pulse (~200 μs), and following a very short delay (called TE), a projection of the object is rapidly acquired using a broad acquisition bandwidth. After an incremental change in the orientation of the gradient field, this process is repeated. A 3D image is reconstructed after a sufficient number of radial projections have been acquired to satisfy spatial resolution and signal-to-noise ratio (SNR) requirements. By permitting very short TE (<400 μs), UTE-GRE can achieve a short repetition time (TR), and thus allows rapid 3D imaging.
For example, with standard MRI hardware, TR=1.35 ms is readily achievable, in which case 32,000 projections can be acquired in just 43 seconds. Because the new technique acquires k-space data in a radial manner, view sharing is permitted to capture more cardiac phases than the standard Cartesian acquisition. Furthermore, the short TE of UTE-GRE minimizes motion sensitivity, a critical advantage for cardiac applications. Finally, with its ability to attain a very short TE, UTE-GRE has the potential to image fibrotic myocardial infarction.
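The quoted scan time follows directly from the repetition time and the number of projections; a quick arithmetic check:

```python
TR_ms = 1.35            # repetition time per projection, in milliseconds
n_projections = 32_000  # radial projections acquired
scan_time_s = n_projections * TR_ms / 1000.0
print(round(scan_time_s, 1))  # 43.2 seconds, i.e., "just 43 seconds"
```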
The inventive interior UTE-GRE sequence (
Based on preliminary results in micro-CT, the interior MRI approach can meet the requirements for high-quality cardiac imaging in terms of SNR, spatial and temporal resolution, artifacts, and exterior signal suppression, for an ROI of diameter ≧25 cm (typical imaging FOV for cardiac MR studies). From preliminary studies, the acquisition parameters that can be achieved include: RF pulse length=200 μs, TR=1.35 ms, TE=0.366 ms, acquisition bandwidth=175 kHz, matrix size (after gridding)=256³, which can yield a nominal isotropic resolution of 1 mm. As illustrated in
Interior MRI Testing on Normal Human Subjects. UTE-GRE can be used to accomplish interior MR imaging of the normal human heart. An exemplary acquisition protocol, including ECG triggering, is shown in
Interior MRI is feasible for dynamic 3D imaging of the heart, in a short scan time and with high spatial resolution. Scan time is expected to be reduced by half or more as compared with that using conventional sequences (based on the assumption that the heart size is half or less that of the thorax). Spatial resolution can also be improved with reduced scan times (due to reduced motion blurring). In addition, with UTE-GRE, tissue water that is normally difficult to detect due to its short T2* (e.g., fibrosis in chronic myocardial infarction) can be enhanced. Using the methods, systems, and devices of the invention, interior MRI can offer high SNR, reduced partial volume effect, and minimized motion artifacts. For extraction of elastodynamic biomarkers, such superior imaging performance of myocardium is desirable.
A potential limitation may be the SNR of the high-resolution images. Because the voxel size of interior MRI is smaller than that of full-FOV imaging (i.e., the nominal voxel width=FOV/256), low SNR could become a limiting factor on image quality in some applications, despite the compressed sensing approach that inherently suppresses image noise. If this happens, SNR can be increased using signal averaging (e.g., by acquiring multiple breathholds instead of just one) or by using an MRI scanner with a higher magnetic field (e.g., a 7 Tesla human scanner).
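The averaging remedy works because coherent signal adds linearly across repetitions while uncorrelated noise adds in quadrature, so SNR grows as the square root of the number of averages. A minimal sketch (the helper name is hypothetical):

```python
import math

def snr_after_averaging(snr_single, n_averages):
    """With N averages, signal grows as N and noise std as sqrt(N),
    so SNR improves by a factor of sqrt(N)."""
    return snr_single * math.sqrt(n_averages)

# e.g., four breathholds instead of one doubles the SNR
print(snr_after_averaging(10.0, 4))  # 20.0
```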
Compressed Sensing Based Extraction of Cardiac Elastodynamic Biomarkers. It is well recognized that impaired myocardium changes in passive stiffness and active contractility, and exhibits different deformation characteristics [11-12]E. Thus, material parameters that describe passive stiffness and active contractility are clinically informative. These features can be extracted from time-varying cardiac image sequences of the heart. Conventional elastographic imaging methods make rather strong approximations, including quasi-static deformation and linear isotropic elastic properties of myocardium [13-17]E, which frequently lead to an inaccurate and sometimes misleading description of myocardium, because the real process is highly nonlinear, anisotropic, and dynamic, in particular involving the active contraction of myocardium [18]E. Embodiments of the invention can provide a comprehensive and realistic elastodynamic reconstruction framework to depict dynamic elasticity changes in myocardium and extract elastodynamic biomarkers for diagnosis and treatment of cardiac diseases. The elastodynamic biomarkers include both passive stiffness and active contractility, which are intrinsic to cardiac muscles and their functionalities. This can be accomplished using a sophisticated nonlinear approach based on large strain deformation of myocardium throughout the entire cardiac cycle. The non-uniqueness and instability due to the nonlinear large strain and dynamic active force can be mitigated by the compressed sensing technique, especially minimization of the total variation (TV) of the desired mechanical parametric distribution. The elastodynamic reconstruction framework is expected to enable global and regional dynamic characterization of myocardium for both short-term diagnosis and long-term prognosis.
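The TV term rewards piecewise-smooth parameter maps, which is why it stabilizes the reconstruction. A minimal sketch of a discrete isotropic TV (the helper name is hypothetical; NumPy assumed):

```python
import numpy as np

def total_variation(img):
    """Isotropic discrete total variation: sum of gradient magnitudes."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    dx = np.pad(dx, ((0, 0), (0, 1)))  # pad so the two difference arrays align
    dy = np.pad(dy, ((0, 1), (0, 0)))
    return float(np.sqrt(dx ** 2 + dy ** 2).sum())

# a piecewise-constant map (sparse gradient) has low TV ...
flat = np.zeros((8, 8)); flat[:, 4:] = 1.0
# ... while a noisy map of similar amplitude has high TV
noisy = np.random.default_rng(0).standard_normal((8, 8))
print(total_variation(flat) < total_variation(noisy))  # True
```

Minimizing TV therefore steers the solver toward parameter distributions with sharp but sparse transitions (e.g., a distinct ischemic region within otherwise uniform myocardium) rather than noisy, oscillatory solutions.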
An elastodynamic framework for reconstruction of passive stiffness and active contractility of myocardium from nonlinear large deformation can be constructed. The general flowchart for an exemplary elastodynamic reconstruction framework according to embodiments of the invention is given in
Under this framework, the inventors have developed a simplified cardiac elastodynamic algorithm based on small deformation and successfully identified the passive elastic stiffness Es and active contractility T in a previous simulation study (
For the large deformation, the nonlinear partial differential equations (PDEs) have potential issues. Primarily, we are subject to the so-called curse of dimensionality. In other words, there are too many unknowns for the quite limited amount of measured data, giving too many solutions with numerical instability. To solve this problem, compressed sensing can be employed to utilize the underlying sparsity of the desired parametric distribution, as reflected in the TV minimization term of the aforementioned objective function. The dynamic deformation of myocardium can be described with the nonlinear PDEs of:
D(u(X,t))=∇·P−ρ0ü(X,t)=0, (EQUATION BM-1)
where u(X,t) is the displacement field (deformation) (X: spatial coordinates, t: time, and ρ0: mass density). The time-varying blood pressure in the ventricles serves as part of the boundary conditions. The nominal stress P in the myocardium can be calculated from the Lagrangian strain tensor E and the deformation gradient F by way of the tissue constitutive relation [19]E of:
P=[∂W(E;C)/∂E+T(λF,Ca+;TMax)MM]·FT (EQUATION BM-2)
where the nonlinear potential energy W describes the passive elastic properties, and the additional tensile force T along the local fiber direction M can be modeled as a superposition of finitely many harmonic terms.
To solve this inverse problem, the passive stiffness parameters C in W are to be reconstructed from biomedical imaging measurements. Furthermore, the active contractility T is controlled by the dynamics of the calcium concentration Ca+(X,t), and T is affected by the stretch of the sarcomere, denoted λF=(M·(2E+I)·M)1/2. Due to the complexity of the anatomic structures, and the nonlinearity, anisotropy, and inhomogeneity of the cardiac muscle, an optimization-based method combined with the technique of compressed sensing is appropriate for such reconstruction. One goal is to find distributions C and T such that the computed displacement up(X,t;C,T) best matches the displacement measurement UM(X,t) taken with MR imaging techniques, subject to minimization of the weighted TV [w·TV(C)] of the parameter distribution; i.e., to minimize the objective function (EQUATION BM-3):
Φ(C)=∫0τ(∫Vh·χ(X,t)·hdV)dt+w·TV(C); h=h(X,t;C)=up(X,t;C)−UM(X,t),
where the time interval [0,τ] typically represents one cardiac cycle. The weight function χ(X,t) equals zero if the displacement is not measured at point X and t.
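A discrete version of objective function (EQUATION BM-3) can be sketched as follows; the helper name, the lumped TV argument, and the unit time/volume weights are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def objective(u_pred, u_meas, mask, tv_of_params, w=0.1, dt=1.0, dv=1.0):
    """Masked squared displacement misfit summed over time and volume,
    plus the weighted total variation of the parameter distribution."""
    h = u_pred - u_meas                       # h = up - UM
    misfit = float(np.sum(mask * h * h)) * dv * dt
    return misfit + w * tv_of_params

# with a perfect displacement match only the regularization term remains
u = np.ones((4, 3))  # e.g., 4 sample points x 3 displacement components
print(objective(u, u, np.ones_like(u), tv_of_params=5.0))  # 0.5
```

The mask plays the role of the weight function χ(X,t): entries are zero wherever the displacement is not measured, so unmeasured points contribute nothing to the misfit.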
Once the measurements are obtained on the displacement UM(X,t) in part of the myocardial ROI, and on the chamber blood pressure pM(X,t), inverse computation can be performed to reconstruct the passive stiffness C and active contractility T of myocardium.
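For example, the fiber stretch λF defined above can be evaluated directly from the Lagrangian strain tensor; a small sketch (the helper name is hypothetical; NumPy assumed):

```python
import numpy as np

def fiber_stretch(E, M):
    """lambda_F = sqrt(M . (2E + I) . M) for Lagrangian strain tensor E
    and unit fiber direction M."""
    M = np.asarray(M, dtype=float)
    M /= np.linalg.norm(M)               # ensure M is a unit vector
    C = 2.0 * np.asarray(E) + np.eye(3)  # right Cauchy-Green tensor 2E + I
    return float(np.sqrt(M @ C @ M))

# sanity check: zero strain gives unit stretch
print(fiber_stretch(np.zeros((3, 3)), [1, 0, 0]))  # 1.0
# uniaxial strain E_11 = 0.105 along the fiber gives sqrt(1.21) = 1.1
E = np.zeros((3, 3)); E[0, 0] = 0.105
print(round(fiber_stretch(E, [1, 0, 0]), 3))  # 1.1
```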
Implementation of the Elastodynamic Reconstruction Framework Via FEA.
Gradients ∂Φ/(∂C,∂T) on a discrete grid [20]E can be computed. For cardiac elastodynamics involving nonlinearity and dynamic deformation, an explicit formula of ∂Φ/∂C [19]E has been derived by the inventors. Due to the complex anatomic structures and their nonlinearity, the governing PDEs and the adjoint PDEs can be put into the finite element (FE) format and solved accordingly. To achieve high accuracy with a relatively small number of elements, Hermite-type high-order shape functions can be employed. Discretization of the time domain will follow the standard Newmark-β scheme. For clinical applications incorporating MR imaging, measurements of both surface and interior displacements can be included in UM(X,t).
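The Newmark-β time discretization mentioned above can be sketched for a single degree of freedom (average-acceleration variant, β=1/4 and γ=1/2; the multi-DOF FE version replaces the scalars with mass and stiffness matrices, and the parameter values below are illustrative):

```python
import math

def newmark_beta_sdof(m, k, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark-beta time integration of m*x'' + k*x = f(t).
    The average-acceleration choice (beta=1/4, gamma=1/2) is
    unconditionally stable for linear problems."""
    x, v = x0, v0
    a = (f(0.0) - k * x) / m
    k_eff = m / (beta * dt * dt) + k  # effective stiffness, constant per step
    for n in range(1, n_steps + 1):
        t = n * dt
        rhs = f(t) + m * (x / (beta * dt * dt) + v / (beta * dt)
                          + (0.5 / beta - 1.0) * a)
        x_new = rhs / k_eff
        a_new = (x_new - x - dt * v - dt * dt * (0.5 - beta) * a) / (beta * dt * dt)
        v += dt * ((1.0 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
    return x, v

# free vibration of a unit oscillator: exact solution is x(t) = cos(t)
T = 2.0 * math.pi
x, _ = newmark_beta_sdof(m=1.0, k=1.0, f=lambda t: 0.0,
                         x0=1.0, v0=0.0, dt=1e-3, n_steps=int(T / 1e-3))
print(abs(x - 1.0) < 1e-3)  # after one full period the state returns near the start
```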
The reconstruction framework for extraction of cardiac elastodynamic biomarkers can be systematically evaluated and validated in numerical and phantom studies, demonstrating that it is accurate and robust in identifying both passive and active parameters, by utilizing dynamic, large myocardial strain deformation. Quantitatively, it is possible to achieve 1 mm spatial resolution and <10% error in phantom experiments, giving confidence in use of the biomarkers in clinically relevant analysis.
For some applications, a potential problem is that the PC's computing power may not be sufficient to give a practical reconstruction speed due to the complexity of dynamics and nonlinearity. An alternative approach is to employ the parallel processing technique to speed up the 3D reconstruction and extraction of elastodynamic biomarkers. For example, a multiple-node PC cluster can be used in this regard, if desired.
Objective and Sample Size Consideration. The inventive imaging modality can be validated on its ability to extract elastodynamic biomarkers of heart muscle based on controlled phantom experiments and in vivo animal and human studies. Determining sample size can be based on obtaining reasonable estimates of the elastodynamic biomarkers of the imaging techniques, rather than powering for certain elastodynamic biomarkers at certain p values. Based on historical experience, for the animal study, a minimum of 10 datasets should be collected. For the human clinical study, twenty subjects in each group (healthy, chronic myocardial infarction (MI), and hypertrophic cardiomyopathy (HCM) groups) should provide relatively stable group proportions for demonstrating the potential of the imaging modality.
Phantom study. A ventricular phantom with controlled variations in the walls can be constructed for obtaining dynamic imaging data on its deformation under pulsatile flow, applying the interior MR imaging technique to scan the phantom, and extracting elastodynamic properties and performing dynamic mechanical test for comparison (See
The phantom can be fabricated with silicone replicas of patient-specific left ventricle and with realistic wall compliance (wall elastic properties). The Harvard pulsatile blood pump can be used to accurately simulate the ventricular action of the heart. This can be conducted using a 4.7 T, 40 cm bore MRI system.
Such a phantom study can satisfy two purposes: First, it can be used to verify the image registration/segmentation method. The embedded MR markers can be employed in the silicone model to provide a true point-to-point correspondence of the ventricular surface as the model heart changes its configurations throughout the cardiac cycle. Second, it is possible to apply the biomechanical testing system to perform uniaxial and multiaxial tests of cut specimens from various marked regions of the silicone phantoms to obtain their material properties. The regional properties predicted by the modality can then be compared with the material properties determined from mechanical testing.
Animal study. Juvenile Sinclair mini swine (5 months of age, ˜20 kg) can be used as a model of MI. Besides considerations of cost and facility acceptance, the rationale for the pig model is its anatomic similarity to the human heart, especially for the microscopic anatomic attributes. This species has been selected for two reasons. First, the pig is of sufficient size to allow the instrumentation necessary to perform the measurements needed to complete this protocol. Second, a large body of information describing the cardiovascular system in the pig already exists [21]E.
A closed-chest mini-pig model of MI can be established by a coronary angiography via the carotid artery performed using a guiding catheter [22]E. Under fluoroscopy, after insertion of a percutaneous transluminal coronary angioplasty (PTCA) guide wire and catheter system into the LAD distal to the 2nd diagonal branch, a vessel-size flexible foreign body can be advanced into the coronary artery to occlude the target artery for 60 minutes. Sinclair swine are anesthetized with intravenous sodium pentobarbital (30 mg/kg, i.v.). Alternatively, the closed-chest mini-pig model may be established by intracoronary injection of thrombogenic agents. Any open-chest models will not be considered since they may change the imaging conditions for the imaging techniques.
Tests can be performed on groups of subjects, for example, in ten mini swine. An exemplary validation procedure is illustrated in
In vivo Human Study. The diagnostic utility of this novel imaging method can be demonstrated in at least two types of heart disease: chronic MI and HCM. Both MI and HCM result in myocardial fibrosis and changes in the dynamic mechanical characteristics of the heart. The imaging modality of embodiments of the invention is expected to compensate for the T2-induced signal loss in the fibrosis, due to the decreased T1 and T2 relaxation times, by improving the SNR and yielding superior contrast resolution. In the setting of chronic MI, the fibrosis is manifested in a thinned portion of the infarcted myocardium. HCM is strikingly different from myocardial infarction. In this disease process the myocardium is abnormally thickened, often with accompanying fibrosis. The thickened portion of the muscle is subject to alteration in elastic mechanical forces during the cardiac cycle. Additionally, the fibrotic foci are often the source of electrical disturbances in the heart's conduction system, which can lead to arrhythmias. Elastodynamic biomarkers can be used as an early predictor for HCM. The improved SNR and contrast resolution of embodiments of the invention can aid in the identification of fibrotic changes, which may allow appropriate patients to be treated with implantable defibrillators for the prevention of sudden death due to arrhythmia.
A comparison can be performed using 20 patients for each type of heart disease and 20 healthy volunteers as baseline. Interior MRI can be used to image the hearts of the recruited patients in a 3 T clinical MR scanner. Cardiac elastodynamic biomarkers can be extracted. The healthy group can be scanned using both clinical cine MR sequence and the inventive imaging techniques.
With the cardiac images from 60 subjects using clinical cine MR sequences and the inventive imaging technique, it is possible to evaluate the quality of cardiac images from the inventive modality by comparing them to those acquired with clinical cardiac MR sequences, especially for portions of myocardium affected by infarction. Receiver operating characteristic (ROC) analysis can be used for the comparison. The inventive imaging modality is capable of providing superior information, especially for the MI areas. Furthermore, scan acquisition time/spatial resolution, SNR, and contrast can be compared. Less scan time and improved SNR and spatial and contrast resolution are typically expected. Finally, elastodynamic biomarkers can be correlated with selected heart diseases. It is expected that different relationships between cardiac elastodynamic features and MI or HCM tissues can be identified.
MRI applications often require high spatial and/or temporal resolution within a region of interest (ROI) such as for perfusion studies. In theory, both spatial resolution and temporal resolution can be significantly improved using a ROI-focused MRI data acquisition scheme. However, in radial MRI, there is no such acquisition-based solution available. Traditional reconstruction methods to image the ROI by reducing the field of view produce aliasing artifacts when the dataset becomes truncated.
Embodiments of the invention provide a ROI-focused radial MRI data acquisition scheme, aided by a dedicated digital filter. Such methods can be implemented in a 4 T 90 cm bore Oxford magnet with a GE phantom and a transceiver TEM head coil. Further parameters can include 4 gauss/cm sonata gradients, 5 mm slice thickness, TE=30 ms, TR=200 ms, and FOVs of 40 cm and 12 cm, respectively. Methods and systems of the invention can provide for interior MRI methods that exactly reconstruct a ROI with increased spatial resolution (~4 fold) while keeping the same temporal resolution. The image artifacts from truncated projections are effectively eliminated. No crosstalk with the region outside the ROI is involved using the inventive method. Such interior radial MRI methods can be used for zoomed-in and fast views of a particular ROI, which can be translated into significant advantages in clinical and pre-clinical applications of many types.
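The resolution gain quoted above is essentially the ratio of the two fields of view at a fixed acquisition matrix. A quick check (the matrix size of 256 is an illustrative assumption; the exact fold depends on the matrix and FOV pairing actually used):

```python
def nominal_voxel_mm(fov_cm, matrix=256):
    """Nominal in-plane voxel width = FOV / matrix."""
    return fov_cm * 10.0 / matrix

full_fov = nominal_voxel_mm(40.0)   # full 40 cm FOV
roi_fov = nominal_voxel_mm(12.0)    # 12 cm ROI-focused FOV, same matrix
print(round(full_fov / roi_fov, 2))  # 3.33-fold finer voxels in the ROI
```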
In radial MRI, to image an interior ROI (
The interior tomography methods, systems, and devices for CT imaging described above and as disclosed in U.S. Provisional Application No. 61/225,708, filed Oct. 28, 2009, which is incorporated by reference herein in its entirety, provide an exact and stable solution to this long-standing problem [12D]. The description that follows explains how to extend this method to a ROI-focused data acquisition scheme for interior reconstruction in MRI.
While classic CT theory targets exact reconstruction of a whole cross-section or an entire object from complete projections, in practical applications such as nano-CT we often need to focus on much smaller internal ROIs. Current CT theory cannot exactly reconstruct an internal ROI only from truncated projections associated with x-rays through the ROI, because this interior problem does not have a unique solution [14D]. When traditional CT algorithms are applied for interior reconstruction from truncated projection data, features outside the ROI may create artifacts overlapping inside features, rendering the images inaccurate or useless. On the other hand, over the past decades lambda tomography has been developed into a branch of applied mathematics that merely recovers gradient-like features within a ROI from localized data [15-20D]. When lambda tomography techniques are applied, the outcomes are not the most appealing because of their non-quantitative nature. Recently, great advancement has been made in exact cone-beam reconstruction from truncated data collected in spiral/helical or saddle-curve scanning modes [21D].
The importance of performing exact image reconstruction from the minimum amount of data has long been recognized. The first landmark achievement was the fan-beam half-scan formula [22D]. A recent milestone is the two-step Hilbert transform method developed by Noo et al. [23D]. In 2006, Defrise et al. [24D] proposed an enhanced data completeness condition: the image on a chord in the FOV can be exactly reconstructed if one end of the chord on the compact object support is covered by the FOV. By analytic continuation, in 2007, Wang's group published mathematical analysis and numerical results demonstrating for the first time that the interior problem can be solved in a theoretically exact and numerically stable fashion if a small sub-region within a ROI is known [12, 25-27D]; these results were then also independently reported by other researchers [28, 29D].
Because these results are reconstruction schemes that only use projection data associated with lines through a region of interest (ROI) or volume of interest (VOI) to be reconstructed, they are called interior tomography. Given the fact that the MRI radial signal has the same mathematical model after digital filter and Fourier transform, this interior tomography technique of CT can be directly used for radial MRI.
Interior reconstruction of MRI. Let μ({right arrow over (r)}) be a smooth function on a compact support Ω⊂R2, with {right arrow over (r)}=(r1,r2) and R2 denoting the two-dimensional (2D) real space. In the MRI application, the function μ({right arrow over (r)}) is the spin amplitude under a magnetic field. For a given direction φ, we can acquire an array of frequency data F(sn,φ), n=1, 2, . . . Nf. Using the digital filter technique and Fourier transform, we obtain the spatial signal {tilde over (p)}(sn,φ), which only involves the data along the magnetic field direction. For example, when {tilde over (p)}(sn,φ) are measured along the direction of regions 2 and ROI as indicated in
For simplicity of presentation, we denote the discrete p(sn,φ) as p(s,φ) in the continuous domain, which has the same mathematical model as CT in terms of the Radon transform:
can be extended to φ∈R with p(s,φ+π)=p(−s,φ). For a fixed φ0, by Noo et al. [10D], the backprojection of differential data:
can be expressed as the Hilbert transform of μ along the line L through {right arrow over (r)}0 which is parallel to {right arrow over (n)}=(−sin φ0, cos φ0):
where “PV” indicates the Cauchy principal value, and HL the Hilbert Transform along the line L.
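The Hilbert transform relationship above can be checked numerically. The following sketch (illustrative only; the grid, the test function, and the FFT-based sign convention are assumptions, not part of the disclosure) verifies that applying the transform twice returns the negative of a zero-mean function:

```python
import numpy as np

def hilbert_transform(f):
    """Discrete Hilbert transform via FFT: Hf = IFFT(-i*sign(k)*FFT(f))."""
    k = np.fft.fftfreq(len(f))
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f)))

# Smooth, periodic, zero-mean test function on [0, 2*pi)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.sin(3 * x) + 0.5 * np.cos(5 * x)

# With this sign convention, H(sin kx) = -cos kx and H(cos kx) = sin kx
Hf = hilbert_transform(f)

# Applying H twice returns -f for zero-mean f (the transform is anti-involutive)
HHf = hilbert_transform(Hf)
print(np.max(np.abs(HHf + f)))  # close to machine precision
```

This anti-involution property is what makes recovery of f from its Hilbert transform possible in principle; the interior problem arises because g(x) is only available on a truncated interval.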
Let us denote the 2D μ({right arrow over (r)}) on a line L as f(x) and b({right arrow over (r)}0) as g(x), where x is the one-dimensional (1D) coordinate along the line L. Assume the intersection of the compact support and the line L is the interval (c1,c2) with c1<c2, which implies that f(x)=0 for x∉(c1,c2). Also, let us assume that the backprojection of differential data can be exactly obtained from the original projection data through the interval (c3,c4) with c3<c4. All the intervals (c3,c4) form a sub-region inside the compact support of μ({right arrow over (r)}). This sub-region is usually called the region-of-interest (ROI) or field of view (FOV). Recovering the object function inside the ROI is generally called interior reconstruction. Using the above notations, EQUATION MRI-3 can be rewritten as:
We call g(x) on the interval (c3,c4) the truncated Hilbert Transform (THT). By Tricomi [30D], f(x) can be recovered from its Hilbert transform g(x) by:
is a known quantity. Because the radial MRI data EQUATION MRI-1 is a typical Radon transform, the exact and stable interior reconstruction of radial MRI is immediately available by the following theorem 2.1[12D]:
Theorem 2.1: c1,c2,c3,c4,c5 are five real numbers with c1<c3<c5<c4<c2 (see
The key technique to prove Theorem 2.1 is the so-called analytic continuation theory, and only the analytic property of g(x) has been used. It should be pointed out that the THT g(x) in EQUATION MRI-4 can be exactly computed from p(sn,φ) via Eq. (2), where a necessary interpolation method is used. That is, b({right arrow over (r)}0) is computed only from the projection data whose paths go through the point {right arrow over (r)}0. To exactly reconstruct the interior images of radial MRI from the constraints of EQUATION MRI-4, both the projection onto convex sets (POCS) algorithm [12D] and the singular value decomposition (SVD) technique [27D] can be directly used.
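The constraint-projection idea can be illustrated on a 1D toy problem. The following sketch is a projected-Landweber variant in the spirit of POCS, not the disclosed algorithm; the grid, the intervals standing in for (c1,c2) and (c3,c4), the known sub-region, and the step size are all illustrative assumptions:

```python
import numpy as np

# 1D grid; object support (c1,c2), measured THT interval (c3,c4), known sub-region
n = 200
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
support = np.abs(x) < 0.9            # stand-in for (c1, c2)
roi = np.abs(x) < 0.4                # stand-in for (c3, c4): where g(x) is measured
known = np.abs(x) < 0.1              # known sub-region inside the ROI

# Ground-truth object: two smooth bumps inside the support
f_true = np.exp(-((x - 0.3) / 0.15) ** 2) + 0.7 * np.exp(-((x + 0.5) / 0.2) ** 2)
f_true[~support] = 0.0

# Discrete truncated Hilbert transform operator (principal value: skip diagonal)
H = np.zeros((n, n))
idx = np.arange(n)
for i in range(n):
    j = idx[idx != i]
    H[i, j] = dx / (np.pi * (x[i] - x[j]))
A = H[roi]                           # rows where the THT g(x) is available
g = A @ f_true

# Projected Landweber: data-consistency step plus constraint projections
lam = 0.5 / np.linalg.norm(A, 2) ** 2
f = np.zeros(n)
res0 = np.linalg.norm(g - A @ f)
for _ in range(1000):
    f = f + lam * A.T @ (g - A @ f)  # gradient step toward data consistency
    f[~support] = 0.0                # compact-support constraint
    f[known] = f_true[known]         # known sub-region constraint
    f = np.maximum(f, 0.0)           # non-negativity
res = np.linalg.norm(g - A @ f)
print(res0, res)                     # residual decreases substantially
```

The known sub-region projection is what stabilizes the otherwise non-unique interior problem, consistent with the condition of Theorem 2.1.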
Both numerical simulation and a phantom experiment were performed to validate the interior reconstruction technique for radial MRI. The simulation study verified the correctness of the generalized reconstruction framework, while the phantom experiment demonstrated the feasibility of the method in practical settings.
More particularly, the numerical simulation was implemented in MATLAB (MathWorks Inc., MA, USA) on a regular PC (1.0 GB memory, 2.8 GHz CPU). A modified Shepp-Logan phantom inside a circular support Ω of radius 10 cm (
Phantom Experiment. In the phantom study, radial acquisition was implemented on a 4 Tesla 90 cm bore Oxford magnet using a GE phantom with a quadrature transceiver TEM head coil and a Varian NOVA console. The acquisition parameters were: Siemens 4 gauss/cm sonata gradients, 5 mm slice (one slice only), centered echo with echo time TE=30 ms, and repetition time TR=200 ms. The FOVs were 40 cm and 12 cm, respectively. For each FOV, 512 projections over 180 degrees were obtained, as shown in
A full FOV (40 cm) image was first reconstructed using the classical FBP method, as shown in
This showed the feasibility of interior reconstruction of radial MRI by extending the recently developed interior tomography technique for CT. Both numerical simulation and the phantom experiment show that the MR imaging technique is able to image an ROI with high spatial resolution. This method can be used to improve temporal resolution without changing spatial resolution, or to improve both simultaneously.
A sub-region inside the ROI needs to be known to obtain an exact and stable solution. This assumption is acceptable for most MRI applications. Unlike CT, where radiation dose is a concern, a full-FOV low-resolution MR image can be obtained to provide the sub-region information before a high spatial and/or temporal resolution ROI is imaged.
The method can be used to reconstruct a two-dimensional MR image. The same or similar techniques can be applied for exact reconstruction of a volume of interest (VOI) of a three-dimensional image, even though the projection data remain truly truncated. Clinical evaluation of the utility of this technique appears warranted. This interior radial MRI method can be translated into significant advantages in many clinical and pre-clinical applications.
Interior MRI Example.
In MRI, a spatial projection of an object is created by summing the signal amplitude along a column of the tissue; the width of the column is defined by the sampling aperture (pixel), and the thickness is governed by the slice select gradient strength.
To obtain a projection, as shown in
To reduce the FOV to the ROI in the projection readout direction, the typical method is to increase the amplitude of the frequency-encoding gradient, based on the relationship F=(γ/2π)·Gf·Nf·Δx between the sampling frequency bandwidth (F), the amplitude of the frequency-encoding gradient (Gf), the number of samples (Nf), and the pixel dimension (Δx): if the bandwidth and number of samples are kept constant, increasing Gf decreases Δx and thus increases the spatial resolution of the ROI. However, protons outside the ROI are also excited, although their resonance frequencies fall outside the bandwidth.
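The relationship F=(γ/2π)·Gf·Nf·Δx can be exercised numerically. In the following sketch the bandwidth, gradient amplitude, and sample count are illustrative values, not the disclosed experimental parameters:

```python
# Pixel size from readout bandwidth: F = (gamma/2pi) * Gf * Nf * dx
GAMMA_BAR = 4257.7  # proton gyromagnetic ratio gamma/2pi in Hz/gauss

def pixel_size_cm(F_hz, Gf_gauss_per_cm, Nf):
    """Solve F = (gamma/2pi) * Gf * Nf * dx for the pixel dimension dx (cm)."""
    return F_hz / (GAMMA_BAR * Gf_gauss_per_cm * Nf)

F = 72e3          # sampling bandwidth in Hz (assumed)
Nf = 512          # number of readout samples (assumed)
dx1 = pixel_size_cm(F, 1.0, Nf)   # 1 gauss/cm gradient
dx2 = pixel_size_cm(F, 2.0, Nf)   # doubled gradient amplitude

# With F and Nf fixed, doubling Gf halves the pixel size, and hence the FOV = Nf*dx
print(dx1, dx2, Nf * dx1)
```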
Wraparound artifacts are caused by aliasing. Shown in
Physical Phantom Data. In the phantom study, radial acquisition was implemented on a 4 Tesla 90 cm bore Oxford magnet using a phantom with a quadrature transceiver TEM head coil and a Varian NOVA console. The acquisition parameters were: Siemens 4 gauss/cm sonata gradients, 5 mm slice (one slice only), centered echo with echo time TE=30 ms, and repetition time TR=200 ms. The fields of view (FOV) were 40, 20, and 10 cm, respectively. For each FOV, 512 projections over 180 degrees were obtained. The digital receiver operated at a constant 72 kHz bandwidth. However, the actual Varian bandwidth varied slightly, which means that the projection width is not exact (but close) in the DR data. The cutoff near the edge of the FOV is very sharp. The small bands at the ends of the sinogram are due to the bandwidth of the digital receiver not being the same as the Varian readout bandwidth (the DR is fixed at 72 kHz, while the Varian readout bandwidths were 72 kHz for the FOV of 40 cm, 69 kHz for the FOV of 20 cm, and 64 kHz for the FOV of 10 cm, respectively).
We scanned a GE phantom in a 4 T MR scanner using different FOVs, 40 cm, 20 cm, and 10 cm, as shown in the
For the FOVs of 20 cm and 10 cm, "wrap around" signals can still be seen at each side of the sinograms. However, they only influence a few edge points, and this part can be ignored for image reconstruction. In theory, the wrap-around signal can be filtered out perfectly during data collection. We are learning how to control the bandwidth (BW) more precisely in order to solve this problem.
Experimental Results. Corresponding to different zooming factors, the projection datasets are displayed in
Preliminary Patient Study. Normalized projection datasets within the same display window are provided in
Bioluminescence tomography (BLT) was introduced in 2002 by the PI's group [21, 64, 65]A. At the same time, our efforts also include improvement of fluorescence molecular tomography (FMT) [66]A. As the two branches of optical molecular tomography (OMT), both BLT and FMT are studied using the modality fusion approach. For example, a mouse is imaged in a light-tight setting (
Despite its popularity, the diffusion-approximation (DA) model cannot produce satisfactory OMT performance for multi-spectral/probe imaging [68, 69]A. Thus, we recently developed a phase-approximation (PA) model, first used it in the forward modeling processes, and generated very favorable results compared to the DA model [70-72]A. Then, we compared the reconstructions of light sources based on the PA and DA models, respectively. First, applying the Monte Carlo method, we computed the anisotropy weight f in the phase approximation model based on a spherical phantom of radius 10 mm. The optical parameters of absorption coefficient μa=0.20 mm−1, scattering coefficient μs=14.5 mm−1, anisotropic coefficient g=0.9, and the relative refractive index of 1.37 were assigned to the spherical phantom. The anisotropy weight in the phase approximation model was estimated through fitting the simulated measurement data generated from the Monte Carlo simulation. Then, two spherical light sources of radius 0.6 mm were embedded into the spherical phantom. The centers of the two light sources were located at (−3.5, 0.0, 0.0) and (2.5, 2.5, 0.0), respectively. A power of 10 nW was set for both light sources. The Monte Carlo simulation with the Henyey-Greenstein phase function was performed to generate the measurement data on the phantom surface. PA- and DA-based reconstructions were then performed respectively in an identical setting. As a result, the localization errors of the reconstructed light sources were <0.2 mm with the PA-based reconstruction, and >2.3 mm with the DA-based reconstruction. Source strength estimation errors were ˜5% with PA and up to 25% with DA (
Example—Bioluminescence Tomography with Gaussian Prior.
Parameterizing the bioluminescent source globally in Gaussians provides several advantages over voxel representation in bioluminescence tomography. It is mathematically unique to recover Gaussians [3F], and practically sufficient to approximate various shapes by Gaussians in a diffusive medium. The computational burden is significantly reduced since far fewer unknowns are required. Moreover, there is physiological evidence that the source can be modeled by Gaussians. The inventive model and algorithm significantly improve accuracy and stability in the presence of Gaussian or non-Gaussian sources, noisy data, or optical background mismatch, which has been validated through simulation and in vivo data.
Bioluminescence tomography (BLT) [1-3]F is an emerging molecular imaging modality, which can be used to monitor physiological and pathological activities at the molecular level, and especially to visualize primary tumor growth and tumor cell metastasis. In BLT [4-13]F, one attempts to recover both the location and intensity of bioluminescent sources from boundary measurements.
Conventionally, the bioluminescent source is represented in voxels. For example, in a triangulation {τi,i≦N} of the domain Ω, the source in piecewise constants is:
where 1i is the basis function supported on the subdomain τi that is 1 on τi, and 0 otherwise, and qi is the average intensity value on τi to be recovered.
Accurate source reconstruction in voxels is highly challenging, which intrinsically stems from its non-uniqueness and severe ill-posedness [3]F. For example, even with a large number of boundary data, which in practice is still far smaller than the number of voxels, the recovery of deeply embedded objects still has poor resolution. An attempt to achieve high-resolution reconstruction in voxel-based BLT was made through modeling by the radiative transfer equation (RTE) and the use of angularly-resolved data in a medium of a few mean free paths [14, 15]F, although it may take quite an effort to acquire the angularly-resolved data in practice. The popular diffusion approximation (DA) cannot model the angularly-resolved measurements.
Instead of voxel representation, we model the source in Gaussians in reconstructions. A motivation is that because the light of interest is highly diffusive, a localized bioluminescent tumor does look like a Gaussian at a sufficiently large distance. Therefore it is practically sufficient to model various shapes as Gaussians in the diffusive medium, especially those deeply embedded objects. With this new representation, the reconstruction itself is unique [3]F and the BLT is reduced to the problem with a few parameters. Moreover, the new model can be justified from physiological reasons.
The outcome of the Gaussian model is formulated as a minimization problem with respect to the number of Gaussians and the intensity, center, radii, and orientation of each Gaussian.
Although the number of Gaussians may be fixed in a specific reconstruction process, the exact number of actual Gaussian sources is not assumed. If the number of Gaussians in reconstruction is larger than the actual number, the source can still be recovered; otherwise, the actual number can be approached from a smaller initial guess in a combinatorial fashion through minimizing the data discrepancy. This can be illustrated in the following example.
Suppose there are three Gaussians. In the reconstruction, we can set the number of Gaussians to four. Using the inventive reconstruction algorithm, three recovered Gaussians would correspond to actual ones while the fourth has zero or nearly zero intensity. On the other hand, if the assumed number is smaller than the actual number, e.g., only one Gaussian, the data discrepancy cannot be effectively reduced; in this case the problem can be formulated as a combinatorial optimization problem in which the number of Gaussians is increased iteratively until it reaches three, so that the data discrepancy can be reduced to the desired tolerance. On the other hand, a rough estimate of the actual number of Gaussians does improve both the accuracy and efficiency of the reconstruction. In summary, the actual number of Gaussians is not assumed, and it is practically sufficient to fix the number of Gaussians in reconstructions since the sources are localized and limited in quantity.
In reference [16]F on diffuse optical imaging, modeling the absorption coefficient as a single Gaussian, the authors also demonstrated the reconstruction improvement when using a priori information from ultrasound images for the center of the Gaussian. In this work, we do not assume a priori information from other imaging modalities, although it can be used for further improvement. For example, we do not assume the centers of the Gaussians that are to be recovered. However, we do impose some geometric constraints to avoid the apparent non-uniqueness of the geometric parameterization. For example, we constrain the distance between any two Gaussians so that they do not overlap, or use the size of the phantom as the bound on the center. Lastly, although we model the source in Gaussians in the reconstruction, the actual source distribution in simulations is not necessarily Gaussian. However, the reconstruction is better if the actual source is in Gaussians as well.
Forward Modeling.
RTE is the most accurate among realistic models for light propagation in tissue [17, 18]F. However, due to its large dimensionality, an efficient solver of RTE is non-trivial [19]F. Many efforts have been devoted to the development of RTE solvers. Please see [20]F and references therein for numerical methods for solving the deterministic RTE, and [21]F and references therein for stochastic approaches by the Monte Carlo method. The alternative modeling strategy is to simplify RTE into an appropriate approximation according to the particular optical regime of interest [17, 22, 23]F. Among all the approximate models, the most popular one is DA, which is valid in the scattering-dominant regime. For simplicity, we will use DA in this proof-of-concept study. Please note that the modeling error in practice comes not only from the chosen model, but also from the approximation of the optical properties of the underlying medium.
In the following DA with Robin boundary condition, we use φ for the light intensity, q for the bioluminescent source, μa for the absorption coefficient, μs′ for the reduced scattering coefficient, D for the diffusion coefficient, i.e., D=1/[3(μa+μs′)], {circumflex over (n)} for the outgoing normal at the medium boundary, A for the boundary constant [24]F.
−∇·(D∇φ)+μaφ=q
φ+2AD∇φ·{circumflex over (n)}=0 (EQUATION BLT-2)
Here we choose piecewise linear finite element method (FEM) to solve Eq. (2). After discretization, Eq. (2) is reduced to a linear system with the discretized light intensity Φ, the discretized source Q, and the system matrix F dependent on μa and D, i.e.,
FΦ=Q (EQUATION BLT-3)
We refer the readers to [24]F for the details of FEM solution of Eq. (2). After solving Eq. (3), we have the following boundary fluxes
f=−D∇φ·{circumflex over (n)}=φ/2A. (EQUATION BLT-4)
Source Representation by Gaussians.
As a quantitative imaging method, BLT is expected to provide the source intensity or power accurately. However, there are some intrinsic difficulties, noted above, in achieving high-resolution reconstruction with voxel-based BLT. Therefore, instead of a local voxel representation of the bioluminescent source, we consider a global representation by characteristic shapes of the tumor, e.g., the Gaussians in this study. A motivation is that because the light of interest is highly diffusive, a localized bioluminescent tumor does look like a Gaussian if it is sufficiently distant from the boundary. It turns out that this new representation, with significantly reduced degrees of freedom, is more robust than the voxel-based representation.
Mathematically, it was proven in [3]F that the solution of BLT is non-unique without the incorporation of effective a priori knowledge on the source distribution; in particular, the uniqueness can be established when the source is assumed to be composed of solid balls with known intensities. However, using solid balls is not effective in representing a general source distribution, especially features like anisotropy and fast decay of source intensity, which are quite common in practice. A summation of Gaussians can be used to model a general source distribution. This structural representation is flexible and improves the reconstruction stability for BLT. In particular, the anisotropic Gaussian representation is an effective mathematical approximation in the sense that it parameterizes the major quantitative information of interest, such as the intensity, the center, and the size of the source.
Physiologically, the Gaussian shape is indeed a natural choice. According to http://www.humpath.com/tumorigenesis, tumorigenesis is a collection of complex genetic diseases characterized by multiple defects in the homeostatic mechanisms that regulate cell growth, proliferation and differentiation. Cancer is caused by uncontrolled proliferation and the inappropriate survival of damaged cells, which results in tumor formation. Cells have developed several safeguards to ensure that cell division, differentiation and death occur correctly and in a coordinated fashion, both during development and in the adult body. Many regulatory factors switch on or off genes that direct cellular proliferation and differentiation. Damage to these genes, which are referred to as tumor-suppressor genes and oncogenes, is selected for in cancer. Most tumor-suppressor genes and oncogenes are first transcribed from DNA into RNA, and are then translated into protein to exert their effects. Recent evidence indicates that small non-protein-coding RNA molecules, called microRNAs (miRNAs), might also function as tumor suppressors and oncogenes. Therefore, in the tumorigenesis stage, cancerous cells are localized, limited in quantity, irregular in shape, and with some decaying behavior away from the center. As a result, they can be modeled effectively as Gaussian sources after they are bioluminescently labeled. Other inflammation foci can be similarly modeled at earlier stages.
Therefore, we represent the source q as a summation of Gaussians with intensity ρ, center (xc, yc, zc), anisotropic radii (rx, ry, rz), and Euler angles (φ, ψ, θ), i.e.,
For simplicity, we restrict the discussion of the rotation variables to 2D, where they reduce to θ only, i.e.,
Computationally, the Gaussian source representation Eq. (5) reduces the burden significantly since we solve BLT with at most n×(1+d)(2+d)/2 unknowns in d dimensions, which is in general much smaller than that from voxel representation, e.g., N (total number of voxels) unknowns for Eq. (1).
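Since the exact form of Eq. (5) is not reproduced here, the following sketch assumes a plausible 2D anisotropic rotated-Gaussian parameterization (the function name and normalization are assumptions) to illustrate the small parameter count per Gaussian and the closed-form total power:

```python
import numpy as np

def gaussian_source_2d(X, Y, rho, xc, yc, rx, ry, theta):
    """One anisotropic 2D Gaussian: intensity rho, center (xc, yc),
    radii (rx, ry), rotation angle theta (an assumed parameterization)."""
    c, s = np.cos(theta), np.sin(theta)
    xr = c * (X - xc) + s * (Y - yc)      # rotate into principal axes
    yr = -s * (X - xc) + c * (Y - yc)
    return rho * np.exp(-0.5 * ((xr / rx) ** 2 + (yr / ry) ** 2))

# Source modeled as a sum of Gaussians over a grid; each 2D Gaussian has
# (1+d)(2+d)/2 = 6 parameters for d = 2, versus N unknowns for voxels
h = 0.05
x = np.arange(-5, 5, h)
X, Y = np.meshgrid(x, x)
q = (gaussian_source_2d(X, Y, 1.0, -1.5, 0.0, 0.4, 0.7, 0.3)
     + gaussian_source_2d(X, Y, 0.8, 1.0, 1.0, 0.5, 0.5, 0.0))

# The total power of one such Gaussian integrates to 2*pi*rho*rx*ry
power = np.sum(gaussian_source_2d(X, Y, 1.0, 0.0, 0.0, 0.4, 0.7, 0.3)) * h * h
print(power, 2 * np.pi * 1.0 * 0.4 * 0.7)
```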
Gaussian-Based BLT.
Modeling the bioluminescent source in Gaussians, we formulate BLT as the minimization problem with respect to the variables XG={ρj, xc,j, yc,j, zc,j, rx,j, ry,j, rz,j, φj, ψj, θj, j≦n},
Here we use the set {fi, i≦M} for the measured data, the set {Pi, i≦M} of column vectors for the discretized measuring operator, and geometric constraints R on XG, which can be regarded as the regularization term. Note that φ is now nonlinearly dependent on XG in this new formulation, although it linearly depends on q in Eq. (1) in the voxel-based representation.
In Eq. (6), we do not treat n as a variable, mainly because there is generally no practical need for that, since the tumors are usually localized and limited in quantity. We can just assign to n a practical estimate that is larger than or comparable with the actual number.
However, in the case that such an estimate is unavailable, we formulate Eq. (6) as a combinatorial optimization problem with respect to the variable n. That is, we start from an initial guess n0 of n, which can be smaller than the actual number, and solve Eq. (6) iteratively until the data discrepancy reaches a certain tolerance. Here the update of n is simply nk+1=nk+1, i.e., n is incremented by one at each step. For notational convenience, we will use X for XG whenever it is appropriate.
Then we have the following algorithm:
In Algorithm 1, ε=0.05 is an empirical number that can identify those n's that are sufficient for the reconstruction; fk denotes the measurements with the source Xk; ∥·∥ can be the simple summation of the absolute values.
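The loop that increases n until the data discrepancy falls below the tolerance ε can be mimicked on a toy 1D fitting problem. The greedy center search below is an illustrative stand-in for the actual reconstruction step in Algorithm 1; all numerical values are assumptions:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 201)

def basis(c):
    """Fixed-width Gaussian atom centered at c (illustrative stand-in)."""
    return np.exp(-((x - c) / 0.5) ** 2)

# "Measured" data generated by two Gaussian sources
y = 1.0 * basis(3.0) + 0.8 * basis(7.0)

eps = 0.05                 # relative-discrepancy tolerance, as in Algorithm 1
centers, n = [], 0
residual = y.copy()
while np.linalg.norm(residual) / np.linalg.norm(y) > eps:
    n += 1                                        # n_{k+1} = n_k + 1
    # greedy step: pick the candidate center best correlated with the residual
    scores = [abs(np.dot(basis(c), residual)) for c in x]
    centers.append(x[int(np.argmax(scores))])
    # refit all intensities jointly by least squares
    B = np.column_stack([basis(c) for c in centers])
    amps, *_ = np.linalg.lstsq(B, y, rcond=None)
    residual = y - B @ amps
print(n, centers)          # stops once the discrepancy tolerance is met
```

With one Gaussian the relative discrepancy stays large; adding the second drives it to nearly zero, so the loop terminates at the actual source count.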
Next, we turn to the geometric constraints R, which can be imposed naturally from the known geometry of the medium. Please note that we do not assume any a priori knowledge on the anatomical structure of the medium, e.g., from other imaging modalities, although they can be easily incorporated into the scheme to improve the reconstruction. All the geometric constraints are either from the shape of the medium or from some commonsense assumptions on the shape of the source to avoid some non-uniqueness in geometric representation.
The first type of constraint is the min-max constraint, the minimum and maximum for each variable of XG. For example, we can use the size of the medium as the min-max constraint for the center, and realistic values for the anisotropic radii or intensity, such as zeros as the lower bounds. That is:
XGmin<XG<XGmax. (EQUATION BLT-7-1)
Notice that although we constrain the rotation angles, e.g., 0°≦θ<180°, the representation of an ellipsoid is not unique, e.g., by simultaneously exchanging rx and ry and considering the supplementary angle of θ in 2D. This does not affect the reconstruction result.
Secondly, we restrict the shape variation along different directions so that the source is not too “narrow”, which is physiologically reasonable, i.e.,
rx<cxy·ry, ry<cyz·rz, rz<czx·rx. (EQUATION BLT-7-2)
Here the parameters cxy, cyz, and czx control the size difference between any two directions for each source.
In the case with multiple Gaussians, it is important to impose the following type of constraints, minimal-separation constraints, on the centers and radii between any two Gaussians so that they will not overlap or get too close to each other. That is, for any two Gaussians i and j, in (EQUATION BLT-7-3):
cx(rx,i+rx,j)2+cy(ry,i+ry,j)2+cz(rz,i+rz,j)2<(xi−xj)2+(yi−yj)2+(zi−zj)2.
Here cx, cy and cz are the control parameters, which can for example be set to ln 2, corresponding to full width at half maximum (FWHM). Without using constraints from Eq. (7-3), multiple inclusions may not be separated. An example is given in the result section.
All the above constraints Eq. (7) can be represented by the following abstract inequalities:
gk(XG)<0. (EQUATION BLT-8)
Here we use the popular logarithmic penalty functions to enforce those constraints, i.e.,
Combining Eq. (6) and (9), the BLT with Gaussian representation is formulated as the minimization of the sum of the data fidelity and the logarithmic penalty functions of geometric constraints. In the next section we will develop the algorithm for solving Eq. (6) via the popular barrier method.
Note that when anatomical priors are available from other imaging modalities, such as X-ray CT or MRI, they can be incorporated naturally into the scheme of Eq. (6). For example, the min-max constraints of Eq. (7-1) can be tightened for the centers or radii using measured values from anatomical images; more accurate constraints can be imposed by assigning appropriate constants in Eq. (7-2) and Eq. (7-3). In contrast, it is difficult to impose those global geometric features locally in voxel-based BLT. On the other hand, the straightforward co-registration of images from X-ray CT or MRI may not improve BLT satisfactorily, since there is usually no simple one-to-one anatomical correspondence between them due to the fundamentally distinct imaging mechanisms.
Minimization by Barrier Method.
As the standard approach for minimizing nonlinear least squares, we linearize the first term (the data fidelity) in Eq. (6), and solve for the incremental change δX iteratively via the following outer loop of iterations in (EQUATION BLT-10):
Here fn is the simulated data with Xn, and J is the Jacobian coming from the linearization, which depends on Xn; the details of the computation of J are given in the next section on the algorithm implementation.
During each iteration in the outer loop for Eq. (10), we minimize the following with b=f−fn and x=δX
Here we choose the popular barrier method [25]F to enforce the constraint term R. That is, we solve a sequence of the following minimization problems:
Minimizing Eq. (12) strictly enforces the geometric constraint R, since the value of the logarithmic penalty functions would otherwise become infinitely large. During each step the solution xn is no more than K/t-suboptimal, with K the total number of constraints. This implies that xn converges to the optimal point of Eq. (11) such that Jx=b with strictly enforced geometric constraints as t→∞. Eq. (12) can be minimized with Newton's method via the following inner loop:
∇x2L(xn,tn)δx=−∇xL(xn,tn)
xn+1=xn+sδx
tn+1=μtn (EQUATION BLT-13)
In each iterative step by Eq. (13), we compute the descent direction δx, find the step size s through the backtracking line search, and then update x and t for the next iteration until the stopping criterion is satisfied. That is, we solve a sequence of t-subproblems via Eq. (13) with increasing t.
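The barrier iteration of EQUATION BLT-12 and BLT-13 can be illustrated on a small constrained least-squares problem. Here J, b, the box constraint x<1, and the schedule parameters are toy assumptions, not values from the disclosed method:

```python
import numpy as np

# Toy t-subproblem: minimize L(x,t) = t*||Jx-b||^2 - sum(log(1-x_i)), i.e. x < 1
J = np.eye(2)
b = np.array([2.0, 0.5])      # unconstrained optimum [2, 0.5] violates x_0 < 1

def L(x, t):
    if np.any(x >= 1.0):
        return np.inf         # outside the barrier: infinite penalty
    return t * np.sum((J @ x - b) ** 2) - np.sum(np.log(1.0 - x))

x = np.zeros(2)               # strictly feasible start
t, mu, K = 1.0, 10.0, 2       # barrier parameter, schedule factor, constraint count
while K / t > 1e-6:           # K/t sub-optimality bound of the barrier method
    for _ in range(50):       # Newton inner loop
        g = 2 * t * J.T @ (J @ x - b) + 1.0 / (1.0 - x)
        if np.linalg.norm(g) < 1e-9 * (1.0 + t):
            break
        H = 2 * t * J.T @ J + np.diag(1.0 / (1.0 - x) ** 2)
        dx = -np.linalg.solve(H, g)
        s = 1.0               # backtracking line search (Armijo condition)
        while s > 1e-10 and L(x + s * dx, t) > L(x, t) + 0.25 * s * (g @ dx):
            s *= 0.5
        x = x + s * dx
    t *= mu                   # t_{n+1} = mu * t_n
print(x)                      # approaches [1, 0.5] from inside the constraint set
```

As t grows, the solution of the t-subproblem approaches the constrained optimum while always remaining strictly feasible, which is the behavior the text ascribes to the sequence of t-subproblems.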
Algorithm Implementation.
The flowchart of the algorithm for solving Eq. (6) is as follows: the outer loop comes from the linearization Eq. (10) of the data fidelity term, while the inner loop solves each linearization step using the barrier method via Eq. (13).
In this flowchart, we use the ratio difference of the total source power E=∫qdΩ as the stopping criterion for the outer loop, i.e., assuming that the total power E is stable when the algorithm converges. As the stopping criterion for the inner loop, we use the sub-optimal ratio K/t, which naturally measures the difference between the iterative solution of Eq. (12) and the original solution with strictly enforced constraints. We set εi empirically to 0.0001 t0. Please note that the formula for t0 is for balancing the linearized data fidelity and the penalty function.
Next, we give the details of computing the Jacobian and the descent direction in Algorithm 2 for completeness.
Let us start with the computation of Jacobian J={Jij, i≦M, j≦n}, which can be derived from the Jacobian J0={J0,ij, i≦M, j≦N} with respect to the piecewise-constant voxel representation via Eq. (1). The connection between J and J0 is through the following coordinate transformation:
where qk represents the kth voxel value in Eq. (1), xj is the jth parameter in global representation by Gaussians in Eq. (5).
An efficient way to compute J0 is through the adjoint method [24]F. After computing J0, J follows immediately from (14). However, noticing that there are only a few parameters in the new source representation by Gaussians, i.e., n≪M≪N, we directly compute J without computing J0 via the direct method as follows.
From Eq. (3), we have F(∂φ/∂qk)=∂Q/∂qk, then
Through Eq. (15), we only need to run the forward solver n times, whereas computing J0 first via Eq. (14) would require at least M runs. Note that ∂Q/∂qk is a sparse vector, nonzero only at the nodes in the kth voxel; on the other hand, ∂qk/∂xj can be derived analytically from Eq. (5). Therefore, we can now compute the Jacobian J with merely n forward solves, which is much more efficient than in voxel-based BLT. The trade-off is that we need to compute the Jacobian more than once due to the nonlinear dependence of J on XG, whereas in voxel-based BLT it needs to be computed only once due to the linear dependence of J0 on X. However, since n ≪ M, we still achieve a considerable gain in speed: the new algorithm usually converges in fewer than 20 iterations while the ratio M/n is usually at least about a hundred. That is, we usually achieve at least a 5-fold speedup in computing Jacobians.
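The direct method above amounts to one linear forward solve per Gaussian parameter, followed by restriction to the boundary measurements. A small numerical sketch, where F, B, and Msel are toy stand-ins for the FEM system matrix, the source-term derivative ∂Q/∂x, and the measurement operator (all illustrative assumptions, verified here against finite differences):

```python
import numpy as np

def direct_jacobian(F, dQdx, Msel):
    """J[i, j] = d(measurement_i)/d(parameter_j) via n forward solves.

    Solves F * (d phi / d x_j) = dQ/dx_j for each parameter column (Eq. (15)),
    i.e., n linear solves in total (n << M << N), then applies the
    measurement operator.
    """
    dphi = np.linalg.solve(F, dQdx)          # N x n sensitivity fields
    return Msel @ dphi                        # M x n Jacobian

# Verify against central finite differences on a toy linear forward model.
rng = np.random.default_rng(0)
N, n, M = 12, 3, 6
F = np.eye(N) + 0.1 * rng.standard_normal((N, N))  # well-conditioned "FEM" matrix
B = rng.standard_normal((N, n))                     # source term Q(x) = B @ x
Msel = rng.standard_normal((M, N))                  # boundary measurement operator

J = direct_jacobian(F, B, Msel)                     # dQ/dx = B for linear Q

def measure(x):
    return Msel @ np.linalg.solve(F, B @ x)

x0 = rng.standard_normal(n)
h = 1e-6
J_fd = np.column_stack([(measure(x0 + h * e) - measure(x0 - h * e)) / (2 * h)
                        for e in np.eye(n)])
```

Because the toy forward model is linear in x, the direct Jacobian and the finite-difference Jacobian agree to numerical precision.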
In the rare cases when n>M, the adjoint method is preferred, i.e.:
That is, we compute M forward solves instead.
Next we compute the first and second derivatives of the logarithmic penalty function for each geometric constraint gk(XG) as in Eq. (7), i.e., Rk=−ln [−gk(XG)]. By straightforward computation, we have the following simple formula:
∇2Rk=(∇Rk)T·(∇Rk). (EQUATION BLT-17)
Therefore, we have
Note that each constraint Rk is evaluated at Xn+x rather than x.
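For a linear constraint g(x)=aᵀx−c (e.g., the box constraints above are of this form), the outer-product identity of Eq. (17) can be checked numerically, since ∇²g=0 and thus the Hessian of R=−ln(−g) reduces to ∇R∇Rᵀ. The vectors below are arbitrary illustrative values:

```python
import numpy as np

# Linear constraint g(x) = a.x - c < 0 at a strictly feasible point.
a = np.array([1.0, -2.0, 0.5])
c = 3.0
x = np.array([0.2, 0.4, -1.0])               # here g(x) = -4.1 < 0

g = a @ x - c
gradR = a / (-g)                             # gradient of R = -ln(-g)
hessR_formula = np.outer(gradR, gradR)       # Eq. (17): outer product

# Central finite-difference Hessian of R(x) = -ln(-(a.x - c)).
def R(x):
    return -np.log(-(a @ x - c))

h = 1e-5
n = x.size
hessR_fd = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
        hessR_fd[i, j] = (R(x + ei + ej) - R(x + ei - ej)
                          - R(x - ei + ej) + R(x - ei - ej)) / (4 * h * h)
```
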
The overall algorithm for BLT with the Gaussian representation is more efficient than voxel-based BLT, mainly because the minimization is now over only a few Gaussian parameters rather than the thousands (or more) of voxels in the conventional voxel-based BLT. Although the algorithm appears complicated, the barrier-method minimization in Step 2 is extremely fast owing to the greatly reduced number of variables; the major computational cost comes from the Jacobian J in Step 1, which nevertheless takes less time than in voxel-based BLT, again because of the reduced number of unknowns.
The performance of Gaussian-based BLT, “G-BLT”, is compared with that of the conventional voxel-based BLT, “V-BLT”, through simulated data in single-inclusion and multiple-inclusion cases, and the proposed method is then validated in a mouse study.
Reconstruction with Single Inclusion.
We first evaluate the inventive algorithm with a single anisotropic Gaussian inclusion, and then perform single-inclusion reconstructions with various shapes, different noise levels, and different approximation errors in the optical background to compare the performance of G-BLT and V-BLT. For presentation purposes, we only show 2D comparisons. The results and conclusions are similar in the 3D case. In particular, 3D results will be demonstrated with in vivo data.
All the 2D reconstructions are performed on a circular phantom with a diameter of 20 millimeters. 120 measurements are collected around the phantom boundary with equal angular spacing. The reconstruction mesh has 2162 triangular elements and 1130 nodes. To avoid the inverse crime, the data are generated with different meshes that are about four times as large as the reconstruction mesh. For example, there are 8664 triangular elements and 4425 nodes for the case with the circular inclusion as shown in
We assume a homogeneous optical background with μa=0.01 mm−1 and μs′=1 mm−1. Unless otherwise stated, we use mm as the unit of length, mm−1 as the unit for absorption and scattering coefficients, nanowatts (nW) as the unit for total source power, and nW/mm2 and nW/mm3 as the units for source intensity in 2D and 3D, respectively. Angles are given in degrees.
In G-BLT, as the initial guess, for simplicity we assume there is one Gaussian inclusion with ρ=0.1, xc=0, yc=0, rx=1, ry=1 and θ=90°. Reconstruction with multiple inclusions as the initial guess will be considered in the next section. Note that a nonzero value must be assigned to ρ; otherwise the Jacobian with respect to the other parameters will be zero according to Eq. (14). For the constraint variables, we set ρmax=10, ρmin=0.01, xmax=ymax=10, xmin=ymin=−10, rx,max=ry,max=5, rx,min=ry,min=0.1, θmax=180°, θmin=0°, and cxy=5.
For V-BLT, we solve the problem via the standard Levenberg-Marquardt method with L2 regularization [24]. That is, we minimize:
where the Jacobian J0 is with respect to the piecewise-constant voxel representation by Eq. (1), and λ is the regularization parameter.
Note that one may optimize the choice of regularization term in Eq. (19). However, we found that the results did not improve drastically when using other regularization strategies [14, 15], which may be due to the severely ill-posed nature of the inverse problem in the optical regime modeled by the diffusion approximation, Eq. (2). In the following, we restrict our discussion to L2 regularization as in Eq. (19).
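Eq. (19) is typically minimized with damped Gauss-Newton (Levenberg-Marquardt) steps of the form (JᵀJ+λI+damp·I)δ=−(Jᵀr+λx). A self-contained sketch on a toy exponential forward model; the model, damping schedule, and constants are illustrative assumptions, not the patent's V-BLT solver:

```python
import numpy as np

def lm_l2(fwd, jac, x0, f, lam=1e-3, n_iter=50):
    """Minimize ||fwd(x) - f||^2 + lam*||x||^2 with damped Gauss-Newton steps."""
    x = x0.copy()
    damp = 1.0

    def cost(x):
        return np.sum((fwd(x) - f) ** 2) + lam * np.sum(x ** 2)

    for _ in range(n_iter):
        J = jac(x)
        r = fwd(x) - f
        g = J.T @ r + lam * x                       # gradient of 1/2 * cost
        H = J.T @ J + lam * np.eye(x.size)          # Gauss-Newton Hessian
        while True:                                  # adapt the damping factor
            dx = np.linalg.solve(H + damp * np.eye(x.size), -g)
            if cost(x + dx) < cost(x):
                x, damp = x + dx, max(damp / 3, 1e-12)
                break
            damp *= 3
            if damp > 1e12:                          # no further progress
                return x
    return x

# Toy problem: recover x from noise-free data f = exp(A @ x_true).
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))
x_true = np.array([0.3, -0.2, 0.1])
f = np.exp(A @ x_true)
fwd = lambda x: np.exp(A @ x)
jac = lambda x: np.exp(A @ x)[:, None] * A          # d fwd / d x
x = lm_l2(fwd, jac, np.zeros(3), f, lam=1e-8)
```

With noise-free data and a tiny λ, the recovered parameters match x_true closely; in the ill-posed BLT setting, larger λ trades bias for stability.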
Inclusions with Gaussian Shapes.
Since it is physiologically reasonable to approximate the bioluminescence sources by Gaussians, we first consider reconstructions where we assume the source is a single Gaussian. All the tests have the same Gaussian up to a rotation, with parameters specified in Table 1. The reconstructed results with both G-BLT and V-BLT are plotted in
From
Inclusions with Various Shapes.
We evaluate the algorithm through simulated data with single inclusions of various shapes. The parameters for the various single inclusions are specified in Table 2. The reconstructed results with both G-BLT and V-BLT are plotted in
Before doing the reconstruction, we simply compare the measurements from Gaussian and non-Gaussian sources as shown in
From
One minor issue regarding G-BLT is that the reconstructed radii along the x- and y-axes do not seem to match those of the true phantoms, which is possibly due to the Gaussian approximation error for non-Gaussian phantoms. For example, with the rectangular inclusion, the recovered rx=1.752 and ry=1.797, while the true rx=1 and ry=2. Although rx<ry in the reconstruction, this scale difference is much smaller than the true difference.
Notice that in this section and the remaining reconstructions for non-Gaussian sources, we did not consider θ as a reconstruction variable. G-BLT does not recover the shapes of non-Gaussian sources as exactly as it does Gaussian sources, even with rotation.
Different Noise Levels.
We evaluate the algorithm with respect to different noise levels. Gaussian noise of 5%, 10%, 20%, and 30% is added to the measurements f respectively, i.e., f(1+5% Randn), where Randn represents a Gaussian random variable with zero mean and unit variance. The reconstructed results with both G-BLT and V-BLT are plotted in
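The multiplicative noise model f(1+level·Randn) can be written in a few lines; the 120-measurement vector below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(f, level, rng=rng):
    """Multiplicative Gaussian noise: each datum perturbed by level * N(0, 1)."""
    return f * (1.0 + level * rng.standard_normal(f.shape))

f = np.full(120, 10.0)              # e.g. 120 boundary measurements
f_noisy = add_noise(f, 0.05)        # 5% noise level
# The empirical relative spread should be close to the nominal 5% level.
rel_std = np.std(f_noisy / f - 1.0)
```
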
Again, from
Mismatch of Optical Background.
We evaluate the algorithm with different mismatches in the optical background. The background is perturbed by ±10%, ±30%, ±50%, and ±70% error respectively, e.g., μa(1+10%) and μs′(1+10%). Notice that we either increase or decrease both absorption and scattering coefficients, since empirically we find that the mismatches may cancel if one is increased while the other is decreased. Therefore, we change both in the same direction to simulate the worst-case errors. Note that the reconstruction results with V-BLT are not presented, since V-BLT does not localize the inclusion satisfactorily even in the case without optical background error.
The results from G-BLT are presented in
Reconstruction with Multiple Inclusions.
Now we evaluate the algorithm in the presence of multiple inclusions by considering first Gaussian inclusions and then non-Gaussian shapes.
Despite the fact that there are two inclusions in the test on Gaussian sources (
The same constants as in the single-inclusion simulations are used. In order to separate the multiple objects, we find that minimal-separation constraints are usually necessary. This is also intuitively correct, since the source representation may otherwise be non-unique, given the severely ill-posed nature of BLT. Here we set cx=cy=ln 2, corresponding to requiring that the distance between any two objects be at least the average of their FWHMs.
Inclusions with Gaussian Shapes.
The detailed setting is given in Table 6 and
Inclusions with Non-Gaussian Shapes.
The detailed setting is given in Table 7 and
Combinatorial Optimization.
In this section, we give one example of combinatorial optimization (Algorithm 1) with the number of Gaussians n as a variable.
In the phantom (
The detailed setting and the recovered parameters are given in Table 8.
It is interesting to notice that although reconstructions with at least the actual number of Gaussians are able to recover the actual number of Gaussians, reconstructions with more Gaussians than the actual number may degrade the image quality due to the apparently non-unique representation of the geometry. For example, two recovered sources in
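One simple way to realize such a combinatorial search over the number of Gaussians n is to reconstruct for each candidate n and keep the model with the best penalized fit. The scoring rule below (residual norm plus a parsimony penalty on n) is an illustrative assumption, not Algorithm 1's exact criterion:

```python
def select_n_gaussians(reconstruct, residual_norm, n_max, penalty=0.1):
    """Pick the number of Gaussians by penalized data fit.

    reconstruct(n)        -> recovered parameters for an n-Gaussian model
    residual_norm(params) -> data-fidelity residual of that model
    """
    best = None
    for n in range(1, n_max + 1):
        params = reconstruct(n)
        score = residual_norm(params) + penalty * n   # parsimony penalty
        if best is None or score < best[0]:
            best = (score, n, params)
    return best[1], best[2]

# Toy check: the "true" model has 2 components; extra components do not
# reduce the residual further, so the penalty selects n = 2.
true_n = 2
reconstruct = lambda n: list(range(n))                 # stand-in "solver"
residual_norm = lambda p: 1.0 if len(p) < true_n else 0.1
n_sel, _ = select_n_gaussians(reconstruct, residual_norm, n_max=4)
```

The penalty term discourages over-parameterization, matching the observation above that extra Gaussians can degrade image quality without improving the fit.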
In Vivo Validation.
After evaluating the proposed G-BLT in 2D with various settings, we have established the superiority of G-BLT over V-BLT. Now we validate G-BLT in 3D with in vivo experimental data. The details of the experimental setup and mouse experiments are given in [4]. We also refer readers to the above reference for the reconstruction parameters, such as the mesh and the optical parameter values used.
In the previous voxel-based reconstruction [4], with a permissible region selected according to the high-value clusters in the data, two sources were localized: a stronger one of 39.8 nW/mm3 (right) and a weaker one of 1.5 nW/mm3 (left). When the mouse was dissected after the experiment, two tumors were found, one on each adrenal gland, as shown in
Now we reconstruct using G-BLT, the Gaussian-based BLT, instead of V-BLT. Note that we do not select a permissible region.
As the initial guess, we assume there are four Gaussian inclusions, i.e., n=4, with ρ1=ρ2=ρ3=ρ4=10, x1,c=x2,c=x3,c=x4,c=0, y1,c=y2,c=y3,c=y4,c=0, z1,c=5, z2,c=10, z3,c=15, z4,c=20, r1,x=r2,x=r3,x=r4,x=1, r1,y=r2,y=r3,y=r4,y=1 and r1,z=r2,z=r3,z=r4,z=1.
The constants in the geometric constraints are: ρmax=1000, ρmin=0.1, xmax=ymax=12, xmin=ymin=−12, zmax=27, zmin=0, rx,max=ry,max=6, rx,min=ry,min=0.5, and cxy=cyz=czx=5, cx=cy=cz=ln 2. Here the min-max values on the geometry correspond to the physical size of the domain; the maximum of the intensity is estimated from the data, which is roughly 50 times of the maximal value in the data.
As documented in Table 9, 2 out of the 4 recovered inclusions, at locations corresponding to the kidneys, have significantly larger values, i.e., one with a peak intensity of 16.593 nW/mm3 (right) and the other with a peak intensity of 7.692 nW/mm3 (left). In addition, the peak intensity of the other 2 inclusions (Inclusions 3 and 4) is less than 5% of the maximum (Inclusion 1), which may stem from experimental or numerical noise. The reconstructed bioluminescent volumes (2rx·2ry·2rz) are 506 mm3 (right) and 901 mm3 (left), respectively. The discrepancy between the reconstructed volumes and the measured ones is not surprising, since the anatomical volume may not correspond to the bioluminescent volume.
Thus, the inventors introduce a stable way to recover bioluminescent sources through a novel source representation by Gaussians. It is mathematically unique and computationally cheap. There is strong physiological evidence that actual bioluminescent sources can be modeled by Gaussians, which will be further studied in future work. The resulting problem with a fixed number of Gaussians is formulated as a nonlinear least-squares minimization with appropriate geometric constraints, which is solved by the barrier method; on the other hand, the problem with a varying number of Gaussians can be formulated as a combinatorial optimization problem. The superiority of the proposed method has been established through simulations and in vivo experimental data. In particular, when the source itself can be approximated by Gaussians, the inventive method is able to accurately recover the intensities, centers, radii and rotation angles of the Gaussians.
Due to the significantly reduced number of variables, the inventive method is robust to the measurement noise and the mismatch in optical background; the computational burden is also reduced to the minimization problem with respect to a few parameters instead of at least thousands of parameters in the conventional voxel-based BLT.
It is also possible to incorporate the radiative transfer equation (RTE) for more accurate modeling, a simplified representation of optical heterogeneity (such as piecewise constants) reconstructed simultaneously to correct the optical background, and multi-spectral data for better stability.
Although the source is represented as Gaussians in this study, other shape functions may be appropriate for other purposes. An interesting topic to pursue is the multi-modality imaging since the structural priors can be easily incorporated into the proposed method for more accurate quantitative estimation.
The present invention has been described with reference to particular embodiments having various features. It will be apparent to those skilled in the art that various modifications and variations can be made in the practice of the present invention without departing from the scope or spirit of the invention. One skilled in the art will recognize that these features may be used singularly or in any combination based on the requirements and specifications of a given application or design. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. It is intended that the specification and examples be considered as exemplary in nature and that variations that do not depart from the essence of the invention are intended to be within the scope of the invention.
Therefore, the present invention is well adapted to attain the ends and advantages mentioned as well as those that are inherent therein. The particular embodiments disclosed above are illustrative only, as the present invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular illustrative embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the present invention. While compositions and methods are described in terms of “comprising,” “containing,” or “including” various components or steps, the compositions and methods can also “consist essentially of” or “consist of” the various components and steps. All numbers and ranges disclosed above may vary by some amount. Whenever a numerical range with a lower limit and an upper limit is disclosed, any number and any included range falling within the range is specifically disclosed. In particular, every range of values (of the form, “from about a to about b,” or, equivalently, “from approximately a to b,” or, equivalently, “from approximately a-b”) disclosed herein is to be understood to set forth every number and range encompassed within the broader range of values. Also, the terms in the claims have their plain, ordinary meaning unless otherwise explicitly and clearly defined by the patentee. The indefinite articles “a” or “an,” as used in the claims, are defined herein to mean one or more than one of the element that it introduces. If there is any conflict in the usages of a word or term in this specification and one or more patent or other documents that may be incorporated herein by reference, the definitions that are consistent with this specification should be adopted.
Throughout this application, various publications are referenced. The disclosures of these publications in their entireties are hereby incorporated by reference into this application in order to more fully describe the features of the invention and/or the state of the art to which this pertains. The references disclosed are also individually and specifically incorporated by reference herein for the material contained in them that is discussed in the portion of this disclosure in which the reference is relied upon.
This application relies on the disclosure of and claims the benefit of the filing date of U.S. Provisional Application Nos. 61/256,099, filed Oct. 29, 2009; 61/260,242, filed Nov. 11, 2009; and 61/260,577, filed Nov. 12, 2009; the disclosures of each of which are incorporated by reference herein in their entireties.
This work was partially supported by the National Institutes of Health under NIH/NIBIB Grants RO1HL098912, EB011785, EB002667, EB004287 and EB00728A; NIH/NHLBI Grant HL098912; NSF Grant DMS0811254; and NIH/NCI Grant CA135151. The U.S. Government has certain rights in this invention.
Number | Date | Country
---|---|---
61256099 | Oct 2009 | US
61260242 | Nov 2009 | US
61260577 | Nov 2009 | US