The present invention relates to motion correction in medical image data, and more particularly to magnetic resonance imaging (MRI) based respiratory motion correction for PET/MRI image data.
Modern positron emission tomography (PET) imaging techniques have led to widespread use of PET, including in oncology, neuroimaging, cardiology, and pharmacology. However, PET image data often suffers from image degradation due to patient motion during the PET image data acquisition. Motion can occur due to normal breathing, heartbeat, and gross patient movement during the image acquisition. Among the different types of motion, respiratory motion typically has the largest impact on thoracic and abdominal imaging due to its large magnitude and the high variability of breathing patterns.
Various techniques have been used to minimize the effects of respiratory motion in image acquisition, including breath holding, respiratory gating, and advanced reconstruction techniques. Due to the long acquisition time of PET (typically one to three minutes per bed position), breath holding techniques are difficult to apply. Respiratory gating strategies divide a breathing cycle into several phases, which can be measured by a navigation signal (e.g., using a respiratory belt). List-mode PET data are then clustered into different breathing phases based on the navigation signal to “freeze” the motion. However, such respiratory gating techniques assume that respiratory motions in the same breathing phase are identical across different breathing cycles, which often does not hold because breathing motion patterns can be irregular, especially when the patient is under stress, anxiety, and/or pain.
PET/MRI image acquisition systems allow for simultaneous acquisition of PET and MR data with accurate alignment in both temporal and spatial domains. However, 4D MR image data with a high spatial resolution typically has a very low temporal resolution, which limits its usefulness for motion estimation. Recently, some techniques have been proposed to take advantage of simultaneous PET/MRI acquisition for PET respiratory and cardiac motion correction. However, due to the tradeoff between spatial and temporal resolution of MR imaging, all of the proposed methods rely on gating for 3D MRI acquisition. Accordingly, these methods suffer from the same drawbacks as other respiratory gating techniques.
The present invention provides a method and system for MRI based motion correction in a PET image. Embodiments of the present invention can be used for motion correction of a PET image acquired in a PET/MRI image acquisition system. Embodiments of the present invention acquire a series of dynamic 2D MR images during the PET image data acquisition, and register the 2D MR images against a static 3D MRI to estimate a motion field. The motion field can then be used in PET image reconstruction for motion correction of the PET image data.
In one embodiment of the present invention, a static 3D magnetic resonance (MR) image of a patient is received. PET image data of the patient is received. A series of 2D MR images of the patient acquired at a plurality of time points simultaneous to acquisition of the PET image data is received. A 3D+t motion field is estimated by registering the series of 2D MR images acquired at the plurality of time points to the static 3D MR image. A motion corrected PET image is generated based on the estimated 3D+t motion field using motion corrected PET reconstruction.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
The present invention provides a method and system for MRI based motion correction in a PET image. Embodiments of the present invention are described herein to give a visual understanding of the motion correction method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
Embodiments of the present invention provide a method of PET motion correction that utilizes a series of 2D MR images with a respiratory motion model for respiratory motion estimation. The use of 2D MR images for motion estimation instead of 3D MR images is advantageous due to the fact that 2D MRI acquisition can achieve a higher temporal resolution and thus does not require gating. According to an embodiment of the present invention, a series of dynamic 2D MR images are acquired during PET data acquisition, and the 2D MR images are registered against a static 3D MR image to estimate a motion field. The motion field can then be applied in the PET image reconstruction for motion correction.
At step 204, PET image data of the patient is received. The PET image data is acquired using a PET scanner. In an advantageous implementation, the PET data is acquired using a PET scanner of a combined MRI/PET image acquisition device. The PET image data may be received directly from the PET scanner or the PET image data may be received by loading previously stored PET image data.
At step 206, a series of 2D MR images is received. The 2D MR images are acquired simultaneously to PET image acquisition. In particular, during the PET image acquisition, a 2D MR image is acquired at each of a plurality of time points, resulting in the series of 2D MR images. For example, a 2D MR image can be acquired at 100 ms intervals throughout the PET image acquisition. The 2D MR images can be acquired at different locations and/or orientations at different time points to capture as much information of patient motion as possible. For example, the 2D MR images can be slices in the transverse plane acquired by sweeping back and forth along the head-foot direction of the patient. It is to be understood that a variety of possible locations and orientations are possible for the 2D MRI acquisition and the present invention is not limited to any particular scan plan for acquiring the 2D MR images. The 2D MR images are acquired using an MR scanner. In an advantageous implementation, the 2D MR images are acquired using an MR scanner of a combined PET/MRI image acquisition device simultaneously to the PET image data being acquired using the combined PET/MRI image acquisition device. The 2D MR images may be received directly from the MR scanner or the 2D MR images may be received by loading a stored series of 2D MR images.
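For illustration, the back-and-forth sweep of 2D slice positions described above can be sketched as a simple acquisition schedule. This is a minimal sketch under assumed values: the slice positions, the 100 ms interval, and the function name are illustrative, not prescribed by the method.

```python
# Sketch of a 2D slice acquisition schedule: transverse slices swept back
# and forth along the head-foot (z) axis at a fixed interval. The z values
# and the 100 ms interval below are illustrative assumptions.

def slice_schedule(z_positions, interval_ms, n_frames):
    """Yield (time_ms, z) pairs sweeping back and forth over z_positions."""
    # Forward pass, then backward pass excluding the repeated endpoints.
    sweep = list(z_positions) + list(z_positions[-2:0:-1])
    return [(i * interval_ms, sweep[i % len(sweep)]) for i in range(n_frames)]

schedule = slice_schedule([0, 10, 20, 30], 100, 8)
# e.g. [(0, 0), (100, 10), (200, 20), (300, 30),
#       (400, 20), (500, 10), (600, 0), (700, 10)]
```

Any schedule that covers the moving anatomy at a rate well above the breathing rate would serve the same purpose.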
At step 208, a motion field is estimated at each time point at which a 2D MR image was acquired by registering the 3D MR image and each 2D MR image. Given the static 3D MR image and the series of dynamic 2D MR images, a 3D+t motion field is estimated at each time point ti at which a 2D MR image is acquired. However, a single 2D MR image only provides sparse information of the 3D motion field and therefore it would be difficult to use a single 2D MR image alone to estimate an accurate 3D motion field. Due to the fact that 2D MR images can be acquired at a much higher rate than respiratory motion, the 3D+t motion field has a strong correlation between neighboring frames and therefore can be jointly estimated from a series of sparse 2D MR images. Thus, according to an advantageous embodiment of the present invention, the motion estimation is formulated as a registration problem between a static 3D MR image and a series of dynamic 2D MR images.
The static 3D MR image is denoted as V(x3d) and the dynamic 2D MR image sequence is denoted as I(x2d,ti), where x3d and x2d are 3D and 2D coordinates, respectively. The location of the 2D MR slice at ti is represented by a mapping D(ti): ℝ²→ℝ³, which maps 2D coordinates in the 2D MR image to the 3D space of the 3D MR image. The 3D+t motion field is denoted as u(x3d,ti), which indicates each voxel's displacement with respect to the reference image V at time ti. For ease of presentation, U(V,ti) is used to denote the volume transformed by u(x3d,ti):
U(V,ti)(x3d)=V(x3d−u(x3d,ti)).
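The transformation above can be sketched in a few lines. This is a minimal sketch, assuming nearest-neighbour sampling with edge clamping for simplicity; a real implementation would interpolate (e.g., trilinearly), and the function name is illustrative.

```python
import numpy as np

def transform_volume(V, u):
    """Sketch of U(V, t_i)(x) = V(x - u(x, t_i)).

    V is a 3D volume; u is a displacement field of shape (3,) + V.shape.
    Nearest-neighbour sampling and edge clamping are simplifying assumptions.
    """
    idx = np.indices(V.shape).astype(float)      # voxel coordinates x
    src = np.rint(idx - u).astype(int)           # sampling locations x - u(x)
    for d in range(3):                           # clamp to stay inside the volume
        src[d] = np.clip(src[d], 0, V.shape[d] - 1)
    return V[tuple(src)]

# A displacement of one voxel along the first axis shifts the volume by one.
V = np.arange(27, dtype=float).reshape(3, 3, 3)
u = np.zeros((3, 3, 3, 3))
u[0] = 1.0
out = transform_volume(V, u)
```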
The 3D+t motion field is estimated by minimizing an energy function:
E(u(x3d,ti))=Eimage(u(x3d,ti))+λ·Emodel(u(x3d,ti)),
where Eimage and Emodel are image and model energy terms, respectively, and λ is a weight of the model energy. The minimization searches, at each time step, for the motion field that best balances two goals: the transformed volume matching the respective 2D MR image at that time step, and the motion field matching the motion predicted by a respiratory model. According to a possible implementation, once the 3D+t motion field is calculated for each time step, the 3D+t motion field may be interpolated to generate a continuous 3D+t motion field over the time period of the PET image acquisition.
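The structure of the minimization can be illustrated with a toy example. This sketch reduces the motion field at one time step to a single translation parameter and uses stand-in quadratic energy terms; the function names, candidate grid, and weight value are assumptions for illustration only.

```python
import numpy as np

def estimate_motion(e_image, e_model, candidates, lam=0.5):
    """Grid-search the candidates for the minimizer of
    E(u) = E_image(u) + lam * E_model(u)."""
    energies = [e_image(u) + lam * e_model(u) for u in candidates]
    return candidates[int(np.argmin(energies))]

# Toy example: the image term prefers u = 2, the model term prefers u = 3;
# with equal weight the minimizer lands halfway between them.
best = estimate_motion(lambda u: (u - 2.0) ** 2,
                       lambda u: (u - 3.0) ** 2,
                       candidates=np.linspace(0, 5, 51), lam=1.0)
```

In practice the search space is a dense deformation field rather than a scalar, and gradient-based optimization would replace the grid search, but the trade-off between the two energy terms is the same.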
The image energy Eimage compares the transformed volume and the 2D MR image at each time step. In an advantageous implementation, the image energy Eimage can be defined as a negative intensity-based similarity measure between the 2D MR image I(x2d,ti) and the corresponding slice extracted from the transformed volume U(V,ti):
Eimage(u(x3d,ti))=−Σiδ(U(V,ti)(D(ti)(x2d)),I(x2d,ti)),
where δ(•,•) is an intensity based similarity measure. For example, choices for δ(•,•) include, but are not limited to, Mutual Information, Cross Correlation, and Pattern Intensity measures.
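As an illustration of the image energy, the sketch below uses normalized cross correlation as the similarity measure δ; the function names are illustrative, and the slices extracted from the transformed volume are assumed to be given.

```python
import numpy as np

def cross_correlation(a, b):
    """Normalized cross correlation between two 2D images; 1.0 means
    identical up to an affine intensity change."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def image_energy(extracted_slices, mr_images):
    """E_image = -sum over i of delta(slice from U(V, t_i), I(., t_i))."""
    return -sum(cross_correlation(s, m)
                for s, m in zip(extracted_slices, mr_images))

# An image compared against itself gives similarity 1, hence energy -1.
a = np.arange(16, dtype=float).reshape(4, 4)
e = image_energy([a], [a])
```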
The model energy Emodel compares the motion field to a predicted motion field using a respiratory model. In an advantageous implementation, the model energy Emodel is the dissimilarity between the 3D+t motion field u(x3d,ti) and a respiratory motion model u0(x3d,p), where p∈[0,1) is the phase of breathing. The respiratory motion model describes a typical motion pattern for different phases of breathing. The respiratory model provides an additional constraint to improve accuracy of the motion field calculation using the sparse 2D MR images. A navigation signal, such as a respiratory belt, is used to detect the breathing phase p(ti) corresponding to each time point ti, and the model energy can be calculated as:
Emodel(u(x3d,ti))=ΣiD(u(x3d,ti),u0(x3d,p(ti))),
where D(•, •) measures the difference between the two deformation fields.
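A minimal sketch of the model energy is shown below, assuming the mean squared difference between displacement fields as the choice of D; the function names are illustrative.

```python
import numpy as np

def field_difference(u, u0):
    """D(u, u0): mean squared difference between two displacement fields,
    one assumed choice for the deformation-field distance."""
    return float(np.mean((u - u0) ** 2))

def model_energy(fields, model_fields):
    """E_model = sum over i of D(u(., t_i), u0(., p(t_i)))."""
    return sum(field_difference(u, u0)
               for u, u0 in zip(fields, model_fields))

# Toy check: a one-voxel displacement everywhere vs. a zero model field.
e = model_energy([np.ones((3, 2, 2, 2))], [np.zeros((3, 2, 2, 2))])
```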
The target of the respiratory motion model is to describe a typical breathing pattern for a human. Mathematically, the respiratory motion model is described as a 3D+p motion field with respect to a 3D MRI atlas, denoted as û(x3d,p), where p is a breathing phase. The standard motion model û(x3d,p) can be generated by statistical modeling, biomechanical modeling, or elastic modeling. The embodiment described herein focuses on statistical modeling for explanation purposes, but the present invention is not limited to any specific modeling technique.
A statistical model for the respiratory motion model is trained from a group of training data, which is typically 3D+t MR or CT data of the chest and/or abdomen, covering at least one breathing cycle in the time domain. Dynamic motion fields are then extracted from the training data using image registration techniques. The extracted 3D+t motion fields are subject-specific to the training data, denoted as uk(x3d,p). To generate a standard (or average) motion model, a reference frame is selected for a certain breathing phase (e.g., max inhale) from each training dataset and each reference frame is registered to an atlas to generate a respective warping function φk. The subject-specific motion fields are then warped to the atlas and averaged to generate a standard motion model:

û(x3d,p)=(1/K)Σkφk·uk(x3d,p),

where K is the number of training datasets.
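The warp-then-average construction of the standard model can be sketched as follows. This is a toy sketch: each warping function φk is modeled as a plain callable acting on a motion field (the identity here), standing in for a full deformable warp, and all fields are assumed to be resampled on a common atlas grid.

```python
import numpy as np

def standard_motion_model(subject_fields, warps):
    """Average the subject-specific fields u_k after mapping each into
    atlas space with its warping function phi_k (here, simple callables)."""
    warped = [phi(u) for phi, u in zip(warps, subject_fields)]
    return np.mean(warped, axis=0)

u1 = np.full((3, 4, 4, 4), 2.0)   # subject 1 field (3 components per voxel)
u2 = np.full((3, 4, 4, 4), 4.0)   # subject 2 field
identity = lambda f: f            # placeholder warp for illustration
u_hat = standard_motion_model([u1, u2], [identity, identity])
```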
To apply the model to a patient, the 3D MR image of the patient is registered to the atlas to obtain a warping function φ0. The inverse of the warping function is then applied on the standard model û(x3d,p) to generate a subject-specific motion model:
u0(x3d,p)=φ0−1·û(x3d,p).
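The specialization step can be sketched as applying the inverse warp to the standard model at each breathing phase. In this toy sketch the inverse warp φ0−1 is modeled as a simple callable (a global scaling), standing in for the inverse of the patient-to-atlas registration; the names and values are illustrative.

```python
import numpy as np

def subject_specific_model(u_hat_by_phase, phi0_inv):
    """u0(x, p) = phi0^{-1} applied to u_hat(x, p) for each phase p."""
    return {p: phi0_inv(field) for p, field in u_hat_by_phase.items()}

# Standard model sampled at two breathing phases; toy inverse warp halves
# every displacement (a stand-in, not a real deformable inverse warp).
u_hat = {0.0: np.ones((3, 2, 2, 2)), 0.5: 2.0 * np.ones((3, 2, 2, 2))}
u0 = subject_specific_model(u_hat, lambda f: 0.5 * f)
```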
If the image acquisition time is not critical, the subject-specific motion model u0(x3d,p) can also be obtained by performing gated 3D+t MR scanning before the PET data acquisition to reconstruct a whole breathing cycle. The 3D MR images at different breathing phases can then be registered to a reference frame to obtain a subject-specific motion model for the patient.
Returning to
At step 212, the motion corrected PET image is output. For example, the motion corrected PET image can be output by displaying the motion corrected PET image on a display device of a computer system. The motion corrected PET image can be displayed alone or together with the 3D and/or 2D MR images. The motion corrected PET image can also be combined with the 3D or 2D MR images resulting in a PET/MRI fusion image, which can be displayed on a display device of a computer system. The motion corrected PET image can also be output by storing the motion corrected PET image on a memory or storage of a computer system.
The above-described methods for MRI-based motion correction for PET images can be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high-level block diagram of such a computer is illustrated in
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 61/828,976, filed May 30, 2013, the disclosure of which is herein incorporated by reference.
Number | Date | Country | |
---|---|---|---|
20140355855 A1 | Dec 2014 | US |
Number | Date | Country | |
---|---|---|---|
61828976 | May 2013 | US |