In the example shown, the hybrid scanner 10 has X-ray CT detectors 12 and NM (PET or SPECT) detectors 14 disposed within a single gantry 16, and a patient bed 18 is movable therein to expose a selected region of the patient to either or both scans. Image data are collected by each modality and then stored in a data storage medium, such as a hard disk drive, for subsequent retrieval and processing.
At step 205, the CT and NM image volumes are co-registered. For registration purposes, the NM image volume may be treated as the reference (i.e., unchanged) volume and the CT image volume as the object (i.e., transformed) volume, or vice versa. Co-registration of multi-modality images is well known in the art; see, e.g., U.S. Published Patent Application No. 2006/0004274 A1 to Hawman; No. 2006/0004275 A1 to Vija et al.; and No. 2005/0094898 A1 to Xu et al., all incorporated herein by reference. Accordingly, image co-registration will not be further described herein. It is noted, however, that for hybrid scanners the co-registration step may be omitted where the CT and NM modalities share the same coordinate space.
At step 206, organ templates of the object of interest (e.g., the left ventricle (LV) of the heart) are derived from the reconstructed CT image data by generating a mask containing non-zero pixel values only for spatial coordinates corresponding to areas including the object, and zero pixel values everywhere else. The mask volume is then re-formatted into a volume having the same voxel (i.e., volume element) and matrix dimensions as the NM volume. The non-zero CT mask voxels are then assigned a predefined uniform value or number that is similar to the NM values for the object (e.g., in the case of cardiac imaging, the non-zero CT mask voxels each may be assigned the mean LV value of the corresponding NM image data).
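The template-generation operation of step 206 can be sketched as follows. This is an illustrative Python sketch only: the threshold-based segmentation is a stand-in for whatever organ segmentation the embodiment actually uses, and the function name, thresholds, and array shapes are assumptions, not part of the described method.

```python
import numpy as np
from scipy.ndimage import zoom

def ct_template(ct_vol, lo, hi, nm_vol):
    """Sketch of step 206: build a mask that is non-zero only where the
    CT volume contains the object (here, via simple thresholding as a
    hypothetical segmentation), re-format it to the NM matrix size, and
    fill the non-zero voxels with the mean NM value inside the mask."""
    mask = ((ct_vol >= lo) & (ct_vol <= hi)).astype(float)
    # Re-format the CT-grid mask to the NM voxel/matrix dimensions.
    factors = [n / c for n, c in zip(nm_vol.shape, mask.shape)]
    nm_mask = zoom(mask, factors, order=0)
    # Assign a uniform value similar to the NM values for the object
    # (e.g., the mean LV value in the cardiac example).
    vals = nm_vol[nm_mask > 0]
    mean_val = vals.mean() if vals.size else 0.0
    return nm_mask * mean_val
```

The nearest-neighbor resampling (`order=0`) keeps the re-formatted mask binary before the uniform value is applied.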
At step 207, the re-formatted, uniform value CT mask templates are forward-projected from the CT object volume to a “reference” NM projection space. The reference NM projection space is based on the device model of the corresponding NM device, which includes the NM detector response model, patient-specific attenuation data, and scatter model. Additional parameters may be included in the model such that the reference projection space may also take into account other phenomena such as statistical or “Poisson” noise, and pharmacodynamic or pharmacokinetic properties of the particular radiopharmaceutical or biomarker used in the NM imaging application.
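A minimal stand-in for the forward projection of step 207 is shown below. It is a toy parallel-beam projector: a complete NM device model would also fold in the detector response, patient-specific attenuation, scatter, and the other phenomena noted above, none of which are modeled here.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Toy parallel-beam forward projector: rotate the 2-D slice and sum
    along columns to approximate line integrals at each acquisition
    angle. This stands in for the full NM device model of step 207."""
    projections = []
    for angle in angles_deg:
        rotated = rotate(image, angle, reshape=False, order=1)
        projections.append(rotated.sum(axis=0))
    return np.stack(projections)  # shape: (n_angles, n_detector_bins)
```

At a 0-degree angle the projection reduces to a plain column sum of the slice, which makes the projector easy to sanity-check.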
Next, at step 208, the forward-projected CT mask templates in the NM reference projection space are convolved with the original NM projections as acquired at step 202 to produce a convolution matrix for each projection. To avoid detection of false maxima, the convolution operation may be limited to a predetermined search area, such as a predefined area surrounding the object of interest. At step 209, the maximum value of the convolution matrix is determined, and its spatial location is identified in order to detect whether object motion has occurred. For instance, where the maximum value of the matrix is located at the origin (i.e., pixel (0,0)), no motion has occurred and the object positioning within the NM projection space is considered to be accurate. Where the maximum value is located at a pixel other than the origin (0,0), object motion has occurred in the NM projection space, and processing advances to step 210.
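Steps 208 and 209 can be sketched as follows. The sketch realizes the "convolution matrix" as a circular cross-correlation computed with the FFT, which has the convenient property the text describes: the maximum lands at index (0, 0) exactly when no shift is present. The function name and the FFT-based formulation are assumptions for illustration, not the embodiment's required implementation.

```python
import numpy as np

def detect_shift(template, measured):
    """Sketch of steps 208-209: form the matrix as a circular
    cross-correlation of the measured NM projection against the
    forward-projected template, then locate its maximum. A peak at the
    origin (0, 0) indicates no motion; any other peak location gives
    the integer-pixel displacement of the object."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(measured) *
                                np.conj(np.fft.fft2(template))))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative (wrapped) shifts.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Restricting the search to a predefined window around the origin, as the text suggests, would simply crop `corr` before the `argmax`.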
At step 210, the displacement of the NM projection data caused by the detected motion is estimated. Motion estimation can be performed by a number of methods generally known in the art, based on interpolating the displacement of the maximum position from the origin of the convolution matrix to obtain a displacement vector. See, e.g., U.S. Pat. No. 5,973,754 to Panis; U.S. Pat. No. 5,876,342 to Chen et al.; U.S. Pat. No. 5,635,603 to Karmann; U.S. Pat. No. 4,924,310 to von Brandt; and U.S. Pat. No. 4,635,293 to Watanabe et al., all incorporated herein by reference. Accordingly, no further explanation of motion estimation is provided herein.
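One common interpolation scheme for refining the peak position to sub-pixel accuracy is a quadratic (parabolic) fit through the maximum and its two neighbors; it is shown below purely as an example, since the cited patents describe their own estimators.

```python
def subpixel_peak(samples):
    """Parabolic interpolation of a peak from three samples taken at
    offsets -1, 0, +1 around the maximum of the convolution matrix.
    Returns the fractional offset of the true peak from the center
    sample, one axis at a time."""
    left, center, right = samples
    denom = left - 2.0 * center + right
    if denom == 0:
        return 0.0  # flat neighborhood: no sub-pixel refinement possible
    return 0.5 * (left - right) / denom
```

Applying this once per axis to the integer peak found in step 209 yields a sub-pixel displacement vector.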
At step 211, the NM projection data are corrected for the effects of object motion by application of the displacement vector obtained in step 210. A predefined threshold may be used for the displacement vector, such that corrections are performed only when the displacement vector magnitude exceeds the threshold. Next, at step 212, the NM images are again reconstructed for the NM image volume using the motion-corrected and motion-free NM projection data obtained in step 211. The operation is repeated for each projection acquisition angle and/or temporal instance. Additionally, the entire operation of image data reconstruction, optional registration, template creation, forward projection, motion detection and estimation, and correction of projection data can be repeated iteratively until a minimum displacement vector magnitude, another convergence criterion (such as conformance to a sinusoidal trajectory in sinogram space, or maximized image content of the object of interest), or a combination of convergence criteria is obtained.
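The threshold-gated correction of step 211 can be sketched as below. The 0.5-pixel threshold and the linear-interpolation shift are illustrative assumptions; the embodiment leaves the threshold value and resampling method open.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def correct_projection(proj, displacement, threshold=0.5):
    """Sketch of step 211: undo the detected motion by shifting the
    projection along the negative of the displacement vector, but only
    when the vector's magnitude exceeds a predefined (here illustrative)
    threshold; otherwise the data are returned unchanged."""
    dy, dx = displacement
    if np.hypot(dy, dx) <= threshold:
        return proj  # motion-free within tolerance: no correction
    return nd_shift(proj, [-dy, -dx], order=1)
```

The corrected and unchanged projections together feed the re-reconstruction of step 212, and the loop repeats until the chosen convergence criterion is met.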
While embodiments of the invention have been described in detail above, the invention is not intended to be limited to the exemplary embodiments as described. It is evident that those skilled in the art may now make numerous uses and modifications of, and departures from, the exemplary embodiments described herein without departing from the inventive concepts. For example, in addition to correction of NM projection data for object motion within the projection space, the present invention also can be applied to NM partial volume and volume-of-distribution correction in sinogram space, to correction of overlying visceral activity in cardiac PET and SPECT, and to improvements in attenuation correction of NM studies.