The following generally relates to imaging and more particularly to compensating for motion in helical scans and is described with particular application to computed tomography (CT).
A CT scanner generally includes an X-ray tube mounted on a rotatable gantry that rotates around an examination region about a z-axis. The X-ray tube emits radiation that traverses the examination region and a subject or object positioned therein. An X-ray sensitive radiation detector array subtends an angular arc opposite the examination region from the X-ray tube, detects radiation that traverses the examination region, and generates a signal indicative thereof. A reconstructor processes the signal and reconstructs volumetric image data indicative of the examination region.
Subject motion during scanning leads to artifacts such as blurring and/or other image artifacts in the reconstructed volumetric image data. Depending on the severity of the artifacts, the subject may need to be re-scanned, which increases subject dose, and ionizing radiation can cause damage to cells. Instructing the subject to hold their breath during a thorax or abdominal scan can reduce periodic motion due to the respiratory cycle. Furthermore, there are motion compensated reconstruction algorithms that compensate for some periodic motion such as that due to the cardiac cycle and/or the respiratory cycle.
Involuntary motion such as, e.g., coughing, hiccups, or bowel motion, may also occur during a helical scan and likewise can lead to blurring in the reconstructed volumetric image data. The motion pattern for involuntary motion is non-periodic. Unfortunately, the subject may not be able to prevent such motion and motion compensated reconstruction algorithms for periodic motion are not well-suited for compensating for non-periodic motion.
In addition, motion within the object can lead to distortion of the shape of the scanned object in the volumetric image data. The distortion depends on the direction of the motion. For example, motion transverse to the z-axis movement of the table supporting the object leads to shear strain in the x-z-view in the reconstructed image, motion in the direction of the table movement leads to compression in the y-z-view in the reconstructed image, and motion in the direction opposite of the table movement leads to stretching in the y-z-view in the reconstructed image.
As such, there is an unresolved need for another motion compensated reconstruction approach (e.g., one that mitigates at least the above-noted blurring and/or distortion).
Aspects described herein address the above-referenced problems and/or others.
In one aspect, an imaging system includes an X-ray source configured to emit X-ray radiation, a two-dimensional detector array, including a plurality of rows of detectors, configured to detect X-ray radiation and generate a signal indicative thereof, and a reconstructor configured to process the signal and reconstruct volumetric image data corrected for arbitrary motion. The reconstructor is configured to generate at least two temporal motion state images, including a first temporal motion state image, when a slice location of interest is located in a first sub-portion of the two-dimensional detector array, with projection data from a first subset of detector rows, and a second temporal motion state image, when the slice location of interest is located in a second different sub-portion of the two-dimensional detector array, with projection data from a second different subset of detector rows. The reconstructor is further configured to generate a distortion vector field with at least the first and second temporal motion state images, wherein the distortion vector field represents motion, and generate motion compensated volumetric image data, when the slice location of interest is centered on the two-dimensional detector array, with the distortion vector field.
In another aspect, a computer readable medium is encoded with computer executable instructions which, when executed by a processor, cause the processor to: obtain projection data for a helical scan of a subject; reconstruct, for a particular time and image slice location of interest, a first temporal motion state image, corresponding to an earlier time, using projection data from a first subset of detector rows offset from the central row of the detector array in a first direction; reconstruct, for the particular time and image slice location, a second temporal motion state image, corresponding to a later time, using projection data from a second different subset of detector rows offset from the central row in a second direction; estimate a distortion vector field between the first and second temporal motion state images; and construct motion compensated volumetric image data with a motion compensated reconstruction algorithm using the distortion vector field to compensate for arbitrary motion.
In another aspect, an imaging method includes constructing three-dimensional images of different motion states from a single helical scan by applying different aperture weighting functions to an output of different subsets of detector rows of a detector of an imaging system, calculating a distortion vector field between the different temporal motion state images using an image registration algorithm, and reconstructing a motion compensated image which compensates for arbitrary motion using the distortion vector field.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
The following describes a motion compensation approach which compensates for voluntary, involuntary, periodic, non-periodic, and/or other motion. In general, with the approach described herein, the set of rows of detectors is split into two or more sub-sets (e.g., a front and a rear; a front, a center and a rear; etc.) in the z-direction. A time difference between the resulting images is induced by using these sub-sets for the reconstruction. This is in contrast to approaches in which several images, different in time, are each generated from data from all of the rows (the entire detector).
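As a non-limiting sketch, the row splitting described above might be expressed as follows; the helper name and the subset count are illustrative assumptions, not details taken from this description:

```python
import numpy as np

def split_rows(n_rows, n_subsets=2):
    """Split detector row indices into contiguous z-direction subsets
    (e.g., a front half and a rear half). Hypothetical helper for
    illustration only."""
    return np.array_split(np.arange(n_rows), n_subsets)

# A 64-row detector split into a front and a rear subset:
front, rear = split_rows(64, 2)
```

Each subset then feeds its own reconstruction, which induces the time difference between the resulting motion state images.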
The system 100 includes a generally stationary gantry 102 and a rotating gantry 104. The rotating gantry 104 is rotatably supported by the stationary gantry 102 by a bearing (not visible) or the like and rotates around an examination region 106 about a z-axis, which is the axis of rotation. A radiation source 108, such as an X-ray tube, is supported by and rotates with the rotating gantry 104, and emits X-ray radiation.
A radiation sensitive detector array 110 subtends an angular arc opposite the radiation source 108 across the examination region 106, detects radiation traversing the examination region 106, and generates a signal (projection data) indicative thereof. The illustrated radiation sensitive detector array 110 includes a two-dimensional (2-D) array with a plurality of rows arranged with respect to each other along a direction of the z-axis.
A reconstructor 112 processes the signal and generates volumetric image data indicative of the examination region 106. The illustrated reconstructor 112 is configured to utilize, at least, a motion compensated reconstruction algorithm 114 from the reconstruction algorithm memory 116. As described in greater detail below, the motion compensated reconstruction algorithm 114 reconstructs, for helical scans, two or more temporal motion state images representing different motion states for a particular image slice and then uses the two or more temporal motion state images during the reconstruction of the particular image slice. The reconstruction can mitigate motion such as voluntary motion and/or involuntary motion, e.g., due to coughing, hiccups, bowel motion, including periodic and/or non-periodic motion.
The reconstructor 112 can be implemented via hardware and/or software. For example, the reconstructor 112 can be implemented via a processor (e.g., a central processing unit or CPU, a microprocessor, a controller, etc.) configured to execute computer executable instructions stored, encoded, embedded, etc. on computer readable medium (e.g., a memory device), which excludes transitory medium, where executing the instructions causes the processor to perform one or more of the acts described herein and/or another act.
In the illustrated example, the reconstructor 112, the reconstruction algorithm memory 116 and the motion compensated reconstruction algorithm 114 are shown as part of the imaging system 100. In another embodiment, the reconstructor 112, the reconstruction algorithm memory 116 and the motion compensated reconstruction algorithm 114 are separate from the imaging system 100. In either instance, the reconstructor 112 and/or the reconstruction algorithm memory 116 and the motion compensated reconstruction algorithm 114 can be local to or remote from the imaging system 100.
A support 118, such as a couch, supports a subject in the examination region 106 and can be used to position the subject with respect to x, y, and/or z axes before, during and/or after scanning. A computing system serves as an operator console 120, and includes an output device such as a display configured to display the reconstructed images and an input device such as a keyboard, mouse, and/or the like. Software resident on the console 120 allows the operator to control the operation of the system 100, e.g., identifying a reconstruction algorithm, etc.
In this example, the motion compensated reconstruction algorithm 114 includes a motion state reconstruction module 202, a distortion vector field determiner module 204, and a motion compensated reconstruction processor 206. Generally, the motion state reconstruction module 202 reconstructs temporal motion state images which correspond to at least an earlier time point and a later time point relative to a current time when the source 108 is at a certain slice position. One or more other temporal motion state images for another time point, such as a central time point, time points between the earlier/later time points and the central time point, time points between other time points, etc., can also be reconstructed. The distortion vector field determiner module 204 computes a distortion vector field from an image registration of the temporal motion state images. The motion compensated reconstruction processor 206 employs the distortion vector field during the reconstruction of the particular slice position. The vector fields can be stored in memory along with the motion corrected slices.
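The registration algorithm itself is not specified in detail here. As a minimal stand-in, the sketch below estimates a single global translation between two motion state images via phase correlation; a practical implementation would instead use an elastic (deformable) registration producing a per-voxel vector field. All names and the synthetic data are illustrative assumptions:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer translation between two images via phase
    correlation -- a minimal stand-in for the elastic registration a
    real distortion-vector-field estimate would use."""
    f_a = np.fft.fft2(img_a)
    f_b = np.fft.fft2(img_b)
    cross = f_a * np.conj(f_b)
    cross /= np.maximum(np.abs(cross), 1e-12)   # normalized cross power
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into a signed shift range
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Synthetic example: translate an image by (3, -2) and recover the shift
rng = np.random.default_rng(0)
base = rng.random((64, 64))
moved = np.roll(base, shift=(3, -2), axis=(0, 1))
shift = phase_correlation_shift(moved, base)
```

A dense registration would repeat this kind of matching locally (or minimize an image similarity term over a deformation model) to obtain the distortion vector field per voxel.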
The modules 202, 204 and 206 are now described in greater detail in connection with
In this example, the motion state reconstruction module 202 uses a segmented aperture weighting to reconstruct the temporal motion state images. The motion state reconstruction module 202 reconstructs a first temporal motion state image for the first half 308 using a first weighting function and a second temporal motion state image for the second half 310 using a second weighting function. The relative pitch is the ratio of the pitch d to the projected detector height h_det 600 of
Where the pitch is greater than one (1) and less than two (2), the aperture weighting function widths will be larger than half the detector height, since the relative pitch calculated for the fraction of the detector used for reconstruction of the motion state images is smaller than two. Thus, the weighting functions will overlap at the detector center, there will be temporal overlap, and each image will have contributions from both of the halves 308 and 310 of
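A minimal sketch of such overlapping aperture weighting, assuming a raised-cosine window shape and a relative pitch of 1.5 (both illustrative choices; as noted below, the number and shape of the weighting functions are not limiting):

```python
import numpy as np

def aperture_weight(z, center, width):
    """Raised-cosine aperture weight over the detector z-coordinate.
    The window shape is illustrative only; any suitable weighting
    function may be substituted."""
    u = np.clip((z - center) / (width / 2), -1.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * np.abs(u)))

h_det = 1.0                      # normalized projected detector height
rel_pitch = 1.5                  # pitch between one and two
width = h_det / (rel_pitch / 2)  # wider than h_det/2, so the halves overlap
z = np.linspace(0.0, h_det, 129)
w_front = aperture_weight(z, 0.25 * h_det, width)  # front-half window
w_rear = aperture_weight(z, 0.75 * h_det, width)   # rear-half window
```

With a relative pitch between one and two, both windows are non-zero at the detector center, so the two motion state images share some temporal overlap, as described above.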
The number and the shape of the weighting functions 402 and 404 are not limiting. An example of such weighting is discussed in Koken et al., “Aperture weighted cardiac reconstruction for cone-beam CT,” Phys. Med. Biol. 51 (2006) 3433-3448.
Returning to
How the different aperture weighting functions relate to time differences in the motion state images is described next.
The distortion vector field is used to correct for motion artifacts in the motion compensated reconstruction. A comparable example is discussed in Stevendaal et al., "A motion-compensated scheme for helical cone-beam reconstruction in cardiac CT angiography," Med. Phys. 35, 3239 (2008). This reference describes how to take a given distortion vector for an image voxel into account in the reconstruction in order to compensate for impacts of the object motion on the reconstructed image. However, in Stevendaal a periodic/cyclic motion of the subject is assumed. Its aim is to generate an image representing the object at a certain point of time, or, more precisely, at a certain heart phase. In contrast, the method described herein yields a three-dimensional (3-D) image, where each image slice at a certain z-coordinate represents the object at the time the x-ray focal spot had the same z-position.
Returning to
Instead of identifying the motion state images and the corresponding distortion vector fields with row positions of centers of the aperture functions 608 and 610 (
An example motion compensated reconstruction is described in patent U.S. Pat. No. 8,184,883 B2, filed Nov. 14, 2006, and entitled “Motion compensated CT reconstruction of high contrast objects,” the entirety of which is incorporated herein by reference. The approach described herein mitigates motion due to voluntary and/or involuntary periodic and/or non-periodic motion, including motion due to coughing, hiccups, or bowel motion. It can be used to recover scans in which the subject coughed, breathed, and/or had other motion, and thus avoid a need to re-scan and subject the patient to additional dose. This may be valuable particularly for young children or lung screening patients. The motion compensated reconstruction algorithm described herein can be used with other motion compensated reconstruction algorithms.
The ordering of the following acts is for explanatory purposes and is not limiting. As such, one or more of the acts can be performed in a different order, including, but not limited to, concurrently. Furthermore, one or more of the acts may be omitted and/or one or more other acts may be added.
At 802, a spiral scan is performed.
At 804, at least two temporal motion state images are generated for a particular slice location and time at two different times, as described herein.
At 806, a distortion vector field is determined from the at least two temporal motion state images, including a first temporal motion state image at an earlier point in time relative to the time, and a second temporal motion state image at a later point in time relative to the time, as described herein.
At 808, an image is generated with the acquired data using the distortion vector field to mitigate motion artifact, as described herein.
At 810, the motion compensated image is displayed.
The above may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium, which, when executed by a computer processor(s), cause the processor(s) to carry out the described acts. Additionally or alternatively, at least one of the computer readable instructions is carried by a signal, carrier wave or other transitory medium.
The volumetric image data can also be corrected for the distortion due to the direction of motion within the object relative to movement of the subject support 118 (
In
where Δh_det represents a distance between aperture weighting functions measured on a physical detector of the array 110, rFA represents a distance between an x-ray focal spot of the source 108 and a rotation axis 602 of the imaging system 100, rFD represents a distance between the x-ray focal spot and the detector, and vt represents a subject support speed. rFA and rFD are known and constant for an imaging system, and vt can be obtained from the table speed scan parameter in the plan of a scan.
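The equation itself is not reproduced above. One reading consistent with these variable definitions is that the detector-row offset is projected to the rotation axis by the factor rFA/rFD and divided by the table speed, yielding the time difference between the motion state images. A sketch under that assumption, with made-up geometry values:

```python
def motion_state_time_difference(delta_h_det, r_fa, r_fd, v_t):
    """Time difference between motion state images, assuming the
    detector-row offset is projected to the rotation axis and divided
    by the table speed:
        dt = delta_h_det * (r_fa / r_fd) / v_t
    This form is inferred from the variable definitions in the text,
    not quoted from it."""
    return delta_h_det * (r_fa / r_fd) / v_t

# Illustrative (made-up) geometry: 40 mm offset on the detector,
# 570 mm focus-to-axis, 1040 mm focus-to-detector, 50 mm/s table speed.
dt = motion_state_time_difference(40.0, 570.0, 1040.0, 50.0)
```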
That is, for a transversal slice with z-coordinate z, the distortion vector field m⃗_j(x, y, z) represents the motion in the object between the point of time t(z), where the x-ray focal spot had the same z-location, and the time t(z)−jΔtD, which is the mean acquisition time of short scan image number j. Since short scan image number "0" is acquired with its aperture function centered on the detector, and therefore also centered at the focal spot z-position, it is used as the reference image for the distortion vector field estimation, and the distortion vector field related to it vanishes. In order to transform the motion corrected image I(x,y,z) into an undistorted image representing the object around a certain z-coordinate z0 at the time point t0, the reconstructor 112, in one non-limiting instance, executes the below described algorithm.
In one non-limiting example, z0 is the z-coordinate of an image slice and represents the object at the time t0, and z′ is the z-coordinate of a neighboring slice in the motion corrected image and represents the object at the time Δt′, where Δt′=(z′−z0)/vt. To correct the motion within the time interval Δt′, i.e., to undo the motion within this time interval, the reconstructor 112 determines the corresponding distortion vector field related to −Δt′ by interpolating the distortion vector fields used for the motion compensated reconstruction. For example, the reconstructor 112 can employ a linear interpolation by finding j so that jΔtD≤−Δt′<(j+1)ΔtD and then calculating the interpolated vector field as shown in EQUATION 2:
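EQUATION 2 is not reproduced here; the sketch below implements one standard linear interpolation consistent with the bracketing rule jΔtD ≤ −Δt′ < (j+1)ΔtD, as an illustrative assumption (the per-state fields and their indexing are hypothetical):

```python
import numpy as np

def interpolate_field(fields, dt_d, dt_prime):
    """Linearly interpolate the distortion vector field for time -dt_prime
    from the per-state fields m_j (fields[j]), following the bracketing
    rule j*dt_d <= -dt_prime < (j+1)*dt_d described in the text. This is
    one standard linear interpolation consistent with that rule, not a
    quotation of EQUATION 2."""
    t = -dt_prime
    j = int(np.floor(t / dt_d))
    frac = t / dt_d - j
    return (1.0 - frac) * fields[j] + frac * fields[j + 1]

# Toy example: interpolate a quarter of the way between m_0 and m_1
fields = {0: np.zeros(2), 1: np.array([2.0, 0.0])}
m = interpolate_field(fields, dt_d=1.0, dt_prime=-0.25)
```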
Other linear and/or a non-linear interpolation can alternatively be used.
Similarly, the reconstructor 112, for time differences |Δt′|>n·ΔtD, performs an extrapolation. As a result, a three-dimensional distortion vector field m⃗_j(x, y, z′) is constructed for all image slices with z-coordinates z′
This vector field is referred to herein as a "distortion correction vector field." In one instance, this approach is limited to this z-range around z0, since only within this range is the distortion vector field estimated with interpolation and/or extrapolation. The parameter z0 can be chosen freely, so the reconstructor 112 can calculate an undistorted image for the neighborhood of any chosen location. The undistorted image is then generated by warping the motion compensated image I(x,y,z) with the distortion correction vector field m⃗_z0: I_undistorted(x⃗) = I(x⃗ + m⃗_z0(x⃗)), where x⃗ = (x, y, z).
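The warping step can be sketched with nearest-neighbor sampling; a real implementation would likely use a higher-order interpolation, and all names here are illustrative:

```python
import numpy as np

def warp_nearest(image, field):
    """Warp image I with vector field m: out(x) = I(x + m(x)), using
    nearest-neighbor sampling and edge clamping as a minimal sketch."""
    out = np.empty_like(image)
    for idx in np.ndindex(image.shape):
        src = tuple(
            int(np.clip(np.rint(idx[k] + field[idx][k]),
                        0, image.shape[k] - 1))
            for k in range(image.ndim)
        )
        out[idx] = image[src]
    return out

# Toy 2-D example: a constant field shifting the sampling point by one
# column; the last column is clamped at the image edge.
img = np.arange(16.0).reshape(4, 4)
shift = np.zeros((4, 4, 2))
shift[..., 1] = 1.0
warped = warp_nearest(img, shift)
```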
The distortion, and thus also the undistorted image, will change with z0, so the limits of the correction in +z and −z-direction, i.e.,
should be indicated in the image viewer. When starting the image viewer in coronal, sagittal, or 3-D mode, the motion corrected image is initially displayed.
The amount of the distortion due to object motion for the region can be calculated from the distortion vector fields, as described herein. In this case it is the distortion vector fields which should be used, since they are available without additional processing. This may be used to indicate in which image regions distortions are present and where the described correction method should be applied. An example is shown in
where the maximum norm of the distortion correction vector field is scaled with the corresponding absolute time difference |j|ΔtD in order to convert it to a measure for the maximum speed within the object. Instead of using only one distortion vector field m⃗_j for the metric, one may also use several, for example by averaging D_max,j for all j≠0.
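A sketch of this maximum-speed metric, together with a recommendation check against a fraction of the table speed, using fabricated values purely for illustration:

```python
import numpy as np

def max_object_speed(field, j, dt_d):
    """Convert the maximum norm of distortion vector field m_j into a
    maximum-speed metric by relating it to the absolute time difference
    |j|*dt_d (sketch of the metric described in the text)."""
    norms = np.linalg.norm(field, axis=-1)
    return float(norms.max() / (abs(j) * dt_d))

# Made-up field: largest displacement 2 mm over |j|*dt_d = 0.5 s -> 4 mm/s
field = np.zeros((8, 8, 2))
field[3, 3] = (2.0, 0.0)
speed = max_object_speed(field, j=1, dt_d=0.5)
table_speed = 50.0                                 # mm/s, hypothetical
recommend_correction = speed > 0.1 * table_speed   # 10% threshold
```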
The console 120 can compute this metric and recommend the distortion correction if the metric (speed) exceeds a predetermined threshold. An example threshold is 10% of the table speed. Additionally or alternatively, the user can manually activate, or the console 120 can automatically activate, the distortion correction whenever a geometrical measurement at a specific image location, such as volume of a nodule or extent of a lesion, is requested.
In one embodiment, both values of a measurement, corrected and uncorrected, are displayed. The console 120 can also compare these values and attach a reliability value to the measurement, e.g. indicating an unreliable measurement in case of large discrepancy and/or large local motion. In another embodiment, and as shown in
are transformed with the same distortion correction vector field as the outermost slices of the undistorted image, i.e., by using the distortion correction vector fields
The ordering of the following acts is for explanatory purposes and is not limiting. As such, one or more of the acts can be performed in a different order, including, but not limited to, concurrently. Furthermore, one or more of the acts may be omitted and/or one or more other acts may be added.
At 1302, a spiral scan is performed.
At 1304, volumetric image data is generated using a vector field to mitigate motion artifact, as described herein and/or otherwise.
At 1306, a z-axis location of interest is determined for the volumetric image data, as described herein and/or otherwise.
At 1308, a predetermined region about the z-axis location is corrected for distortion with a distortion correction vector field, as described herein.
At 1310, a motion corrected image with the distortion corrected region is displayed.
The above may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium, which, when executed by a computer processor(s), cause the processor(s) to carry out the described acts. Additionally or alternatively, at least one of the computer readable instructions is carried by a signal, carrier wave or other transitory medium.
In helical CT, other weighting schemes may be used and adapted to achieve the same effect as the aperture weighting described here for the reconstruction of the motion state images. An example is illustrated in Grass, et al., "Helical cardiac cone beam reconstruction using retrospective ECG gating," Phys. Med. Biol. 48 (2003) 3069-3084. In Grass, the illumination window is introduced, which describes, for each voxel, the time period during which it is exposed to x-rays in the helical scan. Let Φf and Φl be the first and the last projection in which a voxel is illuminated. For the sake of convenience, it is assumed here that the voxel is located on the gantry axis of rotation. Then, for example, using only projections from the first half of the illumination window, i.e., from projection Φf to ½(Φf+Φl), by designing the projection-based weighting function accordingly, yields the same voxel value in the reconstructed image as the aperture weighting function 402 taking into account only line integral values measured with the front half of the detector. In general, by using a dedicated voxel-dependent weighting scheme, the same motion state images can be achieved as with the detector aperture weighting described before. Therefore, detector aperture weighting here refers to weighting schemes which can be translated to, or have the same impact as, the aperture weighting functions described here.
The approach described herein may also be applied iteratively, i.e., the distortion vector fields computed as described above may be used for motion compensated reconstruction of a second set of motion state images. When the motion compensation works perfectly, these second set of motion state images will not show any differences, since all object motions are cancelled out. However, an incomplete motion compensation will lead to differences in the second set of motion state images. The second set distortion vector fields determined from this second set of motion state images describes the remaining object motion not compensated in the first iteration. Therefore, the sums of the first distortion vector fields and the second distortion vector fields describe the object motion better and can be used for an improved motion-compensated reconstruction. They can also be used for the reconstruction of a third set of motion state images serving as input for a third iteration.
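A toy sketch of this iterative refinement, in which the residual field estimated from the second set of motion state images is added to the first-pass field (all values are fabricated purely for illustration):

```python
import numpy as np

def refine_field(total_field, residual_field):
    """One refinement step: add the residual distortion field estimated
    from the second set of motion state images to the field from the
    previous iteration (sketch of the iterative scheme in the text)."""
    return total_field + residual_field

# Toy example: suppose the first pass recovers 80% of the true motion,
# and the second pass estimates the remaining 20% as a residual field.
true_motion = np.full((4, 4), 1.0)
first_pass = 0.8 * true_motion
residual = true_motion - first_pass
combined = refine_field(first_pass, residual)
```

Further iterations would repeat the same accumulation with each new residual field.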
The approach described herein can be applied with spectral CT (e.g., a photon counting detector, multi-layer detector, etc.) and/or phase contrast CT. For these, preprocessed projection data (e.g., projection data quantifying iodine or other contrast agents) may be used for the estimation of the distortion vector fields, and the latter may be used for all image types in the motion compensated reconstruction.
With the approach described herein, the type of object motion corrected is arbitrary (periodic, non-periodic, etc.). The motion state images are generated from different subsets of detector rows. The motion vector field applied during reconstruction depends on the detector row hit by the x-ray path corresponding to the line integral through the voxel being reconstructed or, equivalently, on the difference between the z-coordinate of the voxel being reconstructed and the z-coordinate of the focal spot (i.e., the origin of the x-ray path). The resulting image displays each object slice at its state when the focal spot had the same z-coordinate. This is in contrast to a conventional approach (e.g., Stevendaal), in which the type of object motion corrected is cyclic, the motion state images are generated from different sets of whole projections acquired at different times/heart phases, the motion vector field depends on time/heart phase, and the resulting image displays the object at one point of time/a certain heart phase.
The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2017/072672 | 9/11/2017 | WO | 00 |