The following generally relates to reducing image artifacts and finds particular application to cone beam computed tomography (CT). However, it is also amenable to other medical imaging applications and to non-medical imaging applications.
A variety of exact reconstruction algorithms exist for cone beam computed tomography. Such algorithms are able to reconstruct attenuation image data of scanned structure of a subject or object substantially without cone beam artifacts such as streaks and/or intensity drops. Unfortunately, such algorithms require complete projection data, and some scans, such as circular trajectory axial cone beam scans, generate incomplete projection data in that some regions of the scanned field of view are not adequately sampled: there is at least one plane that intersects such a region but does not intersect the source trajectory. As a consequence, the reconstructed images may suffer from cone beam artifacts resulting from strong z-gradients in the scanned structure.
One technique for reducing cone beam artifacts includes subtracting the artifacts directly from the reconstructed image. Such a technique may include performing a first pass reconstruction to generate first image data, segmenting the first image data into several tissue types such as water, air, bone, etc., forward projecting the segmented image data back into the acquisition geometry, performing a second pass reconstruction on the forward projected data to generate second image data, generating a difference image based on the difference between the segmented image data and the second image data, and subtracting the difference image data from the acquisition image data using a suitable multiplicative factor and/or an optional additional filtering step to generate corrected image data. Unfortunately, the choice of the multiplicative factor as well as the choice of additional filtering is not straightforward.
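The subtraction-based correction just described can be sketched as follows. This is a minimal illustration, assuming NumPy; `forward_project` and `reconstruct` are caller-supplied stand-ins for the scanner's actual operators, and the function names are hypothetical, not from the description.

```python
import numpy as np

def subtraction_correct(acquired, segmented, forward_project, reconstruct,
                        factor=1.0):
    """Two-pass subtraction correction: re-project the segmented
    first-pass image, reconstruct it again, take the difference
    (which carries the simulated artifact), and subtract a scaled
    copy of that difference from the acquired image.

    `forward_project` and `reconstruct` are placeholders supplied
    by the caller; choosing `factor` is the non-trivial part noted
    in the text."""
    second = reconstruct(forward_project(segmented))
    artifact = second - segmented          # difference image emphasizing artifact
    return acquired - factor * artifact
```

With ideal (identity) operators the simulated artifact vanishes and the acquired image passes through unchanged; an operator that introduces a bias has that bias subtracted back out.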
Another technique includes extracting knowledge about gradients in the object or subject from the image data, and using that information to re-simulate cone-beam artifacts in a second pass in order to later eliminate the artifact from the image data by subtraction. Unfortunately, this correction is generally limited to a central sub-region of the scanned field-of-view, and the artifact generally increases with distance from the central plane.
With respect to applications involving scanning a moving object such as a human or animal, contrast based gated rotational acquisitions are often based on sparse or incomplete angular sampling in which data is not available or is missing for a portion of the angular sampling interval. The sparse angular sampling may limit the image quality of the reconstruction. For example, when using a single circular arc acquisition with a concurrently acquired ECG signal, the gating of the projection data leads to artifacts such as streaks in the reconstruction volume. The artifacts may be overcome by performing multiple circular arc acquisitions. Unfortunately, this leads to longer acquisition times and an increased patient dose.
Aspects of the present application address the above-referenced matters and others.
According to one aspect, a method includes generating simulated complete projection data based on acquisition projection data, which is incomplete projection data, and virtual projection data, which completes the incomplete projection data, and reconstructing the simulated complete projection data to generate volumetric image data.
According to another aspect, a method includes supplementing acquisition image data generated from incomplete projection data with supplemental data to expand a volume of a reconstructable field of view and employing an artifact correction to correct a correctable field of view based on the expanded reconstructable field of view.
According to another aspect, a system includes a projection data completer that generates simulated complete projection data based on acquisition incomplete projection data and virtual projection data that completes the acquisition incomplete projection data, and a reconstructor that reconstructs the simulated complete projection data to generate volumetric image data indicative thereof.
According to another aspect, a system includes an image data supplementor that supplements acquisition image data generated from incomplete projection data with supplemental data to expand a volume of a reconstructable field of view and a correction unit that employs an artifact correction algorithm to correct a correctable field of view that is based on the expanded reconstructable field of view.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
A radiation source 110 is supported by and rotates with the rotating gantry 104 around the examination region 106 about a z-axis. The radiation source 110 travels along a source trajectory such as a circular or other source trajectory and emits radiation that traverses the examination region 106. A collimator 112 collimates the emitted radiation to produce a generally conical, fan, wedge, or other shaped radiation beam.
A radiation sensitive detector array 114 detects photons that traverse the examination region 106 and generates projection data indicative thereof. A reconstructor 116 reconstructs the projection data and generates volumetric image data indicative of the examination region 106.
A patient support 118, such as a couch, supports the patient for the scan. A general purpose computing system 120 serves as an operator console. Software resident on the console 120 allows the operator to control the operation of the system 100. Such control may include selecting a protocol that employs an incomplete projection data artifact correction algorithm to correct for artifact associated with reconstructing incomplete projection data.
The scanner 100 can be used to perform various acquisitions. In one instance, the scanner 100 is used to perform an acquisition in which incomplete projection data is generated. An example of such an acquisition includes, but is not limited to, a circular trajectory, axial cone beam scan.
In one embodiment, a projection data completer 122 completes the incomplete projection data with virtual projection data to generate complete projection data. As described in greater detail below, this can be achieved by expanding the incomplete projection data in radon space through extrapolation or otherwise to generate missing data that completes the incomplete projection data. This resulting simulated complete projection data can be used to generate an image with an image quality about the same as an image quality of an image generated with complete projection data obtained during acquisition.
In an alternative embodiment, a data supplementor 124 supplements the image data generated with the incomplete projection data. As described in greater detail below, this can be achieved based on a model indicative of the scanned object or subject in which the model is registered to the image data and used to determine structure absent in the image data. A correction unit 126 corrects the supplemented image data for incomplete projection data artifacts such as cone beam artifacts. Supplementing the data as such allows for a correction of a larger portion of the field of view relative to a configuration in which the image data is not supplemented.
A forward projector 204 forward projects the segmented image data into a suitable geometry, including a geometry that is different from the acquisition geometry. A geometry bank 206 includes N different virtual geometries 208 and 210, where N is an integer. A suitable virtual geometry includes a virtual geometry having a source trajectory that completes the incomplete projection data. For instance, a suitable trajectory includes a trajectory that, when combined with the acquisition trajectory, intersects every plane that intersects the field of view. A suitable geometry may also be dynamically determined, automatically and/or based on user input, when needed.
By way of non-limiting example, where the acquisition geometry includes a circular trajectory, a suitable virtual geometry may include a geometry with a line trajectory, another circle trajectory that is orthogonal to the plane of the acquisition trajectory, a spiral trajectory, a saddle trajectory, etc.
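As a toy illustration of such complementary trajectories, the sketch below (assuming NumPy; the function names are illustrative) generates a circular acquisition trajectory in the z = 0 plane together with a virtual line trajectory parallel to the z-axis, the circle-plus-line combination named above.

```python
import numpy as np

def circle_trajectory(radius, n):
    """Acquisition trajectory: n source positions on a circle in the
    z = 0 plane (a circular axial cone beam scan)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.stack([radius * np.cos(t), radius * np.sin(t), np.zeros(n)], axis=1)

def line_trajectory(radius, z_extent, n):
    """Virtual completion trajectory: n positions on a line through
    (radius, 0, 0) parallel to the z-axis. Combined with the circle,
    planes that miss the circle can still intersect the line."""
    z = np.linspace(-z_extent, z_extent, n)
    return np.stack([np.full(n, radius), np.zeros(n), z], axis=1)
```

Each function returns an (n, 3) array of source positions; a forward projector would then simulate the virtual projections along the second trajectory.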
Returning to
The generated simulated complete projection data is reconstructed by the reconstructor 116 or otherwise based on the weights using a suitable reconstruction algorithm. An example of a suitable reconstruction algorithm includes, but is not limited to, an exact reconstruction algorithm such as an exact reconstruction algorithm for piecewise differentiable source trajectories. Other exact reconstruction algorithms can alternatively be used.
As noted above, by completing the incomplete projection data, the image quality of an image generated with the simulated complete projection data is about the same as the image quality of an image generated with complete projection data obtained during acquisition.
It is to be appreciated that virtual projection data can also be generated for other applications. For instance, the above approach can be used to generate virtual data such as cardiac phase data for a cardiac scan for phases outside of the gating window and/or other virtual data for other applications.
The data generator 402 registers or fits the model with the acquisition image data to map the anatomical or structural information in the model to the anatomical or structural information in the acquisition image data, which also maps the anatomical or structural information in the model that is missing in the acquisition image data to the acquisition image data. The registration may include an iterative approach in which the registration is adjusted until a similarity measure and/or other criteria is satisfied. Optionally, an operator may also manually adjust the registration. When registered, anatomical or structural information in the model that is not in the acquisition image data can be generated for the acquisition image data based on the registered model.
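The registration described above can be illustrated with a deliberately simplified sketch, assuming NumPy: a one-dimensional model is aligned to the image by exhaustively searching integer shifts against a sum-of-squared-differences similarity measure (a toy stand-in for a full iterative registration), after which model values fill regions flagged as missing. The function names and the choice of similarity measure are illustrative.

```python
import numpy as np

def register_shift(model, image, max_shift=5):
    """Find the integer circular shift of `model` that best matches
    `image`, minimizing sum-of-squared differences (the similarity
    measure driving the iterative adjustment)."""
    best, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        cost = np.sum((np.roll(model, s) - image) ** 2)
        if cost < best_cost:
            best, best_cost = s, cost
    return best

def fill_missing(model, image, missing_mask, max_shift=5):
    """After registration, copy aligned model values into regions
    where the acquisition image has no data (missing_mask == True)."""
    s = register_shift(model, image)
    aligned = np.roll(model, s)
    out = image.copy()
    out[missing_mask] = aligned[missing_mask]
    return out
```

A real implementation would use a multi-dimensional deformable registration, but the structure — optimize a similarity measure, then transfer model structure into the missing regions — is the same.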
The model used by the data generator 402 may be either an existing model pre-stored in a model bank 404 or a model dynamically generated by a model generator 406. Such a model can be general or specific to the scanned object or subject. An example general model may be an abstraction based on a priori knowledge of what the object or subject should look like. The a priori knowledge may include information obtained from literature, previous procedures performed on the object or subject (e.g., a mean or actual past representation), other similar objects or subjects, and/or other information. The abstraction may be graphical and/or a mathematical equation. An example of a specific model includes a model based on information about the scanned object or subject.
One or more of the models stored in the model bank 404 may have been up/down loaded from an external model source to the model bank 404, generated by the model generator 406, and/or otherwise provided.
The model generator 406 can use various approaches to generate a model. For example, the illustrated model generator 406 includes one or more machine learning algorithms 408, which allows the model generator 406 to leverage information such as historical information, patterns, rules, etc. to generate models through computational and statistical methods using classifiers (inductive and/or deductive), statistics, neural networks, support vector machines, cost functions, inferences, etc. By way of example, the algorithms 408 may use input such as size, shape, orientation, location, etc. of the object or a similar object, anatomy of the subject or one or more different subjects, previously generated models, and/or other information to generate a model for the scanned object or subject. Moreover, the model generator 406 may use an iterative approach wherein the model is refined over two or more iterations.
A model refiner 410 can be used to generate a model specific to the scanned object or subject based on a general model and information corresponding to the object or subject. For instance, the model refiner 410 can use information indicative of a size, shape, orientation, location, etc. of the object or anatomy to modify the general model to be more specific to the object or subject. In one embodiment, the model refiner 410 is omitted or not used.
A correction unit 126 corrects the supplemented image data for artifact. In this example, the correction unit 126 employs a multi-pass reconstruction technique such as a subtraction based reconstruction technique. One such technique includes segmenting the supplemented image data into several tissue types such as water, air, bone and/or one or more other tissue types associated with a high gradient tissue interface, forward projecting the segmented image data into the acquisition geometry, reconstructing the forward-projected segmented image data to generate second image data, generating difference image data that emphasizes the artifact based on the difference between the segmented image data and the second image data, and subtracting the difference image data from the supplemented image data to correct the supplemented image data for incomplete data artifact. Other artifact correction techniques may alternatively be used.
Without the supplemental data, the correction unit 126 may only be able to correct a sub-portion of the acquisition image data. This is illustrated in connection with a circular trajectory axial cone beam scan and
Initially referring to
For sake of clarity,
In another embodiment, virtual projection data is generated for a moving object. For explanatory purposes, the following is described in connection with the heart, for example, generating virtual projection data for a cardiac phase in connection with a cardiac application. However, it is to be understood that the object can be any object that moves while being imaged. This embodiment is described with respect to
Initially referring to
At 1504, first image data is generated for the cardiac phase, which may be a relatively low motion (resting) or other phase of the cardiac cycle. In this example, the first image data is a three dimensional reconstruction generated using a gated filtered backprojection reconstruction algorithm using a first gating window. In one instance, the first gating window represents a pre-set range around the phase of interest, and may be weighted such that the data closer to a center region of the window contributes to a greater extent than the data farther from the center region.
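A gating window of this kind might be realized, for example, with a smooth weight that peaks at the phase of interest; the Gaussian profile, cutoff, and parameter names below are assumptions for illustration, not a window shape the description fixes.

```python
import numpy as np

def gating_weights(phases, center, width):
    """Weight each projection by its cardiac phase: data nearer the
    phase of interest contributes more. Phases are in [0, 1) and the
    distance is cyclic; the Gaussian profile is one common choice."""
    d = np.abs(((phases - center + 0.5) % 1.0) - 0.5)  # cyclic phase distance
    w = np.exp(-0.5 * (d / width) ** 2)
    w[d > 3.0 * width] = 0.0                           # hard cutoff outside window
    return w
```

The resulting weights would multiply the projections (or their backprojection contributions) in the gated filtered backprojection.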
At 1506, the first image data is segmented into two or more different types of tissue. Suitable tissue types include, but are not limited to, air, water, bone, contrast agent, etc. As noted above, in one instance the selected tissue types correspond to the structures creating artifacts, for example, structures that generate high z-gradients. The segmentation can be based on a histogram analysis of the image data, a threshold technique that separates the Hounsfield scale into a plurality of disjoint intervals, and/or another segmentation technique. In one non-limiting instance, the projections of the segmentation are filtered, for example, with a Gaussian, median or other filter, which may smooth the reconstruction.
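A threshold segmentation that separates the Hounsfield scale into disjoint intervals could look like the following sketch, assuming NumPy; the interval edges and the nominal tissue values are illustrative choices, not values prescribed by the description.

```python
import numpy as np

def segment_hu(image,
               edges=(-300.0, 300.0, 1200.0),
               labels=(-1000.0, 0.0, 1000.0, 2500.0)):
    """Partition the Hounsfield scale into disjoint intervals and
    assign each voxel the nominal value of its interval, e.g.
    air / water (soft tissue) / bone / contrast agent."""
    idx = np.digitize(image, edges)        # interval index per voxel
    return np.asarray(labels)[idx]
```

The segmented volume is what gets forward projected in the next step; a Gaussian or median filter could optionally be applied to its projections to smooth the second-pass reconstruction.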
At 1508, the segmented image data is forward projected. The segmented image data can be forward projected into the acquisition geometry or a virtual geometry. In one embodiment, the segmented image data is forward projected into views of the cardiac phase that have not been measured, thereby extrapolating or filling in missing data in the acquisition angular sampling interval.
At 1510, the newly generated projection data is reconstructed to generate second image data. In this example, the second image data is a three dimensional reconstruction generated using a gated filtered backprojection reconstruction algorithm using a second gating window.
At 1512, the first and second image data are combined to form third image data. In one instance, the first and second image data are combined as a function of Equation 1:
Third image data = A(first image data) + B(second image data),  (Equation 1)

wherein A and B are weighting functions. The weighting functions A and B can be variously selected. In one instance, A = B = 0.5. In another instance, the weights A and B are not equal. In yet another instance, the sum of the weights does not equal one. It is to be appreciated that the resulting third image data may have fewer artifacts than the first image data and increased signal-to-noise and contrast-to-noise ratios.

The above may be implemented by way of computer readable instructions, which, when executed by a computer processor(s), cause the processor(s) to carry out the acts described herein. In such a case, the instructions are stored in a computer readable storage medium such as memory associated with and/or otherwise accessible to the relevant computer.
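With scalar weights, Equation 1 reduces to a simple weighted sum, for example (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def combine(first, second, a=0.5, b=0.5):
    """Equation 1: third = A*(first image data) + B*(second image data).
    Equal weights average the two reconstructions; unequal weights, or
    weights not summing to one, favor one pass over the other."""
    return a * np.asarray(first) + b * np.asarray(second)
```

With A = B = 0.5 this averages the two reconstructions voxel-wise, which is one way the combined image can gain signal-to-noise ratio over the first image data alone.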
It is to be appreciated that the approaches herein are applicable to other imaging applications, including, but not limited to, a CT system operated in circular mode, a C-arm system that acquires incomplete data along a planar source trajectory, and/or any other imaging application in which an incomplete set of projection data is generated.
The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
| Filing Document | Filing Date | Country | Kind | 371c Date |
|---|---|---|---|---|
| PCT/IB2009/051812 | 5/4/2009 | WO | 00 | 10/27/2010 |
| Number | Date | Country |
|---|---|---|
| 61050801 | May 2008 | US |
| 61084783 | Jul 2008 | US |
| 61087194 | Aug 2008 | US |