Embodiments disclosed herein generally relate to iterative reconstruction in computed tomography (CT) imaging using system optics modeling.
The X-ray beam in most computed tomography (CT) scanners is generally polychromatic. Yet third-generation CT scanners generate images from data acquired according to the energy-integrating nature of their detectors. These conventional detectors are called energy-integrating detectors and acquire energy-integration X-ray data. Photon-counting detectors, on the other hand, are configured to capture the spectral nature of the X-ray source rather than the energy-integration nature. To obtain the spectral nature of the transmitted X-ray data, photon-counting detectors split the X-ray beam into its component energies, or spectrum bins, and count the number of photons in each bin. The use of the spectral nature of the X-ray source in CT is often referred to as spectral CT. Since spectral CT involves the detection of transmitted X-rays at two or more energy levels, spectral CT generally includes dual-energy CT by definition.
Spectral CT is advantageous over conventional CT because it offers the additional clinical information contained in the full spectrum of an X-ray beam. For example, spectral CT facilitates discriminating tissues, differentiating between tissues containing calcium and tissues containing iodine, and enhancing the detection of smaller vessels. Among other advantages, spectral CT reduces beam-hardening artifacts and increases the accuracy of CT numbers independently of the type of scanner.
Conventional attempts include the use of integrating detectors in implementing spectral CT. One attempt includes dual sources and dual integrating detectors placed on the gantry at a predetermined angle with respect to each other for acquiring data as the gantry rotates around a patient. Another attempt includes the combination of a single source that performs kV-switching and a single integrating detector placed on the gantry for acquiring data as the gantry rotates around a patient. Yet another attempt includes a single source and dual integrating detectors layered on the gantry for acquiring data as the gantry rotates around a patient. None of these attempts at spectral CT substantially solved issues such as beam hardening, temporal resolution, noise, poor detector response, and poor energy separation for reconstructing clinically viable images.
Iterative reconstruction (IR) can be incorporated into a CT scanner system, such as one of the CT scanners described above. IR compares a forward projection through an image estimate to the measured data, and the differences are used to update the image estimate. Measured data includes the true system optics, which blur the data, as well as physical effects such as scatter and beam hardening. When the reprojected data and the measured data match, a good estimate of the true solution is obtained as a reconstructed image. Conventionally, reconstruction assumed a point source, a point detector, point image voxels, and snapshot acquisition, which is called pencil-beam geometry.
For low-dose applications, data fidelity implies also matching the noise, which is not desirable. Therefore, most systems use a “cost function” inserted into the iterations in order to reduce noise while maintaining true features.
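As a point of reference, a minimal sketch of this compare-and-update loop is shown below, assuming a simple additive (SIRT-style) correction and hypothetical forward_project and back_project operators; the cost-function regularization mentioned above is indicated only as a comment.

```python
import numpy as np

def iterative_reconstruction(measured, forward_project, back_project,
                             n_iters=10, step=0.5):
    """Generic IR loop (sketch): reproject, compare with measured data, update.

    measured        : measured projection data, shape (n_views, n_channels)
    forward_project : callable, image -> synthetic projection data
    back_project    : callable, projection data -> image-domain correction
    """
    image = np.zeros_like(back_project(measured))      # initial image estimate
    for _ in range(n_iters):
        reprojected = forward_project(image)           # forward projection of the estimate
        residual = measured - reprojected              # mismatch with the measured data
        image = image + step * back_project(residual)  # update the estimate
        # a cost function (e.g., total variation) would be applied here to
        # suppress noise while preserving true features in low-dose scans
    return image
```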
System optics modeling (SOM) includes knowing (1) the extent of the source and how its emissivity varies with position, (2) the size of the detector element, (3) the relative geometry (system magnification) of the source and detector elements, (4) image voxel size and shape, and (5) the rotation of the gantry during each data sample.
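For illustration only, these five ingredients might be gathered into a configuration structure such as the hypothetical sketch below (the field names and grouping are assumptions, not taken from the source):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SystemOpticsModel:
    """Illustrative container for the quantities a SOM must describe."""
    focal_spot_size_mm: tuple        # (1) extent of the source (width, length)
    emissivity_profile: np.ndarray   # (1) relative emission vs. position on the focal spot
    detector_cell_size_mm: tuple     # (2) size of one detector element
    source_to_iso_mm: float          # (3) relative geometry / system magnification
    source_to_detector_mm: float     # (3)
    voxel_size_mm: tuple             # (4) image voxel size and shape
    view_rotation_deg: float         # (5) gantry rotation during one data sample
```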
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Embodiments described herein are directed to an iterative reconstruction with system optics modeling using filters. In one embodiment, a CT imaging apparatus has processing circuitry that is configured to obtain projection data collected by a CT detector during a scan of an object. The processing circuitry is also configured to perform iterative reconstruction of the projection data to generate a current image. The iterative reconstruction includes filtering forward-projected data during backprojection to model system optics. The processing circuitry is also configured to combine the current image with a previously-obtained image to generate an updated image.
In one embodiment, a method of performing image reconstruction includes obtaining projection data collected by a CT detector during a scan of an object. The method also includes performing iterative reconstruction of the projection data to generate a current image. The performing step includes filtering forward-projected data during backprojection to model system optics. The method also includes combining the current image with a previously-obtained image to generate an updated image. In another embodiment, a computer-readable medium has computer-executable instructions embodied thereon, that when executed by a computing device, causes the computing device to perform the above-described method.
The multi-slice X-ray CT apparatus further includes a high-voltage generator 109 that generates a tube voltage applied to the X-ray tube 101 through a slip ring 108 so that the X-ray tube 101 generates X-rays. The X-rays are emitted towards the subject S, whose cross-sectional area is represented by a circle. The X-ray detector 103 is located on the opposite side of the subject S from the X-ray tube 101 for detecting the emitted X-rays that have been transmitted through the subject S. The X-ray detector 103 further includes individual detector elements or units.
With continued reference to
The above-described data is sent through a non-contact data transmitter 105 to a preprocessing device 106, which is housed in a console outside the radiography gantry 100. The preprocessing device 106 performs certain corrections, such as sensitivity correction, on the raw data. A storage device 112 stores the resultant data, which is also called projection data, at a stage immediately before reconstruction processing. The storage device 112 is connected to a system controller 110 through a data/control bus 111, together with a reconstruction device 114, input device 115, and display device 116.
Depending on the generation of the CT scanner system, the detectors are rotated and/or fixed with respect to the patient. The above-described CT system is an example of a combined third-generation geometry and fourth-generation geometry system. In the third-generation system, the X-ray tube 101 and the X-ray detector 103 are diametrically mounted on the annular frame 102 and are rotated around the subject S as the annular frame 102 is rotated about the rotation axis RA. In the fourth-generation geometry system, the detectors are fixedly placed around the patient and an X-ray tube rotates around the patient.
In an alternative embodiment, the radiography gantry 100 has multiple detectors arranged on the annular frame 102, which is supported by a C-arm and a stand.
Conventional IR approaches have several exemplary problems when implemented with a CT scanner system, such as one or more of the CT scanner systems described above. In a first example, standard reconstruction by filtered backprojection requires adaptive filtering for low-dose acquisitions. However, such systems have difficulty maintaining resolution, especially within soft-tissue boundaries.
In a second example, since IR is often used for low-dose CT, cost functions are used, which can be based on total variation, anisotropic diffusion, bilateral filters, etc. However, all of these approaches have difficulty distinguishing between noise dots and small features, such as secondary and tertiary blood vessels in CTA scans. Another problem is maintaining the sharpness of soft-tissue organ boundaries. One conventional system solves this problem by incorporating the system optics model (SOM), which attempts to include the true beam width of the imaging system in the image reconstruction process. However, in such a system, the processing time is too long.
In a third example, the standard method for incorporating the SOM is to average multiple measurements around an isocenter. This provides the benefit of the SOM for the image region near the isocenter and a reasonable approximation away from the isocenter. However, an example of an average taken over many micro-rays could include each detector sensor being broken into a 5×5 array of micro-sensors, the focal spot being broken into a 7×3 array of focal spots (each with its own emissivity function), and 5 micro-views being used to account for signal integration during rotation for each recorded view signal. IR already increases the computational load by an order of magnitude, and standard SOM increases the load an additional 5×5×7×3×5=2625 times. Further, if the number of micro-rays is reduced, the benefits of using a SOM could be lost.
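The quoted factor is simply the product of the micro-sampling counts, as the following one-off check shows:

```python
micro_detectors = 5 * 5   # each detector cell split into a 5x5 array of micro-sensors
micro_spots = 7 * 3       # focal spot split into a 7x3 array of micro focal spots
micro_views = 5           # micro-views integrated per recorded view
print(micro_detectors * micro_spots * micro_views)   # 2625 extra ray evaluations per measurement
```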
The embodiments disclosed herein maintain and even improve image sharpness while using a cost function for noise reduction, without a substantial increase in computational complexity beyond IR without SOM. In one embodiment, the system optics are modeled as a spatially variant and view-variant low-pass filter. The filter can be applied either in the reprojection (i.e., forward projection) step or in the image domain on the estimated image prior to the reprojection. This process is much less computationally intensive than implementing micro-rays and micro-views, as well as distance-driven methods.
In an isocenter embodiment, the filter becomes spatially invariant and thus, has very fast processing. The system optics model accounts for the blur in the CT imaging chain. In this embodiment, the blur is measured, rather than modeled, using a calibration phantom consisting of a very small (1 mm or less in diameter), high-density, high-Z sphere suspended in a very low-density foam or other low-density supporting device and placed near the isocenter. This isocenter restriction is necessary because the blur depends on the location within the field-of-view.
Note that the cause of the blur (beam width, cross-talk, etc.) is not important, since the blur is measured directly. Further, the isocenter approximation implies that one should get the benefit of the SOM for the image region near the isocenter and a reasonable approximation away from the isocenter. The isocenter is often the most diagnostically important region of the image.
After the point spread function (PSF) is extracted from the measurements, which needs to be done only once at the factory, the PSF is used as a convolution kernel disposed after the forward projector in the IR loop. Note that the convolution operation is relatively inexpensive computationally compared with the other processing steps.
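A minimal sketch of this step is shown below, assuming the extracted PSF is a short one-dimensional kernel along the detector channel direction and the forward-projected data is a views-by-channels array (the function and variable names are hypothetical):

```python
import numpy as np

def apply_psf(reprojected, psf):
    """Convolve each forward-projected view with the measured PSF kernel.

    reprojected : array of shape (n_views, n_channels), forward projection
                  of the current image estimate
    psf         : 1-D point spread function sampled at the channel spacing
    """
    kernel = np.asarray(psf, dtype=float)
    kernel = kernel / kernel.sum()            # preserve total signal
    blurred = np.empty_like(reprojected, dtype=float)
    for v in range(reprojected.shape[0]):
        # 'same' keeps the channel count unchanged; the blur makes the
        # reprojection comparable to the (already blurred) measured data
        blurred[v] = np.convolve(reprojected[v], kernel, mode="same")
    return blurred
```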
Certain variations on the isocenter embodiment described above are given. In one embodiment, the measurement and use of the PSF are combined with distance-driven backprojection to incorporate the backprojection blur caused by the image voxel size on the detector cells. The PSF can be obtained from a single measurement or, better, averaged from multiple measurements around the isocenter so that the measurements are not biased by a particular position in which the test sphere projects onto a "special" area of a detector cell. Note also that, instead of being measured, the PSF can be estimated using the micro-ray approach. However, this estimate does not include blurring sources such as cross-talk. In addition, the convolution of the PSF is more efficient if applied after log operations.
According to one embodiment, a data domain Iterative Reconstruction-System Optics Modeling (IR-SOM) Low-Pass Filter (DLPF) concept is disclosed.
Further, the DLPF can be modeled as a convolution of two filters, a Gaussian filter and a TopHat filter, as shown in
The shape of the Gaussian filter component is illustrated by the curve shown in
wherein P=0.1 (full-width tenth max).
The discrete Gaussian filter (DGS) is given by:
where $\Delta_c$ is the channel spacing in channels, nominally 1.0, which results in:
with a number of points,
The shape of the TopHat filter component is illustrated by the curve shown in
The discrete TopHat filter (DTH) is given by:
$DTH_{x_v, y_v, \beta}[i] = 1.0, \qquad 0 \le i < NPts_{TH}$
with a number of points,
where DLPF′ is the un-normalized filter directly after convolution.
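Since the discrete filter equations themselves are not reproduced above, the following sketch shows one plausible construction under stated assumptions: the Gaussian component is sampled at the channel spacing with the given full-width tenth-max width, the TopHat is a constant 1.0 over its number of points, and the final DLPF is the normalized convolution of the two.

```python
import numpy as np

def discrete_gaussian(dw_gs, spacing=1.0):
    """Discrete Gaussian sampled at the channel spacing.

    dw_gs is the full-width tenth-max (P = 0.1) width in channels;
    sigma = FWTM / (2 * sqrt(2 * ln 10)) converts it to a standard deviation.
    """
    sigma = max(dw_gs / (2.0 * np.sqrt(2.0 * np.log(10.0))), 1e-6)
    n_pts = max(int(np.ceil(dw_gs / spacing)), 1)
    if n_pts % 2 == 0:
        n_pts += 1                               # keep the kernel centered
    x = (np.arange(n_pts) - n_pts // 2) * spacing
    return np.exp(-0.5 * (x / sigma) ** 2)

def discrete_tophat(dw_th, spacing=1.0):
    """Discrete TopHat: 1.0 for 0 <= i < NPts_TH."""
    n_pts = max(int(round(dw_th / spacing)), 1)
    return np.ones(n_pts)

def data_domain_lpf(dw_gs, dw_th, spacing=1.0):
    """DLPF' = DGS convolved with DTH, then normalized to unit sum."""
    dlpf_raw = np.convolve(discrete_gaussian(dw_gs, spacing),
                           discrete_tophat(dw_th, spacing))
    return dlpf_raw / dlpf_raw.sum()
```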
A calculation of the Gaussian width, $DW_{GS}$, will be given with reference to
$DW_{GS}(x_v, y_v, \beta) = ChFP_{SOM}(x_v, y_v, \beta) \cdot GS_{Scale}$
wherein $ChFP_{SOM}(x_v, y_v, \beta)$ is the footprint of the voxel at $(x_v, y_v, \beta)$ due to the SOM source, and $GS_{Scale}$ is an empirically determined parameter (e.g., 0.67). The SOM source footprint is given by:
$ChFP_{SOM}(x_v, y_v, \beta) = Ch_U - Ch_L$
A rectangular source in the x-y plane is defined by a center point $(x_s, y_s)$ and corner points $(x_{s1}, y_{s1})$ through $(x_{s4}, y_{s4})$. For each corner point $(x_{sn}, y_{sn})$, a ray emanating from the source point will be tangent to the voxel $(x_v, y_v)$ at points $(x_t, y_t)$ and will intersect the detector at channel $Ch$. The SOM source footprint is determined from the maximum and minimum channel positions over all four source corner points. For $\gamma_v \ge 0$, $Ch_L$ is defined by ray $S_1 t_L$ and $Ch_U$ by ray $S_3 t_U$. For $\gamma_v < 0$, $Ch_L$ is defined by ray $S_4 t_L$ and $Ch_U$ by ray $S_2 t_U$.
A calculation of the TopHat width, $DW_{TH}$, will be given with reference to
$DW_{TH}(x_v, y_v, \beta) = ChFP_{ROT}(x_v, y_v, \beta) \cdot TH_{Scale}$
wherein $ChFP_{ROT}(x_v, y_v, \beta)$ is the footprint of the voxel at $(x_v, y_v, \beta)$ due to the rotation of a point source during the integration time, and $TH_{Scale}$ is an empirically determined parameter (e.g., 0.9). The rotation footprint is given by:
$ChFP_{ROT}(x_v, y_v, \beta) = \max(Ch_{U,vs}, Ch_{U,ve}) - \min(Ch_{L,vs}, Ch_{L,ve})$
In
where
$L_v = \sqrt{(x_{src} - x_v)^2 + (y_{src} - y_v)^2}$
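A small numeric sketch of the two width calculations is given below, taking the channel footprints as already-computed inputs (how the footprints are derived from the tangent rays and the view start/end positions is not repeated here); GSScale = 0.67 and THScale = 0.9 are the example values quoted above:

```python
import numpy as np

GS_SCALE = 0.67   # empirically determined Gaussian width scale (example value)
TH_SCALE = 0.9    # empirically determined TopHat width scale (example value)

def gaussian_width(ch_fp_som):
    """DW_GS(x_v, y_v, beta) = ChFP_SOM(x_v, y_v, beta) * GSScale."""
    return ch_fp_som * GS_SCALE

def tophat_width(ch_fp_rot):
    """DW_TH(x_v, y_v, beta) = ChFP_ROT(x_v, y_v, beta) * THScale."""
    return ch_fp_rot * TH_SCALE

def source_to_voxel_distance(x_src, y_src, x_v, y_v):
    """L_v = sqrt((x_src - x_v)^2 + (y_src - y_v)^2)."""
    return np.hypot(x_src - x_v, y_src - y_v)
```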
The image-domain filter ILPF is calculated in the same manner as the DLPF described above, the only difference being that $DW_{TH}$, $DW_{GS}$, $DTH$, $DGS$, $DLPF$, and $NPts_{DLPF}$ are replaced with their image-domain counterparts $IW_{TH}$, $IW_{GS}$, $ITH$, $IGS$, $ILPF$, and $NPts_{ILPF}$. $\Delta_c$ is also replaced with $\Delta_{xy}$, where $\Delta_{xy}$ is the voxel spacing in mm.
where
$FB[n] = BLI\{IMG(i_{xp}[n], j_{xp}[n])\}$
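Assuming BLI denotes bilinear interpolation of the current image IMG at the (generally fractional) positions $i_{xp}[n]$ and $j_{xp}[n]$, a minimal sketch is:

```python
import numpy as np

def bli(img, i, j):
    """Bilinear interpolation of img at fractional row (i) and column (j) positions."""
    i = np.asarray(i, dtype=float)
    j = np.asarray(j, dtype=float)
    i0 = np.clip(np.floor(i).astype(int), 0, img.shape[0] - 2)
    j0 = np.clip(np.floor(j).astype(int), 0, img.shape[1] - 2)
    di = i - i0
    dj = j - j0
    return ((1 - di) * (1 - dj) * img[i0, j0] +
            (1 - di) * dj * img[i0, j0 + 1] +
            di * (1 - dj) * img[i0 + 1, j0] +
            di * dj * img[i0 + 1, j0 + 1])
```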
A verification scheme to show that the data domain and the image domain implementations are equivalent will be given with reference to
Simulated PSF projection data for a thin wire located at various x, y positions is generated for PSF-SOM conditions, as well as PSF-PB conditions, as illustrated in
The method also includes performing iterative reconstruction of the projection data by filtering forward-projected data during backprojection to model system optics in step S1820.
The method also includes subtracting the filtered forward-projected data from the projection data to generate a current image in step S1830.
The method also includes combining the current image with a previously-obtained image to generate an updated image in step S1840.
The method 1800 can also include projecting a voxel at a given location onto one or more X-ray detectors at a given view angle, and obtaining a PSF for the projected voxel. The method 1800 can also include convolving a projected point voxel with a data-domain low-pass filter for the given location. The data-domain low-pass filter can be modeled as a Gaussian blur function of full-width tenth-max width convolved with a TopHat blur function of full-width tenth-max width.
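Putting steps S1820 through S1840 together, one update might look like the following sketch; the operator names are hypothetical, som_filter stands for the data-domain low-pass filtering described above, and alpha is an assumed blending weight for combining the current image with the previously-obtained image:

```python
def ir_som_update(measured, prev_image, forward_project, back_project,
                  som_filter, step=1.0, alpha=0.5):
    """One IR-SOM iteration (sketch of steps S1820-S1840)."""
    reprojected = forward_project(prev_image)       # reproject the current estimate
    blurred = som_filter(reprojected)               # S1820: filter to model system optics
    residual = measured - blurred                   # S1830: subtract from projection data
    current = prev_image + step * back_project(residual)   # current image
    return alpha * current + (1.0 - alpha) * prev_image    # S1840: combine with previous image
```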
A computer-readable medium having computer-executable instructions embodied thereon, can cause a computing device to perform the above-described method.
In the disclosed embodiments, a CT scanning apparatus, such as the apparatus described above with reference to
Embodiments of the IR approach described herein have better low-dose image quality than filtered backprojection. Embodiments described herein also have better edge and feature preservation, and in some cases improved spatial resolution, compared to standard IR. Embodiments described herein incorporate non-linear, spatially variant deconvolution into iterative-reconstruction-based algorithms in a manner that is much more computationally efficient than conventional methods.
The above-described embodiments can be implemented, in part, using a memory, a processor, and circuitry of a computing system, such as the computing system illustrated in
Further, the claimed embodiments may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1900 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
CPU 1900 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be another processor type that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 1900 may be implemented on an FPGA, ASIC, or PLD, or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 1900 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
The computing system in
The computing system further includes a display controller 1908, such as an NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America for interfacing with display 1910, such as a Hewlett Packard HPL2445w LCD monitor. A general purpose I/O interface 1912 interfaces with a keyboard and/or mouse 1914 as well as a touch screen panel 1916 on or separate from display 1910. General purpose I/O interface 1912 also connects to a variety of peripherals 1918, including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard.
A sound controller 1920 is also provided in the computing system, such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 1922 thereby providing sounds and/or music.
The general purpose storage controller 1924 connects the storage medium disk 1904 with communication bus 1926, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the computing system. A description of the general features and functionality of the display 1910, keyboard and/or mouse 1914, as well as the display controller 1908, storage controller 1924, network controller 1906, sound controller 1920, and general purpose I/O interface 1912 is omitted herein for brevity as these features are known.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosures. The novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosures. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosures.