The present invention relates generally to the reconstruction of medical images. More specifically, the present invention relates to a new method for improving the image quality and the efficiency of reconstruction by using hybrid filtering.
The present invention includes the use of various technologies referenced and described in the documents identified in the following LIST OF REFERENCES, which are cited throughout the specification by the corresponding reference number in brackets:
The quality and efficiency of a reconstructed image created by a computed tomography (CT) device are important to the overall effectiveness of the CT device. The algorithm used in reconstructing the image impacts quality and efficiency.
Detector Geometry and Data Structure
Let g(s, Θ) represent the total attenuation (or line integral) along a ray from the source position a(s) in direction Θ. This function represents the data acquired through the CT scan before reconstruction has taken place. As is illustrated by
There are many detector types with different geometries: equi-angular, equi-spaced, non-equi-spaced, flat, cylindrical, spherical, tilted, rotated, PI-masked, etc.
Convolution
For clarity, operator notation will be used. Each operator, convolution or backprojection, is defined in general, independently from its argument. Operators are formatted in bold italic to distinguish from functions. Relevant operators are defined as follows.
The Fourier transform is defined as
G(s, ω)=FT[g(s, γ)],
in which the Fourier transform is applied on the second variable γ. The Hilbert filtering operator (H) is defined as
H[g(s, γ)]=g(s, γ)*h(γ)
H[g(s, γ)]=FT−1[G(s, ω)H(ω)], where
h(γ)=−1/(πγ) and H(ω)=i sign(ω).
The ramp filtering operator (Q) is defined as
Q[g(s, γ)]=g(s, γ)*q(γ)
Q[g(s, γ)]=FT−1[G(s, ω)Q(ω)], where
q(γ)=FT−1[Q(ω)] and Q(ω)=|ω|.
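Both operators reduce to a multiplication in the Fourier domain. The following sketch (an illustrative discrete realization using FFTs on uniformly sampled, periodic data; not the implementation described in this specification) applies the frequency responses H(ω)=i sign(ω) and Q(ω)=|ω| directly:

```python
import numpy as np

def hilbert_filter(g):
    # H[g] = FT^-1[G(w) H(w)], with frequency response H(w) = i*sign(w)
    G = np.fft.fft(g)
    w = np.fft.fftfreq(g.size)
    return np.real(np.fft.ifft(G * 1j * np.sign(w)))

def ramp_filter(g):
    # Q[g] = FT^-1[G(w) Q(w)], with frequency response Q(w) = |w|
    G = np.fft.fft(g)
    w = np.fft.fftfreq(g.size)
    return np.real(np.fft.ifft(G * np.abs(w)))
```

Applying the Hilbert operator twice negates a zero-mean signal, since H(ω)²=−1 away from ω=0; this is a convenient sanity check for any discrete implementation.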
The modified Hilbert filtering operator (Hm) (modified meaning that the kernel is h(sin γ) rather than h(γ)) is defined as
Hm[g(s, γ)]=g(s, γ)*h(sin γ).
The modified ramp filtering operator (Qm) is defined as
Qm[g(s, γ)]=g(s, γ)*q(sin γ).
Finally, the modified ramp filtering operator with DC shift (Qm0) is defined as
Qm0[g(s, γ)]=FT−1[G(s, ω)Qm(ω)−G(s, ω)δ(ω)]
Qm0[g(s, γ)]=g(s, γ)*q(sin γ)−G(s, 0)/(2π),
where δ(ω) is the delta-function, given by FT[δ]=1.
Ramp filtering is traditionally used in filtered backprojection (FBP) algorithms. FBP is the main reconstruction method in computed tomography (CT). Compared to Hilbert filtering, ramp filtering has the following advantages: (1) better resolution (a sharper image) for the same noise level, and (2) more control over the resolution-to-noise trade-off in reconstruction. The second advantage is very important for clinical applications. In the present invention, the major contribution to a reconstructed image comes from ramp-filtered data. Ramp filtering preserves all high-frequency components of the image. Hilbert filtering, which is a low-frequency correction, helps reduce cone-beam artifacts.
Backprojection
In general, the backprojection operator acts on data g(s, Θ), wherein the operator maps the data into image space. This is denoted by the following equation:
Here g(s, Θ) is first evaluated at Θ=Θ(s, x), wherein Θ(s, x) is the direction of the ray from a(s) that crosses x. Then the equation integrates all such rays along the source trajectory over the projection range Λ.
To find Θ(s, x) the following equations are used:
where za(s) is the z-coordinate of a(s). Backprojection is usually applied to filtered data.
There are two methods for weighting the backprojection operator. The backprojection operator can be either weighted by the inverse of L or the inverse of L2. These two methods are shown in the two following equations.
Compared to 1/L, a backprojection weight of 1/L2 results in worse noise and point-spread-function (PSF) uniformity throughout the image.
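The effect of the two weighting choices can be sketched with a simple discrete fan-beam backprojector. Everything below (the image grid, the linear interpolation in γ, and the function signature) is an illustrative assumption, not the patent's implementation:

```python
import numpy as np

def backproject(gf, s_arr, gamma_arr, R, xs, ys, weight="1/L"):
    """Accumulate filtered fan-beam data gf[s, gamma] into image space.
    gf: (num_views, num_gammas) filtered data; R: source radius.
    weight selects the 1/L or 1/L^2 backprojection weight."""
    img = np.zeros((ys.size, xs.size))
    ds = s_arr[1] - s_arr[0]
    X, Y = np.meshgrid(xs, ys)
    for i, s in enumerate(s_arr):
        ax, ay = R * np.cos(s), R * np.sin(s)      # source position a(s)
        dx, dy = X - ax, Y - ay
        L = np.sqrt(dx * dx + dy * dy)             # source-to-pixel distance
        # fan angle of the ray through (x, y), measured from the central ray
        beta = np.arctan2(dy, dx)                  # absolute ray direction
        central = np.arctan2(-ay, -ax)             # ray toward the origin
        gamma = (beta - central + np.pi) % (2 * np.pi) - np.pi
        vals = np.interp(gamma, gamma_arr, gf[i], left=0.0, right=0.0)
        w = 1.0 / L if weight == "1/L" else 1.0 / L**2
        img += w * vals * ds
    return img
```

Because L > 1 everywhere inside a typical FOV, the 1/L² weight falls off faster with distance than 1/L, which is the "magnification effect" behind the non-uniform noise and PSF noted above.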
The backprojection range is denoted by Λ. Let the superscript in the BPJ operator notation denote the backprojection range as indicated below:
Redundancy Compensation Weighting
A weighting factor, w(s, γ), may be used to correct for redundancy in the data. It is not necessary to scan over the full 2π range for fan-beam data. For fan-beam geometry, g(s, γ)=g(s+π+2γ, −γ). For fan-beam data, a full scan will result in each ray being counted twice; thus, the data would need to be weighted by the function w(s, γ)=½. The 2D data sufficiency condition [14] is satisfied when any line crossing the FOV intersects the reconstruction segment at least once, i.e., when the reconstruction range Λ≧π+2 arcsin (rFOV/R). The set of projections that covers Λ=π+2 arcsin (rFOV/R) is referred to as a “minimum complete data set” [16] for fan-beam data. The following terminology has been adopted in the art and will be used here:
From the relation g(s, γ)=g(s+π+2γ, −γ), where −γm≦γ≦γm, it can be seen that only a π+2γm reconstruction range is sufficient for exact fan-beam reconstruction, and the data g(s, γ), 0<s<2γm−2γ, is redundant with the data in the region π+2γ<s<π+2γm. Here γm=arcsin (rFOV/R) is the maximum fan angle allowed by the detector. Reference [16] suggests that in Parker weighting, the data of the minimum complete data set should be weighted in such a way that the discontinuity is distributed as uniformly as possible. Reference [16] proposed the following weighting function:
γm=arcsin (rFOV/R) is the maximum fan angle allowed by the detector. The weight is zero if not defined, e.g., wP(s, γ)=0 if s<0 or s>π+2γm. Let the superscript P in the BPJ operator notation denote a backprojection range of π+2γm:
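Since the weighting function itself is not reproduced above, the following sketch assumes the standard Parker weight as commonly stated in the literature, with smooth sin² transitions over the redundant regions:

```python
import numpy as np

def parker_weight(s, gamma, gamma_m):
    """Standard Parker weight for a pi + 2*gamma_m short scan (assumed form)."""
    if s < 0 or s > np.pi + 2 * gamma_m:
        return 0.0                                   # weight is zero where undefined
    if s <= 2 * gamma_m - 2 * gamma:
        # smooth ramp-up over the leading redundant region
        return np.sin(np.pi / 4 * s / (gamma_m - gamma)) ** 2
    if s <= np.pi - 2 * gamma:
        return 1.0                                   # non-redundant region
    # smooth ramp-down over the trailing redundant region
    return np.sin(np.pi / 4 * (np.pi + 2 * gamma_m - s) / (gamma_m + gamma)) ** 2
```

Complementary rays then satisfy wP(s, γ)+wP(s+π+2γ, −γ)=1, which is exactly the redundancy compensation property discussed above.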
Generalized Parker Weighting (MHS Weighting)
Parker weighting is a particularized case of MHS weighting. In MHS weighting, γm should be at least arcsin (rFOV/R). By virtually increasing γm, a larger reconstruction range can be obtained and this allows for better noise properties [18, 19]. By replacing the physical maximum fan angle γm with the virtual maximum fan angle Γ in the equation for wP(β, γ), the following equation is obtained:
Let the superscript MHS in the BPJ operator notation denote a backprojection range of π+2Γ:
Over-Scan weighting
Since g(s, γ)=g(s+2π, γ), over-scan weighting is used for the backprojection range Λ=2πn+Δ, where n=1, 2, …, and 0<Δ<2π. The weight function is given below.
In over-scan weighting, MHS weighting, and Parker weighting, a polynomial 3x²−2x³ [1] or some other smooth function can be used instead of the trigonometric functions sin² and cos². Let the superscript OS in the BPJ operator notation denote a backprojection range of 2πn+Δ:
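The polynomial 3x²−2x³ mentioned above rises smoothly from 0 to 1 on [0, 1] with zero slope at both ends, and satisfies f(x)+f(1−x)=1, which is the complementarity a redundancy weight transition needs. A minimal check:

```python
def smoothstep(x):
    # 3x^2 - 2x^3: smooth 0 -> 1 transition with zero derivative at x = 0 and x = 1
    x = min(max(x, 0.0), 1.0)      # clamp outside the transition interval
    return 3 * x * x - 2 * x ** 3
```

Because smoothstep(x)+smoothstep(1−x)=1, the polynomial can directly replace the sin²/cos² pair in the transition regions of the weights above.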
Noo's Weighting
Noo's weighting has the advantage that it allows the use of an arbitrary reconstruction range Λ=(s0, s1), where s0 and s1 are the starting and ending points of the reconstruction segment. This weight can be used for ROI reconstruction with a reconstruction range less than a short scan. Such a reconstruction range is called a super-short scan. The weight function is given below:
where Δs is the smoothing interval. Noo suggested taking Δs=10° [15]. Note that Noo's weighting, with a large Δs (≈50°), is equivalent to Parker weighting. Noo's weighting allows the use of an arbitrary backprojection range Λ as shown in
Quasi Cone-Beam Weighting
Fan-beam weighting can be extended to cone-beam data [21]. Once a fan-beam weight w(β, γ) is calculated, it is weighted as a function of the cone angle and normalized to obtain a quasi cone-beam weighting function WQ3D(β, γ, α). This weight must be defined based on the validity (accuracy) of the data, which is represented by the validity weight
The two cone-angles (α1 and α2) define the turning points of the validity curve. The validity weight wVal(α) is combined with a fan-beam weighting function w(β, γ) and then normalized in order to compensate for redundant samples. The validity weight wVal(α) can be arbitrary; however, it makes sense if the parameters t1 and t2 are chosen such that wVal(α) assigns full weight to valid (measured) ray-sums, less weight to invalid (unmeasured) ray-sums and lets the transition be smooth. Therefore, the quasi cone-beam weighting function is:
where summation is performed over all complementary positions, such that
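Since the validity weight and the normalization equations are not reproduced above, the following sketch assumes a trapezoidal wVal(α) and a simple per-ray normalization over one set of complementary samples; the shapes and the function interface are illustrative assumptions:

```python
def validity_weight(alpha, t1, t2):
    """Trapezoidal validity weight in cone angle: full weight below t1,
    linear roll-off between t1 and t2, zero beyond t2 (an assumed shape)."""
    a = abs(alpha)
    if a <= t1:
        return 1.0
    if a >= t2:
        return 0.0
    return (t2 - a) / (t2 - t1)

def quasi_3d_weights(fan_weights, cone_angles, t1, t2):
    """Combine fan-beam weights with validity weights over one set of
    complementary rays and renormalize so the weights sum to 1."""
    raw = [wf * validity_weight(al, t1, t2)
           for wf, al in zip(fan_weights, cone_angles)]
    total = sum(raw)
    return [r / total for r in raw]
```

The normalization guarantees redundancy compensation, while the validity weight shifts the contribution toward the rays with smaller cone angle, i.e., toward the more accurately measured data.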
Tam Window Weighting
The Tam window is a part of the detector bounded by the upper and lower projections of the helix from the focal spot a(s).
The weighting function for a 3π Tam window is given by:
Let the superscript π in the BPJ operator notation denote backprojection for a π Tam window only, and the superscript TW in the BPJ operator notation denote backprojection for a Tam window, π or 3π, depending on the helical pitch:
Note that both Parker and Noo's weighting functions were originally introduced for fan-beam data and are cone-angle independent. Use of these weighting functions results in cone-beam artifacts. Tam window weighting provides 3D weighting and is shift-invariant or projection angle independent, i.e., the same weight is applied for each projection.
The advantages of Tam window weighting are: (1) true cone-beam weight, (2) shift-invariance, i.e., the weighting function WT(γ, ν) is the same for all projections, independently of z-position, and (3) simplicity of implementation. The disadvantages of Tam window weighting are: (1) no redundant data is used (some part of the measured data is not used), (2) the Tam window is fixed (hence only two helical pitches are optimal, corresponding to π and 3π), and (3) different image pixels are reconstructed using different backprojection ranges, which leads to a less spatially uniform image.
In
Extended Tam Window Weighting
A Tam window can be extended in the z-direction by using smoothing functions. The extended Tam window is illustrated in
Direction of Filtering Lines
In actual implementation, if the detector rows are not parallel to the filtering direction, filtering requires rebinning from the detector grid to the filtering grid. Each rebinning involves interpolation, which results in a smoother image and possible loss of detail. The use of HTan and QTan operators (as well as others) allows for a reduction in cone-beam artifacts at the price of increased reconstruction time and reduced resolution.
Order of Weighting and Convolution
The order of performing weighting and convolution is very important in practice. Redundancy weight w(s(x3), γ) depends on the reconstructed slice z-position, x3. If weighting has to be performed before convolution, i.e., filtering applies to weighted data subsets, then for each slice, all projections in the reconstruction range need to be re-weighted and re-convolved. If, on the other hand, weighting follows after convolution, then the data can be convolved only once for all image slices, and then each slice only needs to be re-weighted. Hence, in the latter case, the number of convolutions required for reconstruction is greatly reduced.
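The operation-count argument can be made concrete. A trivial sketch (illustrative counts only; the function name and parameters are hypothetical):

```python
def convolution_count(num_projections, num_slices, weight_before_convolution):
    # Weighting before convolution forces every projection to be re-weighted
    # and re-convolved for every slice; weighting after convolution lets each
    # projection be convolved exactly once and only re-weighted per slice.
    if weight_before_convolution:
        return num_projections * num_slices
    return num_projections
```

For example, with 1200 projections and 64 slices, weighting before convolution costs 76,800 convolutions versus 1,200 when weighting follows convolution.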
Reconstruction Algorithms
Ramp filtering is traditionally used in FBP algorithms that are used to reconstruct medical images. The original FBP algorithm for fan-beam data has the following form:
which in operator notation becomes:
f(x)=BPJ^{2π}_{L²}[Q_m[g(s, γ)]]
The FBP algorithm was first developed [13] for the case of equi-spaced collinear detectors and later extended [4, 11] to the case of equiangular rays. It uses ramp filtering with 1/L² weighting. It has been shown that a backprojection weight of 1/L², compared to 1/L, results in worse noise and PSF uniformity throughout the image. Originally designed for a full-scan trajectory, the original algorithm was later extended to short-scan trajectories [16]:
[FBP-P] f(x)=BPJ^{P}_{L²}[Q_m[w_P(s, γ)g(s, γ)]]
and later [3] for circular cone-beam geometry:
[FDK] f(x)=BPJ^{2π}_{L²}[Q_m[g(s, γ, α)cos α]]
The weighting function for the short-scan trajectory [FBP-P] is applied before convolution, i.e., filtering is applied to weighted data subsets. Consequently, for each slice, all projections in the reconstruction range must be re-weighted and re-convolved. An algorithm in which weighting is done after convolution would be more efficient: the data would be convolved only once for all image slices and then merely re-weighted per slice. Hence, applying the weight function after convolution greatly reduces the number of convolutions required for reconstruction and provides a large computational advantage.
The Feldkamp algorithm was extended to helical trajectory [24] and later to arbitrary scan segments [18] as follows:
[GFDK] f(x)=BPJ^{Λ}_{L²}[Q_m[w(s, γ)g(s, γ, α)cos α]]
A flowchart for this algorithm is shown in
The algorithms discussed above are limited to circular or helical source trajectories. An algorithm that is not limited to circular or helical trajectories would be advantageous. This would provide for a more versatile algorithm as it could be applied to other trajectories, e.g., saddle trajectories.
Katsevich [5, 7] introduced an exact cone-beam algorithm of the FBP type.
Instead of conventional ramp filtering, it employs a modified Hilbert transform of the partial derivative of the cone-beam data. A flowchart for this algorithm is shown in
A fan-beam reconstruction algorithm based on Hilbert transform reconstruction was later introduced [15]:
Reference [15] points out that for exact reconstruction, only projections that cover Λ=π are required. This opens the possibility of less-than-a-short-scan reconstruction. (For a short-scan reconstruction, the set of projections over the whole FOV, π+2γm, is required.) Also, the data sufficiency condition is relaxed. The other advantage of the [NDCK-FB] algorithm is that weighting of redundant fan-beam data is performed after convolution. This makes the [NDCK-FB] algorithm much more efficient than any Feldkamp-type algorithm, since data does not have to be re-convolved for each slice.
Another advantage of Noo's algorithm was discovered during the inventors' evaluation of the noisy water cylinder phantom. It turns out that noise variance is more uniform throughout the image compared to the Feldkamp algorithm. The PSF is also less space variant for Noo's algorithm. This can be explained by the fact that the backprojection weight is the inverse distance, not the inverse distance squared, and the so-called magnification effect is reduced.
Hilbert-transform-based algorithms introduce stronger smoothing than algorithms using ramp filtering [8] due to an additional numerical differentiation step. Kudo [8] proposed an algorithm for fan- and cone-beam data that consists of both ramp and Hilbert filtering:
In reference [10] the two algorithms [NDCK-FB] and [KNDC-FB] were generalized to cone-beam case with a flat detector as follows:
There are several disadvantages to Kudo's algorithm. First, it is similar to Feldkamp's algorithm and shares its disadvantages: the inverse-square backprojection weight and workability with only one particular weighting function. Second, Kudo's algorithm involves taking the partial derivative of the weight function, which makes it less appealing for practical purposes. Except for Katsevich's algorithms, the above algorithms are exact for a fan-beam scan on a circular trajectory and are approximate for a cone-beam scan.
Known reconstruction algorithms suffer from several disadvantages. None of the above discussed algorithms have all of the following aspects that have been shown to be beneficial: (1) 1/L backprojection weight, (2) the redundancy compensation weight function being applied after convolution, (3) the algorithm being independent of the type of weight function used, (4) the algorithm being independent of the source trajectory, and (5) the incorporation of hybrid filtering (both ramp-type filtering and Hilbert-type filtering).
Accordingly, to overcome the problems of the reconstruction algorithms of the related art, the present invention seeks to provide a method, system, and computer program product for determining an image data value at a point of reconstruction in a computed tomography image of a scanned object.
Accordingly, there is provided a method, system, and a computer program product for determining an image data value at a point of reconstruction in a computed tomography image of a scanned object comprising: (1) obtaining projection data of the scanned object; (2) filtering the obtained projection data with a one-dimensional ramp-type filter to generate ramp-filtered data; and (3) applying a backprojection operator with inverse distance weighting to the ramp-filtered data to generate the image data value at the point of reconstruction in the CT image.
Further, according to an embodiment of the present invention, the above method further comprises: applying projection subtraction to the obtained projection data to generate subtracted data; applying a Hilbert-type filter to the subtracted data to generate Hilbert-filtered data; applying projection addition to the Hilbert-filtered data and the ramp-filtered data to generate filtered data; and applying redundancy weighting to the filtered data to generate weighted data, wherein the step of applying the backprojection operator is applied to the weighted data to generate the image data value at the point of reconstruction in the CT image.
According to another aspect of the present invention there is provided an X-ray computed tomography (CT) system for determining an image data value at a point of reconstruction, comprising: (1) a CT scanning unit configured to generate projection data of a scanned object, the scanning unit including an X-ray source configured to generate X-rays and a detector having detector elements configured to produce the projection data; and (2) a processor, including: a filtering unit configured to apply a ramp-type filter to the projection data to generate ramp-filtered data; and a backprojecting unit configured to apply a backprojection operator with inverse distance weight to the ramp filtered data to generate the image data value at the point of reconstruction.
Further, according to an embodiment of the present invention, the above system further comprises: a projection subtraction unit configured to apply a projection subtraction to the projection data to generate subtracted data; a Hilbert filtering unit configured to apply a Hilbert-type filter to the subtracted data to generate Hilbert-filtered data; a projection addition unit configured to apply projection addition to the Hilbert-filtered data with the ramp-filtered data to generate filtered data; and a weighting unit configured to apply redundancy weighting to the filtered data to generate weighted data, wherein the backprojecting unit is configured to apply a backprojection operator with inverse distance weight to the weighted data generated by the weighting unit.
Other methods, systems, and computer program products of the present invention will become apparent to one of ordinary skill in the art upon examination of the following drawings and detailed description of the preferred embodiments.
A more complete appreciation of the invention and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
The present invention is directed to exact algorithms for fan-beam data and quasi-exact algorithms for cone-beam data. These algorithms are used in the reconstruction of images taken with a CT device. Embodiments of the present invention are given below.
2π Projection Range (Full-Scan) Formula for Fan-Beam Data:
Flexible Projection Range, Super-Short-Scan, Short-Scan or Over-Scan Formula for Fan-Beam Data:
2π Projection Range (Full-Scan) Formula for Cone-Beam Data:
Flexible Projection Range, Super-Short-Scan, Short-Scan or Over-Scan Formula for Cone-Beam Data:
Cone Beam Data Reconstruction from Tam Window:
The above embodiments of the present invention are given for an equi-angular detector. The algorithms of the present invention are not restricted to an equi-angular detector, but work with other detector geometries, such as a collinear detector, an equi-spaced detector, a non-equi-spaced detector, a flat detector, a cylindrical detector, a spherical detector, a tilted detector, a rotated detector, and a PI-masked detector.
Further, the algorithms of the present invention work independent of the source trajectory. Source trajectories need not be limited to circular or helical trajectories. For example, saddle trajectories can be applied.
In step 901, the data is filtered with a ramp-type filter to generate filtered data. This ramp-type filter may be a ramp filter, a modified ramp filter, or a modified ramp filter with a DC offset.
In step 902, a 1/L weighted backprojection is applied to the ramp filtered data to produce an image data value in the CT image.
In step 903, a determination is made whether there are other image data values to be reconstructed in the CT image. If there are other image data values, step 902 is repeated until there are no other image data values to be reconstructed in the CT image.
In step 904, the image values outputted from step 902 are used to generate the CT image by arranging the image data values according to the points of reconstruction.
In step 1001, projection subtraction is applied to the projection data to generate subtracted data. Projection subtraction is the application of the partial derivative term in the above algorithm of the present invention.
In step 1002, a Hilbert filter is applied to the subtracted data to generate Hilbert-filtered data. In step 1002, either a Hilbert filter or a modified Hilbert filter may be applied.
In step 1003, a ramp filter is applied to the projection data to generate ramp-filtered data.
In step 1004, the Hilbert-filtered data and the ramp-filtered data are combined to generate filtered data.
In step 1005, a redundancy weighting function is applied to the filtered data. The redundancy weighting function w(s, γ) is not specified in the proposed reconstruction algorithms, and can be chosen freely from Parker weighting, generalized Parker weighting (MHS), Noo's weighting, over-scan weighting, quasi cone-beam weighting, and Tam window weighting {wP(s, γ), wMHS(s, γ), wN(s, γ), wOS(s, γ), w3D(s, γ), wT(γ, ν)}.
In step 1006, the weighted data is subjected to a backprojection operator with an inverse distance weighting to generate an image data value.
In step 1007, a determination is made whether there are other image data values to be reconstructed in the CT image. If there are other image data values, steps 1005-1007 are repeated until there are no other image data values in the CT image.
In step 1008, the image data values are outputted to generate a CT image by arranging the image data values according to the points of reconstruction.
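Steps 1001-1008 can be sketched end-to-end as follows. The discrete ∂/∂s via np.gradient, the FFT-based filters, and the caller-supplied backprojection function are simplifying assumptions for illustration, not the implementation of the present invention:

```python
import numpy as np

def reconstruct_value(proj, s_arr, gamma_arr, redundancy_w, backproject_1_over_L):
    """Hybrid-filtering pipeline for one image data value.
    proj: (num_views, num_gammas) projection data g(s, gamma).
    backproject_1_over_L: caller-supplied 1/L-weighted backprojection
    (hypothetical interface)."""
    # Step 1001: projection subtraction (discrete d/ds between neighboring views)
    ds = s_arr[1] - s_arr[0]
    subtracted = np.gradient(proj, ds, axis=0)
    # Step 1002: Hilbert-type filtering of the subtracted data
    W = np.fft.fftfreq(gamma_arr.size)
    hilb = np.real(np.fft.ifft(np.fft.fft(subtracted, axis=1) * 1j * np.sign(W), axis=1))
    # Step 1003: ramp-type filtering of the original projection data
    ramp = np.real(np.fft.ifft(np.fft.fft(proj, axis=1) * np.abs(W), axis=1))
    # Step 1004: projection addition of the two filtered terms
    filtered = ramp + hilb
    # Step 1005: redundancy weighting (scalar or broadcastable array)
    weighted = redundancy_w * filtered
    # Steps 1006-1008: 1/L-weighted backprojection produces the image value
    return backproject_1_over_L(weighted)
```

Note that the major contribution comes from the ramp-filtered term, while the Hilbert-filtered derivative term acts as the low-frequency correction, matching the hybrid-filtering structure described above.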
In step 1001, projection subtraction is applied to the projection data to generate subtracted data. Projection subtraction is the application of the partial derivative term in the above algorithm of the present invention.
In step 1002, a Hilbert filter is applied to the subtracted data to generate Hilbert-filtered data. In step 1002, either a Hilbert filter or a modified Hilbert filter may be applied.
In step 1003, a ramp filter is applied to the projection data to generate ramp-filtered data.
In step 1004, the Hilbert-filtered data and the ramp-filtered data are combined to generate filtered data.
In step 1005b, a redundancy weighting function is applied to the filtered data for a particular slice represented by a common axial (z) coordinate. The redundancy weighting function w(s, γ) is not specified in the proposed reconstruction algorithms, and can be chosen freely from Parker weighting, generalized Parker weighting (MHS), Noo's weighting, over-scan weighting, quasi cone-beam weighting, and Tam window weighting {wP(s, γ), wMHS(s, γ), wN(s, γ), wOS(s, γ), w3D(s, γ), wT(γ, ν)}.
In step 1006b, the weighted data is subjected to a backprojection operator with an inverse distance weighting based on the given z-coordinate of the slice to generate an image data value.
In step 1007b, a determination is made whether there are other z coordinates corresponding to other image slices to be reconstructed in the CT image volume. If there are other slices to be reconstructed, steps 1005b-1007b are repeated until there are no image slices to be reconstructed in the CT image volume.
In step 1008b, the image data values in each slice are outputted to generate a CT image volume by arranging the image data values according to the points of reconstruction in each slice.
In step 1001, projection subtraction is applied to the projection data to generate subtracted data. Projection subtraction is the application of the partial derivative term in the above algorithm of the present invention.
In step 1002, a Hilbert filter is applied to the subtracted data to generate Hilbert-filtered data. In step 1002, either a Hilbert filter or a modified Hilbert filter may be applied.
In step 1003, a ramp filter is applied to the projection data to generate ramp-filtered data.
In step 1004, the Hilbert-filtered data and the ramp-filtered data are combined to generate filtered data.
In step 1006c, the filtered data is subjected to a backprojection operator with an inverse distance weighting using a Tam window to generate an image data value. In particular, backprojection is restricted to filtered data contained within the Tam window only, as discussed above.
In step 1007, a determination is made whether there are other image data values to be reconstructed in the CT image. If there are other image data values, steps 1006c-1007 are repeated until there are no other image data values in the CT image.
In step 1008, the image data values are outputted to generate a CT image by arranging the image data values according to the points of reconstruction.
The embodiments described here can be applied with circular, helical, or saddle trajectories. The algorithms are independent of the geometry of the detector, i.e. equi-angular, equi-spaced, non-equi-spaced, flat, cylindrical, spherical, tilted, rotated, PI-masked, etc. can be used. The formulas are independent from the type of filtering lines used; horizontal, tangential, rotated, Katsevich or any other reasonable family of filtering lines. Super-short scan, short scan, over-scan, and any trajectory satisfying the completeness condition can be used.
Comparison of Present Invention to Other CT Image Reconstruction Algorithms
A reconstruction algorithm can be applied to a different source trajectory, a different detector geometry, use a different filtering direction, and use a different weighting function. However, reconstruction flow and major steps are unique for each algorithm. To better see the main features of the algorithms, operator notation will be used. Equations for helical cone-beam geometry with redundancy weighting, which represent the most practical interest, will be compared. All formulas will be rewritten for equi-angular detector geometry. Flowcharts illustrating the algorithms below are shown in FIGS. 13A-D and an algorithm according to the present invention is shown in
[an embodiment of present invention]
Spatial uniformity depends on the backprojection weight. Algorithms with a 1/L backprojection weight possess better spatial uniformity. Hence NDCK, Katsevich, and the algorithm of the present invention produce more spatially uniform images than GFDK and KRND. In Katsevich's algorithm, on the other hand, different image pixels have different projection ranges, which results in less spatial uniformity.
The discussion in this section is irrelevant for [Katsevich], since it does not have cone-beam artifacts. All other algorithms are exact only for a fan-beam geometry with a circle trajectory (i.e., in 2D). They can also provide reasonably good results for relatively small cone angles and helical pitches when extended to 3D. However, the algorithms do not perform equally well in 3D. Cone-beam artifacts appear as shading on one side of the object and glaring on the other. [NDCK] and [KRND] are proposed with tangential filtering, which is known to reduce, but not eliminate, the cone-beam artifacts. Since tangential filtering (as well as rotated filtering) can also be applied to [GFDK] and the algorithm of the present invention, it will not be considered an advantage of only [NDCK] and [KRND]. [NDCK] and the algorithms of the present invention show less cone-beam artifact because they contain a ∂/∂s term, which is the difference between neighboring projections. This compensates for the helical data inconsistency. Note that the cone-beam artifact is not present in Katsevich's algorithm, which also has a ∂/∂s term.
One of the main reasons for cone-beam artifact is using fan-beam redundancy weighting for helical cone-beam data. Using a Tam window redundancy weighting function helps significantly reduce cone-beam artifact. Table 1 shows which weights can be used with different algorithms.
Note that only [NDCK] and the present invention work with all weights.
When comparing CT reconstruction algorithms to each other, factors such as the speed of volume reconstruction, whether there is a flexible reconstruction range and the simplicity of software implementation are important.
The speed of volume reconstruction is primarily defined by how many operations are performed in the slice reconstruction loop. In FIGS. 13A-D, the slice reconstruction loop for [GFDK] contains filtering, which means that the same projection is re-convolved many times for each image slice. All other algorithms are more efficient: each projection is convolved only once (for [NDCK] and [Katsevich]) or twice (for [KRND] and an embodiment of the present invention in
A flexible reconstruction range means that any subset of the source trajectory whose projection onto the xy-plane satisfies the 2D data sufficiency condition can be used for accurate reconstruction. [NDCK], [KRND], and the present invention have a flexible reconstruction range by construction. [Katsevich] does not have a flexible reconstruction range, since it uses Tam window weighting. A flexible reconstruction range also means the possibility of a super-short scan. The algorithms that have this possibility are [NDCK], [KRND], and the present invention.
Simplicity of implementation is crucial for a commercial CT reconstruction algorithm. One important criterion is that each step (filtering, weighting, backprojection) can be represented as a separate module. Also, the angular differentiation in [NDCK] depends on how it is implemented and can become quite complicated. In the present invention, the only differentiation, ∂/∂s, is a simple projection subtraction.
Numerical data differentiation results in resolution loss. In [GFDK], [KRND] and the present invention there is no loss in resolution, since the major part of the image is reconstructed from ramp-filtered data, which undergoes no derivative steps.
Table 2 shows the main features of the algorithms under consideration:
Legend:
* Yes, or 1;
◯ Neutral, or .5;
— No, or 0.
X-ray controller 8 supplies a trigger signal to high voltage generator 7. High voltage generator 7 applies high voltage to x-ray source 3 with the timing with which the trigger signal is received. This causes x-rays to be emitted from x-ray source 3. Gantry/bed controller 9 synchronously controls the revolution of rotating ring 2 of gantry 1 and the sliding of the sliding sheet of bed 6. System controller 10 constitutes the control center of the entire system and controls x-ray controller 8 and gantry/bed controller 9 such that, as seen from the subject, x-ray source 3 executes so-called helical scanning, in which it moves along a helical path. Specifically, rotating ring 2 is continuously rotated with fixed angular speed while the sliding plate is displaced with fixed speed, and x-rays are emitted continuously or intermittently at fixed angular intervals from x-ray source 3.
The output signal of two-dimensional array type x-ray detector 5 is amplified by a data collection unit 11 for each channel and converted to a digital signal, to produce projection data. The projection data that is output from data collection unit 11 is fed to reconstruction processing unit 12. Reconstruction processing unit 12 uses the projection data to find backprojection data reflecting the x-ray absorption in each voxel. In the helical scanning system using a cone-beam of x-rays as in the first embodiment, the imaging region (effective field of view) is of cylindrical shape of radius rFOV centered on the axis of revolution. Reconstruction processing unit 12 defines a plurality of voxels (three-dimensional pixels) in this imaging region, and finds the backprojection data for each voxel. The three-dimensional image data or tomographic image data compiled by using this backprojection data is sent to display device 14, where it is displayed visually as a three-dimensional image or tomographic image.
For the purposes of this description we shall define an image to be a representation of a physical scene, in which the image has been generated by some imaging technology. Examples of imaging technology include television or CCD cameras and X-ray, sonar, or ultrasound imaging devices. The initial medium on which an image is recorded could be an electronic solid-state device, a photographic film, or some other device such as a photostimulable phosphor. That recorded image could then be converted into digital form by electronic means (as in the case of a CCD signal) or by mechanical/optical means (as in the case of digitizing a photographic film or digitizing the data from a photostimulable phosphor).
All embodiments of the present invention conveniently may be implemented using a conventional general purpose computer or micro-processor programmed according to the teachings of the present invention, as will be apparent to those skilled in the computer art. Appropriate software may readily be prepared by programmers of ordinary skill based on the teachings of the present disclosure, as will be apparent to those skilled in the software art. In particular, the computer housing may house a motherboard that contains a CPU, memory (e.g., DRAM, ROM, EPROM, EEPROM, SRAM, SDRAM, and Flash RAM), and other optional special purpose logic devices (e.g., ASICs) or configurable logic devices (e.g., GAL and reprogrammable FPGA). The computer also includes plural input devices (e.g., keyboard and mouse) and a display card for controlling a monitor. Additionally, the computer may include a floppy disk drive; other removable media devices (e.g., compact disc, tape, and removable magneto-optical media); and a hard disk or other fixed high density media drives, connected using an appropriate device bus (e.g., a SCSI bus, an Enhanced IDE bus, or an Ultra DMA bus). The computer may also include a compact disc reader, a compact disc reader/writer unit, or a compact disc jukebox, which may be connected to the same device bus or to another device bus.
Examples of computer readable media associated with the present invention include compact discs, hard disks, floppy disks, tape, magneto-optical disks, PROMs (e.g., EPROM, EEPROM, Flash EPROM), DRAM, SRAM, SDRAM, etc. Stored on any one or on a combination of these computer readable media, the present invention includes software both for controlling the hardware of the computer and for enabling the computer to interact with a human user. Such software may include, but is not limited to, device drivers, operating systems, and user applications, such as development tools. Computer program products of the present invention include any computer readable medium which stores computer program instructions (e.g., computer code devices) which, when executed by a computer, cause the computer to perform the method of the present invention. The computer code devices of the present invention may be any interpretable or executable code mechanism, including but not limited to, scripts, interpreters, dynamic link libraries, Java classes, and complete executable programs. Moreover, parts of the processing of the present invention may be distributed (e.g., between (1) multiple CPUs or (2) at least one CPU and at least one configurable logic device) for better performance, reliability, and/or cost. For example, an outline or image may be selected on a first computer and sent to a second computer for remote diagnosis.
The invention may also be implemented by the preparation of application specific integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
The source of image data to the present invention may be any appropriate image acquisition device such as an X-ray machine or CT apparatus. Further, the acquired data may be digitized if not already in digital form. Alternatively, the source of image data being obtained and processed may be a memory storing data produced by an image acquisition device, and the memory may be local or remote, in which case a data communication network, such as PACS (Picture Archiving and Communication System), may be used to access the image data for processing according to the present invention.
Numerous modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described.