The subject matter disclosed herein relates generally to systems and methods for x-ray imaging. More particularly, the subject matter disclosed herein relates to non-rotating, stationary gantry and methods for imaging a subject or object of interest. X-ray imaging is widely used in many areas including medical diagnostics and treatment, industrial inspection and testing and security screening, such as luggage screening at airports.
Stationary Gantry Computed Tomography (SGCT) systems exhibit features that are typically not present in more conventional scanners. In an SGCT system, the relative position of the source and the detector changes depending upon the location of the source. Additionally, SGCT systems may have a ring of sources that are fired non-sequentially, which leads to the absence of a “source trajectory” in the conventional sense.
With Stationary Gantry CT systems, it is impossible to collect complete tomographic data sufficient for theoretically exact reconstruction. In this case, a typical approach to reconstruction is based on an appropriate generalization of the Feldkamp, Davis, Kress (FDK) algorithm. The key simplifying assumption in FDK-type algorithms is that the object is z-independent, where z is the axis of rotation understood in a general sense. The assumption of z-independence is frequently violated and leads to strong cone beam artifacts. These artifacts may lead to incorrectly reconstructed CT numbers, which, in turn, may lead to incorrect interpretation of the reconstructed CT images. One way to overcome this problem is to use an iterative reconstruction algorithm. This, however, is not practical in applications where high throughput and/or low computational load are required. Hence the need arises for a more accurate reconstruction algorithm that is computationally efficient, reduces cone beam artifacts, and provides more accurate CT numbers.
Accordingly, what is needed in the art is an improved system and method for image reconstruction that addresses the unique features present in SGCT-type systems.
In various embodiments, the present invention provides an improved system and method for stationary gantry computed tomography (SGCT) image reconstruction. The present invention provides a novel No View Differentiation (NVD) algorithm, which can be applied to SGCT-type systems.
The family of reconstruction algorithms, in accordance with the various embodiments of the present invention, involve a flexible weight function. Although the algorithms use view differentiation, their mathematically equivalent formulations that are of the NVD type can be obtained by integrating by parts. Weight optimization in the reconstruction algorithms is also used to address SGCT-specific tasks that do not arise with conventional scanners.
In one embodiment, the present invention provides a computer-implemented method for reconstructing images from tomographic cone beam data, which includes receiving tomographic cone beam data of an object collected by a scanner, analyzing the tomographic cone beam data at each point of a plurality of reconstruction points, wherein the tomographic cone beam data comprises one or more deficiencies, selecting a weight function for a non-iterative image reconstruction algorithm that overcomes the one or more deficiencies in the tomographic cone beam data as related to the plurality of reconstruction points and applying the non-iterative image reconstruction algorithm using the selected weight function for each of the plurality of reconstruction points to reconstruct the image of the object from the tomographic cone beam data.
The one or more deficiencies addressed by the present invention may be a result of irregular view sampling, variation in source noise, variation in local spatial resolution, and variation due to a non-periodic source trajectory. Additionally, the one or more deficiencies in the tomographic cone beam data may be a result of a changing position of an x-ray source of the scanner relative to a position of an x-ray detector of the scanner during a scan of the object. The weight function is selected to overcome these deficiencies.
In an additional embodiment, the present invention provides an image reconstruction system for reconstructing images from tomographic cone beam data. The system includes at least one data processor for receiving tomographic cone beam data of an object collected by a scanner and for analyzing the tomographic cone beam data at each point of a plurality of reconstruction points, wherein the tomographic cone beam data comprises one or more deficiencies. The data processor is further for selecting a weight function for a non-iterative image reconstruction algorithm that overcomes the one or more deficiencies in the tomographic cone beam data as related to the plurality of reconstruction points and for applying the non-iterative image reconstruction algorithm using the selected weight function for each of the plurality of reconstruction points to reconstruct the image of the object from the tomographic cone beam data.
In another embodiment, the present invention provides one or more non-transitory computer-readable media having computer-executable instructions for performing a method of running a software program on a computing device for reconstructing images from tomographic cone beam data, the computing device operating under an operating system, the method including issuing instructions from the software program comprising, receiving tomographic cone beam data of an object collected by a scanner. The method further includes issuing instructions comprising, analyzing the tomographic cone beam data at each point of a plurality of reconstruction points, wherein the tomographic cone beam data comprises one or more deficiencies, selecting a weight function for a non-iterative image reconstruction algorithm that overcomes the one or more deficiencies in the tomographic cone beam data as related to the plurality of reconstruction points and applying the non-iterative image reconstruction algorithm using the selected weight function for each of the plurality of reconstruction points to reconstruct the image of the object from the tomographic cone beam data.
The present invention provides an improved system and method for image reconstruction that addresses the unique features present in SGCT-type systems. The freedom of choice in the weight function accommodates the peculiarities of SGCT scanners and improves the quality of the reconstructed images.
For a fuller understanding of the invention, reference should be made to the following detailed description, taken in connection with the accompanying drawings, in which:
In various embodiments, the present invention provides a new image reconstruction method, which can be applied to Stationary Gantry Computed Tomography (SGCT) systems. The method is referred to throughout this document as the No View Differentiation (NVD) reconstruction algorithm. In various embodiments, a family of algorithms is described that can be cast into the NVD form. These algorithms use redundancy weighting, which can be used to optimize image quality.
With reference to
With reference to
We start in the setting of 2D tomography. Let ƒ ({right arrow over (x)}), {right arrow over (x)}=(x1, x2), denote a function, and {circumflex over (ƒ)}(α, p) denote its Radon transform.
Consider reconstruction of a given slice, e.g. x3=0, from the SGCT data. Two key features of the SGCT system are (1) Each view is collected at a different x3 value, and (2) The fan angle is fairly large. From (1) we conclude that differentiation between x-rays collected at different source positions should be avoided. From (2) we conclude that the approximation inherent in reconstruction, which uses ramp filtering of fan beam projections, may lead to noticeable artifacts.
Let {right arrow over (y)} denote the source position, and Θ=(cos θ, sin θ) denote the fan angle. Consider the following integral
Rewrite the result of (1) in a more general form. Pick some direction Θ0=(cos θ0, sin θ0). Then (1) implies:
Aside from the factor t−{right arrow over (y)}·Θ0 in the numerator, the right side of (2) resembles the ramp filtered Radon transform of ƒ.
Suppose there are two points on the source trajectory such that {right arrow over (y)}2={right arrow over (y)}1+DΘ0, where D=|{right arrow over (y)}2−{right arrow over (y)}1|. Clearly,
p0:={right arrow over (y)}1·Θ0⊥={right arrow over (y)}2·Θ0⊥, (3)
and
{right arrow over (y)}1=t1Θ0+p0Θ0⊥, {right arrow over (y)}2=t2Θ0+p0Θ0⊥ (4)
for some t1 and t2. We apply (2) two times: with ({right arrow over (y)}1, Θ0) and ({right arrow over (y)}2, −Θ0). This gives:
Adding the two equations in (5) gives
Equation (6) is an exact way of computing the ramp-filtered Radon transform of ƒ from the data at two sources. Notice from (2) that the derivative and the Hilbert transform are applied only within each view (i.e., not between views).
Using (6) we will now obtain the final inversion formula. The conventional parallel-beam inversion formula is given by
where H denotes the Hilbert transform:
In view of (2), denote:
Combining (2), (6), and (9), we obtain:
where {right arrow over (y)}1 and {right arrow over (y)}2 are points on the source trajectory determined from the following conditions:
{right arrow over (x)}={right arrow over (y)}1+t1Θ={right arrow over (y)}2−t2Θ, t1,t2>0. (11)
In particular, the chord [{right arrow over (y)}1, {right arrow over (y)}2] is parallel to Θ and contains {right arrow over (x)}. Denote
D({right arrow over (x)},θ)=|{right arrow over (y)}2−{right arrow over (y)}1|=t1+t2. (12)
Clearly,
G({right arrow over (y)}1,θ)=G({right arrow over (y)}2,θ+π),D({right arrow over (x)},θ)=D({right arrow over (x)},θ+π). (13)
Substituting (9) gives the first inversion formula:
Integration with respect to the polar angle θ is not convenient from the computational point of view. Instead, integration with respect to the parameter along the source trajectory is preferred. Suppose {right arrow over (y)}(s) is some parametrization of the source trajectory. By a geometric argument,
where cos α>0 is the angle between {right arrow over (y)}′(s) and ({right arrow over (x)}−{right arrow over (y)}(s))⊥. Thus,
Substituting (17) into (15) we obtain
where S is the parametric interval that describes the source trajectory. Equation (18) is the final inversion formula. It holds under the assumption that the source trajectory is convex.
For a flat panel detector, integration with respect to the parameter along the detector plane is more convenient than integration along the polar angle γ. To do so, define
u=R tan γ, (19)
where u is the coordinate along the detector (with the origin at the central ray position), and R is the distance from {right arrow over (y)} to the detector plane. Then, after changing variables, the final equation for the flat panel detector is
Here u0({right arrow over (x)}, s) is the u-coordinate of the point where the ray with vertex at {right arrow over (y)}(s) passing through {right arrow over (x)} hits the detector plane.
As expected, reconstruction formulas (18), (20) involve no view differentiation. For this reason, the resulting algorithm is called No View Differentiation (NVD) reconstruction.
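As an illustration, the coordinate change (19) between the fan angle γ and the flat-detector coordinate u can be sketched as follows; the source-to-detector distance R and the γ grid below are hypothetical values, not parameters given in the text:

```python
import numpy as np

# Sketch of the flat-detector coordinate change u = R * tan(gamma) from (19).
# R and the gamma grid are illustrative, not values from the text.
R = 500.0                                        # assumed source-to-detector distance
gamma = np.linspace(-np.pi / 3, np.pi / 3, 601)  # fan angles over the projected ROI

u = R * np.tan(gamma)        # detector coordinate, origin at the central ray

# The inverse map recovers gamma from u, as needed when resampling projections
# between the angular and flat-detector parametrizations.
gamma_back = np.arctan(u / R)
```

The map is monotone over (−π/2, π/2), so resampling between the two parametrizations is one-to-one.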
In general, the derivation above is one of several possible ways to derive one possible 2D NVD algorithm. Certainly, other derivation approaches and other 2D NVD algorithms exist. This multitude of reconstruction algorithms is due to the fact that 2D tomographic data are redundant (there exist so-called range conditions), and different algorithms treat the redundant data in different ways.
Here we describe another family of NVD algorithms, which may have an advantage over the one previously described. In terms of our notation (cf. (18)), the formula (26) of F. Noo, M. Defrise, R. Clackdoyle, et al., “Image reconstruction from fan-beam projections on less than a short scan,” Physics in Medicine and Biology 47, 2525-2546 (2002) reads:
where θ=θ({right arrow over (x)}, s) solves the equation
Also, w({right arrow over (x)}, θ) is the weight that controls the utilization of redundant information. It can be almost any function that satisfies the normalization condition:
w({right arrow over (x)},θ)+w({right arrow over (x)},θ+π)=1 (23)
for all reconstruction points {right arrow over (x)} and all angles θ∈[0, 2π). Integrating by parts in (21) with respect to s, we can remove the view-derivative, thereby obtaining a family of NVD algorithms.
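A minimal numerical check of the normalization condition (23) is shown below; the particular weight (a constant plus an odd harmonic) is a hypothetical example, since almost any function satisfying (23) is admissible:

```python
import numpy as np

# Any weight of the form 0.5 plus a function that changes sign under a shift by pi
# satisfies w(theta) + w(theta + pi) = 1. sin(theta) is one such choice.
def w(theta):
    return 0.5 + 0.25 * np.sin(theta)

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
total = w(theta) + w(theta + np.pi)   # should be identically 1
```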
Let us consider the role of w in more detail. Mathematically, w({right arrow over (x)}, θ) is the weight with which the filtered data at the source {right arrow over (y)}(s) contributes to the image reconstructed at {right arrow over (x)}. Here {right arrow over (y)}(s) is such that it satisfies the last equation in (21). Pick any {right arrow over (x)} and θ∈[0, 2π), and find the pair of sources {right arrow over (y)}(s1), {right arrow over (y)}(s2) such that {right arrow over (x)} is on the chord [{right arrow over (y)}(s1), {right arrow over (y)}(s2)], and
The data at {right arrow over (y)}(s1) and {right arrow over (y)}(s2) contribute the same information to the image at {right arrow over (x)}; in the context of helical CT, the points {right arrow over (y)}(s1) and {right arrow over (y)}(s2) are called π-partners. Hence, the freedom in the choice of w gives us one possible way of accommodating the peculiarities of the scanner and improving image quality. For example, it may turn out that view sampling in a neighborhood of {right arrow over (y)}(s1) is finer than view sampling in a neighborhood of {right arrow over (y)}(s2). Therefore, we can increase w({right arrow over (x)}, θ) (the weight for the contribution of {right arrow over (y)}(s1)), and decrease w({right arrow over (x)}, θ+π) (the weight for the contribution of {right arrow over (y)}(s2)). Other factors that may affect the weight can include, for example:
In the SGCT case of the present invention, the reconstruction method is applied to cone beam data, in which neighboring sources collect views at fairly different locations along the axial (x3) direction. Hence, differentiation between views may lead to artifacts that would not be present if all the views were along a smooth curve. In a similar fashion, even though integration by parts leads to a mathematically equivalent expression, the result may be a numerically non-equivalent formula due to various cone beam approximations.
Denoting w({right arrow over (x)}, s):=w({right arrow over (x)}, θ({right arrow over (x)}, s)) and integrating by parts in (21) we obtain
Similarly to (20), the analogue of (25) for the flat detector is
For image reconstruction for the SGCT, we select w so as to reduce streaks due to irregular view sampling. In irregular view sampling, the angles between neighboring line segments formed between a particular reconstruction point and a source are not equiangular. When some of the angles are large and some are small, the view sampling is irregular, which produces undesirable artifacts in the reconstructed images.
To address the problem with irregular view sampling, we compute an availability map for every image plane. Let p denote the 2D index of an image pixel in the plane, and j denote the source index. The availability map ms(p, j) contains binary information for each pixel-source pair (p, j). The map shows whether the image pixel p is visible (i.e., projects on the detector) from the source j:
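A minimal sketch of such an availability map follows; the circular source ring, its radius, and the maximum fan angle are hypothetical stand-ins for the actual SGCT geometry:

```python
import numpy as np

# Binary availability map in the spirit of (27): entry (j, p) is True when image
# pixel p projects onto the detector from source j. Here a source ring of radius
# r_src and a detector subtending fan angles up to gamma_max are assumed.
def availability_map(pixels, src_angles, r_src=500.0, gamma_max=np.pi / 6):
    src = r_src * np.stack([np.cos(src_angles), np.sin(src_angles)], axis=1)  # (J, 2)
    d = pixels[None, :, :] - src[:, None, :]     # rays from each source to each pixel
    c = -src / r_src                             # central-ray direction (toward origin)
    cos_g = np.einsum('jpk,jk->jp', d, c) / np.linalg.norm(d, axis=2)
    return cos_g >= np.cos(gamma_max)            # inside the fan -> visible

pixels = np.array([[0.0, 0.0], [0.0, 450.0]])    # center pixel and an off-center pixel
m = availability_map(pixels, np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False))
```

In this toy geometry the center pixel is visible from every source, while the off-center pixel is invisible from some sources; this is exactly the per-pixel sampling information from which the weight w is later built.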
Using the availability map described in (27), a periodic Gaussian mixture density function in the angular domain ρ(p, θ) is first computed for each image pixel p. The Gaussian mixture approach uses the discrete availability map and obtains a smooth function on the unit circle parametrized by θ:
where θ(p,j)=θ({right arrow over (x)}p, sj). To make sure that the weight w satisfies the normalization condition (23), we use the formula:
where tw adjusts the weight bias between π-partners. The π-partners 300 relative to the source trajectory are illustrated with reference to
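The construction above can be sketched as follows; the wrapped-Gaussian width, the exponent n, and the sample source angles are hypothetical choices, not parameters given in the text:

```python
import numpy as np

# Periodic (wrapped) Gaussian mixture density on the circle, built from the available
# source angles theta(p, j), and the resulting pi-partner weight in the spirit of (43).
def angular_density(theta, avail_angles, sigma=0.2, wraps=3):
    rho = np.zeros_like(theta, dtype=float)
    for k in range(-wraps, wraps + 1):           # wrap Gaussians around the circle
        diff = theta[:, None] - avail_angles[None, :] + 2.0 * np.pi * k
        rho += np.exp(-0.5 * (diff / sigma) ** 2).sum(axis=1)
    return rho

def weight(theta, avail_angles, n=3):
    rho = angular_density(theta, avail_angles)
    rho_star = angular_density(theta + np.pi, avail_angles)  # pi-partner density
    return rho**n / (rho**n + rho_star**n)

theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
avail = np.array([0.1, 0.2, 0.3, 2.9])   # finer sampling near theta = 0.2
wt = weight(theta, avail)
```

By construction the weight favors the better-sampled member of each π-partner pair while preserving the normalization condition (23).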
Even though in some cases one cannot talk about the source trajectory in the classical sense (e.g., in the case when stationary sources are fired non-sequentially), the sources themselves may be arranged along a curve. Furthermore, when sources are projected onto a plane, they also may appear to lie on a curve. Therefore, in any of these more general senses (i.e., ignoring the firing order, for example), we can still talk about source trajectory.
In irregular view-sampling, given a continuous section of the source trajectory and a reconstruction point x, the point x may not be visible on the detector from every source in that section. There can be a subset of sources, distributed in an irregular manner over that section, for which the point x is not visible on the detector (“invalid” sources). For some sections of the trajectory, the number of these invalid sources can be small, and for other sections the number of these invalid sources can be large. Therefore, we can say that some portions of the source trajectory (as projected onto a plane) are well sampled, and some portions of the source trajectory are poorly sampled. Accordingly, the weight function w is designed to overcome the artifacts resulting from this irregular view-sampling.
Even though a CT detector is usually a 2D surface, when projected onto a proper plane the detector will appear as a curve. In this sense we may say that the detector is located on a curve.
The detector in the SGCT scanner has a shape which does not allow for filtering in native detector coordinates. The requirement of shift-invariant convolution filtering imposes a restriction on the shape of the detector. The two most common shapes that satisfy the restriction are circular (centered on the source) and flat. Also, the data need to lie on a uniform grid in order to implement the convolutions using the fast Fourier transform (FFT). Hence, to compute the Hilbert transforms in (18) and (20) using the FFT, one must project the data onto a virtual detector, and then interpolate the projected data to a uniform grid. The following steps are performed to apply the Hilbert transform to the data projected onto the virtual detector: For each source location,
Three different virtual detector geometries have been tested:
Accurate ramp filtering for the circular detector using the FFT requires zero-padding of the data so that the total number of points becomes the smallest power of 2 greater than 2Neff, where Neff is the number of effective detector pixels. Let the projected ROI be confined within [−Γ0, Γ0], and the corresponding zero-padded array be confined within [−Γ, Γ]. For simplicity, we will first discuss how to perform Hilbert filtering. Applying the ramp kernel is completely analogous.
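The padding rule can be sketched as below; N_eff = 600 reproduces the grid sizes discussed later in the text:

```python
# Zero-padded FFT length for accurate convolution filtering: the smallest power
# of 2 strictly greater than 2 * N_eff effective detector pixels.
def padded_length(n_eff):
    n = 1
    while n <= 2 * n_eff:
        n *= 2
    return n

n_pad = padded_length(600)   # for N_eff = 600 effective pixels
```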
To avoid aliasing, one approach to computing the Hilbert kernel for the circular detector in the discrete frequency domain is given by:
where Si(·) is the sine integral function, and u′ is defined as
Notice that in (20) the range of angles over which the convolution is computed (after zero-padding) should be strictly inside [−π, π]. Otherwise the Hilbert kernel on the circle, 1/sin(γ), will have an additional singularity at the point γ=π. On the other hand, the condition that the number of uniform grid points and the number of the original data points over the projected ROI be equal may lead to the uniform grid becoming overly large. Indeed, suppose the number of the original data points is Neff=600 in the range [−π/3, π/3]. Then the number of the uniform grid nodes over the same range should also be 600. So the uniform grid needs to be enlarged to 1024 (the closest higher power of 2), and then doubled again to 2048 due to zero padding. This means that the span of the uniform grid can be more than three times (2048/600≈3.4) larger than the span of the projected ROI, thereby easily violating the requirement to stay inside [−π, π]. Hence we would be required to use a denser uniform grid than is optimal, thereby reducing image quality.
Another, better way to address this angle range issue is to separate the kernel length from the zero-padded domain length. Let Γƒ=2Γ0 be the half-length of the finite-support Hilbert kernel (hereinafter the “truncated Hilbert kernel”); then the original equation (30) is modified as:
Using (32) we can set the number of points on the uniform grid within the projected ROI to Neff without violating the maximum range restriction. The advantage of (32) over (30) is that we truncate the Hilbert kernel to the minimal length that is sufficient to avoid aliasing (Γƒ=2Γ0) instead of using the standard length [−Γ, Γ].
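As a minimal illustration of FFT-based Hilbert filtering on a uniform grid, the idealized frequency response −i·sign(f) is applied below; the truncated kernels (30) and (32) refine this idealized kernel to control aliasing:

```python
import numpy as np

# Idealized discrete Hilbert filter: multiply by -i*sign(f) in the frequency domain.
def hilbert_fft(g):
    f = np.fft.fftfreq(g.size)
    return np.real(np.fft.ifft(np.fft.fft(g) * (-1j) * np.sign(f)))

t = np.arange(256)
g = np.cos(2.0 * np.pi * 8.0 * t / 256.0)
h = hilbert_fft(g)   # the Hilbert transform of cos is sin
```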
The ramp kernel is obtained similarly to H(j) by the formula
Next we discuss how to perform integration with respect to s. Two novel points that do not usually arise in conventional CT reconstruction need to be addressed: (1) the source trajectory in the SGCT scanner has a discontinuous tangent vector {right arrow over (y)}′(s), and (2) due to the non-sequential source firing pattern, the sampling of the source trajectory is non-uniform. As an example, we will consider the inversion formula (20). The formula (18) can be considered in an analogous fashion.
Fix a reconstruction point {right arrow over (x)}. Let sjk, k=0, 1, . . . , be the subset of sources for which the point {right arrow over (x)} is “visible”. Thus, the corresponding views are used for backprojection to {right arrow over (x)}. Denote:
Then, (20) can be written as follows
Here ϕk (s) is a piecewise-linear interpolating kernel, which vanishes outside the interval [sjk−1, sjk+1] and satisfies ϕk(sjk)=1.
SGCT's source trajectory is piecewise-linear, with changes in direction happening only at some sj (but not between two consecutive sj). Consider the last integral in (35). Since source sampling is expected to be fairly sparse, there is no guarantee that sjk and sjk+1 lie on the same linear piece of the trajectory.
The sum on the right in (37) is over all pieces of the source trajectory located between the sources sjk and sjk+1.
Here sj* is the mid-point of the interval [sj, sj+1]. Since ϕ is a piecewise linear function, the integral on the right can be easily evaluated by applying the formula for the area of the trapezoid:
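The trapezoid evaluation can be sketched as follows; the kernel values and node positions below are illustrative, not taken from the text:

```python
import numpy as np

# Exact integral of a piecewise-linear kernel phi over [s_a, s_b] with one kink at
# the midpoint s_mid: the sum of two trapezoid areas.
def trapezoid_piecewise(phi_a, phi_mid, phi_b, s_a, s_mid, s_b):
    return (0.5 * (phi_a + phi_mid) * (s_mid - s_a)
            + 0.5 * (phi_mid + phi_b) * (s_b - s_mid))

# Illustrative check against a dense numerical quadrature of the same kernel.
s = np.linspace(0.0, 1.0, 1001)
phi = np.interp(s, [0.0, 0.5, 1.0], [0.2, 1.0, 0.4])
exact = trapezoid_piecewise(0.2, 1.0, 0.4, 0.0, 0.5, 1.0)
numeric = (0.5 * (phi[:-1] + phi[1:]) * np.diff(s)).sum()
```

Because ϕ is piecewise linear, the two-trapezoid formula is exact, so no quadrature error is introduced at the trajectory kinks.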
Combining (39) with (34) gives an appropriate way for dealing with the discontinuity of {right arrow over (y)}′ when computing the integral with respect to s in (20). The weighted NVD (26) can be computed in the same manner. Define
Then, (26) can be written as follows
The second issue to be addressed is the direction of filtering. Again, for simplicity, consider a flat virtual detector. The curved detector can be considered similarly. We propose to filter along the u coordinate in the uz coordinate system. More precisely, each row of the physical detector (which belongs to the appropriate plane z=const) is represented by a line of data points in the uz plane. Here z is the coordinate on the detector surface, which is along the direction perpendicular to detector rows. The derivative+Hilbert filtering in (20) will be performed along each such line separately. Consider now the backprojection step. Pick a reconstruction point ({right arrow over (x)}, x3) and a source position sj. Of course, the source sj needs to be valid/active for the given point. To find the backprojection value, we interpolate in the uz plane. The value u({right arrow over (x)}, sj) is found by projecting {right arrow over (x)} onto the virtual detector. Note that the value of u is independent of the x3-coordinate of the reconstruction point. The value of z(({right arrow over (x)}, x3),sj) is found by projecting ({right arrow over (x)}, x3) onto the actual (or, physical) detector.
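The backprojection lookup described above can be sketched with a bilinear interpolation in the (u, z) plane; the grids and the filtered data array below are hypothetical:

```python
import numpy as np

# Bilinear lookup in the (u, z) plane, standing in for the backprojection sampling
# described above; `filtered` holds one filtered projection, rows indexed by z.
def backproject_sample(filtered, u_grid, z_grid, u_x, z_x):
    iu = np.clip(np.searchsorted(u_grid, u_x) - 1, 0, u_grid.size - 2)
    iz = np.clip(np.searchsorted(z_grid, z_x) - 1, 0, z_grid.size - 2)
    tu = (u_x - u_grid[iu]) / (u_grid[iu + 1] - u_grid[iu])
    tz = (z_x - z_grid[iz]) / (z_grid[iz + 1] - z_grid[iz])
    f00, f01 = filtered[iz, iu], filtered[iz, iu + 1]
    f10, f11 = filtered[iz + 1, iu], filtered[iz + 1, iu + 1]
    return (1 - tz) * ((1 - tu) * f00 + tu * f01) + tz * ((1 - tu) * f10 + tu * f11)

u_grid = np.linspace(-1.0, 1.0, 11)
z_grid = np.linspace(0.0, 1.0, 6)
filtered = z_grid[:, None] + u_grid[None, :]     # a linear test image, f(u, z) = u + z
val = backproject_sample(filtered, u_grid, z_grid, 0.3, 0.7)
```

On a linear test image the lookup is exact, which is a quick sanity check for the interpolation indices.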
In an exemplary reconstruction with the 2D algorithm, the system is assumed to be monoenergetic, noiseless, and with an ideal focal spot. The phantom is assumed motionless, consisting of a single material with unit attenuation coefficient.
To better illustrate the artifacts in each image, the lower and upper compressed gray scale images in the ranges [−0.2, 0.2] and [0.8, 1.2] are also provided for the flat virtual detector with smooth transition, see
Finally, horizontal and vertical profiles along 5 different lines are plotted in
In the following discussion, only the original 2D NVD results with noiseless data are shown and discussed. Results with noisy data reconstructed using the weighted NVD, NVD3D, and the advanced NVD3D are available.
Our first observation is that spatial resolution is both location- and direction-dependent. In the future, this information can be used to guide and optimize post-processing.
Ideally, the best detector surface is the one that makes the projected detector pixels equispaced on the virtual detector. On the other hand, none of the surfaces that lead to convolution filtering can create uniform grid data without interpolation (see
With a virtual flat detector, there is no restriction on the total range of the uniform grid. The reason is that in this case the filtering step uses convolution with the Hilbert kernel on the line, 1/u (cf. (20)). Based on our numerical experiments, the optimal uniform grid stepsize is obtained from the condition that over the projected ROI range, the number of uniform grid points and the number of the original, projected data points Neff are equal. Introducing the truncated ramp kernel on the virtual circular detector also allows us to select the uniform grid with the optimal stepsize, which reduces interpolation errors. This can be seen by comparing
As can be seen from
By comparing the error image in
There is another factor that needs to be mentioned. Since parameters of the uniform grid are different for each source position (as opposed to conventional rotating gantry based CT), the ramp kernel needs to be computed for each source. The operation complexity of computing the ramp kernel in (33) is O(N2), which makes the circular virtual detector method much slower than the flat virtual detector methods (unless the filter is precomputed). In the flat detector case, computation of the Hilbert kernel does not involve numerical integration and is much faster. Considering the computational cost and overall accuracy, it appears that the flat virtual detector with smooth transition is the best choice.
The FFT is an essential part of analytic CT reconstruction that speeds up the inversion process considerably. Therefore, enabling the FFT with non-uniform data is very important. Such methods are called non-uniform FFT (NUFFT) methods.
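For illustration, the operation a NUFFT library accelerates is the nonuniform discrete Fourier transform below; this direct O(N·M) loop is only a reference implementation, while practical NUFFT methods use gridding plus an FFT:

```python
import numpy as np

# Direct type-1 nonuniform DFT: samples c_j at nonuniform points x_j in [0, 2*pi)
# are mapped to Fourier coefficients f_k. This is the operation NUFFT accelerates.
def nudft(x, c, n_modes):
    k = np.arange(-(n_modes // 2), (n_modes + 1) // 2)
    return np.exp(-1j * np.outer(k, x)) @ c

# On uniformly spaced points the result must agree with the ordinary FFT.
n = 8
x = 2.0 * np.pi * np.arange(n) / n
c = np.arange(n, dtype=float)
f_nu = nudft(x, c, n)       # modes k = -4..3
f_fft = np.fft.fft(c)       # modes k = 0..7, where k >= n/2 aliases to k - n
```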
This exemplary embodiment assumes rectangular shapes of the source and detector, and x-ray transmission that is monoenergetic, exhibits no scatter and is noiseless. Additionally, each ray is assumed to be an average of 5×5 pencil beams over each detector pixel. Two phantoms, cylinders (2D) and ellipsoids (3D), having a unit attenuation coefficient are considered in the exemplary reconstruction.
In the first pass, image pixels are projected onto the physical detectors. 2D projection information is provided and the source availability for each image pixel is then stored in an array.
In the second pass, the source availability data stored in the first pass is used to compute the angular Gaussian mixture density for each pixel location, and the weighting function is computed using the following formula:
w({right arrow over (x)},s)=ρn({right arrow over (x)},s)/(ρn({right arrow over (x)},s)+ρn({right arrow over (x)},s*)), (43)
where s* is the pi-partner of s, as previously illustrated in
In a third pass, the FDK, cosine-weighted, 1D-filtered, 3D-back-projection is performed on the flat virtual detector domain, wherein the back-projection for each view is weighted based upon the source availability and the weighting function stored in the array during the first and second passes.
The ray availability in nine different image locations, plotted as radial distance along the ray angle centered at each image location, is illustrated in
w({right arrow over (x)},s)=ρn({right arrow over (x)},s)/(ρn({right arrow over (x)},s)+ρn({right arrow over (x)},s*)),n=3, (44)
w({right arrow over (x)},s)+w({right arrow over (x)},s*)=1(normalization condition). (45)
The invented method incorporating a weighting function may be represented as follows:
Here, one ramp-kernel convolution and one Hilbert-kernel convolution (which can be applied, e.g., in the frequency domain) are multiplied by some weights. Each of the integrals with respect to y is a filtering step, and each of the integrals with respect to s is a backprojection step. The overall algorithm is of the Filtered Back-Projection (FBP) type: filtering is followed by backprojection.
Cylindrical phantom reconstruction results using the same data are illustrated in
The error map of the reconstruction results of
Ellipsoids phantom reconstruction results using the same data are illustrated in
Ellipsoids phantom reconstruction results using the same data are illustrated in
Ellipsoids phantom reconstruction results using the same data are illustrated in
The error map of the reconstruction results of
The error map of the reconstruction results of
The error map of the reconstruction results of
As illustrated by the exemplary embodiment, implementation of the new formula with optimized weighting results in a significant reduction of streaks caused by angular undersampling.
In various embodiments, the present invention provides a new image reconstruction NVD algorithm, which can be applied to SGCT-type systems. We also present a family of reconstruction algorithms, which involve a fairly flexible weight function. Even though these algorithms use view differentiation, by integrating by parts we can obtain their mathematically equivalent formulations that are of the NVD type. The SGCT system has features that are typically not present in more conventional scanners: e.g., non-sequential firing of sources (leading to the absence of a “source trajectory” in the conventional sense), a stationary detector, relative positions of the source and detector that change from one source to the next, etc. Hence, weight optimization can be designed to address SGCT-specific tasks that do not arise with conventional scanners (and are, therefore, not anticipated in the state of the art in the field). We then show how any of these 2D algorithms can be extended to reconstruction from cone-beam data, leading to the NVD3D and advanced NVD3D algorithms.
The present invention may be embodied on various computing platforms that perform actions responsive to software-based instructions. The following provides an antecedent basis for the information technology that may be utilized to enable the invention.
The computer readable medium described in the claims below may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire-line, optical fiber cable, radio frequency, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C#, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
It will be seen that the advantages set forth above, and those made apparent from the foregoing description, are efficiently attained, and since certain changes may be made in the above construction without departing from the scope of the invention, it is intended that all matters contained in the foregoing description or shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
It is also to be understood that the following claims are intended to cover all of the generic and specific features of the invention herein described, and all statements of the scope of the invention which, as a matter of language, might be said to fall therebetween.
This application claims priority to U.S. Provisional Patent Application No. 62/663,802 entitled, “System and Method for Stationary Gantry Computed Tomography (CT) Image Reconstruction”, filed on Apr. 27, 2018, which is incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
62663802 | Apr 2018 | US