Flexible generation of digitally reconstructed radiographs

Information

  • Patent Application
  • Publication Number
    20050238223
  • Date Filed
    September 29, 2004
  • Date Published
    October 27, 2005
Abstract
A system and corresponding method for flexible generation of digitally reconstructed radiograph (DRR) images on a graphics processing unit (GPU) are provided, the system including a processor, an imaging adapter in signal communication with the processor for receiving volumetric data, an integration unit in signal communication with the processor for integrating the volumetric data into incremental line integrals, and a modeling unit in signal communication with the processor for modeling composite line integrals from the incremental line integrals; and the corresponding method including receiving three-dimensional (3D) volumetric data into a graphics pipeline, integrating the 3D volumetric data into incremental line integrals along a plurality of viewing rays, modeling composite line integrals from the incremental line integrals, and adding the incremental line integrals for each composite line integral to form pixels of the DRR image.
Description
BACKGROUND

Digitally Reconstructed Radiographs (DRRs) are simulated two-dimensional (2D) X-ray or portal transmission images, which are computed from three-dimensional (3D) datasets such as computed tomography (CT), megavoltage computed tomography (MVCT), 3D imaging of high-contrast objects using rotating C-arms, and the like. DRRs have many uses in diagnosis, therapy, and treatment workflows, such as patient positioning for radiotherapy, augmented reality, and/or 2D-to-3D registration between pre-surgical data and intra-surgical fluoroscopic images, for example.


DRRs are commonly generated by casting rays through the volumetric datasets and by integrating the intensity values along these rays, which is typically accomplished after passing the intensities through a lookup table that models ray-tissue interactions. Unfortunately, this process is prohibitively slow for real-time or near real-time applications.


Existing methods for DRR generation typically trade off processing speed versus accuracy, and/or require extensive pre-processing of the data. LaRose, for example, proposed an advanced DRR generation algorithm that also used hardware acceleration. Unfortunately, the LaRose algorithm had the drawback of using 2D texture-based volume rendering, which sacrificed accuracy.


Accordingly, what is needed is a system and method for flexible generation of digitally reconstructed radiographs. The present disclosure addresses these and other issues.


SUMMARY

These and other drawbacks and disadvantages of the prior art are addressed by a system and method for flexible generation of digitally reconstructed radiographs.


A system for flexible generation of digitally reconstructed radiograph (DRR) images on a graphics processing unit (GPU) is provided, including a processor, an imaging adapter in signal communication with the processor for receiving volumetric data, an integration unit in signal communication with the processor for integrating the volumetric data into incremental line integrals, and a modeling unit in signal communication with the processor for modeling composite line integrals from the incremental line integrals.


A corresponding method for flexible generation of DRR images on a graphics processing unit (GPU) is provided, including receiving three-dimensional (3D) volumetric data into a graphics pipeline, integrating the 3D volumetric data into incremental line integrals along a plurality of viewing rays, modeling composite line integrals from the incremental line integrals, and adding the incremental line integrals for each composite line integral to form pixels of the DRR image.


These and other aspects, features and advantages of the present disclosure will become apparent from the following description of exemplary embodiments, which is to be read in connection with the accompanying drawings.




BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure teaches a system and corresponding method for flexible generation of digitally reconstructed radiographs, in accordance with the following exemplary figures, in which:



FIG. 1 shows a schematic diagram of a system for flexible generation of digitally reconstructed radiographs in accordance with an illustrative embodiment of the present disclosure;



FIG. 2 shows a flow diagram of a method for flexible generation of digitally reconstructed radiographs in accordance with an illustrative embodiment of the present disclosure; and



FIG. 3 shows a schematic diagram of non-uniform sampling distances due to the projection of a proxy-geometry in accordance with an illustrative embodiment of the present disclosure.




DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The present disclosure describes a novel way to generate Digitally Reconstructed Radiographs (DRRs) of high quality in real-time. DRRs are simulated two-dimensional (2D) transmission images that are computed from three-dimensional (3D) datasets. Exemplary embodiments utilize the programmability offered by graphics processing units (GPUs) to efficiently perform integration along the viewing rays and model many of the physical effects in conventional radiographic image formation. The integral is computed incrementally by summing up rendered slices of the 3D volume to form a composite integral, after converting the values on each slice to total linear attenuation coefficients through post-classification on the GPU.


The DRR projection images may be computed from volumetric data such as magnetic resonance (MR) or computed tomography (CT) images, for example. An exemplary approach includes computing line integrals through a 3D volume along rays that connect the source location to each pixel in the imaging plane.
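
By way of a non-limiting illustration, the following sketch shows the slice-summation idea in NumPy on the CPU rather than in the disclosed GPU pipeline: each slice of the volume is post-classified through a lookup table into linear attenuation coefficients and then added, scaled by the sampling distance, into the composite integral. The volume contents, lookup table, and slice spacing are assumed values chosen only for illustration.

    # Minimal CPU sketch (NumPy), not the disclosed GPU implementation:
    # accumulate a DRR by summing post-classified slices along the viewing axis.
    import numpy as np

    volume = np.random.randint(0, 4096, size=(64, 256, 256))  # synthetic CT-like dataset (assumed)
    lut = np.linspace(0.0, 0.5, 4096)                          # hypothetical post-classification table:
                                                               # stored value -> linear attenuation coefficient
    slice_spacing = 1.0                                        # sampling distance along the viewing axis (assumed)

    drr = np.zeros(volume.shape[1:])
    for z in range(volume.shape[0]):
        mu_slice = lut[volume[z]]           # post-classification of one rendered slice
        drr += mu_slice * slice_spacing     # incremental line integral added to the composite integral

    image = np.exp(-drr)                    # optional: transmission-style intensities

In the disclosed embodiments, the same accumulation is performed by the GPU 140 on rendered, textured slices, as described above and below.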


As shown in FIG. 1, a system for flexible generation of digitally reconstructed radiographs according to an illustrative embodiment of the present disclosure is indicated generally by the reference numeral 100. The system 100 includes at least one processor or central processing unit (“CPU”) 102 in signal communication with a system bus 104. A read only memory (“ROM”) 106, a random access memory (“RAM”) 108, a display adapter 110 having a graphics processing unit (“GPU”) 140 in signal communication with a local RAM 142, an I/O adapter 112, a user interface adapter 114, a communications adapter 128, and an imaging adapter 130 are also in signal communication with the system bus 104. Thus, the display adapter 110 is a separate processing unit that includes the GPU 140, a separate and independent bus system (not shown) and a local RAM 142. A display unit 116 is in signal communication with the system bus 104 via the display adapter 110. A disk storage unit 118, such as, for example, a magnetic or optical disk storage unit is in signal communication with the system bus 104 via the I/O adapter 112. A mouse 120 and a keyboard 122 are in signal communication with the system bus 104 via the user interface adapter 114. A magnetic resonance imaging device 132 is in signal communication with the system bus 104 via the imaging adapter 130.


An integration unit 170 and a modeling unit 180 are also included in the system 100 and in signal communication with the display adapter 110 and the GPU 140. While the integration unit 170 and the modeling unit 180 are illustrated as coupled to the at least one display adapter 110 and GPU 140, these components are preferably embodied in computer program code stored in at least one of the memories 106, 108, 118 and 142, wherein the computer program code is executed by the GPU 140. As will be recognized by those of ordinary skill in the pertinent art based on the teachings herein, alternate embodiments are possible, such as, for example, embodying some or all of the computer program code in registers located on the GPU 140. Given the teachings of the disclosure provided herein, those of ordinary skill in the pertinent art will contemplate various alternate configurations and implementations of the integration unit 170 and the modeling unit 180, as well as the other elements of the system 100, while practicing within the scope and spirit of the present disclosure.


Turning to FIG. 2, a method for flexible generation of digitally reconstructed radiographs according to an illustrative embodiment of the present disclosure is indicated generally by the reference numeral 200. The method 200 includes a start block 210 that passes control to an input block 212. The input block 212 receives 3D volume data, and passes control to a function block 214. The function block 214 pipelines the 3D volume into a graphics processing unit, and passes control to a function block 216. The function block 216 integrates the pipelined data of the 3D volume into a dense set of incremental line integrals at various angles and positions, and passes control to a function block 218. The function block 218 models each composite line integral from a set of incremental line integrals, and passes control to a function block 220. The function block 220, in turn, adds up the values of the appropriate incremental line integrals as stored in a precomputed look-up table, and passes control to an end block 222.


Turning now to FIG. 3, an apparatus for flexible generation of digitally reconstructed radiographs according to an illustrative embodiment of the present disclosure is indicated generally by the reference numeral 300. The apparatus 300 shows the effect on sampling distance when sampling along rays in different directions. Here, exemplary rays project from an origin O in view directions v0 and v1 through a planar proxy geometry. The angular difference between the view directions v0 and v1 is the included angle α. The v0 ray has sampling distances d0 through the planar proxy geometry, while the v1 ray has sampling distances d1 through the planar proxy geometry.
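
For clarity, the geometric relationship illustrated in FIG. 3 can be sketched as follows, assuming planar proxy geometry with uniform plane spacing along the view direction v0; under that assumption, a ray tilted by the included angle relative to v0 traverses a longer distance between consecutive planes. The numbers used below are purely illustrative.

    # Hedged sketch of the FIG. 3 relationship under the stated assumption of
    # parallel proxy planes spaced uniformly along v0.
    import numpy as np

    plane_spacing = 1.0                  # spacing of the proxy planes along v0 (assumed)
    alpha = np.deg2rad(15.0)             # included angle between view directions v0 and v1 (assumed)

    d0 = plane_spacing                   # per-plane sampling distance along the v0 ray
    d1 = plane_spacing / np.cos(alpha)   # per-plane sampling distance along the v1 ray
    print(d0, d1, d1 / d0)               # d1 exceeds d0; the ratio is the correction factor needed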


An exemplary embodiment uses high-level shading languages as known in the art, along with the precision, performance and level of programmability of graphics processing units (GPUs), such as the Nvidia NV30/NV35 architecture, for example, such that complex operations can be performed on the graphics adapter. These GPUs are utilized to perform the operations needed for fast DRR reconstruction. Thus, the GPU 140 is used with the local RAM 142 to generate the DRRs: the 3D dataset is stored as a 3D texture, and the DRR generation method is implemented on the GPU.


A high-precision graphics pipeline for DRR generation is used, and the integral along the viewing rays is computed as a weighted sum of an appropriate collection of rendered geometric elements, such as planes, shells, and the like, which are surface-textured by means of 3D texturing.


Interpolated texture values are converted to total linear attenuation coefficients through post-classification on the GPU. By using the effective energy of the radiation source, exemplary embodiments model Compton and photoelectric interactions. This conversion may be performed through function evaluation, table lookup or texture lookup; and it allows for the generation of images resembling megavolt images with substantially pure Compton interaction, diagnostic X-ray images with substantially pure photoelectric interaction, or the combination at any ratio in between.
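
As a hedged illustration of such a classification at an adjustable ratio of interactions, the following sketch blends two placeholder lookup tables, one standing in for a Compton-dominated (megavolt-like) attenuation curve and one for a photoelectric-dominated (diagnostic-like) curve; neither curve is taken from the disclosure, and both are assumptions made only to show the mixing mechanism.

    # Hedged sketch: post-classification table mixing two placeholder interaction models.
    import numpy as np

    levels = np.arange(4096)                           # stored intensity/texture levels (assumed range)
    mu_compton = 0.02 * (levels / 4095.0)              # placeholder Compton-like curve (assumed)
    mu_photoelectric = 0.08 * (levels / 4095.0) ** 3   # placeholder photoelectric-like curve (assumed)

    def classification_table(ratio):
        # ratio = 1.0 -> substantially pure Compton (megavolt-like image)
        # ratio = 0.0 -> substantially pure photoelectric (diagnostic-like image)
        return ratio * mu_compton + (1.0 - ratio) * mu_photoelectric

    lut_megavolt = classification_table(1.0)
    lut_diagnostic = classification_table(0.0)
    lut_mixed = classification_table(0.5)              # any ratio in between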


Weighting factors may be used to correct for non-uniform sampling distances due to the projection of the proxy-geometry, such as in the perspective projection of planes as indicated in FIG. 3. Simulation and/or correction of geometric distortions may be computed through function evaluation or texture lookup. Simulation and/or correction of intensity distortions may be computed through function evaluation or texture lookup.
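
A minimal sketch of such weighting factors, assuming a perspective projection through parallel planar proxy geometry, is given below; the detector resolution, pixel pitch, and source-to-detector distance are assumed values, and each pixel's incremental integrals would be multiplied by its weight to compensate for the longer path between planes along oblique rays.

    # Hedged sketch: per-pixel weighting factors for a perspective projection through
    # parallel proxy planes (all geometry values are assumptions).
    import numpy as np

    nx, ny = 256, 256            # detector resolution (assumed)
    pitch = 1.0                  # detector pixel pitch (assumed)
    sdd = 1000.0                 # source-to-detector distance (assumed)

    u = (np.arange(nx) - nx / 2 + 0.5) * pitch
    v = (np.arange(ny) - ny / 2 + 0.5) * pitch
    uu, vv = np.meshgrid(u, v)
    cos_theta = sdd / np.sqrt(uu**2 + vv**2 + sdd**2)   # cosine of each ray's angle to the central axis
    weights = 1.0 / cos_theta                            # weighting factor per pixel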


Deformation of the volumetric dataset to support deformable registration and/or visualization of the deformed dataset may be generated through function evaluation or texture lookup. Off-screen rendering, such as rendering to textures or a P-buffer, may be used to make computed DRRs available for further processing, such as image gradient computation, and to achieve independence from limitations of display devices such as limited resolution, precision, and the like.
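
By way of a non-limiting example of deforming the dataset through a lookup of displaced coordinates, the following CPU sketch warps a volume with a synthetic displacement field using trilinear resampling; the field and volume contents are assumptions, and in the disclosed embodiments the equivalent lookup may be performed on the GPU.

    # Hedged sketch: deform a volume by resampling it at displaced coordinates
    # (a CPU analogue of a texture-lookup based deformation).
    import numpy as np
    from scipy.ndimage import map_coordinates

    volume = np.random.rand(32, 64, 64)                      # stand-in volumetric dataset (assumed)
    zz, yy, xx = np.meshgrid(np.arange(32), np.arange(64), np.arange(64), indexing="ij")
    displacement = 2.0 * np.sin(yy / 10.0)                   # toy deformation field along z (assumed)
    coords = np.array([zz + displacement, yy, xx])
    deformed = map_coordinates(volume, coords, order=1)      # trilinear resampling of the deformed dataset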


Simulation of the scattering effect of the beam on the resultant simulated projection image may be performed using a pre-defined point spread function (PSF). The effect of the point spread function can be applied to individual re-sampled textures and/or to the final integrated result. The implementation may take advantage of the GPU to gain computational speed.
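
A minimal sketch of applying a pre-defined PSF to an already integrated DRR is given below; the Gaussian kernel and its width are assumptions made for illustration, since the disclosure only requires a pre-defined PSF, and the same convolution could equally be applied to individual re-sampled textures.

    # Hedged sketch: approximate beam scattering by convolving the final integrated
    # DRR with a pre-defined point spread function (a Gaussian is assumed here).
    import numpy as np
    from scipy.signal import fftconvolve

    drr = np.random.rand(256, 256)              # stand-in for a computed DRR (assumed)

    x = np.arange(-7, 8)
    xx, yy = np.meshgrid(x, x)
    sigma = 2.0                                 # PSF width (assumed)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    psf /= psf.sum()                            # normalize so total intensity is preserved

    scattered = fftconvolve(drr, psf, mode="same")   # PSF applied to the final integrated result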


Similar methodologies can be adopted to generate simulated transmissive projection images from sources with various energies and/or wavelengths. Knowledge of the spatial attenuation of the media through which the source beams pass at the appropriate range of energies, together with the spectra of the source and the scattering properties of the media and the source, is used for modeling. For example, an application embodiment may include efficient generation of near-infrared optical simulation images of a subject with known a priori attributes. Preferably, the integrals should be uniformly spaced in 3D space to generate a substantially perfect DRR image.


In an exemplary embodiment, the line integral fragments are stored in the form of textures within a GPU, and the algorithm is implemented using the graphics hardware capabilities of the GPU. The values of the incremental line integrals may also be stored as textures within the GPU for hardware accelerated DRR generation.


The positions and sizes of subvolumes of the volumetric data set are preferably adapted to the properties of the volumetric data. A hierarchy of blocks with various sizes and overlap amounts is pre-computed and used for DRR reconstructions. Such algorithms can be used for rendering transparent volumes. In addition, each line integral may be pieced together by interpolating the pre-computed incremental line integrals, first among the neighboring subvolumes and second among the neighboring directions within a subvolume.
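
As a hedged sketch of piecing a line integral together from pre-computed incremental integrals, the fragment below linearly interpolates each subvolume's stored incremental integrals among the stored directions and then sums the subvolume contributions along the ray; interpolation among neighboring subvolumes is omitted for brevity, and the number of subvolumes, the stored directions, and the values themselves are assumptions chosen only to show the interpolation-and-summation step.

    # Hedged sketch: composite line integral assembled from pre-computed
    # incremental integrals stored per subvolume and per direction (assumed data).
    import numpy as np

    angles = np.linspace(0.0, np.pi, 16)            # pre-computed directions per subvolume (assumed)
    incremental = np.random.rand(8, angles.size)    # [subvolume, direction] incremental integrals (assumed)

    def composite_integral(theta):
        # Interpolate each subvolume's incremental integral to direction theta,
        # then add the contributions of the subvolumes along the ray.
        per_block = np.array([np.interp(theta, angles, incremental[b])
                              for b in range(incremental.shape[0])])
        return per_block.sum()

    value = composite_integral(np.deg2rad(30.0))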


These and other features and advantages of the present disclosure may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present disclosure may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.


Most preferably, the teachings of the present disclosure are implemented as a combination of hardware and software. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more graphics processing units (GPUs), a random access memory (RAM), and input/output (I/O) interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a GPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.


It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present disclosure is programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present disclosure.


Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present disclosure is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present disclosure. All such changes and modifications are intended to be included within the scope of the present disclosure as set forth in the appended claims.

Claims
  • 1. A method for flexible generation of digitally reconstructed radiograph (DRR) images on a graphics processing unit (GPU), the method comprising: receiving three-dimensional (3D) volumetric data into a graphics pipeline; integrating the 3D volumetric data into incremental line integrals along a plurality of viewing rays; modeling composite line integrals from the incremental line integrals; and adding the incremental line integrals for each composite line integral to form pixels of the DRR image.
  • 2. A method as defined in claim 1 wherein the integral along a viewing ray is a weighted sum of rendered geometric elements that are surface-textured by means of 3D texturing.
  • 3. A method as defined in claim 2, further comprising: interpolating the texture values of the rendered geometric elements; and converting the interpolated texture values into total linear attenuation coefficients through post-classification on the GPU.
  • 4. A method as defined in claim 3, further comprising: modeling at least one of Compton and photoelectric interactions by using the effective energy of the radiation source; and converting the interpolated texture values into total linear attenuation coefficients through at least one of function evaluation, table lookup and texture lookup to generate images reflecting at least one of Compton interaction and photoelectric interaction.
  • 5. A method as defined in claim 1, further comprising correcting for non-uniform sampling distances due to the projection of a proxy-geometry by means of weighting factors.
  • 6. A method as defined in claim 1, further comprising at least one of simulation and correction of geometric distortions.
  • 7. A method as defined in claim 6 wherein the correction is computed through at least one of function evaluation and texture lookup.
  • 8. A method as defined in claim 1, further comprising at least one of simulation and correction of intensity distortions.
  • 9. A method as defined in claim 8 wherein the correction is computed through at least one of function evaluation and texture lookup.
  • 10. A method as defined in claim 1, further comprising: deformation of the volumetric dataset to support at least one of deformable registration and deformable visualization of the deformed dataset.
  • 11. A method as defined in claim 10 wherein the deformation is generated through at least one of function evaluation and texture lookup.
  • 12. A method as defined in claim 1, further comprising: off-screen rendering to at least one of textures and the P-buffer to make computed DRRs available for further processing and achieve independence from limitations of display devices.
  • 13. A method as defined in claim 12 wherein the further processing comprises computation of an image gradient.
  • 14. A method as defined in claim 1, further comprising simulating the scattering effect of a beam on the resultant simulated projection image by using a pre-defined point spread function (PSF).
  • 15. A method as defined in claim 14 wherein the effect of the point spread function is applied on at least one of individual re-sampled textures and the final integrated result.
  • 16. A method for flexible generation of simulated transmissive projection images on a graphics processing unit (GPU), the method comprising: receiving three-dimensional (3D) volumetric data into a graphics pipeline from sources having a plurality of energies and wavelengths; integrating the 3D volumetric data into incremental line integrals along a plurality of viewing rays; modeling composite line integrals from the incremental line integrals; and adding the incremental line integrals for each composite line integral to form pixels of the DRR image.
  • 17. A method as defined in claim 16, wherein the modeling comprises encoding knowledge of the spatial attenuation of the media through which the source beams pass at the appropriate range of energies, and the spectra of the source and the scattering properties of the media and the source.
  • 18. A method as defined in claim 17, further comprising generation of the near infrared optical simulation images of a subject with known a priori attributes.
  • 19. A system for flexible generation of at least one of digitally reconstructed radiograph images and simulated transmissive projection images, comprising: a processor; an imaging adapter in signal communication with the processor for receiving volumetric data; an integration unit in signal communication with the processor for integrating the volumetric data into incremental line integrals; and a modeling unit in signal communication with the processor for modeling composite line integrals from the incremental line integrals.
  • 20. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform program steps for flexible generation of at least one of digitally reconstructed radiograph (DRR) images and simulated transmissive projection images on a graphics processing unit (GPU), the program steps comprising: receiving three-dimensional (3D) volumetric data into a graphics pipeline; integrating the 3D volumetric data into incremental line integrals along a plurality of viewing rays; modeling composite line integrals from the incremental line integrals; and adding the incremental line integrals for each composite line integral to form pixels of the DRR image.
CROSS-REFERENCE TO RELATED APPLICATION

The present application claims the benefit of U.S. Provisional Application Ser. No. 60/564,149 (Attorney Docket No. 2004P06565US), filed Apr. 21, 2004 and entitled “Flexible DRR Generation using Programmable Computer Hardware”, which is incorporated herein by reference in its entirety.

Provisional Applications (1)
  • Number: 60/564,149
  • Date: Apr. 2004
  • Country: US