The present invention relates to digital data processing and, particularly, to the visualization of three-dimensional and higher-dimensional images in two dimensions, e.g., on a two-dimensional display screen. The invention has application, by way of non-limiting example, in medical imaging, microscopy, and geosciences, to name but a few.
Three-dimensional (3D) volumetric images (also referred to as stacks of 2D images) occur in many disciplines, such as medicine, geosciences, and microscopy. Such 3D images can be acquired using machines such as computed tomography scanners, magnetic resonance imaging devices, or confocal microscopes, or they can be the result of computation.
The visual perception of natural 3D density distributions, like clouds or fire, involves projection onto a 2D plane, the retina. This process can be mimicked using a computer in order to compute a 2D projection of a 3D image and to display that on the computer screen, thus simulating the perception of true physical 3D objects. The 3D image, represented as a scalar function on a 3D volume, can be visualized in a number of ways, for example, by color contours on a 2D slice or by a polygonal approximation to a contour surface. A set of such visualization techniques, commonly known as direct volume rendering, produces a 2D projected image directly from the volume data without intermediate constructs.
Direct volume rendering of a 3D image typically requires some model of the optical properties of that volume, e.g., how the data volume emits, reflects, and scatters light. That model is utilized to compute a 2D projection image, e.g., by evaluating the integrated effects of the optical properties along viewing rays corresponding to pixels in the 2D image. Such evaluations can be very computation intensive, especially for large 3D volume image data.
An object of this invention is to provide improved methods and apparatus for digital data processing and, more particularly, by way of non-limiting example, for image visualization.
More particular objects of the invention are to provide such methods and apparatus as facilitate visualization of three- and higher-dimensional images. A related object is to provide such methods as facilitate such visualization in two dimensions, e.g., on a two-dimensional display screen.
Yet further objects of the invention are to provide such methods and apparatus as can be implemented at lower cost. A related object is to provide such methods and apparatus as can be implemented using standard, off-the-shelf components.
The above objects are among those attained by the invention that provides, in some aspects, methods and apparatus for three-dimensional (and higher-dimensional) volume rendering that exploit the functions of chips or chip sets, boards and/or processor configurations known as “graphics processing units” (GPUs)—or coprocessors providing comparable functions and/or having comparable architectures—to implement fast and accurate volume rendering machines.
Related aspects of the invention provide such methods and apparatus that utilize the programmability of these GPUs (or like coprocessors) by means of so-called pixel shaders and/or vertex shaders to enhance performance (rendering speed) and/or image quality.
In one aspect, the invention provides improvements in a digital data processor of the type that renders three-dimensional (3D) volume image data into a two-dimensional (2D) image suitable for display. The improvements include a graphics processing unit (GPU) that comprises a plurality of programmable vertex shaders that are coupled to a plurality of programmable pixel shaders. One or more of the vertex and pixel shaders are configured to determine intensities of a plurality of pixels in the 2D image as an iterative function of intensities of sample points in the 3D image, through which a plurality of viewing rays associated with those pixels are passed. The pixel shaders compute, for each ray, multiple iteration steps of the iterative function prior to computing respective steps for a subsequent ray.
In a related aspect, one or more of the vertex shaders compute a viewing ray for each pixel in the 2D image based on input parameters, such as a view point and a view direction. One or more of the pixel shaders then determine the intensities of one or more sample points in the 3D image along each ray passed through that image. A pixel shader can determine the intensity of a sample point in the 3D image along a given ray by interpolating intensity values of a plurality of neighboring 3D data points.
In another aspect, at least one of the pixel shaders determines, for a plurality of computed sample points along a portion of a ray, whether those sample points lie within the 3D image data. In some cases, some pixel shaders test whether sample points along a portion of a ray are within the 3D image data set prior to evaluating the iterative function at those points while other (or the same) pixel shaders evaluate the function at sample points along another portion of the ray without such testing. The tests are typically performed at points along a portion of the ray that is more likely to fall beyond the 3D image data.
In further aspects of the invention, one or more of the pixel shaders store the 2D image in an off-screen buffer. The pixel shaders can, then, effect the display of the buffered 2D image by applying another rendering pass thereto. Moreover, the pixel shaders can apply selected filtering operations, such as zoom, anti-aliasing or lower resolution rendering, to the stored 2D image. In some cases, the GPU generates the 2D image by executing instructions implemented thereon via one application programming interface (API), and effects the display of the 2D image, stored in the off-screen buffer, by executing instructions implemented thereon via a different API.
In another aspect of the invention, improvements are provided in an apparatus for computed tomography of the type that renders a 3D volume image data set into a 2D displayed image. The improvements include a graphics processing unit (GPU) comprising a plurality of programmable vertex shaders coupled to a plurality of programmable pixel shaders, which are configured to determine a color of each pixel in the 2D image as an iterative function of intensities and gradients at a plurality of sample points in the 3D image. The sample points are selected along viewing rays associated with the pixels, which extend through the 3D image. At least one of the pixel shaders computes a gradient at one of those sample points based on differences in intensities of a plurality of data points in the 3D image neighboring that sample point.
The pixel shaders can compute gradients by employing central differences or one-sided differences techniques. In some cases, the pixel shaders compute gradients at sample points along a ray in a coordinate system that is rotated relative to a coordinate system in which the 3D image data is represented. The rotated system is preferably chosen such that an axis thereof is aligned along the ray being processed. Once a gradient is computed in the rotated system, the pixel shaders can rotate that gradient back to the initial coordinate system.
In another aspect of the invention, one or more of the vertex shaders and pixel shaders are configured to determine a color of at least one pixel in the 2D image by passing through the 3D image a viewing ray originating from that pixel and locating a point along the ray that is the nearest point to the pixel with an intensity above or below a predefined threshold. At least one of the pixel shaders assigns a color to that pixel as a function of the intensity of that nearest point. Further, for each ray, at least one of the pixel shaders evaluates intensities of multiple sample points along that ray to locate the afore-mentioned nearest point, prior to performing respective evaluations for a subsequent ray.
In a related aspect, one or more of the pixel shaders interpolate intensity values of a plurality of data points in the 3D image that lie in the vicinity of a sample point along a ray so as to evaluate an intensity for that sample point.
In another aspect of the invention, an imaging apparatus is disclosed for rendering a 3D image into a 2D image, which comprises a digital data processor having a central processing unit (CPU) and associated memory in which at least a portion of the 3D image can be stored. The CPU is in communication with a GPU having a plurality of programmable vertex shaders coupled to a plurality of programmable pixel shaders. The CPU partitions the 3D image, or at least a portion thereof, into a plurality of so-called “bricks.” One or more of the vertex shaders and pixel shaders are configured to determine intensities of one or more pixels in the 2D image as an iterative function of intensities of sample points in one or more bricks in the 3D image through which viewing rays associated with those pixels are passed. Any two adjacent bricks preferably have a sufficient overlap such that all points in the 3D image data that are required for evaluating the intensities of the sample points along a ray passing through a brick are located within that brick.
These and other aspects of the invention are evident in the drawings and in the description that follows.
A more complete understanding of the invention may be attained by reference to the drawings in which:
Described below are improved methodologies and apparatus for rendering three-dimensional (3D) volumetric images into two-dimensional (2D) images suitable for 2D display. As noted above, such three-dimensional images occur in many disciplines, such as medicine, geosciences, or microscopy, to name but a few. The 3D images can be acquired by employing imaging systems, such as computed tomography devices, magnetic resonance imaging devices, or confocal microscopes. Alternatively, the 3D images can be the result of theoretical computations.
The visual perception of natural 3D density distributions, like clouds or fire, involves projection onto a 2D plane, namely, the retina. This projection process can be mimicked using a computer in order to compute a 2D projection of a 3D image of physical 3D objects and to display that projection on a computer screen, thus simulating the perception of those objects. This process is commonly known as volume rendering. A set of techniques for volume rendering, known as direct volume rendering, produce a projected image directly from the volume data without intermediate constructs such as contour surface polygons.
The present invention exploits the functional capabilities of chips or chip sets, boards and/or processor configurations known as graphics processing units (GPUs)—or those of coprocessors providing comparable functions and/or comparable architectures—to implement fast and accurate volume rendering, and particularly, direct volume rendering of 3D image data, though the methods and apparatus described herein may also be implemented on general-purpose processors and other special-purpose processors. Still other embodiments use no GPU at all, relying on the CPU and/or other co-processing functionality (such as floating point units, array processors, and so forth) to provide or supplement such processing, all in accord with the teachings hereof.
In the following embodiments, the salient features of the methods and apparatus according to the teachings of the invention are described in connection with 3D images obtained by utilizing an image acquisition device. It should, however, be understood that the teachings of the invention are equally applicable to rendering of 3D images that are generated theoretically or otherwise.
The invention has application, for example, in medical imaging such as computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), and other medical applications.
Turning to the illustrated embodiment,
In one embodiment, those projections are generated in accord with the principles of computed tomography (CT), i.e., with the source 22 at discrete foci on an arc 24 that completely surrounds the volume 18. In another embodiment, those projections are generated in accord with principles of computed tomosynthesis, i.e., with the source 22 at discrete foci along a smaller arc above the object. In some embodiments, the radiation source 22 is an x-ray source and the detector 20 is an x-ray detector, both mounted at opposite ends of a C-arm that rotates about the volume 18. The rotatable C-arm is a support structure that allows rotating the source 22 and the detector 20 around the volume 18, e.g., along a substantially circular arc, to capture a plurality of projection images of the object 16 at different angles. It should, however, be understood that the teachings of the invention can be applied to a plurality of measured projection images regardless of the implementation of the apparatus that generates those projection images.
In view thereof and without loss of generality vis-à-vis these other apparatus with which the invention has application, the apparatus 12 is referred to hereafter as a CAT scanner, its attendant source 22 and detector 20 are referred to as an x-ray source and an x-ray detector, respectively, and the images 14 generated by the detector are referred to as projections.
By way of illustration,
Referring again to
Illustrated digital data processor 26 is a workstation, personal computer, mainframe, or other general or special-purpose computing device of the type conventionally known in the art, albeit adapted as discussed below for processing projections 14. As shown in the drawing, it includes a central processing unit (CPU) 30, dynamic memory (RAM) 32, and I/O section 34, all of the type conventionally known in the art. The digital data processor 26 may be coupled, via I/O section 34, with a monitor or other graphical display or presentation device 28, as shown.
Illustrated digital data processor 26 also includes a graphical processing unit (GPU) 36 that is coupled to the CPU 30, through which it can access the other elements of the digital data processor 26, as shown. The GPU 36 serves, in the illustrated embodiment, as a coprocessor, operating under the control of the CPU 30 to perform a portion, or the totality, of the computations needed for reconstructing a 3D image of the volume based on the measured projection images. Other embodiments of the invention employ multiple GPUs for this purpose, each responsible for a respective portion of the reconstruction process. Further, as discussed in more detail below, the GPU 36 renders the 3D image into a 2D image suitable for 2D display. The GPU 36 is preferably of the variety having programmable vertex shaders and programmable pixel shaders that are commercially available from ATI Research (for example, the Radeon™ 9700 processor) and NVIDIA (for example, the GeForce™ FX and Quadro® processors). However, it will be appreciated that the invention can be practiced with processing elements other than commercially available GPUs. Thus, for example, it can be practiced with commercial, proprietary or other chips, chipsets, boards and/or processor configurations that are architected in the manner of the GPUs (e.g., as described below). It can also be practiced on such chips, chipsets, boards and/or processor configurations that, though of other architectures, are operated in the manner of GPUs described herein.
Components of the digital data processor 26 are coupled for communication with one another in the conventional manner known in the art. Thus, for example, a PCI or other bus 38 or backplane (industry standard or otherwise) may be provided to support communications, data transfer and other signaling between the components 30-36. Additional coupling may be provided among and between the components in the conventional manner known in the art or otherwise.
A typical architecture of the GPU 36 suitable for use in the practice of the invention is shown by way of expansion graphic in
Local memory 44 supports both the short-term and long-term storage requirements of the GPU 36. For example, it can be employed to buffer the projection image data 14, iterative estimates of the density distribution of the volume under reconstruction, forward-projection images generated based on those estimates as well as parameters, constants and other information (including programming instructions for the vector processors that make up the mapping and pixel processing sections).
In the illustrated embodiment, the mapping section 40 comprises a plurality of programmable vertex shaders 60-66 that generate mappings between the coordinate space of the projection images and that of the volume 18 and generate locations of sample points along a viewing ray extending from a pixel of the 2D image through the 3D image volume. For example, the vertex shaders map each pixel in a projection image to one or more voxels in the volume. The pixel processing section comprises a plurality of pixel shaders 80-94 that can perform computations for reconstructing the 3D image as well as rendering the 3D image into a 2D displayed image.
DMA engines 68 and 96 provide coupling between the local bus 46 and, respectively, the vertex shaders 60-66 and pixel shaders 80-94, facilitating access by those elements to local memory 44, interfaces 48, 50, or otherwise. A further DMA engine 98 provides additional coupling between the pixel shaders 80-94 and the bus 46. In addition, filters (labeled “F”) are coupled between the DMA engine 96 and the pixel shaders 80-94, as illustrated. These perform interpolation, anisotropic filtering or other desired functions. Also coupled to the vertex shaders 60-66 are respective iterators (labeled “I”), as illustrated. Each iterator generates addresses (in volume space) for the voxels identified by the corresponding vertex shaders 60-66.
A variety of methodologies can be utilized to generate the 3D image (i.e., reconstruct the volume) from the multiple measured projection images. In some embodiments, reconstruction methods implemented entirely on the GPU, such as those described in the co-pending patent application entitled “Method And Apparatus for Reconstruction of 3D-Image Volumes from Projection Images,” concurrently filed with the present application and herein incorporated by reference, are employed. In other embodiments, the computational tasks for reconstructing the 3D image are shared between the CPU 30 of the digital data processor and the GPU 36. Some exemplary reconstruction methods for generating the 3D image are discussed in the above-referenced patent application entitled “Improved Methods and Apparatus for Back-Projection and Forward-Projection.” Regardless, the teachings of the invention can be employed to visualize the 3D images via direct rendering of the 3D image data, as discussed in more detail below.
In the illustrated embodiment, the GPU renders the 3D volume image into a 2D image that can be displayed on the display device 28. The GPU generates the 2D image pixel-by-pixel, for a given viewing direction, by passing rays corresponding to that viewing direction through the 3D image volume and mapping, e.g., via a transfer function, the intensities at selected points along the rays to the pixels. In general, at each point along a viewing ray, light can be emitted, absorbed, or scattered. In order to reduce complexity, often scattering is neglected. In such a case, each ray can be processed independently from the others.
In many embodiments of the invention, the GPU employs optical models that map the 3D image intensities to emission and absorption coefficients or, more generally, to colors of projected pixels in the 2D image. By way of example, after discretization, the color C(p) of a projected pixel p can be represented as a function of discrete sample points xi along the ray associated with the pixel p as follows:
C(p) = F(x1, x2, . . . ; I, P, . . . )
wherein the function F is referred to herein as the ray formula, I denotes the 3D image, and P represents a set of additional parameters, such as the viewing direction. In many cases, the above ray formula can be cast into an iterative format that defines the ray formula for one or n sample points in terms of the result of previous iterations of the formula for the previously processed sample points. An iterative implementation of the ray formula is as follows:
F(x1, x2, . . . ; I, P, . . . ) = F(x1, . . . , xn, F(x2, . . . , xn+1, F( . . . )))
For many commonly employed emission-absorption optical models, F represents a sum of blended color and opacity values (ci and ai) of the sample points. The color and opacity values can be determined from the intensities of the 3D image by employing an appropriate transfer function or color table. The color values can also be computed or modified by employing a local shading model (e.g., shaded volume rendering). This latter approach requires that a gradient vector at each sample point (xi) be computed.
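By way of non-limiting illustration, the following minimal sketch shows the front-to-back blending to which such an emission-absorption ray formula commonly reduces; the names composite_ray and transfer_function are illustrative only, and the early-termination threshold is merely exemplary:

```python
# Illustrative sketch of front-to-back emission-absorption compositing
# along a single viewing ray. transfer_function maps a scalar sample
# intensity to a color c = (r, g, b) and an opacity a.
def composite_ray(sample_intensities, transfer_function):
    color = [0.0, 0.0, 0.0]              # accumulated color C(p)
    alpha = 0.0                          # accumulated opacity
    for s in sample_intensities:         # samples in front-to-back order
        c, a = transfer_function(s)
        w = (1.0 - alpha) * a            # remaining transparency
        for k in range(3):
            color[k] += w * c[k]
        alpha += w
        if alpha >= 0.99:                # early ray termination (exemplary)
            break
    return color, alpha
```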
In some embodiments, one or more of the vertex shaders compute, for each pixel in the 2D image, a viewing ray for that pixel. The generated ray can be tested against the boundaries of the 3D image volume to ensure that it intersects that volume. At least one of the pixel shaders iteratively computes the above ray formula at a plurality of sample points (generated by at least one of the vertex shaders) along that ray. More specifically, the pixel shader evaluates the intensity of the 3D image at each sample point, e.g., by interpolating the intensity values at a plurality of neighboring 3D image data points. Those evaluated sample point intensities are then employed in the ray formula to obtain a color (intensity) for the pixel associated with that ray as a function of the integrated (sum) color and opacity values at the sample points.
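Such interpolation of neighboring 3D data points can be sketched as follows; the name sample_volume is illustrative only, and the sketch assumes the sample position lies strictly inside the volume (bounds testing is discussed below):

```python
import numpy as np

def sample_volume(volume, x, y, z):
    """Trilinear interpolation of a 3D scalar array at fractional (x, y, z).
    Assumes 0 <= x < nx-1 (and likewise for y, z); bounds tests omitted."""
    i, j, k = int(x), int(y), int(z)     # base voxel indices
    fx, fy, fz = x - i, y - j, z - k     # fractional offsets in [0, 1)
    value = 0.0
    for di in (0, 1):
        wx = fx if di else 1.0 - fx
        for dj in (0, 1):
            wy = fy if dj else 1.0 - fy
            for dk in (0, 1):
                wz = fz if dk else 1.0 - fz
                value += wx * wy * wz * volume[i + di, j + dj, k + dk]
    return value
```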
In some preferred embodiments, one or more of the pixel shaders compute the ray formula for multiple sample points along one ray prior to performing the corresponding computations for a subsequent ray. That is, rather than computing one step of the ray formula for all rays followed by computing a subsequent step for those rays, which would require storing the intermediate values in the GPU local memory 44, multiple steps of the ray formula for one ray are computed prior to evaluating the ray formula for corresponding steps of a subsequent ray, thereby avoiding transfer of intermediate results to the memory. In some such embodiments, the pixel shaders test whether a current sample point along a ray, for which multiple steps of the ray formula are being computed, lies within the 3D image volume. Alternatively, pixel shaders that provide such testing are employed for processing sample points along selected portions of a ray, e.g., those portions in which such testing may potentially be required, and pixel shaders without such testing capability are utilized for processing the remaining sample points along that ray.
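The loop ordering just described can be sketched as follows (illustrative only; volume_sampler stands in for the interpolation sketched above and returns None outside the volume, and transfer_function is as before):

```python
def render_rays(rays, n_steps, step, volume_sampler, transfer_function):
    """Outer loop over rays; the inner loop completes every iteration
    step of one ray before the next ray starts, so the running color and
    opacity remain in local variables (registers on a GPU) rather than
    being written to memory between rendering passes."""
    image = []
    for origin, direction in rays:
        color, alpha = [0.0, 0.0, 0.0], 0.0
        for n in range(n_steps):         # all steps of this ray first
            p = tuple(origin[k] + n * step * direction[k] for k in range(3))
            s = volume_sampler(p)        # None if p is outside the volume
            if s is None:                # bounds test
                continue
            c, a = transfer_function(s)
            w = (1.0 - alpha) * a
            color = [color[k] + w * c[k] for k in range(3)]
            alpha += w
        image.append((color, alpha))
    return image
```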
The looping capabilities of the pixel shaders can be employed to compute the full iterative ray formula for one ray by employing a single invocation, rather than multiple invocations, of one pixel shader.
In some cases, the 3D image volume is subdivided into a plurality of three-dimensional segments known as “bricks.” This subdivision of the 3D image volume can be utilized, e.g., when the 3D image data is too large to be loaded at once into the GPU's local memory. In preferred embodiments, the 3D image volume is subdivided such that the resulting bricks overlap. Further, the overlap between adjacent bricks is chosen to be sufficiently large such that all 3D image points that need to be evaluated when rendering a part of a ray corresponding to a brick are guaranteed to be within that brick.
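By way of non-limiting illustration, the following sketch partitions a single axis of the volume into overlapping bricks; brick_ranges and halo are illustrative names, and the halo is assumed to be at least as large as the support of the interpolation and gradient kernels:

```python
def brick_ranges(size, brick_size, halo):
    """Partition an axis of `size` voxels into bricks of `brick_size`
    voxels, each padded by `halo` voxels of overlap on either side."""
    ranges, start = [], 0
    while start < size:
        lo = max(0, start - halo)
        hi = min(size, start + brick_size + halo)
        ranges.append((lo, hi))
        start += brick_size
    return ranges

# e.g., a 256-voxel axis, 64-voxel bricks, 2-voxel halo:
# brick_ranges(256, 64, 2) -> [(0, 66), (62, 130), (126, 194), (190, 256)]
```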
In some embodiments, the 3D image is rendered into an off-screen buffer. That is, the rendered image (i.e., the color values of the pixels in the 2D image) is stored in the off-screen buffer. The GPU displays the buffered rendered image by executing another rendering pass. This allows the volume rendering to be implemented on the GPU by employing a different GPU application programming interface (API) (e.g., DirectX or OpenGL) than that utilized to display the image. Such off-screen buffering of the rendered image further allows the subsequent rendering pass, which is utilized to display the 2D image, to apply a zoom or other filtering operations to the 2D image data. For example, a lower resolution rendering of the buffered image can be employed to obtain enhanced performance. Other filtering operations applied to the off-screen buffered image can provide, e.g., anti-aliasing or other effects.
In some cases, the GPU employs optical models for rendering the 3D image that require computing a gradient at each sample point along the ray. In such cases, rather than transferring a precomputed image normal volume to the GPU, in many embodiments, the GPU itself computes the gradient values “on-the-fly” while rendering the 3D image. For example, a pixel shader computes a gradient value at a sample point by evaluating differences among 3D image values at multiple locations around the sample point. By way of example, the pixel shader can employ central differences with six evaluations, or one-sided differences with four evaluations (three in addition to the sample point itself).
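By way of non-limiting illustration, a central-differences gradient can be sketched as follows; gradient_central is an illustrative name, and sample may be any scalar field evaluator, e.g., the trilinear interpolation sketched above:

```python
def gradient_central(sample, x, y, z, h=1.0):
    """Central-differences gradient: six extra evaluations of the field."""
    return [
        (sample(x + h, y, z) - sample(x - h, y, z)) / (2.0 * h),
        (sample(x, y + h, z) - sample(x, y - h, z)) / (2.0 * h),
        (sample(x, y, z + h) - sample(x, y, z - h)) / (2.0 * h),
    ]
```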
When utilizing the four-point one-sided differences approach for computing a gradient value on-the-fly, the pixel shader can rotate the coordinate system in which the 3D image is represented so as to align one axis thereof with the viewing ray. The pixel shader can then evaluate the gradient value in the rotated coordinate system. This advantageously results in fewer image evaluations, as a previous sample point evaluation can be re-used for the gradient computation at the current sample point. The pixel shader can transform the resulting gradient vector back to the original coordinate system. Alternatively, the lighting parameters can be modified accordingly.
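One such rotated-frame gradient can be sketched as follows, assuming R is an orthonormal matrix whose rows are the rotated axes (third row along the viewing ray) and that the field value at the previous sample point, one step behind along the ray, is available for re-use; all names are illustrative:

```python
import numpy as np

def gradient_one_sided_rotated(sample, p, R, prev_value, h=1.0):
    """One-sided-differences gradient in a rotated frame whose third axis
    lies along the viewing ray. The previously computed ray sample
    (prev_value, at distance h behind p) replaces one field evaluation,
    so only two evaluations beyond the sample-point value (which the ray
    formula computes anyway) are needed."""
    p = np.asarray(p, dtype=float)
    v0 = sample(*p)                          # value at the sample point
    gx = (sample(*(p + h * R[0])) - v0) / h  # offset along rotated x-axis
    gy = (sample(*(p + h * R[1])) - v0) / h  # offset along rotated y-axis
    gz = (v0 - prev_value) / h               # re-used previous ray sample
    return R.T @ np.array([gx, gy, gz])      # rotate gradient back
```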
In some embodiments, the GPU renders the 3D image into the 2D image by employing an optical model known as isosurface or surface shaded display. In such a case, for each ray, the ray formula returns the color of the nearest point on the ray (nearest to the pixel corresponding to that ray) whose intensity lies above or below a user-defined threshold. Typically, a standard local shading model, such as the Phong model, is used to determine the color of that point. When utilizing such an approach, in preferred embodiments, the pixel shaders perform multiple discrete steps/evaluations and utilize linear or higher order interpolation to detect a threshold crossing along each ray of sight with sub-stepsize accuracy. Alternatively, an iteration method, such as Newton's method, can be employed. The exact position of the threshold crossing is then used for gradient evaluation and/or shading calculation.
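By way of non-limiting illustration, once two consecutive samples straddle the threshold, linear interpolation locates the crossing with sub-stepsize accuracy; refine_crossing is an illustrative name:

```python
def refine_crossing(t0, s0, t1, s1, threshold):
    """Given consecutive ray parameters t0 < t1 whose intensities s0, s1
    straddle the threshold, return the linearly interpolated parameter at
    which the interpolated intensity equals the threshold."""
    f = (threshold - s0) / (s1 - s0)         # fraction of the step
    return t0 + f * (t1 - t0)
```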
In some embodiments, the GPU utilizes a combination of two or more of the above rendering methods to generate a 2D image suitable for display from a 3D image data.
In further embodiments, the GPU sequentially renders a plurality of time-dependent 3D image data sets by employing any of the above methods, or a combination thereof, into a sequence of 2D images that can be displayed in a temporal sequence.
It should be understood that the teachings of the invention are applicable to a wide range of medical (and non-medical) imaging devices and techniques, and are not limited to the illustrated embodiment described above. Those having ordinary skill in the art will appreciate that various modifications can be made to the above illustrative embodiments without departing from the scope of the invention. In view of these, what is claimed is:
The present invention claims priority to a U.S. provisional application entitled “Method and Apparatus for Visualizing Three-dimensional and Higher-dimensional Image Data Sets,” filed Oct. 29, 2004, and having a Ser. No. 60/623,411, the teachings of which are incorporated herein by reference.