Currently claimed embodiments of this invention relate to systems and methods for x-ray fluoroscopy, and more particularly to motion correction for digital subtraction angiography.
Digital subtraction angiography (DSA) is a common x-ray imaging technique in which two x-ray projection (sometimes referred to as “fluoroscopic”) images are acquired—one without administration of (iodine) contrast in blood vessels, and one with administration of (iodine) contrast in blood vessels. Subtraction of the two in principle yields a two-dimensional (2D) projection image of the (iodine contrast-enhanced) blood vessels, with surrounding anatomy—such as surrounding bones and other soft tissues—extinguished via subtraction. Ideally, the DSA image provides clear visualization of the (contrast-enhanced) blood vessels.
The image without administration of contrast (non-contrast-enhanced, NCE) is often called the “mask” image. The image with administration of contrast (contrast-enhanced, CE) is often called the “live” image. The subtraction of the two is called the DSA image. The discussion below focuses on 2D DSA—i.e., a subtraction yielding a 2D projection of CE vasculature [1]. However, the discussion can be extended to other types of DSA techniques such as 3D DSA or 4D DSA.
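The arithmetic can be summarized by the following minimal sketch, assuming mask and live projections stored as log-transformed (line-integral) images of identical size acquired at the same pose; the function names are illustrative only.

```python
import numpy as np

def dsa_subtract(mask_proj: np.ndarray, live_proj: np.ndarray) -> np.ndarray:
    """Conventional 2D DSA: difference of the non-contrast 'mask' projection and
    the contrast-enhanced 'live' projection, both in the line-integral (log)
    domain and acquired at the same C-arm pose.  In the ideal, motion-free case
    all anatomy except the iodinated vessels cancels."""
    if mask_proj.shape != live_proj.shape:
        raise ValueError("mask and live images must have the same shape")
    return mask_proj - live_proj  # residual signal is the contrast-enhanced vasculature

def to_line_integral(intensity: np.ndarray, i0: float) -> np.ndarray:
    """Convert a raw detector intensity image to line integrals, -ln(I / I0)."""
    return -np.log(np.clip(intensity / i0, 1e-6, None))
```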
An important assumption underlying the subtraction in conventional DSA is that no patient motion has occurred between the acquisition of the two images. However, it is common in clinical practice that the patient undergoes various forms of voluntary or involuntary motion. Such motion causes a "mis-registration" of the two 2D images and results in "artifacts" in the resulting DSA image. Even small amounts of involuntary motion (for example, ~1 mm) are sufficient to create strong motion artifacts in the DSA image, since sharp image gradients (edges) associated with surrounding anatomy (especially bones) are sensitive to even small degrees of motion. As a result, motion artifacts are a common confounding factor in 2D DSA, severely diminishing the visibility of contrast-enhanced vessels and often necessitating multiple retakes of the two images to obtain a clean (motion-free) subtraction.
Motion artifacts are a common confounding factor that diminishes clear visualization of vessels in interventional radiology, neuroradiology, and cardiology (e.g., treatment of ischemic or hemorrhagic stroke via intravascular stenting and/or coiling). Given the frequency with which motion artifacts occur, the clinical value of 2D motion compensation is anticipated to be very high. Accordingly, there remains a need for improved motion correction techniques for DSA.
An embodiment of the invention is an angiography system. The angiography system includes a table configured to support a subject, and a C-arm configured to rotate around the table, the C-arm including a two-dimensional X-ray imaging system. The angiography system also includes a display arranged proximate the table so as to be visible by a user of the angiography system, and a processing system communicatively coupled to the two-dimensional X-ray imaging system and the display. The processing system is configured to receive, from the two-dimensional X-ray imaging system, contrast-enhanced two-dimensional X-ray imaging data of a region of the subject's body containing vasculature of interest and acquired after administration of an X-ray contrast agent to at least a portion of said vasculature, the contrast-enhanced two-dimensional X-ray imaging data corresponding to a position and orientation of the X-ray imaging system relative to the region of the subject's body. The processing system is further configured to receive, from a three-dimensional X-ray imaging system, three-dimensional X-ray imaging data of the region of the subject's body acquired prior to administration of the X-ray contrast agent to at least the portion of the vasculature. The processing system is further configured to generate, from the three-dimensional X-ray imaging data, a two-dimensional mask of the region of the subject's body, the mask comprising simulated non-contrast-enhanced two-dimensional X-ray imaging data that corresponds to the position and orientation of the X-ray imaging system relative to the region of the subject's body. The processing system is further configured to generate a vasculature image of the region of the subject's body, by subtracting the contrast-enhanced two-dimensional X-ray imaging data from the two-dimensional mask, and provide the vasculature image on the display.
Another embodiment of the invention is a method for digital subtraction angiography. The method includes receiving, from a two-dimensional X-ray imaging system, contrast-enhanced two-dimensional X-ray imaging data of a region of a subject's body containing vasculature of interest and acquired after administration of an X-ray contrast agent to at least a portion of said vasculature, the contrast-enhanced two-dimensional X-ray imaging data corresponding to a position and orientation of the X-ray imaging system relative to the region of the subject's body. The method also includes receiving, from a three-dimensional X-ray imaging system, three-dimensional X-ray imaging data of the region of the subject's body acquired prior to administration of the X-ray contrast agent to at least the portion of the vasculature. The method further includes generating, from the three-dimensional X-ray imaging data, a two-dimensional mask of the region of the subject's body, the mask comprising simulated non-contrast-enhanced two-dimensional X-ray imaging data that corresponds to the position and orientation of the X-ray imaging system relative to the region of the subject's body. The method further includes generating a vasculature image of the region of the subject's body, by subtracting the contrast-enhanced two-dimensional X-ray imaging data from the two-dimensional mask, and providing the vasculature image on a display.
Another embodiment of the invention is a non-transitory computer-readable medium storing a set of computer-executable instructions for digital subtraction angiography. The set of instructions includes instructions to receive, from a two-dimensional X-ray imaging system, contrast-enhanced two-dimensional X-ray imaging data of a region of a subject's body containing vasculature of interest and acquired after administration of an X-ray contrast agent to at least a portion of said vasculature, the contrast-enhanced two-dimensional X-ray imaging data corresponding to a position and orientation of the X-ray imaging system relative to the region of the subject's body. The set of instructions also includes instructions to receive, from a three-dimensional X-ray imaging system, three-dimensional X-ray imaging data of the region of the subject's body acquired prior to administration of the X-ray contrast agent to at least the portion of the vasculature. The set of instructions further includes instructions to generate, from the three-dimensional X-ray imaging data, a two-dimensional mask of the region of the subject's body, the mask comprising simulated non-contrast-enhanced two-dimensional X-ray imaging data that corresponds to the position and orientation of the X-ray imaging system relative to the region of the subject's body. The set of instructions further includes instructions to generate a vasculature image of the region of the subject's body, by subtracting the contrast-enhanced two-dimensional X-ray imaging data from the two-dimensional mask, and provide the vasculature image on a display.
Further objectives and advantages will become apparent from a consideration of the description, drawings, and examples.
Some embodiments of the current invention are discussed in detail below. In describing embodiments, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected. A person skilled in the relevant art will recognize that other equivalent components can be employed, and other methods developed without departing from the broad concepts of the current invention. All references cited anywhere in this specification, including the Background and Detailed Description sections, are incorporated by reference as if each had been individually incorporated.
Some embodiments of the invention provide a technique to reduce or eliminate motion artifacts in 2D DSA. In some embodiments, the source of the non-contrast-enhanced (NCE) "mask" image is a three-dimensional (3D) image of the patient (cf. a conventional 2D projection mask image). An accurate 2D mask is computed via 3D to 2D (3D2D) image registration to mitigate patient motion that may have occurred, together with a high-fidelity forward projection model (to convert the 3D image to a 2D mask image) to better reflect physical signal characteristics of the 2D "live" image (e.g., X-ray spectral effects, X-ray scatter, and image blur).
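At a high level, the approach can be sketched as follows, where `register_3d2d` and `high_fidelity_forward_project` are hypothetical placeholders for the registration and forward-projection components discussed below.

```python
import numpy as np

def motion_corrected_dsa(volume_nce, live_ce, geometry,
                         register_3d2d, high_fidelity_forward_project):
    """Sketch of motion-corrected DSA using a 3D non-contrast volume as the mask source.

    volume_nce : 3D non-contrast image of the patient (e.g., CBCT/MDCT voxels)
    live_ce    : 2D contrast-enhanced fluoroscopic frame
    geometry   : nominal C-arm projection geometry for the live frame
    The two callables stand in for the 3D2D registration and the high-fidelity
    forward projector described in the text.
    """
    # (1) Estimate the 6/9-DoF pose that aligns the 3D volume with the live frame.
    pose = register_3d2d(volume_nce, live_ce, geometry)

    # (2) Simulate a 2D non-contrast mask at that pose, modeling spectrum,
    #     scatter, and detector blur so it matches the live image's signal.
    mask2d = high_fidelity_forward_project(volume_nce, geometry, pose)

    # (3) Subtract to obtain the motion-corrected DSA image.
    return mask2d - live_ce
```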
The processing system 130 receives, from the 2D X-ray imaging system 115, 117, contrast-enhanced 2D X-ray imaging data of a region of the subject's body 107 containing vasculature of interest. The contrast-enhanced data is acquired after administration of an X-ray contrast agent to at least a portion of the vasculature, and corresponds to a position and orientation of the X-ray imaging system 115, 117 relative to the region of the subject's body 107.
The processing system 130 also receives, from a 3D imaging system (not shown), 3D X-ray imaging data of the region of the subject's body 107 acquired prior to administration of the X-ray contrast agent to at least the portion of the vasculature.
The processing system 130 uses the 3D imaging data to generate a 2D mask of the region of the subject's body 107, the mask being simulated non-contrast-enhanced 2D X-ray imaging data that corresponds to the position and orientation of the X-ray imaging system relative to the region of the subject's body. The processing system 130 also generates a vasculature image of the region of the subject's body 107, by subtracting the contrast-enhanced 2D X-ray imaging data from the simulated 2D mask, and provides the vasculature image on the display 120.
In some embodiments, the processing system 130 generates the 2D mask by registering the 3D imaging data to the contrast-enhanced 2D X-ray imaging data, and projecting the registered 3D imaging data to generate the 2D mask. In some embodiments, registering the 3D imaging data to the contrast-enhanced 2D X-ray imaging data includes using a neural network to solve a transformation between the 3D imaging data and the contrast-enhanced 2D X-ray imaging data, where the neural network is trained on previously acquired imaging data from other subjects, simulated data, or any combination thereof. In other embodiments, registering the 3D imaging data to the contrast-enhanced 2D X-ray imaging data includes using an accelerated iterative optimization technique based on a rigid motion model.
In some embodiments, registering the 3D imaging data includes using a physical model of the 2D X-ray imaging system to match signal characteristics of the 2D mask to signal characteristics of the 2D X-ray imaging data. The physical model includes at least one of an X-ray spectrum model, an X-ray attenuation model, an X-ray scatter model, an x-ray focal spot size, an antiscatter grid model, a detector model, and a scintillator model.
In some embodiments, the processing system 130 receives, from the 2D X-ray imaging system, non-contrast-enhanced 2D X-ray imaging data of the region of the subject's body 107 that was acquired prior to administration of the X-ray contrast agent. The non-contrast-enhanced 2D X-ray imaging data corresponds to a different position and orientation of the X-ray imaging system relative to the region of the subject's body, due to motion of the subject (or equivalently, error in positioning the subject's body 107) between acquisition of the non-contrast-enhanced 2D X-ray imaging data and the contrast-enhanced 2D X-ray imaging data. The processing system 130 generates an initial vasculature image of the region of the subject's body 107, by subtracting the contrast-enhanced 2D X-ray imaging data from the non-contrast-enhanced 2D X-ray imaging data. This initial vasculature image may be contaminated by an artifact arising from the motion of the subject. The processing system 130 provides the initial vasculature image on the display 120, and provides a user interface control to send a request to correct motion artifacts in the initial vasculature image. The processing system 130 generates the subsequent vasculature image (by subtracting the contrast-enhanced 2D X-ray imaging data from the simulated 2D mask) only after receiving a request to correct any visible motion artifacts from the user interface control.
In some embodiments, the angiography system 100 also has a second C-arm (not shown). In some such embodiments, the second C-arm includes the 3D imaging system used to acquire the 3D imaging data.
In other such embodiments, the second C-arm includes a second 2D X-ray imaging system. The processing system 130 receives, from the second 2D X-ray imaging system, additional contrast-enhanced 2D X-ray imaging data of the region of the subject's body 107 containing vasculature of interest and acquired after administration of the X-ray contrast agent, the additional contrast-enhanced 2D X-ray imaging data corresponding to a different position and orientation of the X-ray imaging system relative to the position and orientation of the first 2D X-ray imaging system. The processing system 130 then generates, from the 3D imaging data, a second 2D mask of the region of the subject's body, the second mask comprising simulated non-contrast-enhanced 2D X-ray imaging data that corresponds to the different position and orientation of the second 2D X-ray imaging system. The processing system generates a second vasculature image of the region of the subject's body 107, by subtracting the additional contrast-enhanced 2D X-ray imaging data from the second 2D mask, and provides the second vasculature image on the display alongside the first vasculature image.
The conventional methodology for formation of a 2D DSA image is shown in the top half of the corresponding figure: (1) acquisition of a 2D NCE "mask" image, (2) acquisition of a 2D CE "live" image, and (3) subtraction of the two to form the DSA image.
An example embodiment is illustrated in the bottom half of the figure and involves the following steps: (1) acquisition of a 3D NCE image of the patient to serve as the mask; (2) acquisition of the 2D CE "live" image; (3) 3D2D registration of the 3D NCE image to the 2D live image; and (4) a high-fidelity forward projection of the registered 3D NCE image to form the 2D NCE mask, which is then subtracted from the live image to yield the (motion-corrected) DSA image.
In some embodiments, the user 125 (e.g., a physician performing the procedure on a patient) begins with standard 2D DSA, until motion of the patient results in a DSA image exhibiting motion artifacts that challenge clear visualization of vessels and interventional devices. At that point the user 125 may select (e.g., on a user interface control of the angiography or fluoroscopic system) to invoke the motion correction technique. In other words, the user 125 proceeds until step (2) of the conventional technique (acquisition of the 2D CE live image, which is the same in both halves of the figure), and then invokes the registration and high-fidelity forward projection steps of the motion correction technique in place of the conventional subtraction.
In some embodiments, the new technique is automatically invoked without user intervention, by automated detection of motion (or misregistration) artifacts in the DSA image. Various algorithms may be employed for the automated artifact recognition, including but not limited to a “streak detector” algorithm.
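By way of illustration only, one simple gradient-based proxy for such automated artifact detection might look like the sketch below; this is not the specific "streak detector" algorithm, and the thresholds are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage

def motion_artifact_score(dsa: np.ndarray, threshold: float = 0.05) -> float:
    """Crude misregistration score: fraction of pixels with strong edge energy.

    A motion-free DSA of vessels is mostly flat outside the vasculature, so a
    large amount of high-gradient content (bone-edge "streaks") suggests
    misregistration.  This is only a proxy for an automated artifact detector.
    """
    gx = ndimage.sobel(dsa, axis=1)
    gy = ndimage.sobel(dsa, axis=0)
    edge_energy = np.hypot(gx, gy)
    return float(np.mean(edge_energy > threshold * edge_energy.max()))

def motion_detected(dsa: np.ndarray, score_limit: float = 0.10) -> bool:
    """Flag the frame for motion correction when the artifact score is high."""
    return motion_artifact_score(dsa) > score_limit
```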
In some embodiments, the motion correction technique involves only one 3D2D registration. Specifically, the motion correction method involves two primary computational steps: (1) a 3D2D registration of the 3D NCE mask image to the 2D live CE image, and (2) a high-fidelity forward projection of the 3D NCE mask according to the 6 DoF or 9 DoF pose resulting from (1). Each step can present a large computational burden, posing challenges for fast, real-time operation. However, in the event that patient motion has occurred and steps (1) and (2) are performed, subsequent live 2D CE images may be subtracted from the previously computed high-fidelity forward projection of the registered 3D mask, and neither the registration nor the high-fidelity forward projection needs to be recomputed. As a result, subsequent motion-corrected DSA images are acquired without re-computation of the registration or forward projection, allowing DSA to proceed in real time. Only in the event that patient motion is again observed (or automatically detected) is there a need to recompute the 3D2D registration and high-fidelity forward projection.
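This reuse can be captured by a small cache, sketched below with hypothetical helper callables for the registration and forward projection; only the final 2D subtraction is repeated per frame.

```python
class MoCoMaskCache:
    """Cache the registered pose's high-fidelity mask so that subsequent live
    frames require only a 2D subtraction, until new motion is observed."""

    def __init__(self, volume_nce, geometry, register_3d2d, hf_forward_project):
        self.volume_nce = volume_nce
        self.geometry = geometry
        self.register_3d2d = register_3d2d            # placeholder: 3D2D registration
        self.hf_forward_project = hf_forward_project  # placeholder: HFFP projector
        self.mask2d = None

    def dsa_frame(self, live_ce, motion_suspected: bool):
        # Recompute registration + forward projection only when motion is
        # observed (or automatically detected); otherwise reuse the cached mask.
        if self.mask2d is None or motion_suspected:
            pose = self.register_3d2d(self.volume_nce, live_ce, self.geometry)
            self.mask2d = self.hf_forward_project(self.volume_nce, self.geometry, pose)
        return self.mask2d - live_ce
```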
In some scenarios, artifacts in the DSA image may be caused not by patient motion but by the inability (non-reproducibility) to position the x-ray projection imager (e.g., C-arm) in the same position for the conventional 2D NCE mask image and the 2D CE live image. Even small discrepancies in the positioning reproducibility of the imager can result in significant artifacts in the DSA image. The method referred to generally herein as "motion correction" applies equally to this scenario of image positioning or repositioning, the scenario of patient motion, or any combination thereof.
For step (3), some embodiments involve an iterative optimization solution of the x-ray imaging system geometry (referred to as the "pose" of the imaging system) for 3D2D registration, consisting of: a motion model (for example, 6 or 9 degree-of-freedom rigid-body motion); an objective function that quantifies the similarity of (a) a 2D projection of the 3D image and (b) the live image from step (2) (e.g., based on gradient information); and an iterative optimizer (e.g., gradient descent or CMA-ES) that minimizes (or maximizes) the objective function. The utility of the motion compensation is greatly increased by 3D2D registration techniques that are not only accurate and robust but also very fast (e.g., computing the registration in less than 1 second).
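For illustration, a minimal sketch of such an iterative registration loop follows. It assumes a fast ray-tracing projector `simple_drr` (a hypothetical placeholder) and uses the open-source `cma` package for the CMA-ES optimizer; the similarity measure shown is a simplified gradient-based stand-in rather than the exact gradient orientation objective.

```python
import numpy as np
import cma  # open-source CMA-ES implementation (pip install cma)

def gradient_similarity(drr: np.ndarray, live: np.ndarray) -> float:
    """Simplified gradient-based similarity (a stand-in for the GO metric)."""
    gd = np.gradient(drr)
    gl = np.gradient(live)
    num = np.abs(gd[0] * gl[0] + gd[1] * gl[1])
    den = np.hypot(gd[0], gd[1]) * np.hypot(gl[0], gl[1]) + 1e-9
    return float(np.mean(num / den))

def register_3d2d(volume, live, geometry, simple_drr, sigma0=1.0):
    """Solve a 6-DoF rigid pose (tx, ty, tz, rx, ry, rz) by maximizing the
    similarity between a ray-traced DRR of the volume and the live image."""
    def cost(pose):
        drr = simple_drr(volume, geometry, pose)   # fast ray-tracing projection
        return -gradient_similarity(drr, live)     # CMA-ES minimizes, so negate

    es = cma.CMAEvolutionStrategy(np.zeros(6), sigma0)
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [cost(np.asarray(p)) for p in candidates])
    return np.asarray(es.result.xbest)             # best 6-DoF pose found
```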
The iterative optimization approach tends to be computationally intensive, and in some embodiments employs "acceleration" techniques to run faster. For example, acceleration techniques for 3D reconstruction based on iterative optimization can be employed as described in reference [3], incorporated herein by reference. For the 3D reconstruction problem, the acceleration technique reduces runtimes from hours to ~20 seconds. For the proposed 3D2D registration problem, acceleration techniques were found to reduce runtime from minutes to under 10 sec. These techniques include, for example:
A hierarchical pyramid, in which the iterations proceed in "coarse-to-fine" stages and various factors (e.g., pixel size and optimization parameters) are changed from one level to the next.
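A coarse-to-fine flow of this kind might be organized as in the sketch below, where `register_at_level` is a hypothetical single-resolution solver (for example, the CMA-ES loop sketched above) that is warm-started from the previous level.

```python
import numpy as np
from scipy.ndimage import zoom

def register_coarse_to_fine(volume, live, geometry, register_at_level,
                            downsample_factors=(4, 2, 1)):
    """Multi-resolution ("pyramid") registration: solve the pose on heavily
    downsampled images first, then refine at progressively finer scales,
    warm-starting each level with the previous level's estimate."""
    pose = np.zeros(6)
    for f in downsample_factors:
        live_level = zoom(live, 1.0 / f) if f > 1 else live
        # register_at_level is a placeholder for a single-resolution solver
        # that accepts an initial pose estimate and the current pixel scale.
        pose = register_at_level(volume, live_level, geometry,
                                 init_pose=pose, pixel_scale=f)
    return pose
```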
In some embodiments, the 3D2D registration can be solved using deep-learning-based techniques in which a neural network learns to solve for the (6 DoF or 9 DoF) transformation between the 3D and 2D images. This technique tends to be intrinsically fast and can solve the 3D2D registration in ~10 sec or less. For example, the neural network may be a convolutional neural network (CNN) trained using a database of previous 3D and 2D datasets (e.g., previously acquired patient data).
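A minimal PyTorch sketch of such a learned pose regressor is given below; the two-channel input (a DRR of the 3D image stacked with the 2D live image) and the small architecture are illustrative assumptions rather than a prescribed design.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """CNN that regresses a 6-DoF rigid transform from a two-channel input
    (a DRR of the 3D mask volume stacked with the 2D live image)."""

    def __init__(self, n_params: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_params)  # (tx, ty, tz, rx, ry, rz)

    def forward(self, drr: torch.Tensor, live: torch.Tensor) -> torch.Tensor:
        x = torch.cat([drr, live], dim=1)     # [B, 2, H, W]
        return self.head(self.features(x).flatten(1))

# Training would minimize, e.g., an L1/L2 loss between the predicted pose and
# the known pose over a database of acquired or synthesized image pairs.
```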
In some embodiments, the training data is not acquired but synthesized using data from the same imaging modality or from different imaging modalities. This includes instances in which the training data is generated (for example) from a multidetector computed tomography (MDCT) volume, segmented, and forward projected using a high-fidelity forward projection technique similar to the embodiments used to generate the mask for DSA motion compensation. Other embodiments include an analogous method that uses a high-fidelity forward projector with a digital phantom instead of a CT volume, or a pre-existing neural network (such as a generative adversarial network, or GAN) to generate simulated X-ray data. In general, the training data need not be a pre-acquired dataset from an X-ray CBCT system, but may instead be synthesized from a variety of data sources.
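One way such synthetic training pairs could be produced is sketched below, where `forward_project` is a hypothetical placeholder for a (high-fidelity) projector that accepts a 6 DoF pose, and the perturbation ranges are arbitrary.

```python
import numpy as np

def synthesize_training_pair(volume, geometry, forward_project,
                             max_shift_mm=5.0, max_rot_deg=5.0, rng=None):
    """Generate one (input images, target pose) training example by forward
    projecting a segmented CT/MDCT volume or digital phantom at a randomly
    perturbed pose."""
    rng = np.random.default_rng() if rng is None else rng
    pose = np.concatenate([
        rng.uniform(-max_shift_mm, max_shift_mm, 3),  # translations (mm)
        rng.uniform(-max_rot_deg, max_rot_deg, 3),    # rotations (deg)
    ])
    drr_nominal = forward_project(volume, geometry, np.zeros(6))  # network input 1
    live_like = forward_project(volume, geometry, pose)           # network input 2
    return (drr_nominal, live_like), pose                         # pose is the label
```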
For step (4), the 2D simulated projection is not computed via the relatively simple forward projection techniques common in the scientific literature (e.g., Siddon ray tracing) for computing a digitally reconstructed radiograph (DRR). Instead, the 2D simulated projection of some embodiments is a "high-fidelity forward projection" (HFFP) calculation that includes a model of important physical characteristics of the imaging chain, for example, the x-ray beam energy spectrum, energy-dependent attenuation, x-ray scatter, and detector blur. Use of a high-fidelity forward projection to produce the 2D mask image better ensures that the 2D mask image matches the signal characteristics of the live image from step (2), whereas a simple DRR would result in non-stationary (spatially varying) biases that produce residual artifacts in the subtraction image.
In some embodiments, the model of the imaging system used by the high-fidelity forward projector, referred to as the physical model, incorporates numerous variables in order to compute a highly realistic projection image with signal characteristics that closely match x-ray fluoroscopy projection images. The resulting projection image is termed "high fidelity" because it is nearly indistinguishable from a real image. These variables include, but are not limited to, the x-ray spectrum, energy-dependent attenuation, x-ray scatter, the focal spot size, the antiscatter grid, and the detector and scintillator response (including blur, lag, and glare).
Failure to include these factors in the forward projection image simulation (as with a conventional DRR) would result in numerous signal discrepancies from the live image resulting from inaccurate oversimplifications in the assumptions described above. Even in the absence of motion, the resulting DSA would be littered with subtraction artifacts resulting from discrepancies in signal magnitude, energy-dependent absorption effects, spatially varying x-ray scatter distributions, and mismatch to the blur characteristics (edge resolution) of the real detector.
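The composition of such a forward model can be sketched as follows; every component callable is a hypothetical placeholder standing in for the physics modules named above (spectrum, energy-dependent attenuation, scatter, detector blur, gain, and noise), and the single blur kernel and simple Poisson noise are simplifications.

```python
import numpy as np
from scipy.signal import fftconvolve

def high_fidelity_forward_project(volume, geometry, pose, spectrum,
                                  trace_polyenergetic, estimate_scatter,
                                  detector_blur_kernel, gain=1.0, rng=None):
    """Sketch of a high-fidelity forward projection (HFFP): polyenergetic
    primary signal + scatter estimate + detector blur + quantum noise."""
    # (i) Energy-dependent primary: attenuate each spectral bin along each ray.
    primary = trace_polyenergetic(volume, geometry, pose, spectrum)

    # (ii) Scatter fluence reaching the detector (e.g., a Monte Carlo estimate).
    scatter = estimate_scatter(volume, geometry, pose, spectrum)

    # (iii) Detector model: focal-spot/scintillator blur, gain, quantum noise.
    detected = gain * fftconvolve(primary + scatter, detector_blur_kernel, mode="same")
    if rng is not None:
        detected = rng.poisson(np.clip(detected, 0, None)).astype(float)
    return detected
```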
The high-fidelity forward projection step is performed in some embodiments after the 3D2D registration, only once per application instance of the technique. Therefore, while the high-fidelity forward projector is more computationally demanding than the simple ray-tracing algorithms used in the 3D2D registration loop, its impact on the final runtime is minimal.
For embodiments having more than one x-ray projection imager providing more than one projection view (e.g., bi-plane fluoroscopy systems), the high-fidelity forward projection step is computed for each system such that the motion-corrected DSA can be computed for each view.
To reduce the computational burden of the forward projection calculation to a minimum, some embodiments use massive parallelization in GPUs, split into multiple (e.g., three) stages: i) estimation of ray-dependent effects; ii) Monte Carlo scatter estimation, in which rays are not independent of each other; and iii) operations affecting the primary + scatter signal. For the computation of per-ray effects, all rays forming the projection are traced simultaneously using a GPU kernel, and deterministic and stochastic processes are applied independently and in parallel for each ray, resulting in very fast runtimes (e.g., <1 sec). The Monte Carlo estimation is conventionally the most computationally intensive operation. Instead of the "rays" used in ray tracing, the Monte Carlo estimation traces millions of simulated photons ("histories") in parallel on the GPU, where each photon history is an individual realization of one of the quanta forming a ray. The approach uses a comprehensive set of variance reduction techniques to maximize the certainty of the estimation per photon and to accelerate the computation.
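As a toy illustration of the parallel-histories idea (vectorized here with NumPy rather than GPU kernels, and with the physics and variance reduction deliberately omitted), a batch of photon histories through a homogeneous slab might be tallied as follows.

```python
import numpy as np

def toy_scatter_estimate(mu_per_mm, thickness_mm, n_detector_bins=256,
                         n_histories=1_000_000, seed=0):
    """Toy, vectorized Monte Carlo sketch: trace many photon 'histories' at once
    through a homogeneous slab and tally scattered photons on a 1D detector.
    This only illustrates batching histories in parallel; it is not the
    variance-reduced GPU implementation described in the text."""
    rng = np.random.default_rng(seed)

    # Sample the depth of the first interaction for every history at once.
    depth = rng.exponential(1.0 / mu_per_mm, n_histories)
    interacts = depth < thickness_mm               # photons interacting inside the slab

    # Toy scatter model: interacting photons get a random lateral displacement (mm).
    lateral_mm = rng.normal(0.0, 5.0, n_histories)[interacts]

    # Tally scattered photons into detector bins to form a scatter profile.
    scatter_profile, _ = np.histogram(lateral_mm, bins=n_detector_bins,
                                      range=(-50.0, 50.0))
    return scatter_profile / n_histories           # normalized scatter fluence per bin
```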
Details of a particular implementation used in some embodiments can be found in reference [4], incorporated herein by reference. The subsequent operations that depend on the total (primary + scatter) signal, including noise estimation and convolutional operations (e.g., lag and glare), add a negligible amount of time to the computation. In that example, the total runtime is <3 sec with an implementation that is not yet fully optimized.
The utility of various embodiments of the invention is anticipated to be in clinical applications for which the patient anatomy of interest can be approximated by rigid-body motion, such as the cranium for interventional neuroradiology visualization of blood vessels in the brain, where motion artifacts arise from involuntary motion of the cranium. Other applications include musculoskeletal extremities such as the feet or legs for visualization of peripheral vasculature in cases such as diabetic peripheral vascular disease, blood clots, or deep vein thrombosis.
In applications such as the examples above, motion correction is effective within the constraints of the 6 or 9 DoF rigid-body motion model in the 3D2D registration component. Areas of clinical application where the rigid-body assumption may not be expected to hold (and a deformable 3D2D registration method may be warranted) include cardiology, thoracic imaging (e.g., pulmonary embolism), and interventional body radiology (e.g., DSA of the liver). In such applications, embodiments of the invention that use a deformable, non-rigid 3D2D registration are envisioned.
The following describes some specific examples according to some embodiments of the current invention. The general concepts of this invention are not limited to these particular examples.
Preliminary results testing the methodology of some embodiments of the motion correction technique (MoCo) are summarized below.
A rigid head phantom featuring contrast-enhanced (CE) simulated blood vessels was used. A 3D MDCT image of the head phantom was acquired, and the CE simulated blood vessels were digitally masked (removed) by segmentation and interpolation of neighboring voxel values. The resulting 3D image represents the non-contrast-enhanced (NCE) mask, corresponding to step (1) in the MoCo technique (acquisition of a 3D NCE mask).
A 2D projection image of the head phantom was simulated by forward projection with a geometry emulating a C-arm x-ray fluoroscopy system. The 2D projection represents the 2D CE "live image" of step (2) in the MoCo technique. During the simulation of the live image, a random 6 DoF perturbation of the 3D NCE image was applied such that the "pose" of the system was unknown, serving as a surrogate for patient motion (and/or non-reproducibility of C-arm positioning). The random transformation included maximum translations of 2 mm and rotations of 2 degrees.
A 3D2D image registration was performed between the 3D NCE CBCT image and the 2D live image. Registration was performed using a 6 DoF rigid-body motion model, an objective function based on gradient orientation (GO), and an iterative optimization method based on the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The 3D2D registration yields a 6 DoF pose of the imaging system.
A 2D forward projection of the 3D NCE CBCT image was computed according to the system geometry described by the “pose” solution of the 3D2D registration. The resulting 2D projection corresponds to the 2D NCE “MoCo Mask” image of step (4) in the MoCo technique. The resulting 2D NCE mask image was subtracted from the live 2D CE fluoroscopic image to yield a (motion-corrected) 2D DSA.
For purposes of comparison, a 2D forward projection of the 3D NCE CBCT image was also computed without 3D2D image registration, corresponding to the conventional "mask" image, from which a conventional 2D DSA image (expected to exhibit significant motion artifact) was computed.
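Pulling the registration, forward projection, and subtraction steps of this study together, the motion-corrected and conventional DSA images could be computed along the following lines, reusing the hypothetical placeholder functions from the sketches above; the subtraction order follows the description above and is simply a sign convention.

```python
import numpy as np

def run_moco_study(volume_nce, live_ce, geometry,
                   register_3d2d, hf_forward_project):
    """Registration, forward projection at the registered pose, and subtraction,
    plus the uncorrected comparison (all helpers are hypothetical placeholders)."""
    # Motion-corrected DSA: mask forward projected at the registered ("MoCo") pose.
    pose = register_3d2d(volume_nce, live_ce, geometry)
    moco_mask = hf_forward_project(volume_nce, geometry, pose)
    dsa_moco = live_ce - moco_mask

    # Conventional comparison: mask forward projected at the nominal (unregistered) pose.
    naive_mask = hf_forward_project(volume_nce, geometry, np.zeros(6))
    dsa_conventional = live_ce - naive_mask
    return dsa_moco, dsa_conventional
```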
The magnitude of motion artifacts in the 2D DSA computed by the proposed MoCo technique was compared to that in conventional 2D DSA without motion compensation as a function of the motion magnitude.
Several embodiments of angiography systems are illustrated in the accompanying figures and described above.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium,” etc. are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
The term "computer" is intended to have a broad meaning that may be used in computing devices such as, e.g., but not limited to, standalone or client or server devices. The computer may be, e.g., (but not limited to) a personal computer (PC) system running an operating system such as, e.g., (but not limited to) MICROSOFT® WINDOWS® available from MICROSOFT® Corporation of Redmond, Wash., U.S.A. or an Apple computer executing MAC® OS from Apple® of Cupertino, Calif., U.S.A. However, the invention is not limited to these platforms. Instead, the invention may be implemented on any appropriate computer system running any appropriate operating system. In one illustrative embodiment, the present invention may be implemented on a computer system operating as discussed herein. The computer system may include, e.g., but is not limited to, a main memory, random access memory (RAM), and a secondary memory, etc. Main memory, random access memory (RAM), and a secondary memory, etc., may be a computer-readable medium that may be configured to store instructions configured to implement one or more embodiments and may comprise a random-access memory (RAM) that may include RAM devices, such as Dynamic RAM (DRAM) devices, flash memory devices, Static RAM (SRAM) devices, etc.
The secondary memory may include, for example, (but not limited to) a hard disk drive and/or a removable storage drive, representing a floppy diskette drive, a magnetic tape drive, an optical disk drive, a read-only compact disk (CD-ROM), digital versatile discs (DVDs), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), read-only and recordable Blu-Ray® discs, etc. The removable storage drive may, e.g., but is not limited to, read from and/or write to a removable storage unit in a well-known manner. The removable storage unit, also called a program storage device or a computer program product, may represent, e.g., but is not limited to, a floppy disk, magnetic tape, optical disk, compact disk, etc. which may be read from and written to the removable storage drive. As will be appreciated, the removable storage unit may include a computer usable storage medium having stored therein computer software and/or data.
In alternative illustrative embodiments, the secondary memory may include other similar devices for allowing computer programs or other instructions to be loaded into the computer system. Such devices may include, for example, a removable storage unit and an interface. Examples of such may include a program cartridge and cartridge interface (such as, e.g., but not limited to, those found in video game devices), a removable memory chip (such as, e.g., but not limited to, an erasable programmable read only memory (EPROM) or programmable read only memory (PROM)) and associated socket, and other removable storage units and interfaces, which may allow software and data to be transferred from the removable storage unit to the computer system.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
The computer may also include an input device, which may include any mechanism or combination of mechanisms that may permit information to be input into the computer system from, e.g., a user. The input device may include logic configured to receive information for the computer system from, e.g., a user. Examples of the input device may include, e.g., but not limited to, a mouse, pen-based pointing device, or other pointing device such as a digitizer, a touch sensitive display device, and/or a keyboard or other data entry device (none of which are labeled). Other input devices may include, e.g., but not limited to, a biometric input device, a video source, an audio source, a microphone, a web cam, a video camera, and/or another camera. The input device may communicate with a processor either wired or wirelessly.
The computer may also include output devices, which may include any mechanism or combination of mechanisms that may output information from a computer system. An output device may include logic configured to output information from the computer system. Examples of output devices include, e.g., but are not limited to, a display and display interface, including displays, printers, speakers, cathode ray tubes (CRTs), plasma displays, light-emitting diode (LED) displays, liquid crystal displays (LCDs), vacuum fluorescent displays (VFDs), surface-conduction electron-emitter displays (SEDs), field emission displays (FEDs), etc. The computer may include input/output (I/O) devices such as, e.g., (but not limited to) a communications interface, cable, and communications path, etc. These devices may include, e.g., but are not limited to, a network interface card and/or modems. The output device may communicate with the processor either wired or wirelessly. A communications interface may allow software and data to be transferred between the computer system and external devices.
The term "data processor" is intended to have a broad meaning that includes one or more processors connected to a communication infrastructure (e.g., but not limited to, a communications bus, cross-over bar, interconnect, or network, etc.). The term data processor may include any type of processor, microprocessor and/or processing logic that may interpret and execute instructions, including application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs). The data processor may comprise a single device (e.g., a single core) and/or a group of devices (e.g., multi-core). The data processor may include logic configured to execute computer-executable instructions configured to implement one or more embodiments. The instructions may reside in main memory or secondary memory. The data processor may also include multiple independent cores, such as a dual-core processor or a multi-core processor. The data processors may also include one or more graphics processing units (GPUs), which may be in the form of a dedicated graphics card, an integrated graphics solution, and/or a hybrid graphics solution. Various illustrative software embodiments may be described in terms of this illustrative computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the invention using other computer systems and/or architectures.
The term "data storage device" is intended to have a broad meaning that includes a removable storage drive, a hard disk installed in a hard disk drive, flash memories, removable discs, non-removable discs, etc. In addition, it should be noted that various electromagnetic radiation, such as wireless communication, electrical communication carried over an electrically conductive wire (e.g., but not limited to, twisted pair or CAT5 cable) or an optical medium (e.g., but not limited to, optical fiber), and the like may be encoded to carry computer-executable instructions and/or computer data that embody embodiments of the invention on, e.g., a communication network. These computer program products may provide software to the computer system. It should be noted that a computer-readable medium that comprises computer-executable instructions for execution in a processor may be configured to store various embodiments of the present invention.
The term “network” is intended to include any communication network, including a local area network (“LAN”), a wide area network (“WAN”), an Intranet, or a network of networks, such as the Internet.
The term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
Further aspects of the present disclosure are provided by the subject matter of the following clauses.
According to an embodiment, an angiography system includes a table configured to support a subject, a C-arm configured to rotate around the table and including a two-dimensional X-ray imaging system, a display arranged proximate the table so as to be visible by a user of the angiography system, and a processing system communicatively coupled to the two-dimensional X-ray imaging system and the display. The processing system is configured to receive, from the two-dimensional X-ray imaging system, contrast-enhanced two-dimensional X-ray imaging data of a region of the subject's body containing vasculature of interest and acquired after administration of an X-ray contrast agent to at least a portion of said vasculature, the contrast-enhanced two-dimensional X-ray imaging data corresponding to a position and orientation of the X-ray imaging system relative to the region of the subject's body. The processing system is also configured to receive, from a three-dimensional X-ray imaging system, three-dimensional X-ray imaging data of the region of the subject's body acquired prior to administration of the X-ray contrast agent to at least the portion of the vasculature. The processing system is configured to generate, from the three-dimensional X-ray imaging data, a two-dimensional mask of the region of the subject's body, the mask including simulated non-contrast-enhanced two-dimensional X-ray imaging data that corresponds to the position and orientation of the X-ray imaging system relative to the region of the subject's body, and to generate a vasculature image of the region of the subject's body, by subtracting the contrast-enhanced two-dimensional X-ray imaging data from the two-dimensional mask, and to provide the vasculature image on the display.
The angiography system of any preceding clause, where the vasculature image is a first vasculature image, the processing system is further configured to receive, from the two-dimensional X-ray imaging system, non-contrast-enhanced two-dimensional X-ray imaging data of the region of the subject's body acquired prior to administration of the X-ray contrast agent, the non-contrast-enhanced two-dimensional X-ray imaging data corresponding to a different position and orientation of the X-ray imaging system relative to the region of the subject's body, and to generate a second vasculature image of the region of the subject's body, by subtracting the contrast-enhanced two-dimensional X-ray imaging data from the non-contrast-enhanced two-dimensional X-ray imaging data, where the second vasculature image is contaminated by an artifact arising from motion of the subject.
The angiography system of any preceding clause, where the different position and orientation is due to motion of the subject between acquisition of the non-contrast-enhanced two-dimensional X-ray imaging data and the contrast-enhanced two-dimensional X-ray imaging data.
The angiography system of any preceding clause, where the different position and orientation is due to a difference in positioning the C-arm relative to the subject between acquisition of the non-contrast-enhanced two-dimensional X-ray imaging data and the contrast-enhanced two-dimensional X-ray imaging data.
The angiography system of any preceding clause, the processing system further configured to provide the second vasculature image on the display, and to provide on the display a user interface control to send a request to correct motion artifacts in the second vasculature image, where the first vasculature image is generated only after receiving a request to correct motion artifacts from the user interface control.
The angiography system of any preceding clause, the processing system further configured to automatically detect motion artifacts in the second vasculature image, where the first vasculature image is generated only after motion artifacts are detected in the second vasculature image.
Although the foregoing description is directed to certain embodiments, it is noted that other variations and modifications will be apparent to those skilled in the art, and may be made without departing from the spirit or scope of the disclosure. Moreover, features described in connection with one embodiment may be used in conjunction with other embodiments, even if not explicitly stated above.
"A Technique For Automatic Motion Correction In DSA," Proc. SPIE 0626, available at: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/0626/000/A-Technique-For-Automatic-Motion-Correction-In-DSA/10.1117/12.975402.short
This application claims priority to U.S. Provisional Application No. 63/164,756, filed Mar. 23, 2021, which is incorporated herein by reference in its entirety.
Filing Document | Filing Date | Country
---|---|---
PCT/US2022/021378 | Mar. 22, 2022 | WO
Number | Date | Country
---|---|---
63/164,756 | Mar. 23, 2021 | US