The present disclosure relates to ultrasound imaging in general and, more particularly, to methods and systems for using an acoustic sensor to provide guidance to an interventional device, such as a needle, a catheter, etc., via ultrasound imaging.
Using ultrasound to guide diagnostic or therapeutic invasive procedures involving interventional devices (e.g., needles or catheters) has become increasingly popular in the clinical field. Interventional ultrasound requires accurately locating the tip or head of an interventional device via ultrasound imaging. Some existing technologies suggest mounting an electrical sensor on the tip of an interventional device to collect an electrical signal from the heart. Those existing technologies, however, have limitations. Often, an interventional device is placed near a target where no heart signal, or only a very weak one, can be collected, and thus the accurate location of the tip of the interventional device cannot be detected and presented in an ultrasound image.

Other existing technologies suggest mounting an electrical sensor on the tip of an interventional device to receive an ultrasonic pulse transmitted from an imaging transducer, convert the pulse into an electrical signal, and pass the signal back to the ultrasound device. Under those existing technologies, however, visualizing the tip of an interventional device in an ultrasound image is difficult when strong tissue clutter weakens the ultrasonic pulse. Also, in those existing technologies, it is difficult to accurately determine which transmitted acoustic beam triggers the electrical sensor, and thus the accurate location of the tip of the interventional device cannot be detected. Moreover, because an ultrasonic pulse traveling in a human or animal body attenuates rapidly and becomes weak and unstable, it is difficult for those existing technologies to distinguish noise from a real pulse signal at the tip of the interventional device. In sum, the existing technologies can only calculate an approximate, rather than accurate, location of the tip of the interventional device.
Thus, there is a need for a method and system that easily and accurately detects and presents the position of interventional devices, such as needles, catheters, etc., via ultrasound imaging and overcomes the limitations of prior-art systems.
The present disclosure includes an exemplary method for providing real-time guidance to an interventional device coupled to an ultrasound imaging system operating in a first mode and a second mode. Embodiments of the method include, in the first mode: stopping transmission of ultrasound signals from a transducer of the ultrasound imaging system; transmitting, via an acoustic sensor mounted on a head portion of the interventional device, an ultrasound signal; receiving, via the transducer, the transmitted ultrasound signal; and generating a first image of a location of the head portion based on the received ultrasound signal. Embodiments of the method also include, in the second mode: stopping transmission of ultrasound signals from the acoustic sensor; transmitting, via the transducer, ultrasound signals; receiving echoes of the transmitted ultrasound signals reflected back from an object structure; and generating a second image of the object structure based on the received echoes. Embodiments of the method further include combining the first image with the second image to derive a third image displaying a location of the head portion relative to the object structure. Some embodiments of the method also include highlighting the relative location of the head portion in the third image by brightening the location, coloring the location, or marking the location with text or a sign.
An exemplary system in accordance with the present disclosure comprises a transducer, a processor coupled to the transducer, and an acoustic sensor mounted on a head portion of an interventional device. When the disclosed system operates in a first mode, the transducer stops transmitting ultrasound signals, and the acoustic sensor transmits an ultrasound signal that is then received by the transducer and is used to generate a first image of a location of the head portion. When the disclosed system operates in a second mode, the acoustic sensor stops transmitting ultrasound signals, and the transducer transmits ultrasound signals and receives echoes of the transmitted ultrasound signals that are used to generate a second image of an object structure. In some embodiments, the processor combines the first image with the second image to derive a third image displaying a location of the head portion relative to the object structure. In certain embodiments, the processor highlights the relative location of the head portion in the third image by brightening the location, coloring the location, or marking the location with text or a sign.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Reference will now be made in detail to the exemplary embodiments illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
Methods and systems disclosed herein address the above-described needs. For example, exemplary embodiments include an acoustic sensor mounted on a head portion of an interventional device, such as a needle, a catheter, etc. The acoustic sensor is used as a beacon. Instead of receiving an electrical signal from the heart or receiving an acoustic pulse from an imaging transducer, the acoustic sensor disclosed herein operates as part of an ultrasound imaging system to transmit acoustic pulses. In a first mode of the ultrasound imaging system, the imaging transducer itself does not transmit acoustic pulses, or transmits with zero power. Instead, the system instructs the acoustic sensor to transmit acoustic pulses with the timing as if it were located at the center of the transmitting aperture of the imaging transducer, to form a sensor image. The transmitting aperture comprises one or more transducer elements. The sensor image, which is a two-dimensional (“2D”) or three-dimensional (“3D”) image, is formed as if the transducer were transmitting. As a result, a one-way point spread function (“PSF”) of the acoustic sensor can be seen on the sensor image. Because the pulse travels only one way rather than making a round trip, the imaging depth computed under the usual round-trip assumption must be multiplied by two, as illustrated in the sketch below. This sensor image can be combined with an ultrasound image of an object structure to derive an enhanced visualization image, which shows a location of the head portion of the interventional device relative to the object structure. The acoustic pulses transmitted by the acoustic sensor disclosed herein are much stronger and more stable than an acoustic beam transmitted by a transducer element and its echo, and can be easily and accurately detected and recorded in the sensor image. Methods and systems disclosed herein provide a real-time and accurate position of a head portion of an interventional device in live ultrasound imaging.
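To make the one-way depth correction concrete, the following is a minimal sketch, assuming a typical soft-tissue sound speed; the function name and parameter values are illustrative and not taken from the disclosure.

```python
SPEED_OF_SOUND_M_S = 1540.0  # typical soft-tissue value (an assumption)

def one_way_depth(time_of_flight_s: float) -> float:
    """Depth of the acoustic sensor for a pulse that travels only one way.

    Conventional pulse-echo imaging assumes a round trip and computes
    d = c * t / 2. Here the pulse travels sensor-to-transducer once, so
    the depth computed under the round-trip assumption must be doubled,
    which amounts to d = c * t.
    """
    return SPEED_OF_SOUND_M_S * time_of_flight_s

# A pulse arriving 26 microseconds after the (virtual) transmit event maps
# to about 40 mm, not the ~20 mm a round-trip scanner would display.
print(f"{one_way_depth(26e-6) * 1e3:.1f} mm")  # -> 40.0 mm
```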
Ultrasound apparatus 100A can be any device that utilizes ultrasound to detect and measure an object located within the scope of ultrasound imaging field 120, and presents the measured object in an ultrasonic image. The ultrasonic image can be in gray-scale, color, or a combination thereof, and can be 2D or 3D.
Interventional device 110 can be any device that is used in a diagnostic or therapeutic invasive procedure. For example, interventional device 110 can be provided as a needle, a catheter, or any other diagnostic or therapeutic device.
Acoustic sensor 112 can be any device that transmits acoustic pulses or signals (i.e., ultrasound pulses or signals), which are converted from electrical pulses. For example, acoustic sensor 112 can be a microelectromechanical systems (“MEMS”) device. In some embodiments, acoustic sensor 112 can also receive acoustic pulses transmitted from another device.
Ultrasound beamformer 108 can be any device that enables directional or spatial selectivity of acoustic signal transmission or reception. In particular, ultrasound beamformer 108 focuses transmitted acoustic beams in a same direction, and focuses echo signals received as reflections from different object structures. In some embodiments, ultrasound beamformer 108 delays the echo signals arriving at different elements and aligns the echo signals to form an isophase plane. Ultrasound beamformer 108 then sums the delayed echo signals coherently, as sketched below. In certain embodiments, ultrasound beamformer 108 may perform beamforming on electrical or digital signals that are converted from echo signals.
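The delay-align-sum operation described above can be sketched as follows. This is a simplified illustration, assuming a linear array and NumPy; the array geometry, sampling rate, and function names are assumptions, not details from the disclosure.

```python
import numpy as np

def delay_and_sum(rf, element_x, focus_x, focus_z, fs, c=1540.0):
    """Coherently sum the echoes received by all elements for one focal point.

    rf               : (n_elements, n_samples) array of received RF traces
    element_x        : (n_elements,) lateral element positions in meters
    focus_x, focus_z : focal-point coordinates in meters
    fs               : sampling rate in Hz
    c                : assumed sound speed in m/s
    """
    # Geometric distance from the focal point back to each element.
    dist = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2)
    # Convert the per-element delays to sample indices; indexing each trace
    # at its own delay aligns the echoes onto a common isophase plane.
    idx = np.clip(np.round(dist / c * fs).astype(int), 0, rf.shape[1] - 1)
    # Coherent summation of the aligned samples.
    return rf[np.arange(rf.shape[0]), idx].sum()
```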
Processor 106 can be any device that controls and coordinates the operation of other parts of ultrasound apparatus 100A, processes data or signals, generates ultrasound images, and outputs the generated ultrasound images to a display. In some embodiments, processor 106 may output the generated ultrasound images to a printer, or to a remote device through a data network. For example, processor 106 can be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), a printed circuit board (PCB), a digital signal processor (DSP), etc.
Display 102 can be any device that displays ultrasound images. For example, display 102 can be a monitor, display panel, projector, or any other display device. In certain embodiments, display 102 can be a touchscreen display with which a user can interact through touches. In some embodiments, display 102 can be a display device with which a user can interact by remote gestures.
After receiving electrical pulses provided (302) by ultrasound apparatus 100A, acoustic sensor 112 transmits (304) to ultrasound transducer 104 acoustic pulses (ultrasound signals) that are converted from the electrical pulses. The conversion can be performed by acoustic sensor 112 or another component. Upon receiving (304) the acoustic pulses transmitted from acoustic sensor 112, ultrasound transducer 104 converts the received acoustic pulses into electrical signals, which are forwarded (306) to ultrasound beamformer 108. In some embodiments, the electrical signals are converted into digital signals and are then forwarded (306) to ultrasound beamformer 108 for beamforming.
Following a beamforming process, ultrasound beamformer 108 transmits (308) the processed electrical or digital signals to processor 106, which processes the signals to generate an image of the one-way PSF of acoustic sensor 112.
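The disclosure does not prescribe how processor 106 turns beamformed signals into an image; one conventional approach, shown here as a hedged sketch, is envelope detection followed by log compression (the function and parameter names are illustrative assumptions).

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_image(rf_lines, dynamic_range_db=60.0):
    """Convert beamformed RF lines into a displayable gray-scale image.

    rf_lines: (n_lines, n_samples) array of beamformed RF data.
    Returns per-pixel values in [0, dynamic_range_db].
    """
    # Envelope detection: magnitude of the analytic signal on each line.
    envelope = np.abs(hilbert(rf_lines, axis=1))
    envelope = envelope / envelope.max()  # normalize (assumes nonzero signal)
    # Log compression maps the wide echo dynamic range onto display levels.
    db = 20.0 * np.log10(np.maximum(envelope, 1e-12))
    return np.clip(db, -dynamic_range_db, 0.0) + dynamic_range_db
```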
In some embodiments, the sensor image can include a unique identifier (image ID) for later retrieval and association purposes. In some embodiments, the sensor image can be stored in a storage or database for later processing.
Under beamforming control (402) of ultrasound beamformer 108, ultrasound transducer 104 transmits (404) ultrasound signals and receives (406) echo signals reflected from an object structure (e.g., a tissue, organ, bone, muscle, tumor, etc. of a human or animal body) in ultrasound imaging field 120. Ultrasound transducer 104 converts the received echo signals into electrical signals, which are passed (408) to ultrasound beamformer 108. In some embodiments, the electrical signals are converted into digital signals and are then passed (408) to ultrasound beamformer 108 for beamforming.
Following a beamforming process, ultrasound beamformer 108 transmits (410) the processed electrical or digital signals to processor 106, which processes the signals to generate an ultrasound image of the object structure.
Processor 106 combines the sensor image generated in the first mode with the ultrasound image generated in the second mode to derive an enhanced visualization image, which is outputted (412) to display 102. In some embodiments, processor 106 retrieves the sensor image stored in a storage or database based on an image ID, which corresponds to an image ID of the ultrasound image, to derive the enhanced visualization image. In certain embodiments, the enhanced visualization image can include a unique identifier (image ID) for later retrieval and association purposes. In some embodiments, the enhanced visualization image can be stored in a storage or database for later processing.
Since the sensor image has the same size as the ultrasound image, in some embodiments, processor 106 derives the enhanced visualization image based on a sum of pixel values at corresponding coordinates of the sensor image and the ultrasound image. For example, processor 106 can perform a pixel-by-pixel summation. That is, processor 106 adds a pixel value at a coordinate of the sensor image to a pixel value at the corresponding coordinate of the ultrasound image to derive a pixel value for the enhanced visualization image, and then computes a next pixel value for the enhanced visualization image in a similar manner, and so on.
In other embodiments, processor 106 derives the enhanced visualization image based on a weighted pixel-by-pixel summation of pixel values at corresponding coordinates of the sensor image and the ultrasound image. For example, processor 106 applies a weight value to a pixel value of the sensor image and applies another weight value to a corresponding pixel value of the ultrasound image, before performing the pixel summation.
In certain embodiments, processor 106 derives the enhanced visualization image based on computing maximum values of corresponding pixels of the sensor image and the ultrasound image. For example, processor 106 determines a maximum value by comparing a pixel value at a coordinate of the sensor image to a pixel value at a corresponding coordinate of the ultrasound image, and uses the maximum value as a pixel value for the enhanced visualization image. Processor 106 then computes a next pixel value for the enhanced visualization image in a similar manner, and so on.
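The three combination strategies just described (summation, weighted summation, and per-pixel maximum) can be captured in a short sketch. This assumes both images are same-sized NumPy arrays of pixel intensities; the function name, mode strings, and 8-bit clipping are illustrative assumptions.

```python
import numpy as np

def combine(sensor_img, ultrasound_img, mode="sum", w_sensor=0.5, w_us=0.5):
    """Derive the enhanced visualization image from the two source images."""
    if sensor_img.shape != ultrasound_img.shape:
        raise ValueError("sensor and ultrasound images must be the same size")
    if mode == "sum":          # pixel-by-pixel summation
        out = sensor_img + ultrasound_img
    elif mode == "weighted":   # weighted pixel-by-pixel summation
        out = w_sensor * sensor_img + w_us * ultrasound_img
    elif mode == "max":        # per-pixel maximum of the two images
        out = np.maximum(sensor_img, ultrasound_img)
    else:
        raise ValueError(f"unknown combination mode: {mode}")
    return np.clip(out, 0, 255)  # keep values within an 8-bit display range

# Example: favor the sensor beacon slightly over the background anatomy.
# enhanced = combine(sensor_img, ultrasound_img, "weighted", 0.7, 0.3)
```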
It will now be appreciated by one of ordinary skill in the art that the illustrated procedure can be altered to delete steps, change the order of steps, or include additional steps.
After an initial start step, an ultrasound apparatus operates in a first mode, and stops (902) transmission of ultrasound signals from its transducer. In the first mode, the ultrasound apparatus instructs an acoustic sensor mounted on a head portion of an interventional device to transmit (904) an ultrasound signal, and instructs the transducer to receive (906) the ultrasound signal. The ultrasound apparatus generates a first image of the acoustic sensor, indicating a location of the head portion.
In a second mode, the ultrasound apparatus stops (908) transmission of ultrasound signals from the acoustic sensor, and instructs the transducer to transmit ultrasound signals and receive (910) echo signals reflected back from an object structure. Based on the received echo signals, the ultrasound apparatus generates a second image, which is an ultrasound image of the object structure.
The ultrasound apparatus then combines (912) the first image with the second image to derive a third image, which displays a location of the head portion of the interventional device relative to the object structure. The ultrasound apparatus performs the combination as explained above.
The ultrasound apparatus displays (914) the third image that may highlight the location of the head portion of the interventional device in the object structure. The process then proceeds to end.
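One way to read the procedure end to end is as an alternating control loop. The following sketch is purely illustrative: the apparatus and sensor methods are hypothetical names standing in for the steps above, and combine refers to the combination sketch shown earlier.

```python
def guidance_loop(apparatus, sensor, frames=100):
    """Alternate the two modes to refresh the guidance display each frame."""
    for _ in range(frames):
        # First mode: the transducer is silent; the sensor acts as a beacon.
        apparatus.stop_transducer_transmit()         # step 902 (hypothetical API)
        sensor.transmit_pulse()                      # step 904
        first_image = apparatus.form_sensor_image()  # step 906 plus imaging

        # Second mode: the sensor is silent; conventional pulse-echo imaging.
        sensor.stop_transmit()                       # step 908
        second_image = apparatus.scan_object()       # step 910

        # Combine and display the enhanced visualization image.
        third_image = combine(first_image, second_image, mode="max")  # step 912
        apparatus.display(third_image)               # step 914
```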
The methods disclosed herein may be implemented as a computer program product, i.e., a computer program tangibly embodied in a non-transitory information carrier, e.g., in a machine-readable storage device, or a tangible non-transitory computer-readable medium, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
A portion or all of the methods disclosed herein may also be implemented by an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), a printed circuit board (PCB), a digital signal processor (DSP), a combination of programmable logic components and programmable interconnects, a single central processing unit (CPU) chip, a CPU chip combined on a motherboard, a general purpose computer, or any other combination of devices or modules capable of performing the interventional-device detection and presentation disclosed herein.
In the preceding specification, the invention has been described with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made without departing from the broader spirit and scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded as illustrative rather than restrictive. Other embodiments of the invention may be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein.
This application claims the priority and benefit of U.S. Provisional Application No. 61/790,586, filed on Mar. 15, 2013, titled “Systems and Methods to Detect and Present Interventional Devices via Ultrasound Imaging,” which is incorporated in its entirety by reference herein.