All publications and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.
Ultrasound is a safe, portable, fast, and low-cost imaging modality, compared to some other imaging modalities such as magnetic resonance imaging (“MRI”) and x-ray computed tomography (“CT”). MRI machines are generally very large and require the patient to remain very still during the scan, which can take a relatively long time, up to several minutes or more. CT scanners are also generally very large, and while the scanning time is relatively fast compared to MRI, they deliver a relatively high dose of ionizing radiation to the patient. Ultrasound systems are portable, lower cost, and do not deliver ionizing radiation to the patient. Some of the benefits of CT and MRI scanning are that the quality of the imaging is often better than ultrasound, the patient is in a known, fixed frame of reference (e.g., lying supine on a bed translated through the scanning cylinder), and the scanning captures a complete anatomic volume image dataset, which can be visualized in any number of ways (e.g., rendered in 3D or panned through slice-by-slice along any cardinal anatomical direction) by the physician after the scanning procedure.
The image quality of some 2D ultrasound systems may be considered relatively grainy, and thus not adequate in some situations where a high quality image is required. Furthermore, because 2D ultrasound is effectively a sampling of non-standardized cross-sections of a volume of the patient, 2D ultrasound does not afford the opportunity to visualize image data in planes or volumes other than those planes originally acquired.
Systems have been developed that can use ultrasound to generate a 3D volume of a portion of the patient, but to date they are very expensive, and generally do not provide a frame of reference to orient the 3D volume with respect to the patient. The lack of a reference frame can limit the utility of the images, or result in medical errors related to incorrect interpretation of the orientation of the image with respect to the patient. Some examples include systems incorporating electromagnetic sensors or application-specific matrix-array probes.
It would be beneficial to have an easy to use, more cost-effective way of generating 3D volumes of tissue using ultrasound, wherein the 3D volumes can be viewed and analyzed in real-time, near real-time, or for subsequent review, and optionally properly oriented to the patient's frame of reference (i.e., aligned with the patient's cardinal anatomical axes). Optionally, but not required, it would also be beneficial to have systems, devices, and methods that enable 3D ultrasound volume generation using existing relatively low-end 2D ultrasound equipment, which can be important in low-resource settings, including rural areas and the developing world. An emergency department is merely an exemplary setting in which it may be beneficial to provide a fast, safe, cost-effective way of obtaining 3D ultrasound volumes using existing 2D ultrasound systems.
Optionally still, it may also be beneficial to provide ultrasound systems that can aid medical personnel in obtaining and interpreting patient data, such as by annotating or providing visual guides on 2D or 3D ultrasound images, regardless of the image reconstruction method.
One aspect of the disclosure is a method, comprising: moving an ultrasound transducer and an orientation sensor stabilized with respect to the ultrasound transducer, while restricting the movement of the ultrasound transducer about an axis or point; and tagging each of a plurality of frames of electronic signals indicative of information received by the ultrasound transducer with information sensed by the orientation sensor, relative to the axis or point, each of the plurality of frames of electronic signals indicative of information received by the ultrasound transducer representing a plane or 3D volume of information within the patient. The method can be performed without sensing a position of the transducer with a position sensor.
The method can also include generating a 3D ultrasound volume image of the patient by positioning the plurality of tagged frames of electronic signals indicative of information received by the ultrasound transducer at their respective orientations relative to the axis or point.
The method can also include, prior to acquiring the electronic signals indicative of information received by the ultrasound transducer and prior to moving the transducer while restricted about the axis or point, calibrating the orientation sensor relative to a patient reference.
In some embodiments the tagged frames of electronic signals indicative of information received by the ultrasound transducer are any of raw channel data, raw beamformed data, detected data, and 3D volumes.
In some embodiments, generating a 3D ultrasound volume image of the patient occurs real-time or near-real time with the movement of the ultrasound transducer. In some embodiments, generating a 3D ultrasound volume image of the patient does not occur real-time or near-real time with the movement of the ultrasound transducer.
In some embodiments the tagging is performed by software disposed in an ultrasound system's computing station (i.e., a housing that includes hardware and software for generating and/or processing ultrasound data). In some embodiments software for generating the 3D volume of information is also disposed in an ultrasound system's computing station. Existing ultrasound systems can thus be updated with the tagging and/or 3D volume generating software, or new ultrasound systems can be manufactured to include new software and/or hardware to carry out the methods herein.
In some embodiments communication is established between an external device and one or more ultrasound system data ports. The external device can be adapted to receive as input, from the ultrasound system, a plurality of frames of electronic signals (any type of data herein) indicative of information received by the ultrasound transducer. The software for tagging and/or 3D volume generation can be disposed on the external device. In some exemplary embodiments the external device is in communication with the ultrasound system's video out port or other data port, and the external device is adapted to receive as input 2D ultrasound image data. In some embodiments the external device is adapted to receive as input raw channel data from the ultrasound system.
In some embodiments the axis or point is a first axis or point, and the method further comprises restricting movement of the transducer about a second axis or point, and tagging each of a second plurality of frames of electronic signals indicative of information received by the ultrasound transducer with information sensed by the orientation sensor, relative to the second axis or point, each of the second plurality of frames representing a plane or 3D volume of information within the patient. The method can also generate a second 3D ultrasound volume of the patient by positioning the second plurality of tagged electronic signals indicative of information received by the ultrasound transducer at their respective orientations relative to the second axis or point. Any number of 3D ultrasound volumes can be generated using any of the methods herein, and used in any of the suitable methods herein (e.g., in any type of combining technique).
The method can also combine a first 3D ultrasound volume and a second 3D ultrasound volume together. Combining the first and second 3D volumes can create a combined 3D volume with an extended field of view relative to the first and second 3D volumes individually. Combining the first and second 3D volumes can create a combined 3D volume with improved image quality compared to the first and second 3D volumes individually. In some embodiments restricting movement about the first axis or point and the second axis or point is performed using a single movement restrictor. In some embodiments restricting movement about the first axis or point is performed with a first movement restrictor, and wherein restricting movement about the second axis or point is performed with a second movement restrictor, optionally wherein the first and second movement restrictors are fixed relative to one another at a known orientation, optionally co-planar, angled, or perpendicular.
In some embodiments the movement is restricted due to an interface between an ultrasound probe and a movement restrictor. In some embodiments the movement restrictor is part of the ultrasound probe. In some embodiments the movement restrictor is a component separate from the probe, and can be configured to stabilize the relative positions of the ultrasound probe and movement restrictor. In some embodiments the movement restrictor is part of the patient's body. In some embodiments the movement restrictor is part of the probe user's body (e.g., fingers).
In some embodiments the transducer and orientation sensor are disposed within an ultrasound probe. In some embodiments the orientation sensor is adapted and configured to be removably secured to the ultrasound probe.
The ultrasound probes herein can be wired or wireless.
One aspect of the disclosure is a computer executable method for tagging frames of electronic signals indicative of information received by an ultrasound transducer, comprising: receiving as input a plurality of frames of electronic signals indicative of information received by the ultrasound transducer, the plurality of frames of electronic signals representing a plane or 3D volume of information within a patient, wherein the movement of the ultrasound transducer was limited about an axis or point when moved with respect to the patient; receiving as input information sensed by an orientation sensor stabilized in place with respect to the ultrasound transducer; and tagging each of the plurality of frames of electronic signals indicative of information received by the ultrasound transducer with information sensed by an orientation sensor. The computer executable method can be executed without receiving as input position information of the transducer sensed by a position sensor.
In some embodiments the computer executable method is disposed in an ultrasound system housing that includes hardware and software for generating and/or processing ultrasound data. In some embodiments the computer executable method is disposed in an external computing device adapted to be in communication with an ultrasound system housing that includes hardware and software for generating and/or processing ultrasound data.
In some embodiments receiving as input a plurality of frames of electronic signals indicative of information received by the ultrasound transducer comprises receiving as input a plurality of frames of 2D ultrasound image data, and the tagging step comprises tagging each of the plurality of frames of 2D ultrasound data with information sensed by the orientation sensor.
One aspect of the disclosure is an ultrasound system that is adapted to receive as input a plurality of frames of electronic signals indicative of information received by an ultrasound transducer, the plurality of frames of electronic signals representing a plane or 3D volume of information within a patient, wherein the movement of the ultrasound transducer was limited about an axis or point when moved with respect to the patient; receive as input information sensed by an orientation sensor stabilized in place with respect to the ultrasound transducer; and tag each of the plurality of frames of electronic signals indicative of information received by the ultrasound transducer with information sensed by the orientation sensor. The ultrasound system can be further adapted to generate a 3D volume image of the patient by positioning the plurality of tagged frames of electronic signals indicative of information received by the ultrasound transducer at their respective orientations relative to the axis or point. The ultrasound system is adapted to generate the 3D volume without receiving as input transducer position information sensed by a position sensor.
One aspect of the disclosure is an ultrasound system that is adapted to generate a 3D ultrasound volume using sensed information provided from an orientation sensor that is tagged to each of a plurality of frames of electronic signals indicative of information received by an ultrasound transducer, and without using information sensed from a position sensor. The sensed information will have been sensed by an orientation sensor in a fixed position relative to the ultrasound transducer.
One aspect of the disclosure is a 3D ultrasound volume generating system, comprising: a freehand ultrasound transducer in a fixed position relative to an orientation sensor, and not a position sensor, the system adapted to generate a 3D ultrasound volume using sensed information provided from the orientation sensor that is tagged to frames of electronic signals indicative of information received by the ultrasound transducer, and without information sensed from a position sensor.
In some embodiments the system further comprises a probe movement restrictor with at least one surface configured to interface with an ultrasound probe, to limit the movement of the ultrasound transducer about an axis or point.
One aspect of the disclosure is a computer executable method for generating a 3D volume image of a patient, comprising: receiving as input a plurality of tagged frames of electronic signals indicative of information received by the ultrasound transducer, the plurality of tagged frames of electronic signals each representing a plane or 3D volume of information within a patient, each of the received plurality of frames of electronic signals tagged with information sensed by an orientation sensor stabilized in place with respect to the ultrasound transducer, wherein the movement of the ultrasound transducer was limited about a particular axis or point when moved with respect to the patient; and generating a 3D ultrasound volume image by positioning the plurality of tagged frames of electronic signals indicative of information received by the ultrasound transducer at their respective orientations relative to the particular axis or point.
The computer executable method is adapted to be executed without receiving as input position information of the transducer sensed by a position sensor.
In some embodiments receiving as input a plurality of tagged frames of electronic signals indicative of information received by the ultrasound transducer comprises receiving as input a plurality of tagged 2D ultrasound image data, and wherein generating the 3D ultrasound volume comprises positioning the plurality of tagged 2D ultrasound image data at their respective orientations relative to the particular axis or point.
One aspect of the disclosure is a method of generating a 3D ultrasound image volume, comprising: scanning a patient's body with an ultrasound probe in a fixed position relative to an orientation sensor; sensing orientation information while moving the probe, but not sensing x-y-z position information of the probe; and generating a 3D ultrasound volume from a plurality of frames of electronic signals indicative of information received by the ultrasound transducer. The method can further include restricting the movement of the probe about an axis or point.
One aspect of the disclosure is an ultrasound imaging apparatus, comprising: an ultrasound probe in a fixed position relative to an orientation sensor; and a movement restrictor configured with at least one surface to interface with the ultrasound probe, and adapted so as to limit the movement of the ultrasound probe about an axis or point, the movement restrictor further comprising at least one surface adapted to interface with the body of a patient. In some embodiments the movement restrictor has at least a first configuration (or state) and a second configuration (or state), wherein the first configuration (or state) restricts the ultrasound probe's movement about the axis or point, and the second configuration (or state) restricts the ultrasound probe's movement about a second axis or point, optionally wherein the two axes are orthogonal, or in the same plane (but not so limited). In some embodiments the movement restrictor comprises a probe cradle with at least one surface to interface with a surface of the ultrasound probe. In some embodiments the movement restrictor further comprises an axis selector, which is adapted to be moved or reconfigured to select one of at least two axes or points for restriction of movement. In some embodiments the apparatus further comprises a second movement restrictor configured to stably interface with the movement restrictor, the second movement restrictor adapted so as to limit the movement of the ultrasound probe about a second axis or point.
One aspect of the disclosure is a 3D ultrasound image volume generating apparatus, comprising: an ultrasound probe in a fixed position relative to an orientation sensor; a movement restrictor configured so as to restrict the movement of the ultrasound probe about a particular axis or point; a tagging module adapted to tag each of a plurality of frames of electronic signals indicative of information received by the ultrasound transducer with information sensed by the orientation sensor, relative to the particular axis or point; and a 3D volume generating module adapted to position each of the plurality of orientation tagged frames of electronic signals indicative of information received by the ultrasound transducer at respective orientations, relative to the particular axis or point, to generate a 3D image.
In some embodiments the movement restrictor is integral with the ultrasound probe.
In some embodiments the movement restrictor is configured with at least one surface to interface with a surface of the ultrasound probe so as to restrict the movement of the ultrasound probe about a particular axis or point.
In some embodiments the orientation sensor is disposed within a body of the ultrasound probe.
In some embodiments the orientation sensor is adapted to be removably secured to the ultrasound probe. The apparatus can further comprise a sensing member comprising the orientation sensor, the sensing member configured with at least one surface such that it can be secured to a proximal portion of the ultrasound probe, optionally where a probe housing meets a probe cable. In some embodiments the sensing member comprises a probe interface, the probe interface optionally having an opening with a greatest linear dimension of 10 mm-35 mm, optionally 15 mm-30 mm.
In some embodiments the apparatus does not include a position sensor.
In some embodiments the movement restrictor comprises an axis or point selector adapted so that the movement restrictor can restrict the movement of the ultrasound probe about a second axis or point.
In some embodiments the movement restrictor is configured with at least one surface such that it can be positioned on the body of a patient.
In some embodiments, the apparatus further comprises an external device in communication with an ultrasound system, the external device comprising the tagging module, and receiving as input the plurality of frames of electronic signals indicative of information received by the ultrasound transducer. The external device can also be in communication with the orientation sensor. The external device can further comprise the 3D volume generating module. The external device can be in communication with a video out port of the ultrasound system. The external device can be in communication with the ultrasound system to enable the external device to receive as input from the ultrasound system at least one of raw channel data, raw beamformed data, and detected data.
In some embodiments the apparatus further comprises a second movement restrictor configured to be stabilized with respect to the movement restrictor, the second movement restrictor configured with at least one surface to interface with the ultrasound probe so as to restrict the movement of the ultrasound probe about a second particular axis or point. The tagging module can be adapted to tag each of a plurality of frames of electronic signals indicative of information received by the ultrasound transducer with information sensed by the orientation sensor, relative to the second particular axis or point, wherein the 3D volume generating module is adapted to generate a second 3D ultrasound volume of the patient by positioning the second plurality of tagged frames of electronic signals indicative of information received by the ultrasound transducer at their respective orientations relative to the second particular axis or point. The 3D volume generating module can further be adapted to merge the 3D ultrasound volume and the second 3D ultrasound volume together.
In some embodiments the tagging module and the 3D volume generating module are disposed within an ultrasound system housing that includes hardware and software for generating and/or processing ultrasound data.
One aspect of the disclosure is a sensing member with at least one surface configured to be removably secured in a fixed position relative to an ultrasound probe, the sensing member comprising an orientation sensor and not a position sensor. In some embodiments the sensing member comprises an adhesive backing. In some embodiments the sensing member has an opening, optionally, with a largest linear dimension from 10 mm-35 mm, optionally 15 mm-30 mm. In some embodiments the sensing member comprises a deformable element configured to be deformed to allow the sensing member to be secured to the ultrasound probe. In some embodiments the sensing member is adapted for wireless communication. In some embodiments the sensing member is adapted for wired communication.
One aspect of the disclosure is an ultrasound probe movement restrictor, the movement restrictor configured to stably interface with an ultrasound probe. The movement restrictor can be adapted and configured to restrict movement of the probe about one, two, three, four, five, or even more, axes or points. In some embodiments the movement restrictor is configured to be stabilized to one or more other movement restrictors.
This disclosure relates generally to ultrasound imaging, and more particularly to tagging frames of electronic signals indicative of information received by an ultrasound transducer with sensed orientation information, and generating a 3D volume using the tagged frames of electronic signals. The methods herein restrict movement of the ultrasound transducer about at least one axis or point, and are capable of generating the 3D volume using information sensed from an orientation sensor, without requiring position information sensed by a position sensor (i.e., from an x-y-z sensor, such as an optical position sensor or electromagnetic field sensor).
Use of position sensors (which may also incorporate orientation sensing) with ultrasound probes for volume image generation gives the advantage of allowing the ultrasound probe greater freedom of movement in space and providing precise location information about the image plane from wherever the probe may be held in contact with and in relation to the patient's body. Methods using position sensors with ultrasound probes have been proposed and investigated since as early as the 1990's, but precise position determination can be difficult to achieve (and is often subject to many constraints or sensitive to factors in the clinical environment, such as electromagnetic noise), and the sensors or sensing systems developed to achieve it, which have been used with ultrasound probes for volume image generation, are often quite complex and may come in awkward form factors. Because of this, position-sensor-based ultrasound volume image generation methods have had limited success, generally have not been integrated into commercial ultrasound systems, and have not gained traction in the marketplace. The disclosure herein includes methods that can generate 3D ultrasound volumes without requiring the use of position sensors.
The methods herein can optionally calibrate the orientation sensor with respect to a patient's orientation and use the calibration reading(s) to properly orient at least one of the 2D image data and the 3D volume with respect to the patient's cardinal anatomical axes, thus providing the ultrasound images with a correct frame of reference to aid interpretation of the images. While the calibration methods herein provide significant advantages, they are optional.
One of the advantages of methods and systems herein is that they can, by restricting the movement of the transducer about at least one axis or point, generate a 3D volume using feedback from an orientation sensor and without the use of a position sensor. Orientation sensors are widely available in a very small form factor and relatively inexpensive, while position sensors are relatively more expensive and add complexity to the system.
An additional advantage of some (but not all) of the methods and devices herein is that they can augment, or be used with, existing ultrasound systems that are capable of acquiring and displaying 2D image data (a majority of existing systems and probes only have 2D imaging capability, but some have a 3D mode as well). Once augmented, the ultrasound systems can then be used to generate 3D ultrasound image volumes of a subject, which can be viewed in real-time or near real-time, or subsequently visualized using a variety of 2D and 3D display methods. These embodiments provide a relatively simple and low-cost way of generating beneficial 3D volumes of a patient using existing 2D ultrasound systems. While not limited in use, these embodiments can be important in low-resource settings, including rural areas and the developing world. They may of course be used in developed regions as well, or in any setting or application where a more cost-effective solution is beneficial. As used herein, an existing ultrasound system generally refers to an ultrasound system that includes an ultrasound probe (with transducer therein), hardware and software for generating and/or processing ultrasound data, and a monitor for displaying ultrasound images. A majority of existing ultrasound systems and probes are only capable of acquiring, generating, and displaying 2D data and images, but some existing systems are capable of 3D imaging, even if they are typically not used clinically in that manner. Existing ultrasound systems can, of course, include additional components and provide additional functionality. It is important to note that augmenting existing ultrasound systems as described herein is merely an example of using the methods herein, and the disclosure is not so limited.
One aspect of the disclosure is a method of generating a 3D ultrasound volume, comprising moving an ultrasound transducer and an orientation sensor stabilized with respect to the ultrasound transducer, while restricting the movement of the ultrasound transducer about an axis or point, optionally due to an interface between an ultrasound probe and a movement restrictor; tagging each of a plurality of electronic signals indicative of information received by the ultrasound transducer, optionally 2D ultrasound image data, with information sensed by the orientation sensor, relative to the axis or point, each of the plurality of electronic signals indicative of information received by the ultrasound transducer representing a plane of information within the patient; and generating a 3D ultrasound volume image of the patient by positioning the plurality of tagged electronic signals indicative of information received by the ultrasound transducer at their respective orientations relative to the axis or point.
The methods of use herein allow for freehand movement of the probe, meaning that a person can move the probe with her hand, about an axis or point.
By restricting the movement of the transducer about a particular axis or point, each of the electronic signals indicative of information received by the ultrasound transducer can be tagged, or associated with, real-time information sensed by the orientation sensor (e.g., an angle) relative to the particular axis or point. The axis or point is thus a reference axis or point, and the electronic signals indicative of information received by the ultrasound transducer, tagged with orientation data, can then be used to generate a 3D volume relative to the reference axis or point. For example, the tagged electronic signals indicative of information received by the ultrasound transducer can be inserted into a 3D voxel grid along a plane at an appropriate angle relative to the axis or point.
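As a non-limiting illustration of this insertion step, the Python/NumPy sketch below grids one orientation-tagged 2D frame into a 3D voxel array for a fan/tilt sweep about an axis running laterally along the transducer face. The function name insert_frame, the nearest-neighbor gridding, and the pixel and voxel spacings are assumptions made for illustration only, not a required implementation.

```python
import numpy as np

def insert_frame(volume, frame, theta_rad, dx_mm, dz_mm, voxel_mm, origin_vox):
    """Nearest-neighbor insertion of one orientation-tagged 2D frame into a 3D
    voxel grid.  The fan/tilt axis is assumed to run laterally through the
    transducer face (the top row of the frame), so a pixel at lateral position
    x and depth r is rotated by the tagged angle theta about that axis.
    (Hypothetical helper; names and geometry are illustrative only.)"""
    n_depth, n_lat = frame.shape
    x = (np.arange(n_lat) - n_lat / 2.0) * dx_mm   # lateral position, mm (along the axis)
    r = np.arange(n_depth) * dz_mm                 # depth from the axis, mm
    R, X = np.meshgrid(r, x, indexing="ij")        # both (n_depth, n_lat)
    Y = R * np.sin(theta_rad)                      # rotate the plane about the lateral axis
    Z = R * np.cos(theta_rad)
    ix = np.round(X / voxel_mm + origin_vox[0]).astype(int)   # mm -> voxel indices,
    iy = np.round(Y / voxel_mm + origin_vox[1]).astype(int)   # offset so the axis
    iz = np.round(Z / voxel_mm + origin_vox[2]).astype(int)   # passes through origin_vox
    inside = ((0 <= ix) & (ix < volume.shape[0]) &
              (0 <= iy) & (iy < volume.shape[1]) &
              (0 <= iz) & (iz < volume.shape[2]))
    volume[ix[inside], iy[inside], iz[inside]] = frame[inside]
    return volume
```

A sweep would then be reconstructed by calling the function once per tagged frame; voxels left unfilled between insonified planes can be filled afterwards by interpolation.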
“Movement” about an axis or point, as used herein, includes any movement with respect to an axis or point, such as rotation, pivoting, tilting, spinning or twisting, and freely tumbling about a point. Freely tumbling refers to moving the transducer in multiple dimensions, methods of which generally require using all coordinates/dimensions of the orientation sensor's quaternion orientation reading.
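As a purely illustrative sketch of using the full quaternion reading for such tumbling motion, the code below converts a unit quaternion into a rotation matrix that can carry a frame's in-plane pixel coordinates into 3D space about the tumble point. The (w, x, y, z) component order and the example values are assumptions; some sensors report (x, y, z, w).

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z), as typically reported by an
    orientation sensor, into a 3x3 rotation matrix.  (Illustrative only.)"""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Example: carry two in-plane points (columns of P_plane, in mm, expressed
# relative to the tumble point) into patient space using a tagged reading.
q_tagged = np.array([0.9239, 0.3827, 0.0, 0.0])              # hypothetical sensor value
P_plane = np.array([[0.0, 10.0], [0.0, 0.0], [5.0, 40.0]])   # 3 x N points
P_world = quat_to_matrix(q_tagged) @ P_plane
```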
When movement is restricted as described herein, the movement can be restricted by any object that can restrict movement about a particular axis or point. For example, movement may be restricted by a mechanical fixture, or a hand (or fingers) of medical personnel or the patient. For example, medical personnel can “pinch” the sides of an ultrasound probe with two or more fingers, thus using fingers as a movement restrictor to restrict movement about an axis or point. In some embodiments the movement restrictor is the patient's body. For example, in transvaginal ultrasound applications, the patient's body can act as the movement restrictor. In some embodiments the movement restrictor is part of, or integral with, the ultrasound probe. That is, the movement restrictor can be any feature or mechanism built into the probe that allows for restricted movement about at least one particular axis or point.
One aspect of the disclosure is a 3D ultrasound image volume generating apparatus, comprising: an ultrasound probe in a fixed position relative to an orientation sensor; a movement restrictor configured so as to restrict the movement of the ultrasound probe about a particular axis or point; a tagging module adapted to tag each of a plurality of frames of electronic signals indicative of information received by the ultrasound transducer, optionally 2D ultrasound image data, with information sensed by the orientation sensor, relative to the particular axis or point; and a 3D volume generating module adapted to position each of the plurality of orientation tagged frames of electronic signals indicative of information received by the ultrasound transducer at respective orientations, relative to the particular axis or point, to generate a 3D volume image.
One of the advantages of systems and methods herein is that they can generate 3D volumes using information sensed by an orientation sensor, and do not require information sensed by a position sensor (i.e., an x, y, z sensor). In fact, in the embodiments herein, the systems and methods (unless indicated to the contrary) specifically exclude a position sensor (although information from a position sensor can conceivably be used with modification to the systems and methods). Examples of commercially available position sensors (which are not needed) include optical, electromagnetic and static discharge types. An electromagnetic version includes a transmitter (which may be placed on the transducer), and three receivers (placed at different, known locations in the room). From the phase shift difference in the electromagnetic signals received by these three receivers, the location and orientation of the ultrasound transducer can be determined. Such sensing methods require expensive equipment external to the sensing device for triangulation purposes, and these can cause electromagnetic interference with other medical equipment commonly found in hospitals and clinics. Additional disadvantages of some of these sensor types and their use include that the scanning room must have these sensors installed and the system calibrated before actual scanning can occur, and that they have limited range: the receivers can only be used with accuracy within about 2-3 feet of the transmitter box.
Orientation sensors (which may also be referred to as angle, or angular, sensors) are of a type that senses rotation about a single axis or multiple axes, including, but not limited to, capacitive MEMS devices, gyroscopes, magnetometers, sensors employing the Coriolis force, and accelerometers. The orientation sensors are capable of providing real-time feedback data corresponding to the probe's angular orientation. Any number of inertial modules (for example, one may employ a 3-axis gyroscope, a 3-axis magnetometer, and a 3-axis accelerometer, components which are common in many modern smartphones) are capable of this and are commercially available for relatively low cost. The orientation sensors may also be adapted to transmit sensed orientation information wirelessly. Orientation sensors are generally inexpensive compared to position sensors and their use, which is why the systems and methods herein, which can generate 3D volumes using only sensed orientation information and do not need sensed position information, provide a more cost-effective and simplified solution than other approaches to 3D ultrasound generation that include position sensors. Off-the-shelf orientation sensors can be used in the systems and methods herein. Alternative embodiments that are modified relative to those herein could include a position sensor, but would not have the advantages of the systems and methods herein.
Sensing member 24 is configured to be secured to probe 22 so that the position of orientation sensor 243 is fixed relative to the ultrasound transducer once sensing member 24 is secured to probe 22. In this embodiment probe interface 240 is configured so that it can be attached directly to a proximal region of probe 22 and stabilized to probe 22, but can be easily removed from probe 22 at the end of the procedure. In this embodiment probe interface 240 includes two stabilizing arms 2401, and probe interface 240 is a deformable material. The stabilizing arms are spaced from one another, and the interface 240 is deformable enough, such that as the interface 240 is slid onto the proximal region of probe 22, the stabilizing arms deform away from one another as they pass the largest diameter region of the proximal region of probe 22, but as interface 240 continues to be advanced, the arms again move towards one another and towards their as-manufactured spacing. Arms 2401 help secure the probe interface 240 of sensing member 24 to probe 22, and thus help secure sensing member 24 to probe 22.
Any of the cradles herein, examples of which are described in more detail below, can include a probe interface 240 (or any other type of probe interface herein that fixes the relative positions of the orientation sensor and the transducer). That is, a sensing member can be integral with the cradle, or it can be a component separate from the cradle (whether it is stabilized with respect to the cradle or not).
One of the advantages of some of the sensing members herein is that they can be secured to many types of existing ultrasound probes, which allows the sensing member to be used at least near-universally with existing ultrasound systems. This can eliminate the need to redesign or reconfigure existing probes, or manufacture completely new or different probes, which can greatly reduce the cost of the methods of 3D volume generation set forth herein. Probe interface 240 is configured to be able to be secured to many different types of existing ultrasound probes, such as convex, linear, curvilinear, phased array, micro convex, T-type linear, biplanar, endolumenal (for example, endovascular), or endocavitary (for example, transesophageal, endovaginal or intrarectal) probes, many of which have proximal regions (where the cord or cable begins) that are the same size or similar in size. Arms 2401 are deformable so that they can be moved away from one another when securing sensing member 24 to probe 22, but have an at-rest, or as-manufactured, spacing between them to secure the sensing member 24 to probe 22.
In some embodiments, the “diameter” of the opening in probe interface 240 is between 10 mm and 35 mm (such as between 15 mm and 30 mm), and may be sized in that manner to be able to accommodate many standard ultrasound probes. In some embodiments the probe interface is adjustable to allow it to be secured to a plurality of different sized probes. Some sensing members are, however, probe-specific, and as such can be sized and configured to be secured to specific types of probes. When diameter is used in the context of a probe interface opening, it does not require a circular opening; rather, diameter refers to the largest linear dimension across the opening.
Securing sensing member 24 to the proximal region of the probes secures the sensing member to the probe, and it does not interfere with a user's movement of probe 22. This allows a user to be able to grasp the probe 22 body and use it as she normally would during a procedure, and still have the sensing member 24 secured stably thereto. The position of the sensing member 24 relative to probe 22, as well as its configuration, allows for near universal use with existing ultrasound probes. Medical personnel thus need not be retrained using new probes, and new probes need not be manufactured.
Sensing member 24 includes probe interface 240 and housing 241, which includes the orientation sensor(s). In other embodiments the sensing member can have different configurations or constructions, as long as an orientation sensor is included therein or thereon. For example, the probe interface could still have stabilizing arms, but those arms could have magnetic elements at their respective ends, to help maintain their “grasp” on the probe 22 when in use. Alternatively, the sensing member can be secured to the probe with other securing mechanisms, such as, for example, one or more straps wrapped around one or more portions of the probe body, a temporary or permanent adhesive, or hook-and-loop closures. The type of securing mechanism can vary greatly, and can be any suitable mechanism, as long as the sensor's position is fixed relative to the transducer so that their relative positions do not change during data acquisition.
In some embodiments that modify the systems and methods herein, an orientation sensor may be optional, such as when orientation can be sensed from a component with known rotation (e.g., a motor). For example, the component that interfaces with the ultrasound probe may also include motorized rotational stages to provide automated sweeps (for example, automated “twisting” or “fanning”). In this case, an orientation sensor may not explicitly be required to provide orientation information, as an electronic motor may know exactly the amount of rotation being applied. The known amount of rotation can be used as part of the tagging procedure to tag each of the 2D images.
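A minimal sketch of tagging with a commanded motor angle, rather than a sensor reading, is shown below; it assumes one frame is acquired per uniform motor step, which is an illustrative assumption only.

```python
def tag_with_motor_angle(frames, step_deg, start_deg=0.0):
    """Pair each acquired frame with the commanded rotation of a motorized
    stage (one frame per uniform step of step_deg degrees).  Illustrative
    only; a real system could instead log the stage's reported angle."""
    return [(start_deg + i * step_deg, frame) for i, frame in enumerate(frames)]
```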
As set forth above, methods herein include restricting the movement of the ultrasound probe about an axis or point while sensing orientation information relative to the axis or point. Restricting the probe's movement (whether it is rotating, twisting, tumbling, etc.) about a desired point or axis may be achieved in a variety of ways, and can be mechanical or non-mechanical (e.g., with fingers or a hand). Mechanical examples include, without limitation, features incorporated into the design of the probe housing itself, such as protrusions, indentations, rods, or wheels meant for holding or clamping the probe by hand or some other mechanism, a stand attachable to the probe that can provide a stable reference to the body surface, or by mating the probe with a fixture that can be positioned on the patient and interface with the probe.
Such stands or fixtures may be adapted and/or configured to be positioned on and stabilized relative to the surface of the patient's body. For example, a fixture can be made of a material that is deformable to some extent, allowing for better conformation to the body. In other embodiments an adhesive (for example, using existing ECG adhesive stickers) can be used to provide additional stability between the fixture and the patient. In some embodiments the system can mechanically pull a local vacuum (creating suction), or have a bottom surface perforated with holes and a port to attach tubing from a vacuum line. These and other movement-limiting components and features can be made with relatively inexpensive materials (e.g., plastic), and can be machined or manufactured using methods such as 3D printing.
Movement restrictor 56 includes base 560, and slip ring 561, which is disposed within base 560. Movement restrictor 56 also includes probe cradle 562, which is configured to receive and stabilize ultrasound probe 52. Probe distal end 520 can be seen extending distally beyond probe cradle 562. Movement restrictor 56 also includes axis selector 563, which is adapted to be reconfigured relative to cradle 562 so that a particular probe restriction axis or point can be selected.
Movement restrictor 56 is also adapted to restrict the movement of the probe about a second axis, A2-A2, when axis selector 563 is moved to a second state or configuration (different than the first state) relative to base 560.
Movement restrictor 56, and other movement restrictors herein, may also be configured to restrict movement within a single image plane of the transducer, which could be helpful in, for example, scenarios in which it may be advantageous to widen the field of view in-plane, such as in cardiac applications. Some cardiac probes have a relatively narrow aperture, and rocking back and forth in-plane could widen the field of view.
The movement restrictors herein can be configured to limit the movement about more than two axes (or in some cases only one axis).
In some alternative embodiments, however, a mechanical movement restrictor is not required to restrict the movement of the probe about a particular axis. For example, in some methods of use, a user such as medical personnel (or a second person assisting in the procedure, or even the patient) may be able to effectively pinch the sides of the probe with fingers, or another tool that is not interfacing with the patient's body, creating enough friction to cause the probe, when it is moved, to rotate only about the axis defined between the fingers. The fingers in these embodiments are thus the movement restrictor. The disclosure herein thus includes restricting movement about a particular axis without necessarily using a mechanical movement restrictor. There may be advantages to using a mechanical movement restrictor, however, such as that the movement restrictor may be adapted to restrict movement about at least a first axis and a second axis.
In some embodiments herein the orientation sensor is secured to a component other than the probe, but is secured to have a fixed position relative to the transducer through the movement. For example, in some embodiments the orientation sensor is secured to a cradle that is stabilized with respect to the probe.
Additionally, a “frame” of data can also be a 3D volume of data. For example, methods herein can be used with a matrix-array or wobbler probe and a 3D-capable scanner. In these embodiments the 3D frames of data (i.e., 3D volumes) that are internal to the scanner are tagged with orientation sensor information, using any of the methods and systems herein. In these embodiments, first and second (or more) 3D volumes can be used, based on the known orientation relative to at least one axis or point, to generate, for example, a larger 3D ultrasound volume image. The concepts herein related to tagging frames of data can thus apply to 2D data as well as 3D data.
When the phrase “electronic signals indicative of information received by the ultrasound probe” is used herein, it is describing a frame of data, even if the term “frame” is not specifically used.
In this particular embodiment, the tagging step 78 tags each of the plurality of 2D ultrasound image data with orientation information sensed by the orientation sensor (step 76), such as, without limitation, an angle relative to the particular axis or point.
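As a non-limiting illustration, one way such a tagging step could be implemented is to pair each incoming frame with the orientation sample closest in time to its acquisition. The container and function names below (TaggedFrame, tag_frames) are hypothetical and offered only as a sketch.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TaggedFrame:
    """One frame of 2D image data paired with an orientation reading."""
    pixels: np.ndarray      # 2D image data (rows = depth, cols = lateral)
    quaternion: np.ndarray  # (w, x, y, z) reported by the orientation sensor
    timestamp: float        # acquisition time, seconds

def tag_frames(frame_times, frames, sensor_times, sensor_quats):
    """Tag each frame with the orientation sample nearest in time to it."""
    sensor_times = np.asarray(sensor_times)
    tagged = []
    for t, pixels in zip(frame_times, frames):
        i = int(np.argmin(np.abs(sensor_times - t)))
        tagged.append(TaggedFrame(pixels=pixels, quaternion=sensor_quats[i], timestamp=t))
    return tagged
```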
A 3D volume is then, either in real-time or near-real time, or at a later time, generated by software, step 80, that positions the plurality of tagged 2D ultrasound image data at their respective orientations relative to the particular axis or point.
The tagging and 3D generation methods can be performed with software that is added to existing ultrasound systems. That is, the methods can be incorporated into existing ultrasound systems, or added during the manufacture of new ultrasound systems.
Alternatively, existing 2D ultrasound systems can be augmented with devices or methods herein to provide high quality 3D volumes, which greatly reduces the cost and avoids the need to update existing ultrasound systems or manufacture an entirely new ultrasound system. Existing 2D ultrasound systems already include an ultrasound probe and are already adapted to generate 2D image data (and display 2D images) based on echo signals received by the transducer.
Any of the information or data obtained at any step in the process can be stored in one or more memory locations for future use, including further visualization. Additionally, electronic signals indicative of information received by the ultrasound probe and sensed orientation data can be stored separately or together, and the 3D volume generation software can be adapted to generate the 3D volumes later based on the stored data.
The sensed orientation sensor information can be communicated from the orientation sensor to the external device in a wired or wireless manner.
In alternative embodiments, however, it may be desirable to redesign existing ultrasound systems and probes to incorporate aspects of the systems and methods herein. An exemplary method of doing that is to include an orientation sensor inside a probe (rather than being a separate component secured to it), and the computing device of the ultrasound systems can be modified to include the tagging software and/or the 3D volume generating software (a separate external device is thus not a required aspect of this disclosure). The computing device on the ultrasound system would then receive as input the feedback from the orientation sensor (via the probe cable), and the tagging software and the 3D reconstruction method—using both the sensor feedback and the electronic signals indicative of information received by the ultrasound transducer (e.g., raw channel data, raw beamformed data, and detected data) already existing in the ultrasound system—can be disposed in the ultrasound system. The existing monitor can then display the 3D-generated volume, and the system can include updated user interface software to allow the user to interact with the visualization of the 3D volume as set forth herein. The user interface can be adapted to toggle the ultrasound monitor between 2D mode and 3D visualization modes.
Calibration
As set forth above, the methods herein can optionally include calibrating the orientation sensor relative to a patient reference before image data is acquired.
The calibration step can be used with systems that are not adapted to or do not generate 3D volumes. The calibration step and the associated methods of use can be beneficially used with existing 2D image systems. For example, the calibrating step can be used to provide a visual indicator on the 2D image of how the probe is oriented with respect to the patient.
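As a purely illustrative sketch of one way a calibration reading could be applied, the functions below express each subsequent sensor reading relative to a calibration reading captured while the probe is held in a known pose with respect to the patient's cardinal anatomical axes. The quaternion convention and function names are assumptions, not a required implementation.

```python
import numpy as np

def quat_conj(q):
    """Conjugate (inverse for unit quaternions) of a (w, x, y, z) quaternion."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def to_patient_frame(q_reading, q_calibration):
    """Express a raw sensor reading relative to the calibration reading, so
    that tagged orientations are referenced to the patient's axes."""
    return quat_mul(quat_conj(q_calibration), q_reading)
```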
FIG. 10A illustrates the calibration position of the probe.
3D Volume Combining
An exemplary advantage of some of the methods and systems herein is that they allow for restricted movement about more than one axis.
First and second 3D volumes can be generated using ultrasound transducers that are operating at different frequencies. For example, high frequency ultrasound probes operate at relatively higher frequency, provide higher image resolution, and image at shallower depths. Lower frequency probes operate at lower frequencies, provide generally lower resolution, but have a better depth of penetration. The 3D volumes, generated using probes with different frequencies, can be compounded, taking advantage of the higher resolution at shallower depth, with the better depth of penetration of the lower frequency probe. In some embodiments the movement restrictor is configured to interface different types of probes with different frequencies, and is configured to restrict movement of each probe about at least one axis or point. For example, the system can include a restrictor with interchangeable cradles, each cradle configured to interface with a particular type of probe (or particular family of probes).
In some embodiments a user interface, on an external device or on a modified existing ultrasound system, includes buttons (or similar actuators) or a touch screen that allow a user to select from the multiple axes. The user then performs the sweep about the axis or point, and the software saves that image data. The user can then select a different axis, and then performs the second sweep about a second axis or point. The software method can then compound the 3D image volumes, and the output is a higher quality 3D volume. Compounding in this context is generally known, and an exemplary reference that includes exemplary details is Trahey G E, Smith S W, von Ramm O T. Speckle Pattern Correlation with Lateral Aperture Translation: Experimental Results and Implications for Spatial Compounding. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 1986;33(3):257-64.
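A minimal sketch of compounding two such volumes is shown below, assuming both have already been reconstructed on the same voxel grid and that voxels not covered by a sweep are marked NaN; a real implementation might instead weight contributions by insonification angle, confidence, or speckle statistics.

```python
import numpy as np

def compound_volumes(vol_a, vol_b):
    """Simple spatial compounding of two co-registered volumes on a common
    voxel grid: average where both sweeps contributed, keep the single value
    where only one did, leave NaN where neither did.  (Illustrative only.)"""
    a_ok = ~np.isnan(vol_a)
    b_ok = ~np.isnan(vol_b)
    out = np.full(vol_a.shape, np.nan)
    both = a_ok & b_ok
    out[both] = 0.5 * (vol_a[both] + vol_b[both])
    out[a_ok & ~b_ok] = vol_a[a_ok & ~b_ok]
    out[b_ok & ~a_ok] = vol_b[b_ok & ~a_ok]
    return out
```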
Any of the methods herein can also include confidence mapping steps to assess 2D pixel quality prior to incorporating any of the 2D images into the 3D volume. Confidence mapping can also be used in any of the methods herein to preferentially select data from between at least two 3D volumes when combining/merging 3D volumes. Exemplary aspects of confidence mapping that can be used in these embodiments can be found in, for example, Karamalis A, Wein W, Klein T, Navab N. Ultrasound confidence maps using random walks. Medical Image Analysis. 2012;16(6):1101-12.
The disclosure herein also includes methods of use that merge, or stitch together, multiple 3D volumes (which may be adjacent or partially overlapping) to expand the total field of view inside the patient, thus generating a larger merged 3D image volume. This can enable the physician to perform a more complete scan of the body for immediate review, similar to CT but without the use of ionizing radiation. The plurality of 3D volumes can be merged, or stitched, together, as long as the relative position of each rotation axis or point is known or can be determined. In these embodiments, the 3D volumes can be partially overlapping, with a first 3D volume being at a different depth than a second 3D volume.
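The sketch below illustrates the basic bookkeeping for such merging, assuming each volume has already been resampled to a common orientation and that the offset of each volume's rotation axis or point is known in whole voxels. The names and the simple overwrite rule for overlapping regions are illustrative; overlapping data could instead be blended or selected using confidence maps.

```python
import numpy as np

def stitch_volumes(volumes, offsets_vox, out_shape):
    """Place several co-oriented 3D volumes into one larger voxel grid using
    the known voxel offset of each volume's reference axis/point.  Assumes
    every volume fits inside out_shape at its offset.  Later volumes simply
    overwrite earlier ones where they overlap.  (Illustrative only.)"""
    merged = np.full(out_shape, np.nan)
    for vol, (ox, oy, oz) in zip(volumes, offsets_vox):
        nx, ny, nz = vol.shape
        region = merged[ox:ox + nx, oy:oy + ny, oz:oz + nz]
        filled = ~np.isnan(vol)
        region[filled] = vol[filled]   # writes through the view into merged
    return merged
```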
A merely exemplary apparatus that is adapted to enable generation of multiple 3D volumes that can be combined (e.g., merged or stitched) together is described in the paragraphs that follow.
The apparatus can also include one or more angled connectors 903A and 903B, which also have linking elements like the bases, and are thus adapted to interface with the bases. The angled nature of the connectors allows adjacent movement restrictors to be coupled at an angle relative to one another (i.e., not aligned along a plane). This can be beneficial on a curved portion of the patient, where an angled coupling is advantageous or necessary to engage the movement restrictors with the patient's body. The angled connectors can be used at any desired location to provide the relative angled coupling between adjacent movement restrictors.
Any number of movement restrictors may be linked together, in a variety of configurations, aligned or at an angle to one another, depending on the surface of the patient and/or the application.
Any of the bases can have configurations other than hexagonal, such as rectangular, square, circular, triangular, octagonal, or even irregular, such as if the shape or shapes are custom made for a particular use on the patient. The connectors can similarly have any suitable variety of configurations and linking members as desired.
In this embodiment, a probe 92 is shown stabilized in a probe cradle associated with movement restrictor 90I C. The probe can be used in any of the manners described herein, such as moving the probe about one or both axes after selecting the particular axis with the axis selector. The probe has an associated orientation sensor (inside the probe or secured thereto), and the 2D images can be tagged as described herein (with orientation information and/or calibration information). After data has been obtained using one movement restrictor, the probe can be moved (and perhaps the entire slip ring/probe cradle/axis selector unit as well) to a different movement restrictor. The probe can be swept again about one or more axes or points. The probe can be moved to any number of movement restrictors to obtain image data. Information and data can be stored at any location at any or all steps in the process.
For a particular base, if sweeps about two axes are performed, image compounding can occur for each base before 3D volumes from adjacent movement restrictors are stitched.
Additionally, data can be saved after each sweep, and the software can process the data at any stage of the process.
In some embodiments the components interfacing the patient are fixed with respect to the patient. A user can simply hold the movement restrictors against the patient, or, for example, a temporary adhesive sticker or vacuum suction can be applied to hold the movement restrictors in place. In some embodiments, even if the patient interface component(s) is not specifically fixed with respect to the patient, software can correctly identify image landmarks to aid in stitching partially overlapping 3D volumes that were not acquired with the aid of a fixed mechanical reference system. Using a fixed mechanical system with a known configuration can, however, simplify and improve the accuracy of volume stitching.
In some embodiments the patient interface (e.g., the bases of the movement restrictors) can be a single integral unit.
The methods and devices herein (e.g., orientation sensor with restricted motion of the transducer) can also be used with synthetic aperture imaging. Synthetic aperture imaging requires RF data (channel or beamformed), which is obtained as described above. Synthetic aperture imaging can be performed by, e.g., saving 2D channel data for many different angular positions (e.g., using any of the apparatus herein).
Live Updating
Any of the methods herein can also be adapted to provide “live-updating” processing and/or display of the generated 3D volume with continued sweeping of the ultrasound probe. After a 3D volume has been generated (using any of the methods and systems herein) and the probe is still in use, the software is adapted to receive as input the current (i.e., live) 2D image data from the orientation-sensor-indicated plane and insert the current image data into the 3D data array, to add to, overwrite, or update the previous/existing data in the volume. The interface can optionally be adapted to display ‘past’ image data in the volume as dim or semi-transparent, and the current/live/most-recent plane of data can be shown as bright, highlighted, and/or opaque. The display thus allows the user to distinguish between the previous data and the current data. The live-updating volume display can provide guidance and confidence to users when performing intraoperative, invasive, or minimally-invasive procedures.
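One illustrative way of keeping track of which data are live is sketched below: each newly tagged plane is written into the running volume together with a recency index, so a display layer can render the newest plane bright/opaque while dimming older data. The helper name and data layout are hypothetical.

```python
import numpy as np

def live_update(volume, recency, plane_vox_idx, plane_values, frame_counter):
    """Insert the most recent orientation-indicated plane into the running 3D
    array and record when each voxel was last written.  plane_vox_idx is an
    (N, 3) array of voxel indices for the plane's pixels.  (Illustrative.)"""
    ix, iy, iz = plane_vox_idx.T
    volume[ix, iy, iz] = plane_values       # add to / overwrite existing data
    recency[ix, iy, iz] = frame_counter     # remember how recent each voxel is
    newest_mask = recency == frame_counter  # voxels belonging to the live plane
    return volume, recency, newest_mask
```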
Additionally, some methods can provide near-live updating, rather than true live-updating. Near-live updating is intended to encompass all updates that are not true live updating. For example, near-live updating can replace the entire 3D volume during the procedure, or portions of the 3D volume, as new data is acquired.
Additional
As set forth herein, the methods, devices, and systems herein enable much easier and more intuitive uses of ultrasound for many applications. Additionally, because of the speed, safety, portability, and low cost of ultrasound relative to other imaging modalities (for example, CT or MRI), the 3D image volumes can be acquired quickly, and optionally immediately reviewed at the bedside post-acquisition, saved for later use or post-acquisition reconstruction, or sent electronically to a remote location for review and interpretation. Systems, devices, and methods herein also enable effective use and enhancement of existing low-end equipment, which is important in low-resource settings, including rural areas and the developing world, as well as cost-conscious developed world settings.
Whether real-time or not, interface and image functions such as thresholding, cropping, and segmentation can be performed to isolate and visualize particular structures of interest.
The bottom surfaces (the surfaces that contact the body) of any of the bases herein need not be flat, but can be molded with curvature to conform to certain body surfaces, if desired.
“Transducer” and ultrasound “probe” may be used interchangeably herein. Generally, an ultrasound probe includes an ultrasound transducer therein. When this disclosure references a “probe,” it is generally also referencing the transducer therein, and when this disclosure references an ultrasound “transducer,” it is also generally referencing the probe in which the transducer is disposed.
While the embodiments above describe systems and methods that rely on the ultrasound transducer within the probe as the energy source (i.e., sound pulses), the systems and methods herein are not so limited. This disclosure includes any method or system in which the energy source is not the ultrasound transducer. In these embodiments the ultrasound transducer can still function as a detector, or receiver, of acoustic data that occurs as a result of energy emitted into the tissue, whatever the source. Photoacoustic imaging is an example of such an application. Photoacoustic imaging involves exciting tissue with a pulsed laser. Scattering dominates light propagation, so unlike ultrasound, the light excitation generally cannot be spatially focused within the body, and the speed-of-light propagation is considered an instantaneous excitation, which can be considered like a ‘flash’, everywhere, at time=0. The light energy is absorbed to varying degrees in various tissues to create very rapid, localized thermal expansion, which acts as an acoustic source that launches an ultrasonic pressure wave. The resulting ultrasound waves can be detected by a conventional handheld probe with transducer therein, and used to generate an image that is effectively a map of optical absorption within the tissue. In this exemplary embodiment light energy is transmitted into tissue, rather than acoustic energy as in the case of ultrasound imaging. A probe (with transducer therein) used for photoacoustic imaging can thus be used with any of the systems and methods herein, such as by securing an orientation sensor in a fixed position relative to the probe. The embodiments herein are thus not limited to ultrasound transducers being the source of acoustic energy.
The orientation methods described above, including image annotation and reference icon creation, can likewise be used with these alternative imaging embodiments.
Arrows with dashed (broken) lines in the figures herein are meant to indicate optional steps.
Any of the methods herein can be used with any suitable device, system, or apparatus herein, and any device, system, or apparatus can be used with any suitable method herein.
This application claims the priority of U.S. Provisional Application No. 62/172,313, filed Jun. 8, 2015, and U.S. Provisional Application No. 62/204,532, filed Aug. 13, 2015, the disclosures of which are incorporated by reference herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2016/036530 | 6/8/2016 | WO | 00
Number | Date | Country
---|---|---
62172313 | Jun 2015 | US
62204532 | Aug 2015 | US