Calibration of multiple aperture ultrasound probes

Information

  • Patent Grant
  • Patent Number
    11,253,233
  • Date Filed
    Tuesday, September 4, 2018
  • Date Issued
    Tuesday, February 22, 2022
Abstract
The quality of ping-based ultrasound imaging is dependent on the accuracy of information describing the precise acoustic position of transmitting and receiving transducer elements. Improving the quality of transducer element position data can substantially improve the quality of ping-based ultrasound images, particularly those obtained using a multiple aperture ultrasound imaging probe, i.e., a probe with a total aperture greater than any anticipated maximum coherent aperture width. Various systems and methods for calibrating element position data for a probe are described.
Description
INCORPORATION BY REFERENCE

All publications and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.


FIELD

This disclosure generally relates to ultrasound imaging systems and more particularly to systems and methods for calibrating a multiple aperture ultrasound probe.


BACKGROUND

In conventional ultrasonic imaging, a focused beam of ultrasound energy is transmitted into body tissues to be examined and the returned echoes are detected and plotted to form an image. While ultrasound has been used extensively for diagnostic purposes, conventional ultrasound has been greatly limited by depth of scanning, speckle noise, poor lateral resolution, obscured tissues and other such problems.


In order to insonify body tissues, an ultrasound beam is typically formed and focused either by a phased array or a shaped transducer. Phased array ultrasound is a commonly used method of steering and focusing a narrow ultrasound beam for forming images in medical ultrasonography. A phased array probe has many small ultrasonic transducer elements, each of which can be pulsed individually. By varying the timing of ultrasound pulses (e.g., by pulsing elements one by one in sequence along a row), a pattern of constructive interference is set up that results in a beam directed at a chosen angle. This is known as beam steering. Such a steered ultrasound beam may then be swept through the tissue or object being examined. Data from multiple beams are then combined to make a visual image showing a slice through the object.
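The timing relationship described above (pulsing elements in sequence so their wavefronts interfere constructively at a chosen angle) can be sketched numerically. The function name, the uniform element pitch, and the nominal speed of sound of 1540 m/s are illustrative assumptions, not taken from this disclosure.

```python
import math

def steering_delays(num_elements, pitch_m, angle_deg, c=1540.0):
    """Per-element firing delays (seconds) that steer a 1D phased-array
    beam to angle_deg from the array normal.

    Each successive element fires later by the time sound needs to
    cover the extra path pitch * sin(angle); assumes uniform pitch
    and a uniform speed of sound c (m/s).
    """
    d = pitch_m * math.sin(math.radians(angle_deg))  # extra path per element
    return [i * d / c for i in range(num_elements)]

# Example: 64 elements at 0.3 mm pitch steered 20 degrees off normal
delays = steering_delays(64, 0.3e-3, 20.0)
```

Sweeping `angle_deg` over a range of values and firing once per angle produces the steered-beam sweep described above.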


Traditionally, the same transducer or array used for transmitting an ultrasound beam is used to detect the returning echoes. This design configuration lies at the heart of one of the most significant limitations in the use of ultrasonic imaging for medical purposes: poor lateral resolution. Theoretically, the lateral resolution could be improved by increasing the width of the aperture of an ultrasonic probe, but practical problems involved with aperture size increase have kept apertures small. Unquestionably, ultrasonic imaging has been very useful even with this limitation, but it could be more effective with better resolution.


SUMMARY OF THE DISCLOSURE

A method of calibrating an ultrasound probe is provided, comprising the steps of placing a first array and a second array of the ultrasound probe in position to image a phantom, each of the first and second arrays having a plurality of transducer elements, imaging the phantom with the first array to obtain a reference image, wherein imaging is dependent on data describing a position of each transducer element of the first array, imaging the phantom with the second array to obtain a test image, wherein imaging is dependent on data describing a position of each transducer element of the second array, quantifying a first error between the reference image and the test image, and iteratively optimizing the data describing the position of each transducer element of the second array until the first error is at a minimum.


In some embodiments, the method further comprises imaging the phantom with a third array of the ultrasound probe to obtain a second test image, the third array having a plurality of transducer elements, quantifying a second error between the reference image and the second test image and iteratively optimizing data describing a position of each element of the third array until the second error is minimized.


In some embodiments, the method further comprises storing raw echo data received while imaging the phantom with the second array.


In one embodiment, the iteratively optimizing step comprises adjusting the data describing the position of the transducer elements of the second array to create first adjusted position data, re-beamforming the stored echo data using the first adjusted position data to form a second test image of the reflectors, quantifying a second error between the second test image and the reference image, and determining whether the second error is less than the first error.


In one embodiment, adjusting the data describing the position of the transducer elements of the second array includes adjusting a position of a reference point of the array and an angle of a surface of the array, but does not include adjusting a spacing between the elements of the second array.
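The constraint in the embodiment above (the array may translate and tilt as a rigid body, but inter-element spacing stays fixed) can be sketched as a rigid transform that derives every element position from three free variables. The function name and the uniform-pitch assumption are illustrative, not from the patent.

```python
import math

def element_positions(ref_x, ref_y, angle_rad, pitch_m, num_elements):
    """Acoustic positions of a 1D array's elements, derived from the
    array's reference point (ref_x, ref_y), its surface angle, and a
    FIXED element pitch.

    Only the reference point and the angle are calibration variables;
    the spacing between elements is never adjusted, matching the
    constraint described above. Illustrative sketch only.
    """
    ux, uy = math.cos(angle_rad), math.sin(angle_rad)  # unit vector along the array face
    half = (num_elements - 1) / 2.0
    return [(ref_x + (i - half) * pitch_m * ux,
             ref_y + (i - half) * pitch_m * uy)
            for i in range(num_elements)]
```

An optimizer adjusting only `ref_x`, `ref_y`, and `angle_rad` therefore moves all elements of the array together while preserving their relative geometry.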


In some embodiments, the method further comprises, after a first iteratively optimizing step, performing a second iteratively optimizing step comprising adjusting the first adjusted position data, including adjusting a spacing between at least two transducer elements of the second array to create second adjusted position data, re-beamforming the stored echo data using the second adjusted position data to form a third test image of the reflectors, quantifying a third error between the third test image and the reference image, and determining whether the third error is less than the second error.


In one embodiment, iteratively optimizing the transducer element position data comprises optimizing using a least squares optimization process.


In other embodiments, quantifying the first error comprises quantifying a distance between positions of reflectors in the reference image relative to positions of the same reflectors in the test image. In some embodiments, quantifying the first error comprises quantifying a difference in brightness between reflectors in the reference image and reflectors in the test image. In additional embodiments, quantifying the first error comprises quantifying a difference between a pattern of reflectors and holes in the reference image compared with a pattern of holes and reflectors in the test image.
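The first of these error metrics, the distance between reflector positions in the two images, can be sketched as follows. The function name is hypothetical, and the sketch assumes the reflector lists have already been matched one-to-one, which the patent does not specify.

```python
import math

def position_error(ref_pts, test_pts):
    """Quantify a first error as the total distance between reflector
    positions detected in the reference image and the positions of the
    same reflectors in the test image.

    Assumes ref_pts[i] and test_pts[i] are the same physical reflector;
    an illustrative sketch, not the patent's exact formulation.
    """
    return sum(math.dist(a, b) for a, b in zip(ref_pts, test_pts))
```

A brightness-difference or reflector/hole-pattern metric would follow the same shape: reduce the two images to comparable quantities and sum the per-reflector discrepancies.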


In one embodiment, the reference image and the test image are three-dimensional volumetric images of a three-dimensional pattern of reflectors, holes, or both reflectors and holes.


In other embodiments, the phantom comprises living tissue.


In some embodiments, the method further comprises identifying positions of reflectors in the phantom and fitting a mathematically defined curve to a detected pattern of reflectors.


In one embodiment, the curve is a straight line.


In other embodiments, the step of quantifying a first error comprises calculating a coefficient of determination that quantifies a degree of fit of the curve to the pattern of reflectors.
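For the straight-line case, the curve fit and its coefficient of determination can be computed directly. This is a standard least-squares line fit; the function name is illustrative, and the points stand in for detected reflector positions.

```python
def r_squared_line_fit(points):
    """Fit a least-squares straight line y = m*x + b to detected
    reflector positions and return the coefficient of determination
    R^2, which quantifies the degree of fit (1.0 = perfectly collinear).

    Pure-Python sketch of the error-quantification idea above.
    """
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx                      # slope of best-fit line
    b = my - m * mx                    # intercept
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot if ss_tot else 1.0
```

When the phantom's reflectors are known to lie on a line, 1 − R² serves as the error to be minimized: badly calibrated element positions smear the reflectors off the line and drive R² down.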


A method of calibrating an ultrasound probe is provided, comprising the steps of insonifying a plurality of reflectors of a phantom with the ultrasound probe, receiving echo data with the ultrasound probe, storing the echo data, beamforming the stored echo data using first transducer element position data to form an image of the reflectors, obtaining reference data describing the reflectors, quantifying an error between the image and the reference data, and iteratively optimizing the transducer element position data based on the quantified error.


In some embodiments, the iteratively optimizing step comprises iteratively optimizing the transducer element position data with a least squares optimization process.


In one embodiment, the iteratively optimizing step comprises adjusting the transducer element position data, re-beamforming the stored echo data using the adjusted transducer element position data to form a second image of the reflectors, quantifying a second error based on the second image, and evaluating the second error to determine whether the adjusted transducer element position data improves the image.
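The adjust / re-beamform / re-quantify / compare loop above can be sketched as a simple accept-if-improved coordinate search. This is a stand-in for the least-squares optimization named elsewhere in the disclosure; the function names are hypothetical, and `error_fn` represents the expensive step of re-beamforming the stored echo data and quantifying the error against the reference.

```python
def optimize_positions(params, error_fn, steps, n_iters=100):
    """Iteratively optimize array position variables (e.g. horizontal
    offset, vertical offset, angle).

    Each pass perturbs one variable at a time, re-evaluates the error
    (which would re-beamform the stored echoes), and keeps an adjusted
    value only if the error decreases; step sizes are halved once no
    perturbation helps. A coordinate-descent sketch, not the patent's
    stated least-squares process.
    """
    best = list(params)
    best_err = error_fn(best)
    for _ in range(n_iters):
        improved = False
        for i in range(len(best)):
            for delta in (steps[i], -steps[i]):
                trial = list(best)
                trial[i] += delta
                err = error_fn(trial)
                if err < best_err:          # keep only improving adjustments
                    best, best_err = trial, err
                    improved = True
        if not improved:
            steps = [s / 2.0 for s in steps]  # refine the search
    return best, best_err
```

In practice `error_fn` dominates the cost, which is why the raw echo data is stored once and re-beamformed on each trial rather than re-acquired.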


In some embodiments, adjusting the transducer element position data comprises adjusting an array horizontal position variable, an array vertical position variable and an array angle variable. In other embodiments, adjusting the transducer element position data does not comprise adjusting a spacing between adjacent transducer elements on a common array.


In one embodiment, the reference data is based on physical measurements of the phantom.


In some embodiments, the method further comprises deriving the reference data from a reference image of the phantom.


In one embodiment, the reference image is obtained using a different group of transducer elements of the probe than a group of transducer elements used for the insonifying and receiving steps.


In additional embodiments, the step of iteratively optimizing the transducer element position data comprises using a least squares optimization process.


In some embodiments, the method further comprises identifying positions of reflectors in the phantom and fitting a mathematically defined curve to a detected pattern of reflectors. In one embodiment, the curve is a straight line.


In some embodiments, the step of quantifying a first error comprises calculating a coefficient of determination that quantifies a degree of fit of the curve to the pattern of reflectors.


A method of calibrating ultrasound imaging data is also provided, comprising the steps of retrieving raw echo data from a memory device, the raw echo data comprising a plurality of echo strings, each echo string comprising a collection of echo records corresponding to echoes of a single ultrasound ping transmitted from a single transmit aperture and received by a single receive element, retrieving first calibration data describing a position of each receive transducer element corresponding to each echo string, retrieving second calibration data describing a position of at least one transducer element corresponding to a transmitted ping associated with each echo string, forming a reference image by beamforming a first collection of echo strings corresponding to a first group of receive transducer elements, wherein beamforming comprises triangulating a position of reflectors based on the first and second calibration data, forming a test image by beamforming a second collection of echo strings corresponding to a second group of transducer elements that is not identical to the first group of transducer elements, quantifying a first error between the reference image and the test image, adjusting the first calibration data to describe adjusted positions for the elements of the second group, re-beamforming the test image with the adjusted positions for the elements of the second group to obtain a second test image, quantifying a second error between the second test image and the reference image, and evaluating the second error to determine whether it is less than the first error.


In some embodiments, the method is performed without any physical or electronic connection to a probe used to create the raw echo data.


In some embodiments, there is no ultrasound probe connected to the memory device.


An ultrasound probe calibration system is provided, comprising an ultrasound probe having a plurality of transmit transducer elements and a plurality of receive transducer elements, a phantom having a pattern of reflectors, a first memory device containing reference data describing the pattern of reflectors of the phantom, a second memory device containing transducer element position data describing a position of each transmit transducer element and each receive transducer element relative to a common coordinate system, and an imaging control system containing calibration program code configured to direct the system to insonify the phantom with the transmit transducer elements, receive echo data with the receive transducer elements, and store echo data in a third memory device, form a first image of the pattern of reflectors by beamforming the stored echo data using the transducer element position data, determine measurement data describing a position of the pattern of reflectors as indicated by the first image, quantify an error between the measurement data and the reference data, and iteratively optimize the transducer element position data based on the quantified error.


In some embodiments, the imaging control system is configured to iteratively optimize the transducer element position data by adjusting the transducer element position data; forming a second image of the pattern of reflectors by re-beamforming the stored echo data using the adjusted transducer element position data; quantifying a second error based on the second image; and evaluating the second error to determine whether the adjusted transducer element position data improves the image.


In one embodiment, the reference data is based on physical measurements of the phantom.


In other embodiments, the reference data is based on a reference image.


In some embodiments, the imaging control system is configured to iteratively optimize the transducer element position data using a least squares optimization process.


In other embodiments, the phantom further comprises at least one region that absorbs ultrasound signals.


In some embodiments, the ultrasound probe comprises a plurality of transducer arrays. In another embodiment, the ultrasound probe comprises a single continuous transducer array. In one embodiment, the ultrasound probe comprises a transducer array with a concave curvature.


In some embodiments, the phantom comprises a pattern of pins.


In one embodiment, the phantom comprises living tissue.


In some embodiments, the calibration program code is configured to determine measurement data by fitting a curve to a detected pattern of reflectors.


In one embodiment, the calibration program code is configured to quantify an error by determining a coefficient of determination quantifying a degree of fit of the curve.


In another embodiment, at least two of the first memory device, the second memory device, and the third memory device are logical portions of a single physical memory device.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth with particularity in the claims that follow. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:



FIG. 1 is a schematic illustration of an embodiment of a three-aperture ultrasound imaging probe and a phantom object being imaged.



FIG. 2 is a section view of one embodiment of a multiple aperture ultrasound probe with a continuous curvilinear array positioned above a phantom and held in place by a clamp mechanism.



FIG. 3 is a section view of an embodiment of an adjustable multiple aperture imaging probe positioned above a phantom.



FIG. 4A is a longitudinal sectional view of a multiple aperture ultrasound imaging probe configured for trans-esophageal ultrasound imaging.



FIG. 4B is a longitudinal sectional view of a multiple aperture ultrasound imaging probe configured for trans-rectal ultrasound imaging.



FIG. 4C is a longitudinal sectional view of a multiple aperture ultrasound imaging probe configured for intravenous ultrasound.



FIG. 4D is a longitudinal sectional view of a multiple aperture ultrasound imaging probe configured for trans-vaginal ultrasound imaging.



FIG. 4E is a sectional view of a multiple aperture ultrasound imaging probe configured for imaging round structures or features.



FIG. 4F is a plan view of a multiple aperture ultrasound imaging probe with a radial array of transducer elements configured for three-dimensional imaging.



FIG. 5A is a cross-sectional view of an ultrasound probe calibration phantom having a docking section with receiving slots for receiving and retaining ultrasound probes to be calibrated.



FIG. 5B is a top plan view of the ultrasound probe calibration phantom docking section of FIG. 5A.



FIG. 6 is a process flow diagram of one embodiment of a process for calibrating a multiple aperture ultrasound probe using a static phantom.



FIG. 7 is a process flow diagram illustrating one embodiment of an iterative optimization process for minimizing an error function by adjusting transducer element position variables.



FIG. 8 is a block diagram illustrating components of an ultrasound imaging system in accordance with some embodiments.





DETAILED DESCRIPTION

The various embodiments will be described in detail with reference to the accompanying drawings. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.


The various embodiments herein provide systems and methods for dynamically calibrating a multiple aperture ultrasound probe using a static phantom. Calibration of a multiple aperture ultrasound imaging probe may generally comprise determining an acoustic position of each transducer element in the probe. Some embodiments of a dynamic calibration process may generally include the steps of imaging a calibration phantom having a known pattern of reflectors, quantifying an error between known information about the phantom and information obtained from the imaging, and performing an iterative optimization routine to minimize an error function in order to obtain improved transducer element position variables. Such improved transducer element position variables may then be stored for use during subsequent imaging using the calibrated probe.


Introduction & Definitions

Although the various embodiments are described herein with reference to ultrasound imaging of various anatomic structures, it will be understood that many of the methods and devices shown and described herein may also be used in other applications, such as imaging and evaluating non-anatomic structures and objects. For example, the probes, systems and methods described herein may be used in non-destructive testing or evaluation of various mechanical objects, structural objects or materials, such as welds, pipes, beams, plates, pressure vessels, etc.


As used herein the terms “ultrasound transducer” and “transducer” may carry their ordinary meanings as understood by those skilled in the art of ultrasound imaging technologies, and may refer without limitation to any single component capable of converting an electrical signal into an ultrasonic signal and/or vice versa. For example, in some embodiments, an ultrasound transducer may comprise a piezoelectric device. In other embodiments, ultrasound transducers may comprise capacitive micromachined ultrasound transducers (CMUT).


Transducers are often configured in arrays of multiple individual transducer elements. As used herein, the terms “transducer array” or “array” generally refer to a collection of transducer elements mounted to a common backing plate. Such arrays may have one dimension (1D), two dimensions (2D), 1.X dimensions (1.XD) or three dimensions (3D). Other dimensioned arrays as understood by those skilled in the art may also be used. Annular arrays, such as concentric circular arrays and elliptical arrays may also be used. An element of a transducer array may be the smallest discretely functional component of an array. For example, in the case of an array of piezoelectric transducer elements, each element may be a single piezoelectric crystal or a single machined section of a piezoelectric crystal.


As used herein, the terms “transmit element” and “receive element” may carry their ordinary meanings as understood by those skilled in the art of ultrasound imaging technologies. The term “transmit element” may refer without limitation to an ultrasound transducer element which at least momentarily performs a transmit function in which an electrical signal is converted into an ultrasound signal. Similarly, the term “receive element” may refer without limitation to an ultrasound transducer element which at least momentarily performs a receive function in which an ultrasound signal impinging on the element is converted into an electrical signal. Transmission of ultrasound into a medium may also be referred to herein as “insonifying.” An object or structure which reflects ultrasound waves may be referred to as a “reflector” or a “scatterer.”


As used herein, the term “aperture” may refer to a conceptual “opening” through which ultrasound signals may be sent and/or received. In actual practice, an aperture is simply a single transducer element or a group of transducer elements that are collectively managed as a common group by imaging control electronics. For example, in some embodiments an aperture may be a physical grouping of elements which may be physically separated from elements of an adjacent aperture. However, adjacent apertures need not necessarily be physically separated.


It should be noted that the terms “receive aperture,” “insonifying aperture,” and/or “transmit aperture” are used herein to mean an individual element, a group of elements within an array, or even entire arrays within a common housing, that perform the desired transmit or receive function from a desired physical viewpoint or aperture. In some embodiments, such transmit and receive apertures may be created as physically separate components with dedicated functionality. In other embodiments, any number of send and/or receive apertures may be dynamically defined electronically as needed. In other embodiments, a multiple aperture ultrasound imaging system may use a combination of dedicated-function and dynamic-function apertures.


As used herein, the term “total aperture” refers to the total cumulative size of all imaging apertures. In other words, the term “total aperture” may refer to one or more dimensions defined by a maximum distance between the furthest-most transducer elements of any combination of send and/or receive elements used for a particular imaging cycle. Thus, the total aperture is made up of any number of sub-apertures designated as send or receive apertures for a particular cycle. In the case of a single-aperture imaging arrangement, the total aperture, sub-aperture, transmit aperture, and receive aperture will all have the same dimensions. In the case of a multiple array probe, the dimensions of the total aperture may include the sum of the dimensions of all of the arrays.
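The "maximum distance between the furthest-most transducer elements" definition above amounts to the largest pairwise distance over all send and receive elements used in a cycle. A brute-force sketch (function name illustrative):

```python
import math

def total_aperture(element_positions):
    """Total aperture: the maximum distance between the furthest-most
    transducer elements of any combination of send and receive elements
    used for an imaging cycle.

    element_positions pools all send and receive element coordinates;
    brute-force over all pairs, fine for the element counts involved.
    """
    return max(math.dist(a, b)
               for i, a in enumerate(element_positions)
               for b in element_positions[i + 1:])
```

For a single-aperture probe this reduces to the width of that one array, consistent with the statement above that the total aperture and sub-apertures then coincide.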


In some embodiments, two apertures may be located adjacent one another on a continuous array. In still other embodiments, two apertures may overlap one another on a continuous array, such that at least one element functions as part of two separate apertures. The location, function, number of elements and physical size of an aperture may be defined dynamically in any manner needed for a particular application. Constraints on these parameters for a particular application will be discussed below and/or will be clear to the skilled artisan.


Elements and arrays described herein may also be multi-function. That is, the designation of transducer elements or arrays as transmitters in one instance does not preclude their immediate redesignation as receivers in the next instance. Moreover, embodiments of the control system herein include the capabilities for making such designations electronically based on user inputs, pre-set scan or resolution criteria, or other automatically determined criteria.


As used herein the term “point source transmission” may refer to an introduction of transmitted ultrasound energy into a medium from a single spatial location. This may be accomplished using a single ultrasound transducer element or a combination of adjacent transducer elements transmitting together as a single transmit aperture. A single transmission from a point source transmit aperture approximates a uniform spherical wave front, or in the case of imaging a 2D slice, a uniform circular wave front within the 2D slice. In some cases, a single transmission of a circular or spherical wave front from a point source transmit aperture may be referred to herein as a “ping” or a “point source pulse.”


Point source transmission differs in its spatial characteristics from a “phased array transmission” which focuses energy in a particular direction from the transducer element array. Phased array transmission manipulates the phase of a group of transducer elements in sequence so as to strengthen or steer an insonifying wave to a specific region of interest. A short duration phased array transmission may be referred to herein as a “phased array pulse.”


In some embodiments, multiple aperture imaging using a series of transmitted pings may operate by transmitting a point-source ping from a first transmit aperture and receiving echoes with elements of two or more receive apertures, one or more of which may include some or all elements of a transmit aperture. A complete image may be formed by triangulating the position of scatterers based on delay times between ping transmission and reception of echoes, the speed of sound, and the relative positions of transmit and receive transducer elements. As a result, each receive aperture may form a complete image from echoes of each transmitted ping. In some embodiments, a single time domain frame may be formed by combining images formed from echoes at two or more receive apertures from a single transmitted ping. In other embodiments, a single time domain frame may be formed by combining images formed from echoes received at one or more receive apertures from two or more transmitted pings. In some such embodiments, the multiple transmitted pings may originate from different transmit apertures.
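The triangulation described above rests on a simple relation: the delay between ping transmission and echo reception fixes the total transmit-element-to-scatterer-to-receive-element path length, so the scatterer lies on an ellipse whose foci are the transmit and receive elements. A sketch of the delay computation at the heart of ping-based beamforming (names and the uniform speed-of-sound assumption are illustrative):

```python
import math

def round_trip_delay(tx, rx, pixel, c=1540.0):
    """Delay (seconds) from ping transmission at element position tx to
    reception at element position rx for a scatterer at `pixel`.

    Ping-based beamforming samples each receive element's echo record
    at this delay when forming the pixel; all points sharing a delay lie
    on an ellipse with foci tx and rx. Assumes a uniform speed of sound
    c (m/s), which is exactly the assumption calibration must respect.
    """
    return (math.dist(tx, pixel) + math.dist(pixel, rx)) / c
```

Because this delay depends directly on the element positions `tx` and `rx`, errors in the stored element position data displace every triangulated scatterer, which is why the calibration methods herein optimize those positions until images from different apertures agree.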



FIG. 1 illustrates an embodiment of a three-array multiple aperture ultrasound imaging probe 10 and a phantom 20 to be imaged. The phantom 20 generally includes a pattern of reflectors 30 within a solid or liquid medium 35. In some embodiments, a phantom 20 may also include one or more “holes”—regions or objects that substantially absorb and do not reflect significant ultrasound signals. The probe 10 is shown with a left transducer array 12 which may include three transmit apertures labeled ‘n,’ ‘j,’ and ‘k’ (which may be referred to herein by short-hand designations Ln, Lj and Lk). A right transducer array 14 may also include three transmit apertures ‘n,’ ‘j,’ and ‘k’ (which may be referred to herein by short-hand designations Rn, Rj and Rk). Some or all of the elements of the left transducer array 12 may also be designated as a left receive aperture 13. Similarly, some or all of the elements of the right transducer array 14 may be designated as a right receive aperture 15. In addition to the left and right arrays, a multiple aperture ultrasound probe 10 may include a center transducer array 16, which may include three transmit apertures labeled ‘n,’ ‘j,’ and ‘k’ (which may be referred to herein by short-hand designations Cn, Cj and Ck). Some or all of the elements of the center transducer array 16 may also be designated as a center receive aperture 17. It should be understood that each of the three apertures can include any number of transducer elements which may be spaced from one another in one, two or three dimensions.


In other embodiments, any other multiple aperture ultrasound imaging probe may be calibrated using the systems and methods described below. For example, FIG. 2 illustrates a multiple aperture ultrasound probe 55 with a single large (i.e., larger than an expected coherence width for an intended imaging application) continuous curved array 18 positioned over a phantom 20. Some embodiments of the calibration methods and devices below may be particularly useful with adjustable probes such as that illustrated in FIG. 3. FIG. 3 illustrates an adjustable multiple aperture ultrasound probe 11 positioned over a phantom 20. FIG. 4A illustrates a multiple aperture ultrasound probe 100 with one or more transducer arrays 102 positioned at a distal end of an endoscope 104 sized and configured for transesophageal positioning and imaging. FIG. 4B illustrates a multiple aperture ultrasound probe 110 with one or more transducer arrays 112 and a housing 114 sized and configured for trans-rectal positioning and imaging. FIG. 4C illustrates a multiple aperture ultrasound probe 120 including one or more transducer arrays 122 and a housing 124 positioned at a distal end of a catheter 126, all of which may be sized and configured for intravenous positioning and imaging. FIG. 4D illustrates a multiple aperture ultrasound probe 130 with one or more transducer arrays 132 and a housing 134 sized and configured for trans-vaginal positioning and imaging. FIG. 4E illustrates a multiple aperture ultrasound probe 140 with a continuous curved transducer array 142, a housing 144 and a side-mounted cable 146 sized and configured for positioning over curved anatomical structures such as arms and legs. FIG. 4F illustrates a multiple aperture ultrasound probe 150 with a large circular array 152 that may have a concave curvature about two axes. The probe of FIG. 4F and other probes may include transducer elements with substantial displacement along orthogonal axes.
Such probes may be particularly suitable for directly obtaining echo data from a three-dimensional volume. Any of these or other ultrasound probes (including single-aperture ultrasound probes) may be calibrated using the systems and methods herein.


As used herein, the term “phantom” may refer to any substantially static object to be imaged by an ultrasound probe. For example, any number of phantoms designed for sonographer training are widely commercially available from various suppliers of medical equipment, such as Gammex, Inc. (gammex.com). Some commercially available phantoms are made to mimic the imaging characteristics of objects to be imaged such as specific or generic human tissues. Such properties may or may not be required by various embodiments of the invention as will be further described below. The term “phantom” may also include other objects with substantially static reflectors, such as a region of a human or animal body with substantially static strong reflectors. An object need not be purpose-built as a phantom to be used as a phantom for the calibration processes described herein.


With reference to FIG. 1, in one example embodiment of a multiple aperture imaging process, a first image may be formed by transmitting a first ping from a first transmit aperture Ln and receiving echoes of the first ping at a left receive aperture 13. A second image may be formed from echoes of the first ping received at the right receive aperture 15. Third and fourth images may be formed by transmitting a second ping from a second transmit aperture Lj and receiving echoes of the second ping at the left receive aperture 13 and the right receive aperture 15. In some embodiments, all four images may then be combined to form a single time domain frame. In other embodiments, a single time domain frame may be obtained from echoes received at any number of receive apertures from any number of pings transmitted by any number of transmit apertures. Time domain frames may then be displayed sequentially on a display screen as a continuous moving image. Still images may also be formed using any of the above techniques.


In some embodiments, the width of a receive aperture may be limited by the assumption that the speed of sound is the same for every path from a scatterer to each element of the receive aperture. In a narrow enough receive aperture this simplifying assumption is acceptable. However, as receive aperture width increases, an inflection point is reached (referred to herein as the “maximum coherent aperture width” or “coherence width”) at which the echo return paths will necessarily pass through different types of tissue having different speeds of sound. When this difference results in phase shifts in excess of 180 degrees, additional receive elements beyond the maximum coherent receive aperture width will actually degrade the image rather than improve it. The coherence width will vary depending on the intended imaging application and is difficult if not impossible to predict in advance.


Therefore, in order to make use of a wide probe with a total aperture width greater than the maximum coherent width, the full probe width may be physically or logically divided into multiple apertures, each of which may be limited to a width less than the maximum coherent aperture width and small enough to avoid phase cancellation of received signals. The maximum coherent width can be different for different patients and for different probe positions on the same patient. In some embodiments, a compromise width may be determined for a given probe system. In other embodiments, a multiple aperture ultrasound imaging control system may be configured with a dynamic algorithm to subdivide the available elements in multiple apertures into groups that are small enough to avoid significant phase cancellation.


In some embodiments, it may be difficult or impossible to meet additional design constraints while grouping elements into apertures with a width less than the maximum coherent width. For example, if material is too heterogeneous over very small areas, it may be impractical to form apertures small enough to be less than the maximum coherent width. Similarly, if a system is designed to image a very small target at a substantial depth, an aperture with a width greater than the maximum coherent width may be needed. In such cases, a receive aperture with a width greater than the maximum coherent width can be accommodated by making additional adjustments or corrections to account for differences in the speed of sound along different paths. Some examples of such speed-of-sound adjustments are provided herein.


With a multiple aperture probe using a point-source transmission imaging technique (also referred to as ping-based imaging), each image pixel may be assembled by beamforming received echo data to combine information from echoes received at each of the multiple receive apertures and from each of the multiple transmit apertures. In some embodiments of multiple aperture imaging with point-source transmission, receive beamforming comprises forming a pixel of a reconstructed image by summing time-delayed echo returns on receive transducer elements from a scatterer in the object being examined. The time delays may be determined by the geometry of the probe elements and an assumed value for the speed of sound through the medium being imaged.


The locus of a single reflector will lie along an ellipse with a first focus at the position of the transmit transducer element(s) and the second focus at the position of the receive transducer element. Although several other possible reflectors lie along the same ellipse, echoes of the same reflector will also be received by each of the other receive transducer elements of a receive aperture. The slightly different positions of each receive transducer element means that each receive element will define a slightly different ellipse for a given reflector. Accumulating the results by coherently summing the ellipses for all elements of a common receive aperture will indicate an intersection of the ellipses for a reflector, thereby converging towards a point at which to display a pixel representing the reflector. The echo amplitudes received by any number of receive elements may thereby be combined into each pixel value. In other embodiments the computation can be organized differently to arrive at substantially the same image.
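By way of illustration only, the ellipse geometry described above may be sketched in code. The following is a minimal sketch and not part of the original disclosure; the function names, coordinate convention (meters), sampling rate, and the assumed speed of sound of 1540 m/s are all assumptions introduced for the example:

```python
import math

def echo_delay(tx, rx, pixel, c=1540.0):
    """Round-trip travel time from a transmit element to a candidate pixel
    position and back to a receive element. The locus of constant delay is
    an ellipse with foci at tx and rx; c is the assumed speed of sound (m/s)."""
    return (math.dist(tx, pixel) + math.dist(pixel, rx)) / c

def beamform_pixel(pixel, tx, rx_elements, traces, fs, c=1540.0):
    """Coherently sum echo samples from every element of one receive aperture
    at the delays implied by the candidate pixel position. traces[i] is the
    sampled echo record for receive element i; fs is the sample rate (Hz)."""
    total = 0.0
    for rx, trace in zip(rx_elements, traces):
        idx = int(round(echo_delay(tx, rx, pixel, c) * fs))
        if 0 <= idx < len(trace):  # skip delays outside the recorded window
            total += trace[idx]
    return total
```

Echoes of a real reflector align across elements and reinforce, while points elsewhere on each element's ellipse do not, which is the convergence behavior described above.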


Because the position of each transmit and receive element plays an important role in producing an image during ping-based ultrasound imaging, the quality of an image produced from ping-based imaging is substantially dependent on the accuracy of the information describing the relative positions of the transducer elements.


Various algorithms may be used for combining echo signals received by separate receive elements. For example, some embodiments may process echo-signals individually, plotting each echo signal at all possible locations along its ellipse, then proceeding to the next echo signal. Alternatively, each pixel location may be processed individually, identifying and processing all echoes potentially contributing to that pixel location before proceeding to the next pixel location.


Image quality may be further improved by combining images formed by the beamformer from one or more subsequent transmitted pings, transmitted from the same or a different point source (or multiple different point sources). Still further improvements to image quality may be obtained by combining images formed by more than one receive aperture. An important consideration is whether the summation of images from different pings, different transmit point-sources or different receive apertures should be coherent summation (phase sensitive) or incoherent summation (summing magnitude of the signals without phase information).
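The distinction between the two summation modes can be stated compactly. The following sketch (not part of the disclosure) assumes image layers stored as equal-length lists of complex pixel values; the function names are hypothetical:

```python
def coherent_sum(layers):
    """Phase-sensitive combination: sum complex pixel values across layers,
    then take the magnitude. Out-of-phase signals cancel."""
    return [abs(sum(vals)) for vals in zip(*layers)]

def incoherent_sum(layers):
    """Phase-insensitive combination: take the magnitude of each layer's
    pixel first, then sum. Phase information is discarded before summing."""
    return [sum(abs(v) for v in vals) for vals in zip(*layers)]
```

Two layers containing equal and opposite complex values cancel under coherent summation but reinforce under incoherent summation, which is why the choice matters when combining pings, transmit point-sources, or receive apertures.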


In some embodiments, multiple aperture imaging using a series of transmitted pings may operate by transmitting a point-source ping from a first transmit aperture and receiving echoes with elements of one or more receive apertures (which may overlap with the transmit aperture). A complete image may be formed by triangulating the position of scatterers based on the delay times between ping transmission and echo reception and the known position of each receive element relative to each point-source transmit aperture. As a result, a complete image may be formed from data received at each receive aperture from echoes of each transmitted ping.


Images obtained from different unique combinations of a ping and a receive aperture may be referred to herein as image layers. Multiple image layers may be combined to improve the overall quality of a final combined image. Thus, in some embodiments, the number of image layers can be the product of the number of receive apertures and the number of transmit apertures (where a “transmit aperture” can be a single transmit element or a group of transmit elements). In other embodiments, the same ping imaging processes may also be performed using a single receive aperture.


Phantom Calibration Embodiments


Some embodiments of ultrasound probe calibration methods using a phantom may generally include the steps of characterizing the phantom using some known baseline reference data, then imaging the phantom with the probe to be calibrated. An error between the known reference data and data obtained from the generated image may then be quantified and an iterative optimization routine may be used to obtain improved transducer element position information. Such improved transducer element position variables may then be stored for use during subsequent imaging using the calibrated probe.



FIG. 1 illustrates one embodiment of a phantom 20 that may be used for calibrating a multiple aperture probe. In some embodiments, a phantom 20 for calibrating a multiple aperture probe may include a plurality of reflectors 30 arranged in a two-dimensional pattern within a solid, liquid or gel material 35 that has a consistent and known speed-of-sound. The reflectors may be made of any material, such as a plastic, metal, wood, ceramic, or any other solid material that is substantially highly reflective of ultrasound waves relative to the surrounding medium.


In some embodiments, reflectors 30 may be arranged in the phantom 20 in a pattern that may have characteristics selected to facilitate a calibration process. For example, a non-repeating reflector pattern will allow a calibration process to recognize an imaged position of the reflectors without confusion. For example, a complete grid pattern is highly repetitive because portions of the pattern are identically duplicated merely by shifting one full grid position. In some embodiments, the pattern of reflectors may also comprise a number of reflectors with displacement along the X axis 46 that is approximately equal to a number of reflectors with displacement along the Y axis 47. Thus, in some embodiments a pattern in the shape of a cross or a plus sign may be used. In other embodiments, reflectors may be positioned randomly or in other patterns, such as an X-shape, an asterisk, a sunburst, a spiral or any other pattern.
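As an illustrative sketch only (the function name and spacing parameter are assumptions, not part of the disclosure), a plus-sign reflector layout of the kind described can be generated programmatically:

```python
def cross_pattern(n_arm, spacing):
    """Reflector (x, y) coordinates in a plus-sign layout: n_arm reflectors
    along each half-axis plus one at the origin. Unlike a full grid, this
    pattern does not map onto itself under a one-grid-step shift, so an
    imaged reflector cannot be confused with a shifted neighbor."""
    pts = {(0.0, 0.0)}
    for i in range(1, n_arm + 1):
        d = i * spacing
        pts.update({(d, 0.0), (-d, 0.0), (0.0, d), (0.0, -d)})
    return sorted(pts)
```

A pattern with three reflectors per half-axis yields thirteen reflectors in total, with equal counts displaced along the X and Y axes as described above.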


In some embodiments, reflectors may also have depth or distinguishable detail in the z-direction 48. For example, the reflectors 30 may be rods with longitudinal axes along the z-direction 48. Alternatively, the reflectors may be substantially spherical or uniform three-dimensional shapes. In other embodiments, an arrangement of intersecting wires or rods may be used to form a distinguishable pattern in three-dimensional space within a phantom.


The reflectors 30 in the calibration phantom 20 may be of any size or shape as desired. In some embodiments, the reflectors 30 may have a circular diameter that is on the same order of magnitude as the wavelength of the ultrasound signals being used. In general, smaller reflectors may provide better calibration, but in some embodiments the precise size of the reflectors need not be an important factor. In some embodiments, all reflectors 30 in the phantom may be the same size as one another, while in other embodiments, reflectors 30 may be provided in a variety of sizes.


In some embodiments, the physical size and location of the reflectors in the phantom 20 may be determined by mechanical measurement of the phantom (or by other methods, such as optical measurement or ultrasonic measurement using a known-calibrated system) prior to, during or after construction of the phantom. Reflector position reference data may then be obtained by storing the reflector location information within a memory device accessible by software or firmware performing a calibration process. Such reference data may include information such as the position, size, orientation, arrangement or other information about the reflectors and/or holes in the phantom. Reference data may be represented or stored as a reference image or as a series of data points. Alternatively, reference data may be extracted from a reference ultrasound image.


In some embodiments, a reference image of the phantom may be obtained using a probe or an array within a probe that is known to be well-calibrated. In other embodiments, a reference image of the phantom may be obtained using a selected group of elements of the probe. Reflector size and/or location information may then be determined from the reference image for use in calibrating remaining elements of the probe or a different probe.


Therefore, in some embodiments a reference image may be obtained by retrieving previously-determined reflector position data from a memory device. In other embodiments, a reference image may be obtained by imaging the phantom using a sub-set of all elements in a probe. In some embodiments, it may be desirable to obtain a reference image using an aperture that is no wider than an assumed maximum coherence width (as described above). This allows a reference image to be formed without the need to correct for speed-of-sound variations along different ultrasound wave paths. If the phantom is known to have a uniform speed-of-sound (except for reflectors and/or holes), then the coherence width may be as large as the entire total aperture of a multiple aperture probe. In such embodiments, obtaining a reference image with a receive aperture smaller than the coherence width for an intended imaging application may be useful as a starting point.


For example, when calibrating a three-array probe such as that shown in FIG. 1, a reference image may be obtained by imaging the phantom 20 using only one of the arrays (e.g., the center array 16, the left array 12 or the right array 14). In other embodiments, such as when calibrating a probe with a continuous convex transducer array 19 such as that shown in FIG. 2, a reference image may be obtained by imaging the phantom 20 using only a small group of transducer elements of the array. For example, a group of elements near the center of the curved array may be used as transmit and/or receive elements for obtaining a reference image. Similarly, a reference image may be obtained using a single adjustable array 19 of an adjustable probe 11 such as that shown in FIG. 3. Reference images may be obtained using any multiple aperture ultrasound imaging probe in a similar manner.


As shown for example in FIG. 2, in some embodiments the phantom may be mounted in an enclosure that includes a probe-retaining portion 50. A mounting bracket 52 may also be provided to securely hold the probe 55 in a consistent position relative to the phantom 20 during a calibration process. Any mechanical bracket may be used. In some embodiments, a coupling gel and/or a gel or fluid-filled standoff 42 may be used to provide a continuous medium through which the ultrasound signals will pass. The coupling gel and/or standoff 42 should have approximately the same speed-of-sound as the phantom medium. In some embodiments, a standoff 42 may be a liquid or gel-filled bag.



FIG. 5A illustrates an alternative arrangement comprising a docking section 342 having a plurality of receiving slots 310 designed to receive probes of specific shapes. The docking section 342 may be made of the same material as the material of the phantom 20. Alternatively, the docking section 342 may be made of a material having the same speed-of-sound characteristics as the phantom 20. As shown in FIG. 5B, many probe receiving slots 310 may be provided for a single docking section 342. In various embodiments, each probe receiving slot 310 may be sized, shaped, and otherwise configured to receive one or more specific ultrasound probes.



FIG. 6 is a process flow diagram illustrating an embodiment of a process 400 for calibrating a multiple aperture probe using a phantom. In general, some embodiments of the process 400 may comprise the steps of obtaining reference data 402 that characterizes known information about the phantom (such as reflector or hole positions, sizes, etc.), insonifying the phantom with a test transmit (TX) aperture 404, receiving echoes with a test receive (RX) aperture 405, at least temporarily storing the received echo data 406, forming a test image of the reflectors by beamforming the echo data 408, determining an error function 412 based on a comparison of the generated image and the reference data, and minimizing the error function 414 to obtain improved transducer element position variables 416. The resulting improved element position information may be stored in a memory device for subsequent use by a beamforming process. Steps 404-416 may then be repeated for each additional transmit and/or receive aperture in the probe, and the position of each transducer element in each transmit and/or receive aperture within the probe may be determined relative to a common coordinate system.
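The flow of such a calibration process can be summarized as a sketch in which hypothetical callables stand in for the hardware and beamforming stages; none of these names are part of the disclosure, and the structure is illustrative only:

```python
def calibrate_aperture(reference, insonify, beamform, error_fn, minimize):
    """Sketch of the calibration loop for one test aperture. insonify()
    transmits a ping and returns the stored echo data; beamform(echoes,
    positions) forms a test image from candidate element positions;
    error_fn(image, reference) quantifies the mismatch with the reference
    data; minimize(cost) iteratively adjusts position variables."""
    echoes = insonify()                      # ping the phantom, store echoes
    def cost(positions):
        image = beamform(echoes, positions)  # re-beamform stored echo data
        return error_fn(image, reference)    # compare with reference data
    return minimize(cost)                    # improved position variables
```

A key feature is that the stored echo data is re-beamformed with each candidate set of element positions, so the phantom need only be insonified once per test aperture.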


In some embodiments, the process 400 may be entirely automated in software or firmware. In other embodiments, at least some steps may involve human participation, such as to identify or to quantify an error between an obtained image and a reference image. In other embodiments, a human user may also be called upon to determine whether a resulting image is “good enough” or whether the calibration process should be repeated or continued.


In various embodiments, the process 400 may be used to calibrate the position of one or more test transmit apertures, one or more test receive apertures, or both. The choice of which type of aperture to calibrate may depend on factors such as the construction of the probe, the number of transmit or receive apertures, or other factors. The definitions of test transmit apertures and test receive apertures used for the calibration process may be, but need not necessarily be the same as the definition of apertures used for normal imaging with the probe. Therefore, the phrase “test aperture” as used herein may refer to either a transmit test aperture or a receive test aperture unless otherwise specified.


In some embodiments, the test transmit aperture and the test receive aperture used during the process 400 of FIG. 6 may be substantially close to one another. For example, in some embodiments, the test transmit aperture and the test receive aperture may be within an expected coherence width of an intended imaging application relative to one another. For example, in some embodiments, a receive aperture may include all elements on a common array (e.g., elements sharing a common backing block). Alternatively, a receive aperture may comprise elements from two or more separate arrays. In further embodiments, a receive aperture may include a selected group of transducer elements along a large continuous array. In other embodiments, the test transmit aperture and the test receive aperture need not be close to one another, and may be spaced from one another by a distance greater than any anticipated coherence width. In further embodiments, if the phantom is known to have a uniform speed of sound, the coherence width need not be a significant consideration.


In some embodiments, a single transmit test aperture may be used to obtain both a reference image and data from which a test image may be formed. In such embodiments, a first receive aperture may be used to form a reference image, and a second (or third, etc.) receive aperture may be used to form or obtain test image data. Similarly, a single receive aperture may be used for obtaining both a reference image and data for a test image if different transmit apertures are used for the reference image and the test image data. Thus, the test transmit aperture and the test receive aperture need not necessarily be near one another. In other embodiments, reference images may be obtained using transmit and receive elements of a first array, while data for test images may be obtained using transmit and receive elements of a second array, where the second array is a test array to be calibrated.


As described above, in some embodiments, the step of obtaining reference data 402 may comprise retrieving reference data from a data storage device. Such a data storage device may be physically located within a calibration controller, within an ultrasound imaging system, within a probe, or on a separate storage device that may be accessible via a wired or wireless network connection. Alternatively, the step of obtaining reference data 402 may comprise imaging the phantom with a reference group of transducer elements.


In some embodiments, the step of insonifying the phantom with a test transmit aperture 404 may comprise transmitting one or more pings from one or more transmit elements of a transmit aperture. A single transmit aperture may typically comprise one, two, three or a small number of adjacent elements.


After each transmitted ping, returning echoes may be received by all receive elements of the test receive aperture, and the echo data may be digitized and stored 406 in a digital memory device. The memory device may be any volatile or non-volatile digital memory device in any physical location that is electronically accessible by a computing device performing the imaging and calibration processes.


The received echo data may then be beamformed and processed to form a test image 408. In some embodiments, the steps of insonifying the phantom from a test transmit aperture 404 and receiving echoes with a test receive aperture 405 may be repeated using multiple combinations of different transmit apertures and/or receive apertures, and images obtained 408 from such transmitting and receiving may be combined in a process referred to as image layer combining prior to proceeding to subsequent steps of the process 400.


In various embodiments, the error function may be determined from some difference between the phantom reference data (e.g., information known about the position of reflectors in the phantom) and an image of the phantom obtained with the test receive aperture. In some embodiments, the choice of error function may be based on characteristics of the phantom used, available processing capabilities, a chosen optimization method or many other factors.


In some embodiments, a modified least squares optimization method may be used to minimize an error function based on the square of an aggregated straight-line error distance between the expected reflector center and an imaged reflector center. For example, after forming an image of the phantom with the echoes received at a test receive aperture, the system may identify the location of each reflector in the image by identifying the brightest point in the image of approximately the expected size in approximately the expected location of each known reflector. Once each reflector is identified, an error between the imaged position and the expected position of each reflector may be determined. In some embodiments, these individual reflector-position errors may then be aggregated into a collective reflector pattern error, such as by summing all individual reflector errors. Alternatively, the individual errors may be aggregated using any other function, such as taking a maximum error, an average, or a weighted sum of individual errors. For example, if a phantom has some reflectors that are more difficult to detect than others, difficult-to-detect reflectors may be given less weight in the aggregate error function so as to obtain a more balanced result. In various embodiments, such individual and/or aggregate errors may be either scalar or vector quantities.
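The aggregated squared-distance error described above may be sketched as follows, with optional per-reflector weights for difficult-to-detect reflectors; the function name and argument layout are assumptions for illustration only:

```python
import math

def reflector_error(expected, imaged, weights=None):
    """Aggregate squared straight-line distance between expected and imaged
    reflector centers. Optional per-reflector weights de-emphasize reflectors
    that are harder to detect, as described above."""
    if weights is None:
        weights = [1.0] * len(expected)
    return sum(w * math.dist(e, m) ** 2
               for w, e, m in zip(weights, expected, imaged))
```

Replacing the sum with a maximum or an average yields the alternative aggregation functions mentioned above without changing the per-reflector distance calculation.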


In some embodiments, reflector images may be sought within a predetermined search area surrounding the expected location of each reflector. The shape and size of a search area may be defined based on the known pattern of reflectors and the distance between reflectors. In some embodiments, images of reflectors may be identified by artificial intelligence or probability analysis using information about nearby reflectors and the known pattern of reflectors. In other embodiments, the search area surrounding each reflector may comprise a circular, rectangular or other geometric area centered on the expected reflector position. The size of a search area may be selected to be larger than the imaged reflectors, but typically small enough that adjacent search areas do not overlap.


In some embodiments, when the actual positions of reflectors in the phantom are known, this knowledge may be used to greatly simplify the process of forming an image of the phantom. For example, forming an image 408 may be limited to beamforming only echoes representing search areas surrounding the expected positions of reflectors in the phantom (rather than beamforming an entire image field). In other embodiments, beamforming may be limited to a search area defining the overall pattern of reflectors. For example, this may be accomplished in some embodiments by beamforming vertical and horizontal pixel bands slightly wider than the expected position of the pins in FIG. 1.


In some embodiments, the error function may be defined based on one or more simplifying assumptions. For example, instead of detecting and optimizing based on the two-dimensional or three-dimensional position of each individual reflector, a line or curve may be fit to the series of reflectors. For example, using the phantom layout shown in FIG. 1, a vertical line may be drawn through the pins spaced along the Y axis. In practice, reflectors in the approximate location of the vertical pins may be detected, a fit line through the detected reflectors may be calculated, and the quality of the fit line may be evaluated using a factor such as a coefficient of determination (R² value). An error function may then be defined based on the R² value of the line connecting the vertical pins. A similar approach may be taken for the horizontal pins. The simplifying assumption of pins fit to a line may ignore the spacing between the pins along the fit line, and may therefore be less precise than methods defining an error function based on the two-dimensional position of each pin. However, optimizing based on a single line segment may be substantially faster in processing terms than optimizing based on a plurality of individual pin reflector positions. Therefore, such simplifications may still provide valuable information in exchange for a faster processing time. In alternative embodiments, polynomial curves, circles or other mathematically-defined geometric shapes may be used as simplifications for representing a pattern of reflectors within a phantom.
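The line-fit simplification may be sketched as follows. This illustrative fragment (not from the disclosure) assumes points stored as (x, y) tuples and fits y as a function of x, so a near-vertical pin column would be fit with the axes exchanged:

```python
def line_fit_r2(points):
    """Least-squares fit of y = a*x + b through detected reflector centers,
    returning the coefficient of determination R². A quantity such as
    1 - R² can then serve as the simplified error function."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                       # fitted slope
    b = my - a * mx                     # fitted intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

Perfectly collinear detected reflectors give R² = 1, and the R² value falls as element position errors bend the imaged pin row away from a straight line.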


In other embodiments, the error function may be defined as some quantity other than reflector position. For example, in some embodiments, an error function may be defined as a sum of absolute value differences in brightness of the individual imaged reflectors relative to a reference image. In another embodiment, an error function may be defined based on a complete collective reflector pattern. For example, a phantom may be designed to contain an array of reflectors representing a reference number in binary form (i.e., a reflector may represent a ‘1’ and the absence of a reflector at a grid position may represent a ‘0’). In such embodiments, a calibration process may be configured to ‘read’ the binary values, and the error function may be defined as the number of bits different from the expected reference number. In further embodiments, an error function may be at least partially based on a pattern of “holes”—regions of the phantom that absorb the ultrasound energy. Many other error functions may also be used.
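The binary-pattern error function described above amounts to a Hamming distance between the bits read from the phantom and the reference number. An illustrative sketch (the function name and least-significant-bit-first ordering are assumptions):

```python
def binary_pattern_error(read_bits, reference_number, n_bits):
    """Count grid positions whose detected reflector/no-reflector state
    (1 = reflector present, 0 = absent) differs from the corresponding bit
    of the reference number, least significant bit first."""
    ref_bits = [(reference_number >> i) & 1 for i in range(n_bits)]
    return sum(r != b for r, b in zip(ref_bits, read_bits))
```

An error of zero indicates the reference number was read back exactly; each misread grid position adds one to the error.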



FIG. 7 illustrates one embodiment of an iterative optimization process 414 for minimizing an error function by adjusting transducer element position variables. After determining an initial error function (E0) in step 412, the process 414 may proceed to iteratively seek a minimum of an error function by making incremental adjustments to one or more variables describing the position of the elements of the test transmit and/or receive aperture. Thus, during a first iteration, the process may adjust 452 one or more initial test aperture element position variables (P0) to obtain new test aperture element position variables (P1). Without the need to re-insonify the phantom, the stored received echo data (from 406 in FIG. 6) may then be re-beamformed using the adjusted element position parameters 454 (P1) (image layers may also be combined as needed during this step) in order to form a new image of the phantom. From the new image, a new error function (E1) may be quantified 456 and then evaluated or stored 460 before returning to step 452 for a second iteration. The nature of the adjustments 452 and the error evaluations 460 may depend on the type of optimization routine being used.


In some embodiments, adjustments to the element position variables may be essentially random in each iteration (i.e., with no connection to adjustments made in prior iterations). Such random adjustments may be made within a predetermined range of values relative to current element position data based on expectations of the possible degree of mis-calibration of existing element position data. In the case of random adjustments, an error function obtained from each iteration may be stored, and a minimum error function may be identified by comparing the results of all iterations.


In other embodiments, adjustments may be directly based on information from previous iterations, such as an evaluation of the magnitude and/or direction of a change in the error value. For example, in some embodiments, if the new error function E1 is less than the initial error function E0, then the adjustment made in step 452 may be determined to be a good adjustment and the process may repeat for more iterations making further incremental adjustments to the position variable(s). If the new error function E1 obtained in the first iteration is not less than the initial error function E0 (i.e. E1≥E0), then it may be assumed that the adjustment of step 452 was made in the wrong direction. Thus, in a second iteration, during step 452, the original element position variable(s) P0 may be adjusted in a direction opposite to that tried during the first iteration. If the resulting new error function E2 is still not smaller than the initial error function E0, then the error function is at a minimum (at least with respect to the adjusted element position variable(s)). In such a case, the error minimization process may be stopped, and the last good position variables may be stored as the new transducer element positions.
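The directional adjustment logic described above may be sketched for a single position variable; the function name, fixed step size, and iteration cap are assumptions introduced for the example and are not part of the disclosure:

```python
def minimize_1d(cost, p0, step, max_iters=1000):
    """Directional search: step the variable in one direction while the
    error keeps falling; on the first failure, reverse direction once;
    stop when neither direction improves the error."""
    p, e = p0, cost(p0)
    s = step
    reversed_once = False
    for _ in range(max_iters):
        trial = p + s
        e_trial = cost(trial)
        if e_trial < e:
            p, e = trial, e_trial           # good adjustment: keep going
        elif not reversed_once:
            s, reversed_once = -s, True     # wrong direction: try the other way
        else:
            break                           # minimum w.r.t. this variable
    return p
```

The returned value corresponds to the "last good position variable" described above; the random-adjustment alternative simply draws each trial from a predetermined range instead of stepping directionally.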


In some embodiments, the process 414 may be repeated through as many iterations as needed until the error function is minimized. In other embodiments, the process 414 may be stopped after a fixed number of iterations. As will be clear to the skilled artisan, multiple ‘optimum’ solutions may exist. As a result, in some embodiments, the iterative calibration process may be repeated multiple times, and the results of the several calibrations may be compared (automatically using image processing techniques or manually by a person) to identify a suitable solution. In any event, it is not necessary to identify the absolute optimal result.


In various embodiments, the position of transducer elements may be described by multiple variable quantities. Ultimately, it is desirable to know the acoustic position (which may be different than the element's apparent mechanical position) of each transducer element relative to some known coordinate system. Thus, in some embodiments, the acoustic position of each transducer element may be defined by an x, y, and z position (e.g., with reference to a Cartesian coordinate system 45 such as that shown in FIGS. 1-3). In adjusting such quantities during the optimization process 414, position variables may be adjusted individually or in groups.


Performing the optimization process by adjusting the x, y and z position of each transducer element may be somewhat computationally intensive, since a single aperture may contain hundreds of individual elements. This may result in the iterative adjustment of several hundred if not thousands of variables. This is particularly true for probes with 2D arrays (i.e., those with transducer elements spaced from one another in X and Z directions), curved 1D or 2D arrays (i.e., arrays with curvature about either the X or the Z axis), and 3D arrays (i.e., probes with curvature about two axes). While potentially computationally intensive, the various embodiments herein may be used to calibrate any ultrasound probe with large continuous planar or curved 1D or 2D arrays as well as large continuous 3D arrays with curvature about two axes.


As an alternative, some embodiments may employ one or more simplifying assumptions. For example, in some embodiments it may be assumed that element position relationships within a single array remain fixed relative to one another such that an array with a common backing block will only move, expand or contract uniformly. In some embodiments, it may also be assumed that the elements are uniformly distributed across the array. Using such assumptions, locating a center point of an array, a width of the array and an angle of the array surface relative to a known datum may provide sufficient information about the acoustic position of each element. For example (with reference to FIG. 1), the position of all elements in the left array 12 may be assumed based on overall array position variables, which may include array width (‘w’), the position of the array's center in the scan plane (i.e., the X-Y plane), and the angle of the array surface in the scan plane relative to some baseline (θ). If it is assumed that the acoustic centers of elements are uniformly distributed across the array with a consistent spacing in the X direction for a 1D array or in the X and Z directions for a 2D array, then the acoustic position of each transducer element may be mathematically expressed in terms of the above four variables (center-X, center-Y, width and angle). In some embodiments, if the array is a 2D array, a fifth variable describing the position of an array's center in the Z-direction (center-Z) may also be used. Alternatively, one or more of these variables may be treated as fixed in some embodiments. Using such simplifications, an error function minimizing process need only iteratively optimize four or five transducer element position variables. In the case of different probe constructions, different simplifying assumptions may also be used.
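Under the simplifying assumptions above, the acoustic element positions of a 1D array follow directly from the four variables. An illustrative sketch (the function name, meter units, and radian angle convention are assumptions):

```python
import math

def element_positions(cx, cy, w, theta, n):
    """Acoustic (x, y) centers of n uniformly spaced elements of a 1D array
    described by four variables: array center (cx, cy), array width w, and
    in-plane angle theta of the array face relative to the X axis."""
    pitch = w / (n - 1)  # uniform element spacing along the array face
    return [(cx + (i - (n - 1) / 2) * pitch * math.cos(theta),
             cy + (i - (n - 1) / 2) * pitch * math.sin(theta))
            for i in range(n)]
```

An optimizer adjusting only (cx, cy, w, theta) thereby repositions every element of the array at once, rather than treating hundreds of element coordinates as independent variables.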


In some embodiments two or more optimization processes may be combined in parallel or sequential processes in order to improve processing efficiency, calibration precision, or both. For example, in one embodiment, a two-stage optimization process may be used in which a first stage provides a coarse improvement to element position variables while relying on one or more simplifying assumptions. A second stage may then provide a more detailed improvement to the element position variables while relying on fewer simplifying assumptions, but starting from the improved information obtained during the first stage. During a first stage of one example embodiment, multiple reflectors may be represented with a single geometric shape such as a line, and the spacing between transducer elements may be treated as fixed (i.e., such values are not varied during the optimization). A second stage process may then be performed, in which the position of each pin is optimized by varying element position variables including the spacing between transducer elements.
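The two-stage scheme might be sketched as follows. This is an illustrative toy, not the patented implementation: a simple serial (coordinate-descent) minimizer is first run over coarse variables with element spacing held fixed, then re-run from the improved starting point with spacing also free. The error function here is a stand-in quadratic; in practice it would compare beamformed images of the phantom reflectors.

```python
def coordinate_descent(error_fn, params, free_keys, step, iters=100):
    """Greedy serial minimizer: nudge each free parameter up or down by
    `step`, keep any move that lowers the error, and halve the step once
    no move helps."""
    params = dict(params)
    best = error_fn(params)
    for _ in range(iters):
        improved = False
        for key in free_keys:
            for delta in (step, -step):
                trial = dict(params)
                trial[key] += delta
                err = error_fn(trial)
                if err < best:
                    params, best = trial, err
                    improved = True
        if not improved:
            step /= 2.0
    return params, best

# Stand-in error surface; a real error function would compare beamformed
# phantom images. True optimum: center_x=1.0, angle=0.2, spacing=0.5.
def toy_error(p):
    return ((p["center_x"] - 1.0) ** 2
            + (p["angle"] - 0.2) ** 2
            + (p["spacing"] - 0.5) ** 2)

start = {"center_x": 0.0, "angle": 0.0, "spacing": 0.3}

# Stage 1 (coarse): element spacing held fixed, a simplifying assumption.
stage1, _ = coordinate_descent(toy_error, start, ["center_x", "angle"], step=0.1)

# Stage 2 (fine): start from the stage-1 result, now also vary the spacing.
stage2, final_err = coordinate_descent(
    toy_error, stage1, ["center_x", "angle", "spacing"], step=0.01)
```

The second stage converges quickly because it starts from the first stage's improved estimate, which is the efficiency argument for staging.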


In some embodiments, a similar calibration process may be used to calibrate a probe 55 with a large continuous array 18, such as that illustrated in FIG. 2. Because the continuous array 18 lacks physical separations, the same simplifying assumptions discussed above with regard to the probe of FIG. 1 may not apply. Instead, the probe 55 of FIG. 2 may be calibrated by making simplifying assumptions about the shape of the large array, and apertures may be defined by using relatively small groups of elements at various positions along the array. In some embodiments, the x-y position of each element in an aperture may be used as element position parameters to be optimized. Such selected apertures may then be calibrated in substantially the same manner described above.


Regardless of the number of variables to be optimized in the iterative error function minimizing process 414, element position variables may be adjusted 452 either in series or in parallel. For example, in embodiments in which position variables are to be adjusted in series, only one variable may be adjusted during each iteration. In some embodiments of serial optimization, a single variable may be optimized (i.e., the error function may be minimized by adjusting only that single variable) before proceeding to the next variable. In embodiments in which two or more position variables are to be adjusted in parallel, the two or more variables may each be adjusted during each iteration. In some embodiments, those two variables may be optimized before proceeding to optimization of other variables. Alternatively, all variables may be optimized in parallel. In other embodiments, position variables may be optimized using a combination of series and parallel approaches. It should be noted that this distinction between series and parallel optimization approaches should not be confused with parallel computer processing. Depending on the computing hardware used, even optimizations performed in series as described above may be computed simultaneously using separate threads on parallel processors.
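The series/parallel distinction can be made concrete with a small sketch (hypothetical, for illustration only): a serial step updates one variable at a time, so each later adjustment is evaluated against the earlier moves, while a parallel step chooses every variable's move against the same starting point and then applies all moves at once.

```python
def step_serial(error_fn, v, step):
    """Series: adjust one variable at a time; each later adjustment is
    evaluated against the already-updated earlier variables."""
    v = list(v)
    for i in range(len(v)):
        base = error_fn(v)
        for d in (step, -step):
            trial = list(v)
            trial[i] += d
            if error_fn(trial) < base:
                v = trial
                break
    return v

def step_parallel(error_fn, v, step):
    """Parallel: choose every variable's move against the same starting
    point, then apply all of the moves together."""
    base = error_fn(v)
    moves = []
    for i in range(len(v)):
        best_d = 0.0
        for d in (step, -step):
            trial = list(v)
            trial[i] += d
            if error_fn(trial) < base:
                best_d = d
                break
        moves.append(best_d)
    return [vi + d for vi, d in zip(v, moves)]

# Hypothetical error surface with its minimum at (2.0, -1.0).
def position_error(v):
    return (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
```

Either step function would be called repeatedly (with a shrinking step) until the error stops improving; the difference is only in how moves within one iteration interact.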


After completing calibration of a first array or aperture, the process of FIG. 6 may be repeated for each remaining array or aperture individually. For example, using the three-array probe of FIG. 1, the calibration process may be repeated for the right array 14 and then again for the left array 12. After determining updated element position data for the first array, updated element position data for each subsequently-tested array may be determined and stored relative to a common coordinate system such that the position of any element in the probe may be determined relative to any other. For example, the calibration process may determine the center of the center array, which may be used as the center of the coordinate system for the other arrays. The angle of the center array may also be used as a datum against which angles of the other arrays may be defined. In other embodiments, the positions and orientations of the apertures may be determined relative to some other datum independent of any array. In other embodiments, element positions may ultimately be defined using any coordinate system centered around any point relative to the probe.
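One way to express each subsequently calibrated array in the common coordinate system is a rigid transform from that array's local frame, as in this illustrative sketch (function and parameter names are hypothetical): the array's element positions are rotated by the array's angle relative to the datum and translated by its center position.

```python
import math

def to_common_frame(local_positions, frame_origin, frame_angle):
    """Rotate element positions from an array's local frame by the array's
    angle relative to the datum, then translate by the array's origin in
    the common coordinate system."""
    common = []
    for x, y in local_positions:
        xr = x * math.cos(frame_angle) - y * math.sin(frame_angle)
        yr = x * math.sin(frame_angle) + y * math.cos(frame_angle)
        common.append((frame_origin[0] + xr, frame_origin[1] + yr))
    return common
```

With the center array's center as the coordinate origin and its surface angle as the datum, every element of the left and right arrays can be located relative to every other element.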


In some embodiments, transducer element position adjustments may be obtained and stored in the form of new corrected element position coordinates. In other embodiments, position adjustments may be obtained and stored as coefficients to be added to or multiplied with previous element position coordinates. For example, in some embodiments “factory” element position data may be stored in a read-only memory device in a location readable by an ultrasound system, such as a ROM chip within a probe housing. Such factory position data may be established at the time of manufacturing the probe, and subsequent calibration data may be stored as coefficients that may be applied as adjustments to the factory position data.
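Storing adjustments as coefficients might look like the following sketch, in which additive offsets and multiplicative scales (both hypothetical coefficient forms; the patent leaves the format open) are applied to the read-only factory positions at load time:

```python
def apply_calibration(factory_positions, offsets=None, scales=None):
    """Apply stored calibration coefficients to factory element positions:
    scales multiply the factory coordinates, then offsets are added.
    With no coefficients supplied, the factory data passes through
    unchanged."""
    corrected = []
    for i, (x, y) in enumerate(factory_positions):
        sx, sy = scales[i] if scales is not None else (1.0, 1.0)
        dx, dy = offsets[i] if offsets is not None else (0.0, 0.0)
        corrected.append((x * sx + dx, y * sy + dy))
    return corrected
```

Keeping the factory data immutable and layering coefficients on top means a bad calibration can always be discarded by reverting to the factory baseline.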


In some embodiments, adjusted element position data for each transducer element in a probe may be stored in a non-volatile memory device located within a probe housing. In other embodiments, adjusted element position data may be stored in a non-volatile memory device located within an imaging system, on a remote server, or in any other location from which the information may be retrieved by an imaging system during image beamforming.


In some embodiments, a calibration process using the methods described above may be particularly useful in rapidly re-calibrating an adjustable probe such as that illustrated in FIG. 3. Generally, an “adjustable probe” may be any ultrasound imaging probe in which the position and/or orientation of one or more transducer arrays or transducer elements may be changed relative to one or more other transducer arrays or elements. Many adjustable probe configurations beyond that shown in FIG. 3 are possible and may be designed for specific imaging applications.


In some embodiments, one or more of the arrays in an adjustable probe may be permanently secured to the housing in a fixed orientation and position (e.g., the center array or the left or right end array), while the remaining arrays may be movable to conform to a shape of an object to be imaged. The fixed array would then be in a permanently known position and orientation. Alternatively, the position and orientation of one or more arrays may be known based on one or more position sensors within an adjustable probe. The known-position array(s) may then be used to obtain a reference image of a phantom (or even a region of an object or patient to be imaged), and an optimization process may be used to determine an adjusted position of the movable arrays. For example, a sonographer may adjust the adjustable arrays of an adjustable probe to conform to a patient's anatomy. Then, during normal imaging, a reference image may be obtained using the known array, and positions of the remaining arrays may be determined by an optimization routine configured to minimize an error function (e.g., using an optimization routine as described above) defining an error between the reference image obtained from the center array and images obtained from each adjustable array.
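A minimal example of the kind of error function such an optimization routine might minimize is a sum of squared pixel differences between the reference image and an adjustable array's image. This is one simple choice among many; the patent does not prescribe a specific metric.

```python
def image_error(reference, test):
    """Sum of squared pixel differences between two equally sized images,
    each given as a list of rows of pixel intensities. Zero means the
    test image matches the reference exactly."""
    total = 0.0
    for ref_row, test_row in zip(reference, test):
        for r, t in zip(ref_row, test_row):
            total += (r - t) ** 2
    return total
```

The optimizer would re-beamform the adjustable array's image after each trial adjustment of the element positions and keep adjustments that reduce this value.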


In other embodiments, a sonographer may adjust the arrays of an adjustable probe to conform to a patient's anatomy. The sonographer may then place the probe onto a phantom that includes a conformable section configured to receive the probe in its adjusted position. For example, a conformable section may include a flexible bag containing a liquid or gel selected to transmit ultrasound signals at substantially the same speed of sound as the material of the phantom. A calibration process may then be initiated, and the position of each adjustable array may be determined by an iterative optimization routine in which reference data describing the phantom is compared with images of the phantom obtained with each array.


In some embodiments, the element-position information may change between performing a calibration operation and capturing raw ultrasound data. For example, a probe may be dropped, damaged or may be otherwise altered (such as by thermal expansion or contraction due to a substantial temperature change) before or during a raw sample data capture session. In some embodiments, the probe may be re-calibrated using captured, stored raw echo data as described below.


In other embodiments, a calibration system may be incorporated into an ultrasound imaging system. In some embodiments, as shown for example in FIG. 8, an ultrasound imaging system 500 may include a raw data memory device 502 configured to capture and store raw, un-beamformed echo data. As shown in FIG. 8, an ultrasound imaging system configured to perform an optimization-based calibration may include a transmit control subsystem 504, a probe subsystem 506, a receive subsystem 508, an image generation subsystem 510, a video subsystem 512, a calibration memory 530 and a calibration processor 540. The image generation subsystem may include a beamformer 520 (hardware or software) and an image-layer combining block 522.


In some embodiments, a calibration system may be provided independently of an imaging system. In such embodiments, components such as the video subsystem 512 may be omitted. Other components shown in FIG. 8 may also be omitted where practicable.


In practice, the transmit control subsystem 504 may direct the probe to transmit ultrasound signals into a phantom. Echoes returned to the probe may produce electrical signals which are fed into the receive subsystem 508, processed by an analog front end, and converted into digital data by an analog-to-digital converter. The digitized echo data may then be stored in a raw data memory device 502. The digital echo data may then be processed by the beamformer 520 in order to determine the location of each reflector so as to form an image. In performing beamforming calculations, the beamformer may retrieve calibration data from a calibration memory 530. The calibration data may describe the position of each transducer element in the probe. In order to perform a new calibration, the calibration processor may receive image data from the beamformer 520 or from an image buffer memory device 526 which may store single image frames and/or individual image layers.


The calibration processor may then perform an optimization-based calibration routine. Once a calibration process is complete, new calibration information may be stored in the calibration memory device 530 for use in subsequent imaging processes or in additional calibration processes.


Using such a system, raw echo data of a phantom may be captured and stored along with raw echo data from a target object imaging session (e.g., with a patient). Capturing and storing raw echo data of a phantom before and/or after an imaging session may allow for later optimization of the imaging-session data. Such optimization may be applied at any point after the imaging session using the stored raw data and the methods described above.


As shown in FIG. 8, an ultrasound imaging system 500 may comprise an ultrasound probe 506 which may include a plurality of individual ultrasound transducer elements, some of which may be designated as transmit elements, and others of which may be designated as receive elements. In some embodiments, each probe transducer element may convert ultrasound vibrations into time-varying electrical signals and vice versa. In some embodiments, the probe 506 may include any number of ultrasound transducer arrays in any desired configuration. A probe 506 used in connection with the systems and methods described herein may be of any configuration as desired, including single aperture and multiple aperture probes.


The transmission of ultrasound signals from elements of the probe 506 may be controlled by a transmit controller 504. Upon receiving echoes of transmit signals, the probe elements may generate time-varying electric signals corresponding to the received ultrasound vibrations. Signals representing the received echoes may be output from the probe 506 and sent to a receive subsystem 508. In some embodiments, the receive subsystem 508 may include multiple channels. Each channel may include an analog front-end device (“AFE”) 509 and an analog-to-digital conversion device (ADC) 511. In some embodiments, each channel of the receive subsystem 508 may also include digital filters and data conditioners (not shown) after the ADC 511. In some embodiments, analog filters prior to the ADC 511 may also be provided. The output of each ADC 511 may be directed into a raw data memory device 502. In some embodiments, one independent channel of the receive subsystem 508 may be provided for each receive transducer element of the probe 506. In other embodiments, two or more transducer elements may share a common receive channel.


In some embodiments, the ultrasound imaging system may store digital data representing the timing, phase, magnitude and/or the frequency of ultrasound echo signals received by each individual receive element in a raw data memory device 502 before performing any further beamforming, filtering, image layer combining or other image processing.


In addition to received echo data, information about one or more ultrasound transmit signals that generated a particular set of echo data may also be stored in a memory device, such as the raw data memory device 502 or another memory device. For example, when imaging with a multiple aperture ping ultrasound method as described above, it is desirable to know information about a transmitted ping that produced a particular set of echoes. Such information may include the identity and/or position of one or more transmit elements, as well as a frequency, magnitude, duration or other information describing a transmitted ultrasound signal. Transmit data is collectively referred to herein as “TX data”. In some embodiments, such TX data may be stored explicitly in the same raw data memory device in which raw echo data is stored. For example, TX data describing a transmit signal may be stored as a header before or as a footer after a set of raw echo data generated by the transmit signal. In other embodiments, TX data may be stored explicitly in a separate memory device that is also accessible to a system performing a beamforming process. In embodiments in which transmit data is stored explicitly, the phrases “raw echo data” or “raw data” may also include such explicitly stored TX data.


TX data may also be stored implicitly. For example, if an imaging system is configured to transmit consistently defined ultrasound signals (e.g., consistent magnitude, shape, frequency, duration, etc.) in a consistent or known sequence, then such information may be assumed during a beamforming process. In such cases, the only information that needs to be associated with the echo data is the position (or identity) of the transmit transducer(s). In some embodiments, such information may be implicitly obtained based on the organization of raw echo data in a raw data memory.


For example, a system may be configured to store a fixed number of echo records following each ping. In such embodiments, echoes from a first ping may be stored at memory positions 0 through n−1 (where ‘n’ is the number of records stored for each ping), and echoes from a second ping may be stored at memory positions n through 2n−1. In other embodiments, one or more empty records may be left between echo sets. In some embodiments, received echo data may be stored using various memory interleaving techniques to imply a relationship between a transmitted ping and a received echo data point (or a group of echoes). In general, a collection of echo records corresponding to echoes of a single transmitted ping received by a single receive element may be referred to herein as a single “echo string.” A complete echo string may refer to all echoes of the single ping received by the receive element, whereas a partial string may refer to a sub-set of all echoes of the single ping received by the receive element.


Similarly, assuming data is sampled at a consistent, known sampling rate, the time at which each echo data point was received may be inferred from the position of that data point in memory. In some embodiments, the same techniques may also be used to implicitly store data from multiple receive channels in a single raw data memory device.
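The implicit addressing described above can be sketched as follows, assuming a hypothetical fixed echo-string length and a constant sampling rate (both values are placeholders, not from the patent): the memory slot of a sample encodes which ping it belongs to, and its position within the echo string encodes when it was received.

```python
RECORDS_PER_PING = 1024   # hypothetical fixed echo-string length 'n'
SAMPLE_RATE_HZ = 40e6     # hypothetical, consistent ADC sampling rate

def record_index(ping_number, sample_number):
    """Memory slot of a given echo sample: pings are stored back to back,
    so ping 0 occupies slots 0..n-1, ping 1 occupies slots n..2n-1, etc."""
    return ping_number * RECORDS_PER_PING + sample_number

def receive_time(sample_number):
    """Receive time implied by a sample's position within its echo string,
    given the known, constant sampling rate."""
    return sample_number / SAMPLE_RATE_HZ
```

Because both mappings are invertible, a beamformer reading the raw data memory can recover the ping identity and receive time of every sample without any stored metadata.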


In other embodiments, the raw echo data stored in the raw data memory device 502 may be in any other structure as desired, provided that a system retrieving the echo data is able to determine which echo signals correspond to which receive transducer element and to which transmitted ping. In some embodiments, position data describing the position of each receive transducer element may be stored in the calibration memory device along with information that may be linked to the echo data received by that same element. Similarly, position data describing the position of each transmit transducer element may be stored in the calibration memory device along with information that may be linked to the TX data describing each transmitted ping.


In some embodiments, each echo string in the raw data memory device may be associated with position data describing the position of the receive transducer element that received the echoes and with data describing the position of one or more transmit elements of a transmit aperture that transmitted the ping that produced the echoes. Each echo string may also be associated with TX data describing characteristics of the transmitted ping.


In some embodiments, a probe may be calibrated using raw echo data stored in a memory device without raw data of a phantom image. Assuming at least one array (or one portion of an array) is known or assumed to be well-calibrated, nearly any image data with a pattern of strong reflectors may be used to calibrate second, third or further arrays or array segments. For example, echo data from the known-calibrated aperture, array or array segment may be beamformed to obtain a reference image. Stored echo data from the remaining apertures/arrays may then be calibrated using any of the methods described above to calibrate the position of the remaining arrays, apertures or array segments relative to the first. By performing a calibration process using stored echo data, a probe may be calibrated even when neither the probe itself nor the patient (or other imaged object) is physically present proximate to the device performing the re-beamforming and image processing. In such embodiments, the steps of insonifying a phantom 404 and receiving echoes 405 may be omitted from the process 400 of FIG. 6 at the time of a calibration process, since those steps were performed during the imaging session in which the raw data was captured.


Although this invention has been disclosed in the context of certain preferred embodiments and examples, it will be understood by those skilled in the art that the present invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. Various modifications to the above embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, it is intended that the scope of the present invention herein disclosed should not be limited by the particular disclosed embodiments described above, but should be determined only by a fair reading of the claims that follow.


In particular, materials and manufacturing techniques may be employed as within the level of those with skill in the relevant art. Furthermore, reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in the appended claims, the singular forms “a,” “and,” “said,” and “the” include plural referents unless the context clearly dictates otherwise. As used herein, unless explicitly stated otherwise, the term “or” is inclusive of all presented alternatives, and means essentially the same as the commonly used phrase “and/or.” Thus, for example, the phrase “A or B may be blue” may mean any of the following: A alone is blue, B alone is blue, or both A and B are blue. It is further noted that the claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as “solely,” “only” and the like in connection with the recitation of claim elements, or use of a “negative” limitation. Unless defined otherwise herein, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.

Claims
  • 1. A method of calibrating ultrasound imaging data obtained with an ultrasound imaging device, comprising the steps of:
    retrieving raw echo data from a memory device, the raw echo data comprising a plurality of echo strings, each echo string comprising a collection of echo records corresponding to echoes of a single ultrasound ping transmitted from a single transmit aperture of the ultrasound imaging device and received by a single receive element of the ultrasound imaging device;
    retrieving first calibration data from the memory device describing a position of each receive transducer element corresponding to each echo string;
    retrieving second calibration data from the memory device describing a position of at least one transducer element of the ultrasound imaging device corresponding to a transmitted ping associated with each echo string;
    forming a reference image by beamforming a first collection of echo strings corresponding to a first group of receive transducer elements, wherein beamforming comprises triangulating a position of reflectors based on the first and second calibration data;
    forming a test image by beamforming a second collection of echo strings corresponding to a second group of transducer elements of the ultrasound imaging device that is not identical to the first group of transducer elements;
    quantifying a first error between the reference image and the test image;
    adjusting the first calibration data to describe adjusted positions for the transducer elements of the second group;
    re-beamforming the test image with the adjusted positions for the transducer elements of the second group to obtain a second test image;
    quantifying a second error between the second test image and the reference image; and
    evaluating the second error to determine whether the second error is less than the first error.
  • 2. The method of claim 1, wherein the method is performed without any physical or electronic connection to a probe used to create the raw echo data.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 15/400,826, filed Jan. 6, 2017, now U.S. Pat. No. 10,064,605; which application is a continuation of U.S. application Ser. No. 13/964,701, filed Aug. 12, 2013, now U.S. Pat. No. 9,572,549, which application claims the benefit of U.S. Provisional Application No. 61/681,986, filed Aug. 10, 2012, titled “Calibration of Multiple Aperture Ultrasound Probes”, the contents of which are incorporated by reference herein.

20040015079 Berger et al. Jan 2004 A1
20040054283 Corey et al. Mar 2004 A1
20040068184 Trahey et al. Apr 2004 A1
20040100163 Baumgartner et al. May 2004 A1
20040111028 Abe et al. Jun 2004 A1
20040122313 Moore et al. Jun 2004 A1
20040122322 Moore et al. Jun 2004 A1
20040127793 Mendlein et al. Jul 2004 A1
20040138565 Trucco Jul 2004 A1
20040144176 Yoden Jul 2004 A1
20040215075 Zagzebski et al. Oct 2004 A1
20040236217 Cerwin et al. Nov 2004 A1
20040236223 Barnes et al. Nov 2004 A1
20040267132 Podany Dec 2004 A1
20050004449 Mitschke et al. Jan 2005 A1
20050053305 Li et al. Mar 2005 A1
20050054910 Tremblay et al. Mar 2005 A1
20050061536 Proulx Mar 2005 A1
20050090743 Kawashima et al. Apr 2005 A1
20050090745 Steen Apr 2005 A1
20050111846 Steinbacher et al. May 2005 A1
20050113689 Gritzky May 2005 A1
20050113694 Haugen et al. May 2005 A1
20050124883 Hunt Jun 2005 A1
20050131300 Bakircioglu et al. Jun 2005 A1
20050147297 McLaughlin et al. Jul 2005 A1
20050165312 Knowles et al. Jul 2005 A1
20050203404 Freiburger Sep 2005 A1
20050215883 Hundley et al. Sep 2005 A1
20050240125 Makin et al. Oct 2005 A1
20050252295 Fink et al. Nov 2005 A1
20050281447 Moreau-Gobard et al. Dec 2005 A1
20050288588 Weber et al. Dec 2005 A1
20060058664 Barthe et al. Mar 2006 A1
20060062447 Rinck et al. Mar 2006 A1
20060074313 Slayton et al. Apr 2006 A1
20060074315 Liang et al. Apr 2006 A1
20060074320 Yoo et al. Apr 2006 A1
20060079759 Vaillant et al. Apr 2006 A1
20060079778 Mo et al. Apr 2006 A1
20060079782 Beach et al. Apr 2006 A1
20060094962 Clark May 2006 A1
20060111634 Wu May 2006 A1
20060122506 Davies et al. Jun 2006 A1
20060173327 Kim Aug 2006 A1
20060262961 Holsing et al. Nov 2006 A1
20060270934 Savord et al. Nov 2006 A1
20070016022 Blalock et al. Jan 2007 A1
20070016044 Blalock et al. Jan 2007 A1
20070036414 Georgescu et al. Feb 2007 A1
20070055155 Owen et al. Mar 2007 A1
20070073781 Adkins et al. Mar 2007 A1
20070078345 Mo et al. Apr 2007 A1
20070088213 Poland Apr 2007 A1
20070138157 Dane et al. Jun 2007 A1
20070161898 Hao et al. Jul 2007 A1
20070161904 Urbano Jul 2007 A1
20070167752 Proulx et al. Jul 2007 A1
20070167824 Lee et al. Jul 2007 A1
20070232914 Chen et al. Oct 2007 A1
20070238985 Smith et al. Oct 2007 A1
20070242567 Daft et al. Oct 2007 A1
20080110261 Randall et al. May 2008 A1
20080110263 Klessel et al. May 2008 A1
20080112265 Urbano et al. May 2008 A1
20080114241 Randall et al. May 2008 A1
20080114245 Randall et al. May 2008 A1
20080114246 Randall et al. May 2008 A1
20080114247 Urbano et al. May 2008 A1
20080114248 Urbano et al. May 2008 A1
20080114249 Randall et al. May 2008 A1
20080114250 Urbano et al. May 2008 A1
20080114251 Weymer et al. May 2008 A1
20080114252 Randall et al. May 2008 A1
20080114253 Randall et al. May 2008 A1
20080114255 Schwartz et al. May 2008 A1
20080125659 Wilser et al. May 2008 A1
20080181479 Yang et al. Jul 2008 A1
20080183075 Govari et al. Jul 2008 A1
20080188747 Randall et al. Aug 2008 A1
20080188750 Randall et al. Aug 2008 A1
20080194957 Hoctor et al. Aug 2008 A1
20080194958 Lee et al. Aug 2008 A1
20080194959 Wang et al. Aug 2008 A1
20080208061 Halmann Aug 2008 A1
20080242996 Hall et al. Oct 2008 A1
20080249408 Palmeri et al. Oct 2008 A1
20080255452 Entrekin Oct 2008 A1
20080269604 Boctor et al. Oct 2008 A1
20080269613 Summers et al. Oct 2008 A1
20080275344 Glide-Hurst et al. Nov 2008 A1
20080285819 Konofagou et al. Nov 2008 A1
20080287787 Sauer et al. Nov 2008 A1
20080294045 Ellington et al. Nov 2008 A1
20080294050 Shinomura et al. Nov 2008 A1
20080294052 Wilser et al. Nov 2008 A1
20080306382 Guracar et al. Dec 2008 A1
20080306386 Baba et al. Dec 2008 A1
20080319317 Kamiyama et al. Dec 2008 A1
20090010459 Garbini et al. Jan 2009 A1
20090012393 Choi Jan 2009 A1
20090015665 Willsie Jan 2009 A1
20090016163 Freeman et al. Jan 2009 A1
20090018445 Schers et al. Jan 2009 A1
20090024039 Wang et al. Jan 2009 A1
20090036780 Abraham Feb 2009 A1
20090043206 Towfiq et al. Feb 2009 A1
20090048519 Hossack et al. Feb 2009 A1
20090069681 Lundberg et al. Mar 2009 A1
20090069686 Daft et al. Mar 2009 A1
20090069692 Cooley et al. Mar 2009 A1
20090079299 Bradley et al. Mar 2009 A1
20090099483 Rybyanets Apr 2009 A1
20090112095 Daigle Apr 2009 A1
20090131797 Jeong et al. May 2009 A1
20090143680 Yao et al. Jun 2009 A1
20090148012 Altmann et al. Jun 2009 A1
20090150094 Van Velsor et al. Jun 2009 A1
20090182233 Wodnicki Jul 2009 A1
20090182237 Angelsen et al. Jul 2009 A1
20090198134 Hashimoto et al. Aug 2009 A1
20090203997 Ustuner Aug 2009 A1
20090208080 Grau et al. Aug 2009 A1
20090259128 Stribling Oct 2009 A1
20090264760 Lazebnik et al. Oct 2009 A1
20090306510 Hashiba et al. Dec 2009 A1
20090326379 Daigle et al. Dec 2009 A1
20100010354 Skerl et al. Jan 2010 A1
20100016725 Thiele Jan 2010 A1
20100036258 Dietz et al. Feb 2010 A1
20100063397 Wagner Mar 2010 A1
20100063399 Walker et al. Mar 2010 A1
20100069751 Hazard et al. Mar 2010 A1
20100069756 Ogasawara et al. Mar 2010 A1
20100085383 Cohen et al. Apr 2010 A1
20100106431 Baba et al. Apr 2010 A1
20100109481 Buccafusca May 2010 A1
20100121193 Fukukita et al. May 2010 A1
20100121196 Hwang et al. May 2010 A1
20100130855 Lundberg et al. May 2010 A1
20100145195 Hyun Jun 2010 A1
20100168566 Bercoff et al. Jul 2010 A1
20100168578 Garson, Jr. et al. Jul 2010 A1
20100174194 Chiang et al. Jul 2010 A1
20100174198 Young et al. Jul 2010 A1
20100191110 Insana et al. Jul 2010 A1
20100217124 Cooley Aug 2010 A1
20100228126 Emery et al. Sep 2010 A1
20100240994 Zheng Sep 2010 A1
20100249570 Carson et al. Sep 2010 A1
20100249596 Magee Sep 2010 A1
20100256488 Kim et al. Oct 2010 A1
20100262013 Smith et al. Oct 2010 A1
20100266176 Masumoto et al. Oct 2010 A1
20100286525 Osumi Nov 2010 A1
20100286527 Cannon et al. Nov 2010 A1
20100310143 Rao et al. Dec 2010 A1
20100317971 Fan et al. Dec 2010 A1
20100324418 El-Aklouk et al. Dec 2010 A1
20100324423 El-Aklouk et al. Dec 2010 A1
20100329521 Beymer et al. Dec 2010 A1
20110005322 Ustuner Jan 2011 A1
20110016977 Guracar Jan 2011 A1
20110021920 Shafir et al. Jan 2011 A1
20110021923 Daft et al. Jan 2011 A1
20110033098 Richter et al. Feb 2011 A1
20110044133 Tokita Feb 2011 A1
20110066030 Yao Mar 2011 A1
20110098565 Masuzawa Apr 2011 A1
20110112400 Emery et al. May 2011 A1
20110112404 Gourevitch May 2011 A1
20110125017 Ramamurthy et al. May 2011 A1
20110178441 Tyler Jul 2011 A1
20110270088 Shiina Nov 2011 A1
20110301470 Sato et al. Dec 2011 A1
20110306886 Daft et al. Dec 2011 A1
20110319764 Okada et al. Dec 2011 A1
20120004545 Ziv-Ari et al. Jan 2012 A1
20120035482 Kim et al. Feb 2012 A1
20120036934 Kröning et al. Feb 2012 A1
20120085173 Papadopoulos et al. Apr 2012 A1
20120101378 Lee Apr 2012 A1
20120114210 Kim et al. May 2012 A1
20120121150 Murashita May 2012 A1
20120137778 Kitazawa et al. Jun 2012 A1
20120140595 Amemiya Jun 2012 A1
20120141002 Urbano et al. Jun 2012 A1
20120165670 Shi et al. Jun 2012 A1
20120179044 Chiang et al. Jul 2012 A1
20120226201 Clark et al. Sep 2012 A1
20120235998 Smith-Casem et al. Sep 2012 A1
20120243763 Wen et al. Sep 2012 A1
20120253194 Tamura Oct 2012 A1
20120265075 Pedrizzetti et al. Oct 2012 A1
20120277585 Koenig et al. Nov 2012 A1
20130070062 Fouras et al. Mar 2013 A1
20130076207 Krohn et al. Mar 2013 A1
20130079639 Hoctor et al. Mar 2013 A1
20130083628 Qiao et al. Apr 2013 A1
20130088122 Krohn et al. Apr 2013 A1
20130116561 Rothberg et al. May 2013 A1
20130131516 Katsuyama May 2013 A1
20130144165 Ebbini et al. Jun 2013 A1
20130204136 Duric et al. Aug 2013 A1
20130204137 Roy et al. Aug 2013 A1
20130258805 Hansen et al. Oct 2013 A1
20130261463 Chiang et al. Oct 2013 A1
20140043933 Belevich Feb 2014 A1
20140073921 Specht et al. Mar 2014 A1
20140086014 Kobayashi Mar 2014 A1
20140147013 Shandas et al. May 2014 A1
20140243673 Anand et al. Aug 2014 A1
20150045668 Smith et al. Feb 2015 A1
20160095579 Smith et al. Apr 2016 A1
20160135783 Brewer et al. May 2016 A1
20160228090 Boctor Aug 2016 A1
20160256134 Specht et al. Sep 2016 A1
20170209121 Davies et al. Jul 2017 A1
20170219704 Call Aug 2017 A1
20170224312 Call et al. Aug 2017 A1
20180049717 Adam et al. Feb 2018 A1
20180125451 Duncan May 2018 A1
20180153511 Specht et al. Jun 2018 A1
20180279991 Call et al. Oct 2018 A1
20190021697 Specht et al. Jan 2019 A1
20190083058 Specht Mar 2019 A1
20190175152 Smith et al. Jun 2019 A1
20190200961 Specht et al. Jul 2019 A1
20190328367 Specht et al. Oct 2019 A1
Foreign Referenced Citations (135)
Number Date Country
1535243 Oct 2004 CN
1781460 Jun 2006 CN
101103927 Jan 2008 CN
101116622 Feb 2008 CN
101190134 Jun 2008 CN
101453955 Jun 2009 CN
101609150 Dec 2009 CN
101843501 Sep 2010 CN
101912278 Dec 2010 CN
102018533 Apr 2011 CN
102112047 Jun 2011 CN
102123668 Jul 2011 CN
102599930 Jul 2012 CN
102011114333 Mar 2013 DE
1949856 Jul 2008 EP
2058796 May 2009 EP
2101191 Sep 2009 EP
2182352 May 2010 EP
2187813 May 2010 EP
2198785 Jun 2010 EP
1757955 Nov 2010 EP
2325672 May 2011 EP
1462819 Jul 2011 EP
2356941 Aug 2011 EP
1979739 Oct 2011 EP
2385391 Nov 2011 EP
2294400 Feb 2012 EP
2453256 May 2012 EP
1840594 Jun 2012 EP
2514368 Oct 2012 EP
1850743 Dec 2012 EP
1594404 Sep 2013 EP
2026280 Oct 2013 EP
2851662 Aug 2004 FR
49-11189 Jan 1974 JP
54-44375 Apr 1979 JP
55-103839 Aug 1980 JP
57-31848 Feb 1982 JP
58-223059 Dec 1983 JP
59-101143 Jun 1984 JP
59-174151 Oct 1984 JP
60-13109 Jan 1985 JP
60-68836 Apr 1985 JP
01164354 Jun 1989 JP
02501431 May 1990 JP
03015455 Jan 1991 JP
03126443 May 1991 JP
04017842 Jan 1992 JP
04067856 Mar 1992 JP
05042138 Feb 1993 JP
06125908 May 1994 JP
06254092 Sep 1994 JP
07051266 Feb 1995 JP
07204201 Aug 1995 JP
08154930 Jun 1996 JP
08252253 Oct 1996 JP
09103429 Apr 1997 JP
09201361 Aug 1997 JP
2777197 May 1998 JP
10216128 Aug 1998 JP
11089833 Apr 1999 JP
11239578 Sep 1999 JP
2001507794 Jun 2001 JP
2001245884 Sep 2001 JP
2002209894 Jul 2002 JP
2002253548 Sep 2002 JP
2002253549 Sep 2002 JP
2003235839 Aug 2003 JP
2004167092 Jun 2004 JP
2004215987 Aug 2004 JP
2004337457 Dec 2004 JP
2004340809 Dec 2004 JP
2004351214 Dec 2004 JP
2005046192 Feb 2005 JP
2005152187 Jun 2005 JP
2005523792 Aug 2005 JP
2005526539 Sep 2005 JP
2006051356 Feb 2006 JP
2006061203 Mar 2006 JP
2006122657 May 2006 JP
2006130313 May 2006 JP
2006204923 Aug 2006 JP
2007325937 Dec 2007 JP
2008122209 May 2008 JP
2008513763 May 2008 JP
2008515557 May 2008 JP
2008132342 Jun 2008 JP
2008522642 Jul 2008 JP
2008259541 Oct 2008 JP
2008279274 Nov 2008 JP
2008307087 Dec 2008 JP
2009240667 Oct 2009 JP
2010005375 Jan 2010 JP
2010124842 Jun 2010 JP
2010526626 Aug 2010 JP
2011529362 Dec 2011 JP
2013121493 Jun 2013 JP
2014087448 May 2014 JP
100715132 Apr 2007 KR
1020080044737 May 2008 KR
1020090103408 Oct 2009 KR
WO9218054 Oct 1992 WO
WO9800719 Jan 1998 WO
WO0164109 Sep 2001 WO
WO02084594 Oct 2002 WO
WO2005009245 Feb 2005 WO
WO2006114735 Nov 2006 WO
WO2007127147 Nov 2007 WO
WO2008097479 Aug 2008 WO
WO2009060182 May 2009 WO
WO2010095094 Aug 2010 WO
WO2010137453 Dec 2010 WO
WO2010139519 Dec 2010 WO
WO2011004661 Jan 2011 WO
WO2011057252 May 2011 WO
WO2011064688 Jun 2011 WO
WO2011100697 Aug 2011 WO
WO2011123529 Oct 2011 WO
WO2012028896 Mar 2012 WO
WO2012049124 Apr 2012 WO
WO2012049612 Apr 2012 WO
WO2012078639 Jun 2012 WO
WO2012091280 Jul 2012 WO
WO2012112540 Aug 2012 WO
WO2012131340 Oct 2012 WO
WO2012160541 Nov 2012 WO
WO2013059358 Apr 2013 WO
WO2013109965 Jul 2013 WO
WO2013116807 Aug 2013 WO
WO2013116809 Aug 2013 WO
WO2013116851 Aug 2013 WO
WO2013116854 Aug 2013 WO
WO2013116866 Aug 2013 WO
WO2013128301 Sep 2013 WO
WO2014031642 Feb 2014 WO
Non-Patent Literature Citations (53)
Entry
Zhang, Haichong K., et al.; Synthetic-aperture based photoacoustic re-beamforming (SPARE) approach using beamformed ultrasound data; Biomedical Optics Express; 7(8); pp. 3056-3068; 2016.
Abeysekera et al.; Alignment and calibration of dual ultrasound transducers using a wedge phantom; Ultrasound in Medicine and Biology; 37(2); pp. 271-279; Feb. 2011.
Arigovindan et al.; Full motion and flow field recovery from echo doppler data; IEEE Transactions on Medical Imaging; 26(1); pp. 31-45; Jan. 2007.
Capineri et al.; A doppler system for dynamic vector velocity maps; Ultrasound in Medicine & Biology; 28(2); pp. 237-248; Feb. 28, 2002.
Carson et al.; Measurement of photoacoustic transducer position by robotic source placement and nonlinear parameter estimation; Biomedical Optics (BiOS); International Society for Optics and Photonics (9th Conf. on Biomedical Thermoacoustics, Optoacoustics, and Acousto-optics); vol. 6856; 9 pages; Feb. 28, 2008.
Chen et al.; Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field; IEEE Transactions On Signal Processing; 50(8); pp. 1843-1854; Aug. 2002.
Chen et al.; Source localization and tracking of a wideband source using a randomly distributed beamforming sensor array; International Journal of High Performance Computing Applications; 16(3); pp. 259-272; Fall 2002.
Cristianini et al.; An Introduction to Support Vector Machines; Cambridge University Press; pp. 93-111; Mar. 2000.
Dunmire et al.; A brief history of vector doppler; Medical Imaging 2001; International Society for Optics and Photonics; pp. 200-214; May 30, 2001.
Du et al.; User parameter free approaches to multistatic adaptive ultrasound imaging; 5th IEEE International Symposium; pp. 1287-1290, May 2008.
Feigenbaum, Harvey, M.D.; Echocardiography; Lippincott Williams & Wilkins; Philadelphia; 5th Ed.; pp. 482, 484; Feb. 1994.
Fernandez et al.; High resolution ultrasound beamforming using synthetic and adaptive imaging techniques; Proceedings IEEE International Symposium on Biomedical Imaging; Washington, D.C.; pp. 433-436; Jul. 7-10, 2002.
Gazor et al.; Wideband multi-source beamforming with array location calibration and direction finding; Conference on Acoustics, Speech and Signal Processing ICASSP-95; Detroit, MI; vol. 3 IEEE; pp. 1904-1907; May 9-12, 1995.
Haykin, Simon; Neural Networks: A Comprehensive Foundation (2nd Ed.); Prentice Hall; pp. 156-187; Jul. 16, 1998.
Heikkila et al.; A four-step camera calibration procedure with implicit image correction; Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition; San Juan; pp. 1106-1112; Jun. 17-19, 1997.
Hendee et al.; Medical Imaging Physics; Wiley-Liss, Inc. 4th Edition; Chap. 19-22; pp. 303-353; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) © 2002.
Hsu et al.; Real-time freehand 3D ultrasound calibration; CUED/F-INFENG/TR 565; Department of Engineering, University of Cambridge, United Kingdom; 14 pages; Sep. 2006.
Jeffs; Beamforming: a brief introduction; Brigham Young University; 14 pages; retrieved from the internet (http://ens.ewi.tudelft.nl/Education/courses/et4235/Beamforming.pdf); Oct. 2004.
Khamene et al.; A novel phantom-less spatial and temporal ultrasound calibration method; Medical Image Computing and Computer-Assisted Intervention—MICCAI (Proceedings 8th Int. Conf.); Springer Berlin Heidelberg; Palm Springs, CA; pp. 65-72; Oct. 26-29, 2005.
Kramb et al.; Considerations for using phased array ultrasonics in a fully automated inspection system; Review of Quantitative Nondestructive Evaluation, 2004 Edition, ed. D. O. Thompson and D. E. Chimenti; American Inst. of Physics; pp. 817-825; Mar. 2004.
Ledesma-Carbayo et al.; Spatio-temporal nonrigid registration for ultrasound cardiac motion estimation; IEEE Trans. On Medical Imaging; vol. 24; No. 9; Sep. 2005.
Leotta et al.; Quantitative three-dimensional echocardiography by rapid imaging . . . ; J American Society of Echocardiography; vol. 10; No. 8; pp. 830-839; Oct. 1997.
Li et al.; An efficient speckle tracking algorithm for ultrasonic imaging; 24; pp. 215-228; Oct. 1, 2002.
Morrison et al.; A probabilistic neural network based image segmentation network for magnetic resonance images; Proc. Conf. Neural Networks; Baltimore, MD; vol. 3; pp. 60-65; Jun. 1992.
Nadkarni et al.; Cardiac motion synchronization for 3D cardiac ultrasound imaging; Ph.D. Dissertation, University of Western Ontario; Jun. 2002.
Opretzka et al.; A high-frequency ultrasound imaging system combining limited-angle spatial compounding and model-based synthetic aperture focusing; IEEE Transactions on Ultrasonics, Ferroelectrics And Frequency Control, IEEE, US; 58(7); pp. 1355-1365; Jul. 2, 2011.
Press et al.; Cubic spline interpolation; §3.3 in “Numerical Recipes in FORTRAN: The Art of Scientific Computing”, 2nd Ed.; Cambridge, England; Cambridge University Press; pp. 107-110; Sep. 1992.
Saad et al.; Computer vision approach for ultrasound doppler angle estimation; Journal of Digital Imaging; 22(6); pp. 681-688; Dec. 1, 2009.
Sakas et al.; Preprocessing and volume rendering of 3D ultrasonic data; IEEE Computer Graphics and Applications; pp. 47-54, Jul. 1995.
Sapia et al.; Deconvolution of ultrasonic waveforms using an adaptive wiener filter; Review of Progress in Quantitative Nondestructive Evaluation; vol. 13A; Plenum Press; pp. 855-862; Jan. 1994.
Sapia et al.; Ultrasound image deconvolution using adaptive inverse filtering; 12th IEEE Symposium on Computer-Based Medical Systems, CBMS; pp. 248-253; Jun. 1999.
Sapia, Mark Angelo; Multi-dimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering; Ph.D. Dissertation; University of Connecticut; Jan. 2000.
Slavine et al.; Construction, calibration and evaluation of a tissue phantom with reproducible optical properties for investigations in light emission tomography; Engineering in Medicine and Biology Workshop; Dallas, TX; IEEE pp. 122-125; Nov. 11-12, 2007.
Smith et al.; High-speed ultrasound volumetric imaging system. 1. Transducer design and beam steering; IEEE Trans. Ultrason., Ferroelect., Freq. Contr.; vol. 38; pp. 100-108; Mar. 1991.
Specht et al.; Deconvolution techniques for digital longitudinal tomography; SPIE; vol. 454; presented at Application of Optical Instrumentation in Medicine XII; pp. 319-325; Jun. 1984.
Specht et al.; Experience with adaptive PNN and adaptive GRNN; Proc. IEEE International Joint Conf. on Neural Networks; vol. 2; pp. 1203-1208; Orlando, FL; Jun. 1994.
Specht, D.F.; A general regression neural network; IEEE Trans. On Neural Networks; vol. 2.; No. 6; Nov. 1991.
Specht, D.F.; Blind deconvolution of motion blur using LMS inverse filtering; Lockheed Independent Research (unpublished); Jun. 23, 1975.
Specht, D.F.; Enhancements to probabilistic neural networks; Proc. IEEE International Joint Conf. on Neural Networks; Baltimore, MD; Jun. 1992.
Specht, D.F.; GRNN with double clustering; Proc. IEEE International Joint Conf. Neural Networks; Vancouver, Canada; Jul. 16-21, 2006.
Specht, D.F.; Probabilistic neural networks; Pergamon Press; Neural Networks; vol. 3; pp. 109-118; Feb. 1990.
UCLA Academic Technology; SPSS learning module: How can I analyze a subset of my data; 6 pages; retrieved from the internet (http://www.ats.ucla.edu/stat/spss/modules/subset_analyze.htm) Nov. 26, 2001.
Urban et al; Implementation of vibro-acoustography on a clinical ultrasound system; IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control; 58(6); pp. 1169-1181 (Author Manuscript, 25 pgs.); Jun. 2011.
Urban et al; Implementation of vibro-acoustography on a clinical ultrasound system; IEEE Ultrasonics Symposium (IUS); pp. 326-329; Oct. 14, 2010.
Von Ramm et al.; High-speed ultrasound volumetric imaging system. 2. Parallel processing and image display; IEEE Trans. Ultrason., Ferroelect., Freq. Contr.; vol. 38; pp. 109-115; Mar. 1991.
Wang et al.; Photoacoustic tomography of biological tissues with high cross-section resolution: reconstruction and experiment; Medical Physics; 29(12); pp. 2799-2805; Dec. 2002.
Wells, P.N.T.; Biomedical ultrasonics; Academic Press; London, New York, San Francisco; pp. 124-125; Mar. 1977.
Widrow et al.; Adaptive signal processing; Prentice-Hall; Englewood Cliffs, NJ; pp. 99-116; Mar. 1985.
Wikipedia; Point cloud; 2 pages; retrieved Nov. 24, 2014 from the internet (https://en.wikipedia.org/w/index.php?title=Point_cloud&oldid=472583138).
Wikipedia; Curve fitting; 5 pages; retrieved from the internet (http://en.wikipedia.org/wiki/Curve_fitting) Dec. 19, 2010.
Wikipedia; Speed of sound; 17 pages; retrieved from the internet (http://en.wikipedia.org/wiki/Speed_of_sound) Feb. 15, 2011.
Yang et al.; Time-of-arrival calibration for improving the microwave breast cancer imaging; 2011 IEEE Topical Conf. on Biomedical Wireless Technologies, Networks, and sensing Systems (BioWireleSS); Phoenix, AZ; pp. 67-70; Jan. 16-19, 2011.
Zhang et al.; A high-frequency high frame rate duplex ultrasound linear array imaging system for small animal imaging; IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control; 57(7); pp. 1548-1567; Jul. 2010.
Related Publications (1)
Number Date Country
20190008487 A1 Jan 2019 US
Provisional Applications (1)
Number Date Country
61681986 Aug 2012 US
Continuations (2)
Number Date Country
Parent 15400826 Jan 2017 US
Child 16121303 US
Parent 13964701 Aug 2013 US
Child 15400826 US