Ultrasound imaging with sparse array probes

Information

  • Patent Grant
  • Patent Number
    10,856,846
  • Date Filed
    Friday, January 27, 2017
  • Date Issued
    Tuesday, December 8, 2020
Abstract
Sparse arrays of transducer elements may be beneficial in providing ultrasound transducer probes with a wide total aperture while containing a manageable number of transducer elements. Sparse arrays made with bulk piezoelectric materials or with arrays of micro-elements can be used effectively with ping-based multiple aperture ultrasound imaging techniques to perform real-time volumetric imaging.
Description
INCORPORATION BY REFERENCE

All publications and patent applications mentioned in this specification are herein incorporated by reference in their entirety to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.


FIELD

This application relates generally to the field of ultrasound imaging, and more particularly to ping-based ultrasound imaging using sparse arrays of ultrasound transducer elements.


BACKGROUND

In conventional scanline-based ultrasonic imaging, a focused beam of ultrasound energy is transmitted into body tissues to be examined and echoes returned along the same line are detected and plotted to form a portion of an image along the scanline. A complete image may be formed by repeating the process and combining image portions along a series of scanlines within a scan plane. Any information in between successive scanlines must be estimated by interpolation.


The same process has been extended to obtaining ultrasonic images of three-dimensional volumes by combining images from multiple adjacent slices (where each slice is in a different scan plane). Again, any information from any space in between successive scan planes must be estimated by interpolation. Because time elapses between capturing complete 2D (two-dimensional) slices, obtaining 3D (three-dimensional) image data for a moving object may be significantly impaired. So-called “4D” (four-dimensional) imaging systems (in which the fourth dimension is time) strive to produce moving images (i.e., video) of 3D volumetric space. Scanline-based imaging systems also have an inherent frame-rate limitation which creates difficulties when attempting 4D imaging on a moving object.


As a result of these and other factors, some of the limitations of existing 2D and 3D ultrasonic imaging systems and methods include poor temporal and spatial resolution, limited imaging depth, speckle noise, poor lateral resolution, obscured tissues, and other such problems.


Significant improvements have been made in the field of ultrasound imaging with the creation of multiple aperture imaging, examples of which are shown and described in Applicant's prior patents and applications referenced above. Multiple aperture imaging methods and systems allow for ultrasound signals to be both transmitted and received from physically and logically separate apertures.


SUMMARY OF THE DISCLOSURE

The various embodiments of systems and methods herein provide the ability to perform high resolution three-dimensional ultrasound imaging at frame rates sufficient to capture details of moving objects. Traditional scanline-based ultrasound imaging methods are limited to relatively slow frame rates due to the need to transmit and receive many scanlines to obtain a single two-dimensional plane. Extending such techniques to obtain imaging data from a complete 3D volume results in even slower frame rates due to the need to image many 2D slices.


As an example, assume that one needs to collect data from a cube of tissue 10 cm on a side at a depth ranging from 5 cm to 15 cm. If scanlines are transmitted from a common center, the shape that would be explored would be a truncated pyramid instead of a shape with comparable thickness in the proximal and distal regions. The tissue may be sampled with beams that are 2 mm (or less) apart on the distal face of the cube. To cover the distal surface one would need at least 50×50 directed beams or 2500 directed pulses. With a maximum pulse rate of approximately 2500 pulses/sec (which may be constrained by the speed of sound in tissue, the expected signal attenuation, and the background noise level), all of the required data may be collected in about one second. This collection time may be adequate for non-moving tissue such as bone, liver, etc., but is not fast enough to capture motion in arteries, or organs such as kidneys and especially the heart, or in moving joints or muscles.


On the other hand, with ping-based imaging, a single ping, propagating substantially uniformly in three dimensions, can insonify the entire volume, and dynamic beamforming (focusing) can identify the sources of the echo returns. Using ping-based imaging techniques, a minimum of three pings may be needed to obtain data for a 3D volume, while a minimum of two pings may be needed to obtain data for a 2D slice. In practical terms, ten to fifty (or more) pings may be used to achieve a desired image quality. For example, the use of 25 pings at a rate of 2500 pings per second may require only 0.01 seconds to acquire all the data for the entire 10 cm cube of tissue. For this particular example, data collection may be 100 times faster than with the scanline-based method.
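
For illustration, the acquisition-time comparison above can be reduced to a few lines of arithmetic. The sketch below simply restates the example numbers from the preceding paragraphs (2 mm beam spacing, 2,500 pulses or pings per second, 25 pings per volume); it is not a description of any particular imaging system.

```python
# Illustrative acquisition-time arithmetic using the example values above.
# These numbers are the ones assumed in the text, not fixed system parameters.

pulse_rate_hz = 2500          # maximum pulses (or pings) per second
distal_face_cm = 10.0         # width of the distal face of the imaged cube
beam_spacing_cm = 0.2         # 2 mm beam spacing on the distal face

# Scanline approach: one directed pulse per beam position on the distal face.
beams_per_side = round(distal_face_cm / beam_spacing_cm)     # 50
scanline_pulses = beams_per_side ** 2                        # 2500 directed pulses
scanline_time_s = scanline_pulses / pulse_rate_hz            # ~1.0 s per volume

# Ping-based approach: each unfocused ping insonifies the whole volume.
pings_per_volume = 25                                        # example ping count
ping_time_s = pings_per_volume / pulse_rate_hz               # ~0.01 s per volume

print(f"scanline volume acquisition: {scanline_time_s:.2f} s")
print(f"ping-based volume acquisition: {ping_time_s:.3f} s")
print(f"speed-up factor: {scanline_time_s / ping_time_s:.0f}x")
```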


Using ping-based ultrasound imaging techniques, both 2D and 3D frame rates may be increased substantially so as to allow for imaging of 3D volumes in real-time. Furthermore, by applying multiple aperture imaging techniques (e.g., transmitting and receiving ultrasound signals through multiple, spatially or physically separated acoustic windows), the resolution of such real-time 3D images may be dramatically improved relative to single-aperture techniques.


Ping-based multiple aperture ultrasound imaging can provide very powerful real-time three-dimensional imaging capabilities as described above. The benefits of ping-based multiple aperture ultrasound imaging may be achieved by using transducer probes with overall dimensions much larger than traditional ultrasound probes. For example, ping-based multiple aperture ultrasound imaging may be beneficially used with probes having an active imaging surface in excess of 100 cm2. Traditionally, ultrasound elements in a probe are spaced as close together as possible, typically significantly less than, and generally no more than, half the wavelength at the ultrasound frequency being used.


However, using traditional element-to-element spacing in such a large probe would require a cable far too thick to be usable. Although some tricks may be used to reduce the number of individual conductors required in a cable, a better solution is to increase the allowed spacing between elements, thereby reducing the total number of elements in an array. Use of sparse arrays with traditional scanline-based imaging methods suffers from substantial complications, artifacts, and low resolution and is therefore not generally practical. Based on research into the use of sparse arrays with scanline-based phased array techniques, one would expect the use of sparse arrays with ping-based multiple aperture ultrasound imaging techniques to suffer similar difficulties, but that is unexpectedly not the case. In fact, sparse arrays can be used quite effectively with ping-based multiple aperture ultrasound imaging techniques as described herein.


In some embodiments, sparse arrays of transducer elements may be beneficial in providing an ultrasound probe with a wide total aperture while containing a manageable number of transducer elements.


The following disclosure provides various embodiments of ultrasound probe configurations, methods of making such probes, and methods of using such probes to perform high-frame-rate, high-resolution, real-time 2D, 3D and 4D ultrasound imaging.


An ultrasound transducer probe is provided, comprising an array of ultrasound transducing micro-elements, where each micro-element has a diameter less than 500 microns, a first group of micro-elements electrically connected to a first signal conductor, a second group of micro-elements electrically connected to a second signal conductor, the second signal conductor being electrically separate from the first signal conductor, and a third group of micro-elements positioned between the first group and the second group, the third group of micro-elements being permanently disconnected from any signal conductors.


In some embodiments, each micro-element has a diameter between 25 microns and 200 microns.


In other embodiments, some of the micro-elements of the first group are differently sized than other micro-elements of the first group, wherein the size of a micro-element corresponds to its fundamental operating frequency.


In one embodiment, the micro-elements of the first group are connected to a first ground conductor and the micro-elements of the second group are connected to a second ground conductor not electrically connected to the first ground conductor. In another embodiment, the first group of micro-elements includes more micro-elements than the second group. In some embodiments, the first group of micro-elements collectively forms a dedicated transmit element and the second group of micro-elements collectively forms a dedicated receive element.


In one embodiment, the probe comprises a fourth group of micro-elements electrically connected to the first signal conductor by a switch that, when closed, causes the fourth group to form a combined element with the first group. In some embodiments, the micro-elements of the fourth group collectively surround the micro-elements of the first group. In another embodiment, the fourth group of micro-elements is adjacent to the first group of micro-elements. In one embodiment, the fourth group of micro-elements is contiguous with the first group of micro-elements.


In some embodiments, the combined element has a different shape than the first group alone. In other embodiments, the combined element has a shape that is the same as a shape of the first group but a different size.


An ultrasound imaging system is also provided, comprising a transducer probe having a first array segment and a second array segment separated from the first array segment by a gap of open space, the first and second array segments secured to at least one structural housing member rigidly holding the first and second arrays in fixed positions relative to one another, and an imaging control system containing instructions to: transmit an unfocused ultrasound ping from a transmit aperture approximating a point-source into an object to be imaged, receive echoes of the ping from reflectors directly below the gap with receive transducer elements on both the first array segment and the second array segment, and produce a volumetric image of the region below the gap by combining echo data from echoes received by receive elements on both array segments.


In one embodiment, each array segment comprises an array of micro-elements as in any of the previous embodiments.


An ultrasound imaging probe is also provided, comprising a sparse array of ultrasound transducer elements in which less than 50% of potential element positions are occupied by active transducer elements, the sparse array having a first plurality of elements designated as transmit elements and a second plurality of elements designated as receive elements, and wherein no more than N of the receive elements are equidistant to any one transmit element, wherein N is an integer between 1 and 100.
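
As a purely illustrative sketch of how the equidistance constraint described above might be verified for a candidate sparse layout, the following assumes two-dimensional element coordinates and a rounding tolerance; the positions, tolerance, and value of N are arbitrary example values.

```python
# Hypothetical check that no more than N receive elements are equidistant
# from any single transmit element in a candidate sparse layout.
from collections import Counter
from math import hypot

def max_equidistant_receivers(tx_positions, rx_positions, tol=1e-3):
    """Return the largest count of receive elements sharing (to within a
    rounding tolerance) the same distance from any one transmit element."""
    worst = 0
    for tx, ty in tx_positions:
        # Round distances so that "equidistant" means equal within `tol`.
        dist_counts = Counter(
            round(hypot(rx - tx, ry - ty) / tol) for rx, ry in rx_positions
        )
        worst = max(worst, max(dist_counts.values()))
    return worst

# Example layout (positions in mm); values are illustrative only.
transmit_elements = [(0.0, 0.0), (17.3, 4.1)]
receive_elements = [(3.1, 5.7), (9.4, 1.2), (12.8, 8.9), (6.0, 13.3)]

N = 2
print(max_equidistant_receivers(transmit_elements, receive_elements) <= N)  # True
```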


In one embodiment, N is 1, 2, 3, 4, or 5.


In another embodiment, spacings between adjacent elements are pseudo-random distances. In other embodiments, spacings between adjacent elements are non-repeating distances based on a non-repeating number sequence.
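
One possible way (among many) to obtain non-repeating spacings is to perturb a nominal pitch using the fractional parts of multiples of an irrational number. The sketch below illustrates this idea only; the nominal pitch, jitter range, and choice of the golden ratio are assumptions, not requirements of the embodiments.

```python
# Illustrative generation of irregular, non-repeating element spacings.
# The nominal pitch, jitter range, and golden-ratio sequence are assumptions
# for the sketch; the disclosure only requires that spacings not repeat.
import math

def irregular_positions(n_elements, nominal_pitch_mm=2.0, jitter_mm=0.8):
    """Place elements along one axis with spacings perturbed by the
    fractional parts of multiples of the golden ratio (a non-repeating
    sequence), so no two spacings are identical."""
    phi = (1 + math.sqrt(5)) / 2
    positions = [0.0]
    for k in range(1, n_elements):
        frac = (k * phi) % 1.0                      # non-repeating in [0, 1)
        spacing = nominal_pitch_mm + jitter_mm * (frac - 0.5)
        positions.append(positions[-1] + spacing)
    return positions

print(irregular_positions(8))
```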


In one embodiment, the transmit elements and the receive elements are made of bulk piezoelectric material.


In other embodiments, each transmit element and each receive element is made up of a plurality of micro-elements.


In one embodiment, at least one transmit element or at least one receive element is made up of two sub-groups of micro-elements.


In other embodiments, at least one transmit element or at least one receive element comprises a first plurality of micro-elements operating at a first frequency and a second plurality of micro-elements operating at a second frequency different than the first frequency.


In additional embodiments, at least two of the designated transmit elements are configured to transmit different frequencies of ultrasound than a remainder of the transmit elements.


In other embodiments, spacing between adjacent elements is irregular.


Another ultrasound imaging probe is provided, comprising a sparse array of ultrasound transducer elements in which adjacent transducer elements are spaced by a distance of greater than half a maximum wavelength at which any element of the array is configured to operate, the sparse array having a first plurality of elements designated as transmit elements and a second plurality of elements designated as receive elements.


In one embodiment, spacings between adjacent elements are pseudo-random distances. In other embodiments, spacings between adjacent elements are non-repeating distances based on a non-repeating number sequence.


In one embodiment, the transmit elements and the receive elements are made of bulk piezoelectric material.


In another embodiment, each transmit element and each receive element is made up of a plurality of micro-elements.


In some embodiments, at least one transmit element or at least one receive element is made up of two sub-groups of micro-elements. In another embodiment, at least one transmit element or at least one receive element comprises a first plurality of micro-elements operating at a first frequency and a second plurality of micro-elements operating at a second frequency different than the first frequency.


In additional embodiments, at least two of the designated transmit elements are configured to transmit different frequencies of ultrasound than a remainder of the transmit elements.


In other embodiments, spacing between adjacent elements is irregular.


An ultrasound imaging method is also provided, comprising transmitting a first ultrasound ping from a transmit aperture approximating a point source at a first time, receiving echoes of the first ultrasound ping with a first transducer element between the first time and a second time, the first transducer element coupled to a first receive channel by a first signal conductor, closing a switch at the second time to connect a second transducer element to the first signal conductor, the second transducer element surrounding the first transducer element, receiving echoes of the first ultrasound ping with the first transducer element and the second transducer element between the second time and a third time, and producing an image from the echoes received between the first time and the third time and displaying the image.


In some embodiments, the first transducer element has a circular shape, and the second transducer element has a ring shape concentric with the first transducer element.


In an additional embodiment, the first transducer element comprises a first group of micro-elements electrically connected to the first signal conductor. In some embodiments, the second transducer element comprises a second group of micro-elements electrically connectable to the first signal conductor via the switch.


In another embodiment, echoes received between the first time and the second time are near-field echoes.


In some embodiments, the image is a volumetric image. In another embodiment, the image is a two-dimensional image.


An ultrasound imaging method is provided, comprising transmitting a first ultrasound ping from a transmit aperture approximating a point source at a first time, receiving and storing echoes of the first ultrasound ping with a first transducer element, the first transducer element coupled to a first receive channel by a first signal conductor, receiving and storing echoes of the first ultrasound ping with a second transducer element that surrounds the first transducer element, the second transducer element coupled to a second receive channel by a second signal conductor, retrieving first echoes received by the first element between a first time and a third time, retrieving second echoes received by the second element between a second time and the third time, the second time occurring between the first time and the third time, combining the first echoes received between the second time and the third time with the second echoes, and producing an image from the combined echoes and displaying the image.
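
The time-windowed combination described above may be sketched in software as follows, assuming the echoes received by both elements have already been digitized and stored; the sample rate and the first, second, and third times are illustrative values.

```python
# Sketch of combining stored echoes from a small central element and a
# surrounding ring element using time windows, as described above.
# Sample rate and the first/second/third times are illustrative assumptions.
import numpy as np

def combine_concentric_echoes(central, ring, sample_rate_hz, t1_s, t2_s, t3_s):
    """Use only the central element from t1 to t2 (near field), then the
    sum of central and ring elements from t2 to t3 (far field)."""
    i1, i2, i3 = (int(t * sample_rate_hz) for t in (t1_s, t2_s, t3_s))
    combined = np.zeros(i3 - i1, dtype=float)
    combined[: i2 - i1] = central[i1:i2]
    combined[i2 - i1:] = central[i2:i3] + ring[i2:i3]
    return combined

# Example with synthetic echo records.
fs = 40e6                                   # 40 MHz sampling, assumed
n = int(100e-6 * fs)                        # 100 microseconds of samples
central = np.random.randn(n)
ring = np.random.randn(n)
out = combine_concentric_echoes(central, ring, fs, 0.0, 30e-6, 100e-6)
print(out.shape)
```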


In some embodiments, the first transducer element has a circular shape, and the second transducer element has a ring shape concentric with the first transducer element.


In other embodiments, the first transducer element comprises a first group of micro-elements electrically connected to the first signal conductor.


In one embodiment, the second transducer element comprises a second group of micro-elements electrically connectable to the first signal conductor via a switch.


In another embodiment, echoes received between the first time and the second time are near-field echoes.


In some embodiments, the image is a volumetric image. In another embodiment, the image is a two-dimensional image.


An ultrasound imaging method is also provided, comprising transmitting a first ultrasound ping from a transmit aperture approximating a point source at a first time, receiving echoes of the first ultrasound ping with a first transducer element between the first time and a second time, the first transducer element coupled to a first receive channel by a first signal conductor, opening a switch between the first transducer element and the first signal conductor and simultaneously closing a switch between a second transducer element and the signal conductor, the second transducer element being larger than the first transducer element, receiving echoes of the first ultrasound ping with the second transducer element between the second time and a third time, and producing an image from the echoes received by both the first transducer element and the second transducer element between the first time and the third time and displaying the image.


In one embodiment, opening a switch between the first transducer element and the first signal conductor and simultaneously closing a switch between the second transducer element and the signal conductor comprises operating a single switch.


In one embodiment, the image is a volumetric image. In another embodiment, the image is a two-dimensional image.


An ultrasound imaging method is also provided, comprising transmitting a first ultrasound ping from a transmit aperture approximating a point source at a first time, receiving and storing echoes of the first ultrasound ping with a first transducer element, the first transducer element coupled to a first receive channel by a first signal conductor, receiving and storing echoes of the first ultrasound ping with a second transducer element that is larger than the first transducer element, the second transducer element coupled to a second receive channel by a second signal conductor, retrieving first echoes received by the first transducer element between a first time and a second time, retrieving second echoes received by the second transducer element between the second time and a third time, the second time occurring between the first time and the third time, producing an image from the first echoes and the second echoes, and displaying the image.


In one embodiment, the image is a volumetric image. In another embodiment, the image is a two-dimensional image.


An ultrasound imaging method is provided, comprising transmitting an unfocused ultrasound ping from a transmitter approximating a point source into an object to be imaged, receiving echoes of the ping at a first receive element and a second receive element, where a line between the first receive element and the second receive element defines an axis, retrieving position data defining a position of the first receive element and the second receive element relative to a common coordinate system, identifying a first echo sample corresponding to a first reflector received at the first element, and identifying a second echo sample corresponding to the same first reflector received at the second element, determining a first time-of-arrival at which the first echo sample was received at the first receive element, determining a second time-of-arrival at which the second echo sample was received at the second receive element, comparing the first time-of-arrival and the second time-of-arrival to determine which of the echo samples corresponding to the first reflector was received first, determining that the first element is closest to the first reflector based on the comparison of times-of-arrival, assigning a greater weight to the first echo sample than the second echo sample, and producing an image of the reflector from the weighted first echo sample and the weighted second echo sample, and displaying the image.
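
The weighting step described above may be sketched as follows for a pair of receive elements; the specific weight values are placeholders, since the method only requires that the earlier-arriving (closer) element receive the greater weight.

```python
# Sketch of weighting two echo samples of the same reflector according to
# which receive element detected the echo first. Weight values are
# placeholder assumptions; the method only requires a greater weight for
# the earlier (closer) element.

def weight_echo_pair(sample_1, arrival_1_s, sample_2, arrival_2_s,
                     near_weight=1.0, far_weight=0.5):
    """Return weighted samples, giving the earlier arrival the larger weight."""
    if arrival_1_s <= arrival_2_s:
        return sample_1 * near_weight, sample_2 * far_weight
    return sample_1 * far_weight, sample_2 * near_weight

# Example: the first element received its echo of the reflector earlier,
# so it is treated as the closer element and weighted more heavily.
w1, w2 = weight_echo_pair(0.8, 64.2e-6, 0.7, 65.9e-6)
pixel_contribution = w1 + w2
print(w1, w2, pixel_contribution)
```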


In some embodiments, the method further comprises assigning a greater weight to the second echo sample than a third echo sample received by a third element that is further from the first element than from the second element along the axis, and producing the image of the reflector from the weighted first echo sample, the weighted second echo sample, and the weighted third echo sample.


In one embodiment, the first time-of-arrival is based on explicit timing information in stored echo data.


In another embodiment, the first time-of-arrival is based on implicit timing information in stored echo data.


An ultrasound imaging system is also provided, comprising an array of ultrasound transducer elements, a first transducer element in the array having a long axis and a short axis, wherein the first transducer element produces a first phase signature in response to ultrasound signals received from a first region of an imaged object and a second phase signature in response to ultrasound signals received from a second region of the imaged object, an imaging system configured to transmit ultrasound signals into the object from at least one transmit aperture approximating a point-source and to receive signals produced by the first transducer element in response to echoes of signals transmitted from the at least one transmit aperture, wherein the imaging system is further configured to determine whether a given reflector is located in the first region or the second region based on a phase signature produced by the first transducer element in response to an echo of the reflector received by the first transducer element, and wherein the imaging system is further configured to apply weights to echoes of the reflector received by other receive elements in the array based on the determination of the reflector being located in the first region or the second region, and to produce an image based on the weighted echoes.


In some embodiments, the first region and the second region are quadrants, and wherein the first transducer element further produces a third phase signature in response to ultrasound signals received from a third quadrant of the imaged object, and a fourth phase signature in response to ultrasound signals received from a fourth quadrant of the imaged object, and wherein the first and second quadrants correspond to regions of the object adjacent opposite ends of the short axis and the third and fourth quadrants correspond to regions of the object adjacent opposite ends of the long axis.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth with particularity in the claims that follow. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:



FIG. 1 is a schematic diagram illustrating a sparse array of ultrasound transducer elements made up of micro-elements.



FIG. 2 is a schematic diagram illustrating a sparse array of ultrasound transducer elements made up of micro-elements with no micro-elements between identified elements.



FIG. 3 is a schematic diagram illustrating a rectangular sparse array arrangement of ultrasound transducer elements represented as squares.



FIG. 4 is a schematic diagram illustrating a rectangular sparse array arrangement of ultrasound transducer elements.



FIG. 5 is a schematic diagram illustrating an elliptical sparse array arrangement of ultrasound transducer elements.



FIG. 6 is a schematic diagram illustrating a circular sparse array arrangement of ultrasound transducer elements.



FIG. 7 is a schematic diagram illustrating a concave or convex three-dimensional surface sparse array arrangement of ultrasound transducer elements.



FIG. 8 is a plan view illustration of a sparse array of regularly-spaced transmit and receive transducer elements.



FIG. 9 is a schematic diagram illustrating an example multi-frequency transmit transducer element made up of a plurality of micro-elements of different sizes.



FIG. 10A is a schematic illustration of an embodiment of an electric circuit that may be used to connect a group of concentric receive elements to separate receive channels of a receive subsystem.



FIG. 10B is a schematic illustration of an embodiment of an electric circuit that may be used to connect a group of concentric receive elements to a receive channel of a receive subsystem.



FIG. 11A is a schematic illustration of an embodiment of an electric circuit that may be used to connect a group of circular receive elements of different sizes to separate receive channels of a receive subsystem.



FIG. 11B is a schematic illustration of an embodiment of an electric circuit that may be used to connect a group of circular receive elements of different sizes to a single receive channel of a receive subsystem.



FIG. 12 is a schematic diagram illustrating a plurality of receive transducer elements in a grid pattern, showing axes for determining estimated reflector locations.



FIG. 13 is a schematic diagram illustrating an asymmetrical receive transducer element with a long axis and a short axis.



FIG. 14 is a schematic diagram illustrating a constellation configuration of ultrasound transmit elements surrounded by receive elements grouped into overlapping receive apertures based, at least in part, on proximity to transmit elements.



FIG. 15 is a schematic illustration of a circular ultrasound imaging probe having a central opening.



FIG. 16A is a schematic illustration showing a top plan view of an ultrasound imaging probe having two probe segments separated by a gap and joined to a bridge handle.



FIG. 16B is a schematic illustration showing a bottom plan view of an ultrasound imaging probe having two probe segments separated by a gap and joined to a bridge handle.



FIG. 17 is a schematic block diagram illustrating example components in an embodiment of a multiple aperture imaging system.





DETAILED DESCRIPTION

The various embodiments will be described in detail with reference to the accompanying drawings. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.


The present disclosure provides systems and methods for improving the quality of real-time two-dimensional and three-dimensional ultrasound imaging through the use of sparse arrays of various construction, including arrays in which each “element” is made up of a plurality of micro-elements arranged to be operated in concert with one another.


Although the various embodiments are described herein with reference to ultrasound imaging of various anatomic structures, it will be understood that many of the methods and devices shown and described herein may also be used in other applications, such as imaging and evaluating non-anatomic structures and objects. For example, the various embodiments herein may be applied to non-destructive testing applications such as evaluating the quality, integrity, dimensions, or other characteristics of various structures such as welds, pressure vessels, pipes, structural members, beams, etc. The systems and methods may also be used for imaging and/or testing a range of materials including human or animal tissues, solid metals such as iron, steel, aluminum, or titanium, various alloys or composite materials, etc.


Introduction to Key Terms

The following paragraphs provide useful definitions for some terms used frequently herein. Other terms may also be defined as they are used.


As used herein the terms “ultrasound transducer” and “transducer” may carry their ordinary meanings as understood by those skilled in the art of ultrasound imaging technologies, and may refer without limitation to any single component capable of converting an electrical signal into an ultrasonic signal and/or vice versa. For example, in some embodiments, an ultrasound transducer may comprise a piezoelectric device. In other embodiments, ultrasound transducers may comprise capacitive micro-machined ultrasound transducers (CMUT), other micro-machined transducers made of electroactive materials such as piezoelectric materials, ferroic materials, ferroelectric materials, pyroelectric materials, electrostrictive materials, or any other transducing material or device capable of converting ultrasound waves to and from electrical signals.


Transducers are often configured in arrays of multiple individual transducer elements. As used herein, the terms “transducer array” or “array” generally refers to a collection of transducer elements attached to a common support structure. An array may typically (though not necessarily) comprise a plurality of transducer elements mounted to a common backing plate or substrate. Such arrays may have one dimension (1D), two dimensions (2D), 1.X dimensions (1.XD) or three dimensions (3D) as those terms are used elsewhere herein and/or as they are commonly understood in the art. Other dimensioned arrays as understood by those skilled in the art may also be used. Annular arrays, such as concentric circular arrays and elliptical arrays may also be used. In some cases, transducer arrays may include irregularly-spaced transducer elements, sparsely positioned transducer elements (also referred to as sparse arrays), randomly spaced transducer elements, or any other geometric or random arrangement of transducer elements. Elements of an array need not be contiguous and may be separated by non-transducing material or empty space.


An element of a transducer array may be the smallest discretely functional component of an array. For example, in the case of an array of piezoelectric transducer elements, each element may be a single piezoelectric crystal or a single machined section of a piezoelectric crystal. As another example, in an array made up of a plurality of micro-elements (e.g., micro-machined elements, micro-dome elements, or other micro-sized elements), a group of micro-elements may be electrically coupled so as to operate collectively as a single functional element. In such a case, a group of collectively-operating micro-elements may be referred to as a single “element.”


As used herein, the terms “transmit element” and “receive element” may carry their ordinary meanings as understood by those skilled in the art of ultrasound imaging technologies. The term “transmit element” may refer without limitation to an ultrasound transducer element which at least momentarily performs a transmit function in which an electrical signal is converted into an ultrasound signal. Similarly, the term “receive element” may refer without limitation to an ultrasound transducer element which at least momentarily performs a receive function in which an ultrasound signal impinging on the element is converted into an electrical signal.


In cases where imaging is performed by transmitting “ping-based” or “point-source transmission” ultrasound signals, the term “transmit element” may refer to a single element or to a plurality of elements operated together to form the desired waveform transmission. For example, a plurality of transducer elements may be activated simultaneously or with delays selected to produce a circular or spherical waveform in the region of interest. Such a plurality of transducers, when operated together to form such a waveform, may have a collective acoustic center which is the apparent point-source origin of the transmitted waveform. Phrased differently, one or more transmit elements may approximate a point-source transmitter if an unfocused spherical waveform produced by the one or more transmit elements appears to have originated from a point source.


Transmitted ultrasound signals may be focused in a particular direction, or may be unfocused, transmitting in all directions or a wide range of directions. Transmission of ultrasound into a medium may also be referred to herein as “insonifying.” An object or structure which reflects ultrasound waves may be referred to as a “reflector” or a “scatterer.”


As used herein, terms referring to a “position” or “location” of a transducer element refer to an acoustic center position exhibited by the element. In some cases, an acoustic center position of an element may be precisely coincident with a mechanical or geometric center of the element, group of elements, or group of micro-elements. However, in many cases an acoustic center position of an element may be different than a mechanical or geometric center of the element due to various factors such as manufacturing irregularities, damage, irregular element geometries, or other factors. Acoustic center positions of elements may be determined using various calibration techniques such as those described in US Patent Application Publication 2014/0043933, titled “Calibration of Multiple Aperture Ultrasound Probes,” U.S. Pat. No. 9,282,945, titled “Calibration of Ultrasound Probes,” or other methods.


As used herein, the term “aperture” may refer to a single transducer element or a group of transducer elements that are collectively managed as a common group by imaging control electronics. For example, in some embodiments an aperture may be a grouping of elements which may be physically separate and distinct from elements of an adjacent aperture. However, adjacent apertures need not necessarily be physically separate or distinct. Conversely, a single aperture may include elements of two or more physically separate or distinct transducer arrays or elements spaced from one another by any distance or different distances. In some cases, two or more elements need not be adjacent to one another to be included in a common aperture with one another. For example, distinct groups of transducer elements (e.g., a “left aperture”) may be constructed from a left array, plus the left half of a physically distinct center array, while a “right aperture” may be constructed from a right array, plus the right half of a physically distinct center array.


As used herein, the terms “receive aperture,” “insonifying aperture,” and/or “transmit aperture” are used herein to mean an individual element, a group of elements within an array, or even entire arrays, that perform the desired transmit or receive function as a group. In some embodiments, such transmit and receive apertures may be created as physically separate components with dedicated functionality. In other embodiments, any number of transmit and/or receive apertures may be dynamically defined electronically as needed. In other embodiments, a multiple aperture ultrasound imaging system may use a combination of dedicated-function and dynamic-function apertures. In some cases, elements may be assigned to different apertures during two or more ping cycles (as defined below).


As used herein, the term “ping cycle” refers to a cycle that begins with the transmission of a ping from a transmit aperture approximating a point source and ends when all available (or all desired) echoes of that transmitted ping have been received by receive transducer elements. In many cases, ping cycles may be distinct and separated by some time period. In other cases, ping cycles may overlap one another in time. That is, an N+1th ping cycle may begin (with transmission of a ping) before an Nth ping cycle is completed (i.e., before all echoes of the Nth ping are received).


In various embodiments, a single “image” or “image frame” may be produced from the echoes of one or more transmitted pings. Therefore, an “imaging cycle” or “image cycle” may refer to a cycle that begins with the transmission of a first ping that will contribute to an image and may end with the reception of echoes of a final ping contributing to the same image. In various embodiments, a single imaging cycle may include one, two, three, four, five, or more ping cycles.


As used herein, the term “total aperture” refers to the overall size of all imaging apertures in a probe. In other words, the term “total aperture” may refer to one or more dimensions defined by a maximum distance between the furthest-most transducer elements of any combination of transmit and/or receive elements used for a particular imaging cycle. Thus, the total aperture may be made up of any number of sub-apertures designated as send or receive apertures for a particular cycle. In the case of a single-aperture imaging arrangement, the total aperture, sub-aperture, transmit aperture, and receive aperture may all have the same dimensions. In the case of a multiple aperture imaging arrangement, the dimensions of the total aperture may include the sum of the dimensions of all send and receive apertures plus any space between apertures.


In some embodiments, two apertures may be located adjacent to one another on a continuous array. In other embodiments, two apertures may overlap one another on a continuous array, such that at least one element functions as part of two separate apertures. The location, function, number of elements and physical size of an aperture may be defined dynamically in any manner needed for a particular application.


Elements and arrays described herein may also be multi-function. That is, the designation of transducer elements or arrays as transmitters in one instance does not preclude their immediate re-designation as receivers in the next instance. Moreover, embodiments of control systems herein include the capabilities for making such designations electronically based on user inputs, pre-set scan or resolution criteria, or other automatically determined criteria.


As used herein, the “image-able field” of the imaging system may be any area or volume of an imaged object or substance that may practically be imaged by the imaging system. For a ping-based imaging system as described herein, the term “image-able field” may be synonymous with the term “insonified region.” The term “region of interest” may refer to a two-dimensional or three-dimensional region within the image-able field. The extents of an image-able field relative to a probe may be imposed by physical limits (e.g., based on signal-to-noise ratios or attenuation rates) or may be chosen logical limits (e.g., based on a desired region of interest).


As used herein, the term “pixel” refers to a region of two-dimensional space within an image-able field of the imaging system. The term “pixel” is not intended to be limited to a pixel of a display device, and may represent a region of a real-world-scale object that is either larger or smaller than a display pixel. A “pixel” may represent a region of the image-able field of any real-world size, and in some cases may represent a region smaller than any resolvable object of the imaging system. Pixels may be, but need not necessarily be, square or rectangular, and may have any shape allowing for contiguous two-dimensional representation of the image-able field. In some cases, data representing a pixel may not be displayed, but may still be processed as a unit and referred to as a “pixel.”


As used herein, the term “voxel” refers to a region of three-dimensional space within an image-able field of the imaging system. The term “voxel” is not intended to be limited to any particular portion of a two-dimensional or three-dimensional display device, and may represent a region of a real-world-scale object that is either larger or smaller than a display voxel. A “voxel” may represent a three-dimensional region of the image-able field of any real-world size, and in some cases may represent a region smaller than any resolvable object of the imaging system. Voxels may be, but need not necessarily be, square or rectangular prisms. Voxels may have any shape allowing for contiguous three-dimensional representation of the image-able field. In some cases, data representing a voxel may not be displayed, but may still be processed as a unit and referred to as a “voxel.”


As used herein, the terms “pixel location” and “voxel location” (or “position”) refer to a location within the image-able field that is identifiable by a coordinate system, which may be a Cartesian coordinate system or any other coordinate system. Unless otherwise specified, references to a location of a pixel or voxel may generally refer to a center-point (e.g., center-of-mass, circular center, etc.) of the pixel or voxel.


As used herein, a pixel may be described as “intersecting” a voxel. A two-dimensional pixel may be defined as intersecting a three-dimensional voxel using any desired convention. For example, for square pixels and cubic voxels, a pixel intersecting a voxel may be a square face of the voxel or any other square or rectangle passing through the voxel. If a coordinate system used for pixels is different than a coordinate system used for voxels, then one pixel may intersect multiple voxels.


As used herein, the term “echo” refers to an ultrasound wavefront or an analog or digital representation of an ultrasound wavefront that arrives at a receive transducer element. Because imaging methods described herein allow for an extremely wide range of probe configurations, some ultrasound signals arriving at a receive transducer element may originate at a transmit transducer element on an opposite side of an imaged object. Such wavefronts are also intended to be included within the definition of an “echo” even if such wavefronts may also be described as “transmitted” or “deflected.”


As used herein, the terms “reflector” and “scatterer” refer to a physical portion of a physical object being imaged. When struck by a wavefront, reflectors and scatterers will tend to re-radiate a wavefront in a direction generally dictated by physics. The terms are not intended to limit the relative geometry or positions of transmitters, scatterers, and reflectors.


As used herein, the verb terms “reflect” and “scatter” refer to the effect of a scatterer on a propagating ultrasound wavefront. In some cases, a wavefront that is only slightly deflected (e.g., forming a combined transmit element/scatterer/receive element angle approaching 180°) by a scatterer may still be described as having been “reflected” by that scatterer (or “reflector”).


As used herein, the term “sample” refers to a digital data element in a physical volatile or non-volatile storage medium. Unless context suggests otherwise, “samples” described herein generally refer to data elements representing a discrete portion of a received ultrasound waveform. A time-varying electrical signal produced by a transducer element vibrating in response to a received ultrasound wavefront may be quantified and digitally sampled at a sample rate in order to produce a series of digital values representing the received time-varying electrical signal. Those values may be referred to as “samples.” In some cases, a “sample” may include an interpolated value in between two digitally stored sample values.


If digital sampling is done at a known sample rate (usually, but not necessarily, a consistent sample rate), the position of each sample (e.g., as measured by a location in a memory device, or a position in a sequence of values) may be directly related to an arrival time of the wavefront segment responsible for each sample value.
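
As a minimal illustration of this implicit timing relationship, the following sketch converts between a sample's position in a stored record and the corresponding arrival time; the sample rate and transmit-time offset are assumed values.

```python
# Illustration of recovering an arrival time from a sample's position in a
# digitized echo record. The sample rate and transmit-time offset are
# assumed values for the sketch.

def sample_index_to_arrival_time(index, sample_rate_hz, transmit_time_s=0.0):
    """Arrival time of the wavefront segment responsible for a given sample,
    assuming a constant sample rate and a known transmit instant."""
    return transmit_time_s + index / sample_rate_hz

def arrival_time_to_sample_index(arrival_s, sample_rate_hz, transmit_time_s=0.0):
    """Nearest stored sample for a given round-trip arrival time."""
    return round((arrival_s - transmit_time_s) * sample_rate_hz)

fs = 40e6                                         # 40 MHz sampling, assumed
print(sample_index_to_arrival_time(2600, fs))     # 65 microseconds
print(arrival_time_to_sample_index(65e-6, fs))    # sample 2600
```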


As used herein, the term “beamform” refers to the process of determining a value for pixels or voxels based on a sample value (directly stored or interpolated), the known acoustic center position of a transmit element responsible for the sample value, and the known acoustic center position of a receive element responsible for the sample value. Beamforming is described in further detail elsewhere herein.


As used herein, the term “image” (as a noun) refers to a human-visible graphical representation of a physical object or a series of non-transitory digital values stored on a physical storage medium that may be interpreted by software and/or an image processor to produce such a graphical representation. As used herein, the term “image” does not necessarily imply any particular degree of quality or human-readability. An “image” may refer to a two-dimensional representation (e.g., a cross-section, in some cases) or a three-dimensional volumetric representation of an object. Therefore, a “volumetric image” may include a visible representation of a three-dimensional point cloud or digital data representing the three-dimensional point cloud. As used herein, the terms “image” and “imaging” (in verb form) refer to a process that results in an image.


Introduction to Point-Source Transmission Ultrasound Imaging


In various embodiments, point-source transmission ultrasound imaging, otherwise referred to as ping-based ultrasound imaging, provides several advantages over traditional scanline-based imaging. Point-source transmission differs in its spatial characteristics from a “phased array transmission” which focuses energy in a particular direction from the transducer element array along a directed scanline. A point-source pulse (also referred to herein as a “ping”) may be transmitted so as to generate either a two-dimensional circular wavefront or a three-dimensional spherical wavefront, thereby insonifying as wide an area as possible in the two-dimensional or three-dimensional region of interest. Echoes from scatterers in the region of interest may return to all of the elements of receive apertures (or all of those elements not blocked by obstacles preventing transmission of the echoes). Those received echo signals may be filtered, amplified, digitized and stored in short term or long term memory (depending on the needs or capabilities of a particular system).


Images may then be reconstructed from received echoes by determining positions of reflectors responsible for received echo samples. The position of each reflector responsible for a digital echo sample may be calculated based on the arrival time of the received echo sample (which may be inferred based on a sample position and a known sampling rate) relative to the known time at which the ping was transmitted, the acoustic position of the transmit element(s) responsible for the echo sample, and the acoustic position of the receive element responsible for the echo sample. This process of determining positions of reflectors is generally referred to herein as beamforming.


Beamforming may be performed by a software-based, firmware-based, or hardware-based dynamic beamforming technique, in which a beamformer's focus may be continuously changed to focus at a particular pixel position corresponding to a reflector position. Such a beamformer may be used to plot the position of echoes received from point-source pings. In some embodiments, such a dynamic beamformer may plot the locus of each echo signal based on a round-trip travel time of the signal from the transmitter to an individual receive transducer element.


In the two-dimensional imaging case, for a given echo sample produced by a transmit transducer element and a receive transducer element, the locus of possible positions of the target reflector responsible for the echo sample will be an ellipse mathematically defined by two foci. A first focus of the ellipse will be at the position of the transmit transducer element and the second focus will be at the position of the receive transducer element. Although several other possible reflector positions lie along the same ellipse, echoes of the same target reflector will also be received by other receive transducer elements. The slightly different positions of the receive transducer elements mean that each receive element will define a slightly different ellipse for the target reflector. Accumulating the results by summing the ellipses for multiple receive elements at slightly different positions will indicate an intersection of the ellipses for a reflector. As echo samples from more receive elements are combined with the first, the intersecting ellipses will converge towards a point at which the target reflector is located. Similarly, echoes of pings transmitted from different transmit element positions may also be combined to further refine reflector points. The target reflector position may be correlated with a pixel location representing the reflector. The combined sample values may be used to determine a display intensity for a display pixel at the pixel location. The echo amplitudes received by any number of transmit positions and receive elements may thereby be combined to form each pixel. In other embodiments, the computation can be organized differently to arrive at substantially the same result.
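
The elliptical-locus relationship may be sketched for a single pixel as follows. This is a simplified, nearest-sample illustration only; the speed of sound, sample rate, and element geometry are assumed values, and practical beamformers may add interpolation, apodization, and other refinements.

```python
# Sketch of 2D ping-based beamforming for one pixel: the round-trip path
# (transmitter -> pixel -> receiver) selects the echo sample whose ellipse
# passes through the pixel, and contributions from several receivers are
# summed. Speed of sound, sample rate, and geometry are assumed values.
import numpy as np

SPEED_OF_SOUND = 1540.0      # m/s, assumed tissue value
SAMPLE_RATE = 40e6           # Hz, assumed

def pixel_value(pixel_xy, tx_xy, receivers, echo_records):
    """Sum, over receive elements, the sample whose round-trip travel time
    from the transmit element through the pixel to that receiver matches
    the pixel position (nearest-sample lookup, no interpolation)."""
    px = np.asarray(pixel_xy, dtype=float)
    d_tx = np.linalg.norm(px - np.asarray(tx_xy, dtype=float))
    total = 0.0
    for rx_xy, record in zip(receivers, echo_records):
        d_rx = np.linalg.norm(px - np.asarray(rx_xy, dtype=float))
        t = (d_tx + d_rx) / SPEED_OF_SOUND          # round-trip travel time
        idx = int(round(t * SAMPLE_RATE))
        if 0 <= idx < len(record):
            total += record[idx]
    return total

# Example with synthetic echo records (positions in meters).
receivers = [(-0.02, 0.0), (0.0, 0.0), (0.02, 0.0)]
records = [np.random.randn(8000) for _ in receivers]
print(pixel_value((0.005, 0.04), (-0.01, 0.0), receivers, records))
```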


Various algorithms may be used for combining echo signals received by separate receive elements. For example, some embodiments may process echo-signals individually, plotting each echo signal at all possible locations along its ellipse, then proceeding to the next echo signal. Alternatively, each pixel location may be processed individually, identifying and processing all echo samples potentially contributing to that pixel location before proceeding to the next pixel location.


Image quality may be further improved by combining images formed by the beamformer from one or more subsequent transmitted pings, transmitted from the same or a different point-source (or multiple different point-sources). Still further improvements to image quality may be obtained by combining images formed by more than one receive aperture.


An important consideration is whether the summation of images from different pings, different transmit point-sources, or different receive apertures should be coherent summation (phase sensitive) or incoherent summation (summing the magnitudes of the signals without phase information).


The decision as to whether to use coherent or incoherent summation may be influenced by the lateral extent/size of the receive aperture(s) and/or the transmit aperture(s). In some embodiments, it may be convenient to confine the size of an aperture to conform to the assumption that the average speed of sound is substantially the same for every path from a scatterer to each element of the transmit or receive aperture. For narrow receive apertures this simplifying assumption is easily met. However, as the width of the receive aperture increases, an inflection point may be reached (referred to herein as the “maximum coherent aperture width” or “maximum coherence width”), beyond which paths traveled by echoes of a common reflector will necessarily pass through different types of tissue having intrinsically different speeds of sound when returning to the elements furthest apart from one another. When this difference results in receive wavefront phase shifts approaching or exceeding 180 degrees, additional receive elements extended beyond the maximum coherence width will actually degrade the image rather than improve it.


The same maximum coherent aperture size considerations may also apply to the size of transmit apertures, which may include a plurality of transducer elements. In the case of two-dimensional transducer arrays used in three-dimensional imaging (or 3D data collection), it may be useful to define a maximum coherent aperture size in two dimensions. Thus, in various embodiments a maximum coherent aperture may be defined as a group of transducer elements in a square, circle, polygon or other two-dimensional shape with a maximum distance between any two elements such that phase cancellation will be avoided when echo data received at the elements of the aperture are coherently combined.


Therefore, in order to realize the benefits (e.g., in terms of increased spatial resolution) of a wide probe with a total aperture width far greater than the maximum coherent aperture width, the full probe width may be physically or logically divided into multiple transmit and/or receive apertures, each of which may be limited to an effective width less than or equal to the maximum coherent aperture width, and thus small enough to avoid phase cancellation of received signals.


The maximum coherence width can be different for different patients (or different test objects), for different probe positions on the same patient, and for other variables such as ultrasound frequency. In some embodiments, a compromise width may be determined for a given probe and/or imaging system. In other embodiments, a multiple aperture ultrasound imaging control system may be configured with a dynamic algorithm to subdivide the available elements into groups that are small enough to avoid significant image-degrading phase cancellation. In various embodiments, a particular coherent aperture size may be determined automatically by a control system, or manually through user input via a user control such as a dial or slider.


In some embodiments, coherent (phase sensitive) summation may be used to combine echo data received by transducer elements located on a common receive aperture resulting from one or more pings. In some embodiments, incoherent summation may be used to combine echo data or image data received by separate receive apertures if such receive apertures are sized and positioned so as to form a combined total aperture that is greater than a maximum coherence width for a given imaging target.
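
The difference between coherent and incoherent summation may be illustrated with complex-valued image layers as in the sketch below; the layers themselves are synthetic placeholders.

```python
# Sketch of coherent versus incoherent combination of image layers.
# Coherent summation adds complex (phase-sensitive) values; incoherent
# summation adds magnitudes only. The layers here are synthetic placeholders.
import numpy as np

def combine_layers(layers, coherent=True):
    """Combine complex image layers coherently (preserving phase) or
    incoherently (summing magnitudes and discarding phase)."""
    stack = np.stack(layers)
    if coherent:
        return np.abs(stack.sum(axis=0))       # phase-sensitive
    return np.abs(stack).sum(axis=0)           # phase-insensitive

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
          for _ in range(4)]

# Layers from within one receive aperture might be combined coherently;
# layers from widely separated apertures might be combined incoherently.
within_aperture = combine_layers(layers, coherent=True)
across_apertures = combine_layers(layers, coherent=False)
print(within_aperture.shape, across_apertures.shape)
```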


Two-dimensional ping-based beamforming may implicitly assume that the wavefronts emitted from the point-source are physically circular in the region of interest. In actuality, the wavefront may also have some penetration in the dimension orthogonal to the scanning plane (i.e., some energy may essentially “leak” into the dimension perpendicular to the desired two-dimensional image plane, which may have the effect of reducing the effective imaging depth). Additionally, the “circular” wavefront may actually be limited to a semicircle or a fraction of a circle less than 180 degrees ahead of the front face of the transducer according to the unique off-axis properties of the transducing material used. Similarly, a three-dimensional “spherical” wavefront may have an actual shape of a hemisphere or less than a hemisphere within the medium to be imaged.


Ping-Based Imaging for 3D Ultrasound Imaging


The above description of point-source ultrasound imaging (also referred to herein as “ping-based” imaging) predominantly describes two-dimensional imaging in which ultrasound signals are focused into a narrow field approximating a plane within the region being imaged. Such two-dimensional focusing may be accomplished with lensing or other techniques. Ping-based imaging can also be extended to real-time three-dimensional imaging without adding significant complexity. Three-dimensional ping-based imaging can be performed using an ultrasound probe with transducer elements spaced from one another in two dimensions. Some example probe configurations are described elsewhere herein.


When a three-dimensional pulse is initiated from a point-source transmit transducer, the resulting semi-spherical wavefront travels into the insonified region containing a region of interest (ROI) where some of the ultrasound energy may be reflected (or deflected) by scatterers in the ROI. Some of the echoes from the scatterers may travel towards receive transducer elements of the probe, where the echoes may be detected, amplified, digitized, and stored in a short-term or long-term memory device. Each digitized sample value may represent a scatterer from the ROI. As in the 2D case, the magnitude of each received sample, along with its time of arrival and the exact positions of the transmit and receive transducers used, may be analyzed to define a locus of points identifying potential positions of the scatterer. In the 3D case, such a locus is an ellipsoid having as its foci the positions of the transmitter point source and receive transducer element. Each unique combination of transmit and receive transducer elements may define a separate view of the same reflector. Thus, by combining information from ellipsoids produced from multiple transmit-receive transducer element combinations, the actual location of each reflector may be more accurately represented.


For example, in some embodiments an image in a 3D array of voxels may be assembled in computer memory by beginning with an evaluation of a selected digital sample. The selected digitized sample value may be written into every voxel indicated by the corresponding ellipsoid described above. Proceeding to do the same with every other collected sample value, and then combining all resulting ellipsoids may produce a more refined image. Real scatterers would be indicated by the intersection of many ellipsoids whereas parts of the ellipsoids not reinforced by other ellipsoids would have low levels of signal and may be treated as noise (i.e., eliminated or reduced by filters, gain adjustments, or other image processing steps).
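

By way of non-limiting illustration, the sketch below (Python with NumPy) shows one way the sample-first ordering described above might be organized: a round-trip sample index is computed for every voxel, and each echo sample is then written into all voxels whose ellipsoid (constant round-trip time) corresponds to that sample. The grid dimensions, element positions, sampling rate, and the 1540 m/s speed of sound are illustrative assumptions, not parameters taken from this disclosure.

```python
# Minimal sketch of the sample-first ("scatter") ordering described above,
# using an assumed constant speed of sound and a placeholder echo record.
import numpy as np

c = 1540.0          # assumed speed of sound (m/s)
fs = 20e6           # assumed sampling rate (Hz)
tx = np.array([0.0, 0.0, 0.0])      # assumed transmit element position (m)
rx = np.array([0.01, 0.0, 0.0])     # assumed receive element position (m)

# Coarse voxel grid for illustration only (x, y, z in meters).
xs = np.linspace(-0.02, 0.02, 40)
ys = np.linspace(-0.02, 0.02, 40)
zs = np.linspace(0.005, 0.045, 40)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
volume = np.zeros(X.shape)

# Round-trip path length tx -> voxel -> rx for every voxel, as a sample index.
d_tx = np.sqrt((X - tx[0])**2 + (Y - tx[1])**2 + (Z - tx[2])**2)
d_rx = np.sqrt((X - rx[0])**2 + (Y - rx[1])**2 + (Z - rx[2])**2)
sample_of_voxel = np.round((d_tx + d_rx) / c * fs).astype(int)

echoes = np.random.randn(2048)  # stand-in for one ping's digitized echoes

# Scatter: each sample value is written into every voxel whose discretized
# ellipsoid (constant round-trip time) matches that sample's arrival time.
for n, value in enumerate(echoes):
    volume[sample_of_voxel == n] += value
```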


In other embodiments, the order of computation may be changed by beginning with a selected voxel in a final 3D image volume to be produced. For example, for a selected voxel, the closest stored sample or interpolated sample may be identified for each transmitter/receiver pair. All samples corresponding to the selected voxel may then be evaluated and summed (or averaged) to produce a final representation of the voxel. Closeness of a sample to a selected voxel may be determined by calculating the distance from the three-dimensional position of a transmitter (i.e., the transmitter used to produce the sample) to the selected voxel position plus the distance from the selected voxel position to the position of a receiver used to produce the sample. Such a total path length may be related to the time-indexed sample values by dividing the path length by the speed of sound through the imaged object. Using such a method, the samples corresponding to a calculated time may be associated with the selected voxel. Once identified, data corresponding to a common voxel may be combined according to a chosen combining algorithm.
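

A corresponding voxel-first ("gather") sketch is shown below, again in Python with assumed element positions, sampling rate, and speed of sound; the dictionary layout used to hold per-pair echo records is an assumption made for illustration. For each transmit/receive pair, the total path length through the selected voxel is converted to a (possibly fractional) sample index, the two nearest stored samples are interpolated, and the results are summed coherently.

```python
# Voxel-first ("gather") sketch of ping-based beamforming for one voxel,
# under assumed geometry, sampling rate, and speed of sound.
import numpy as np

c, fs = 1540.0, 20e6
voxel = np.array([0.0, 0.005, 0.03])             # one selected voxel (m)
transmitters = [np.array([0.0, 0.0, 0.0])]
receivers = [np.array([0.01, 0.0, 0.0]), np.array([-0.01, 0.0, 0.0])]
# echoes[(i_tx, i_rx)] holds the digitized record for that tx/rx pair (stand-ins)
echoes = {(0, 0): np.random.randn(2048), (0, 1): np.random.randn(2048)}

total = 0.0
for i_tx, tx in enumerate(transmitters):
    for i_rx, rx in enumerate(receivers):
        # Path length tx -> voxel -> rx, converted to a sample index.
        path = np.linalg.norm(voxel - tx) + np.linalg.norm(rx - voxel)
        idx = path / c * fs
        lo = int(np.floor(idx))
        frac = idx - lo
        rec = echoes[(i_tx, i_rx)]
        # Linear interpolation between the two nearest stored samples.
        sample = (1 - frac) * rec[lo] + frac * rec[lo + 1]
        total += sample          # coherent sum across tx/rx pairs
voxel_value = total
```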


Image Layer Combining


Techniques for determining the location for received echo samples are generally referred to herein as beamforming, while techniques for combining information obtained from multiple transmitter/receiver combinations or from multiple separate pings transmitted using the same transmitter/receiver combination may generally be referred to as image layer combining. In various embodiments, a frame may be made up of any number of combined image layers. Frames may be displayed sequentially at a desired frame-rate on a display to form a moving image or video. The above-described beamforming processes may beneficially also be used for evaluating pixel values in a 2D cross-sectional slice through a 3D volume using raw echo data. In various embodiments, such 2D slices may be taken at any arbitrary angle or along any curved path through the 3D volume. The same techniques may also be used to zoom in (i.e., increase the size of features) using raw echo data rather than enlarging processed pixels or voxels.


Images obtained from different unique combinations of one ping and one receive element and/or combinations of one ping and one receive aperture may be referred to herein as “sub-image layers.” Multiple sub-image layers may be combined coherently to improve overall image quality (e.g., by combining multiple ellipses or ellipsoids as described above). Additional image layer combining may be performed to further improve the quality of a final image. In the context of image layer combining, the term “image” may refer to a single two-dimensional pixel, a single voxel of a three-dimensional volume or a collection of any number of pixels or voxels.


Image layer combining may be described in terms of four image layer levels. These include base-level image layers, first-level image layers, second-level image layers and third-level image layers. As used herein, the phrase base-level image layer refers to an image obtained by beamforming echoes received at a single receive element from a single transmitted ping. As described above, the beamforming process defines an ellipse corresponding to each echo sample. Therefore, a base-level image may include a series of such ellipses representing all of the echoes of a single ping received by a single receive element. Such an image may not be particularly useful for diagnostic imaging purposes, but may be used for other purposes.


A first-level image layer may be formed from echoes received at a single receive aperture resulting from a single ping from a single transmit aperture (where a “transmit aperture” can be a composite point-source transmit element, a single-element transmitter, or a group of transmit elements). For a unique combination of a single ping and a single receive aperture, the echoes received by all the receive elements in the receive aperture may be summed coherently to obtain a first-level image layer. Alternatively, first-level images may be formed by combining the echoes of two or more pings received at elements of a common receive aperture.


Multiple first-level image layers resulting from echoes of multiple transmit pings (from the same or different transmit apertures) received at a single receive aperture can be summed together to produce a second-level image layer. Second-level image layers may be further processed to improve alignment or other image characteristics.


Third-level images may be obtained by combining second-level image layers formed with data from multiple receive apertures. In some embodiments, third-level images may be displayed as sequential time-domain frames to form a moving image.


In some embodiments, pixels or voxels of a first-level image layer may also be formed by summing in-phase and quadrature echo data, that is by summing each echo with an echo ¼ wavelength delayed for each receive-aperture element. In some cases, echo data may be sampled and stored as an in-phase data set and as a separate quadrature data set. In other cases, if the digital sampling rate is divisible by four, then a quadrature sample corresponding to an in-phase sample may be identified by selecting a sample at an appropriate number of samples prior to the in-phase sample. If the desired quadrature sample does not correspond to an existing whole sample, a quadrature sample may be obtained by interpolation. Combining in-phase and quadrature data for a single image (pixel, voxel or collection of pixels or voxels) may provide the advantage of increasing the resolution of the echo data without introducing blurring effects. Similarly, samples at values other than ¼ wavelength may be combined with in-phase samples in order to improve various imaging characteristics.
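

For illustration, the short sketch below shows one possible way of pairing an in-phase sample with a quadrature sample taken an appropriate number of samples earlier, under the assumption that the sampling rate is four times an assumed 3 MHz center frequency; the variable names and values are illustrative only.

```python
# Sketch of selecting a quadrature sample for each in-phase sample, assuming
# the sampling rate is an integer multiple of four times the center frequency.
import numpy as np

f0 = 3e6                      # assumed transmit center frequency (Hz)
fs = 4 * f0                   # sampling rate chosen so a quarter period is 1 sample
echo = np.random.randn(2048)  # stand-in for one element's digitized echoes

offset = int(round(fs / (4 * f0)))   # samples per quarter period (quarter wavelength)

def iq_pair(i):
    """Return (in-phase, quadrature) samples for sample index i."""
    return echo[i], echo[i - offset]   # quadrature taken 'offset' samples earlier

# Complex value combining I and Q for one sample:
i_val, q_val = iq_pair(100)
complex_sample = i_val + 1j * q_val
magnitude = abs(complex_sample)
```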


Combination, summation or averaging of various image layers may be accomplished either by coherent addition, incoherent addition, or a combination of the two. Coherent addition (incorporating both phase and magnitude information during image layer summation) tends to maximize lateral resolution, whereas incoherent addition (summing magnitudes only and omitting phase information) tends to average out speckle noise and minimize the effects of image layer alignment errors that may be caused by minor variations in the speed of sound through the imaged medium. Speckle noise is reduced through incoherent summing because each image layer will tend to develop its own independent speckle pattern, and summing the patterns incoherently has the effect of averaging out these speckle patterns. Alternatively, if the patterns are added coherently, they reinforce each other and only one strong speckle pattern results.
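

The difference between the two summation modes can be illustrated with a brief sketch such as the following, in which two hypothetical image layers with independent speckle are combined both coherently (complex sum) and incoherently (magnitude sum); the layer contents are random stand-ins, not imaging data.

```python
# Contrast of coherent vs. incoherent image-layer summation, assuming each
# layer is stored as complex (phase + magnitude) pixel data.
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical image layers with independent speckle (complex pixels).
layer_a = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
layer_b = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

# Coherent addition: sum complex values, keeping phase information.
coherent = np.abs(layer_a + layer_b)

# Incoherent addition: sum magnitudes only, discarding phase.
incoherent = np.abs(layer_a) + np.abs(layer_b)

# The incoherent sum varies less from pixel to pixel (speckle is averaged out),
# while the coherent sum preserves the phase relationships that sharpen lateral
# resolution when layer alignment is accurate.
print(np.std(coherent) / np.mean(coherent), np.std(incoherent) / np.mean(incoherent))
```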


In most embodiments, echoes received by elements of a single receive aperture are typically combined coherently. In some embodiments, the number of receive apertures and/or the size of each receive aperture may be changed in order to maximize some desired combination of image quality metrics such as lateral resolution, speed-of-sound variation tolerance, speckle noise reduction, etc. In some embodiments, such alternative element-to-aperture grouping arrangements may be selectable by a user. In other embodiments, such arrangements may be automatically selected or developed by an imaging system.


Variations in the speed of sound may be tolerated by incoherent addition as follows: Summing two pixels coherently with a speed-of-sound variation resulting in only half a wavelength's delay (e.g., approximately 0.25 mm for 3 MHz ultrasound) results in destructive phase cancellation, which causes significant image data loss; if the pixels are added incoherently, the same or even greater delay causes only an insignificant spatial distortion in the image layer and no loss of image data. The incoherent addition of such image layers may result in some smoothing of the final image (in some embodiments, such smoothing may be added intentionally to make the image more readable).


At all three image layer levels, coherent addition can lead to maximum lateral resolution of a multiple aperture system if the geometry of the probe elements is known to a desired degree of precision and the assumption of a constant speed of sound across all paths is valid. Likewise, at all image layer levels, incoherent addition leads to the best averaging out of speckle noise and tolerance of minor variations in speed of sound through the imaged medium.


In some embodiments, coherent addition can be used to combine image layers resulting from apertures for which phase cancellation is not likely to be a problem, and incoherent addition can then be used where phase cancellation would be more likely to present a problem, such as when combining images formed from echoes received at different receive apertures separated by a distance exceeding some threshold.


In some embodiments, all first-level images may be formed by using coherent addition of all sub-image layers obtained from elements of a common receive aperture, assuming the receive aperture has a width less than the maximum coherent aperture width. For second and third level image layers, many combinations of coherent and incoherent summation are possible. For example, in some embodiments, second-level image layers may be formed by coherently summing contributing first-level image layers, while third-level image layers may be formed by incoherent summing of the contributing second-level image layers.
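

One such combination is sketched below for illustration: base-level layers are summed coherently into first-level layers, first-level layers are summed coherently into second-level layers per receive aperture, and second-level layers are summed incoherently across apertures. The nested data layout and layer counts are assumptions chosen only to make the example self-contained.

```python
# Sketch of one possible combining scheme: coherent summation up through
# second-level layers, incoherent summation across receive apertures.
import numpy as np

rng = np.random.default_rng(1)
n_apertures, n_pings, n_elements = 3, 4, 16
shape = (64, 64)

# sub_layers[aperture][ping][element] -> complex base-level image layer (stand-ins)
sub_layers = [[[rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
                for _ in range(n_elements)]
               for _ in range(n_pings)]
              for _ in range(n_apertures)]

third_level = np.zeros(shape)
for aperture in sub_layers:
    second_level = np.zeros(shape, dtype=complex)
    for ping in aperture:
        # First-level layer: coherent sum over elements of one receive aperture.
        first_level = np.sum(ping, axis=0)
        # Second-level layer: coherent sum over pings at the same aperture.
        second_level += first_level
    # Third-level layer: incoherent (magnitude-only) sum across apertures.
    third_level += np.abs(second_level)
```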


Time-domain frames may be formed from image layers at any level, depending on the desired trade-off between processing time and image quality. Higher-level images will tend to be of higher quality, but may also require more processing time. Thus, if it is desired to provide real-time imaging, the level of image layer combination processing may be limited in order to display images without significant "lag" being visible to the operator. The details of such a trade-off may depend on the particular processing hardware in use as well as other factors.


2D Imaging while Collecting 3D Data


In some embodiments, a form of 2D imaging may be performed using a probe and imaging system configured for 3D imaging by simply beamforming and displaying only a 2D slice of data from the received three-dimensional echo data. For example, such techniques may be used in order to reduce a beamform calculation and simplify display for real-time imaging using an imaging device with limited processing capability, while still retaining the full 3D echo data.


For example, a two-dimensional plane may be selected (automatically or by a user) from the voxels making up a three-dimensional volumetric representation of the imaged region, and voxels intersecting the selected plane may be identified. An image window may be defined by automatically or manually selecting a portion of the selected plane, and a two-dimensional image of the selected image window may then be formed by beamforming only those echo samples corresponding to the voxels intersecting the selected plane and lying within the selected image window. The selected two-dimensional image window may then be displayed in real-time while three-dimensional data of the entire insonified volume is simultaneously collected. In some cases, two separate image windows in the same or different image planes may be defined and imaged simultaneously from the same set of real-time three-dimensional echo data.
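

As a non-limiting illustration, the sketch below generates the three-dimensional positions of the pixels lying in an arbitrarily chosen image window within a selected plane; only those positions would then be passed to a per-voxel beamformer such as the one sketched earlier. The plane origin, basis vectors, and window size are assumed values.

```python
# Sketch of selecting an arbitrary 2D image window within the insonified volume
# by generating the 3D positions of the pixels on that plane.
import numpy as np

origin = np.array([0.0, 0.0, 0.02])          # a point on the selected plane (m)
u = np.array([1.0, 0.0, 0.0])                # first in-plane unit direction
v = np.array([0.0, 0.70710678, 0.70710678])  # second in-plane unit direction

# Image window: 2 cm x 2 cm, 200 x 200 pixels, centered on 'origin'.
s = np.linspace(-0.01, 0.01, 200)
t = np.linspace(-0.01, 0.01, 200)
S, T = np.meshgrid(s, t, indexing="ij")
pixel_positions = origin + S[..., None] * u + T[..., None] * v  # shape (200, 200, 3)

# Only these pixel positions (rather than every voxel in the volume) would be
# beamformed for real-time display, while the full 3D echo data are retained.
```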


The collected full 3D echo data may be beamformed and reviewed at a later time using a device with greater processing power. In some embodiments, the 2D slice to be beamformed and displayed may be automatically selected by an imaging device. Alternatively, the 2D slice to be beamformed and displayed may be selected or adjusted by an operator of the device.


Data Capture & Offline Analysis


In various embodiments, raw un-beamformed echo data resulting from a ping transmitted from point-source transmit transducers and received by one or more arrays of receive transducer elements may be captured and stored in a raw data memory device for subsequent retrieval and analysis. Alternately, captured echo data may be digitally transmitted over a network for remote processing, beamforming, and/or viewing. In addition to echo data, additional data may also be stored and/or transmitted over a network and retrieved for subsequent and/or remote image generation and analysis. Such additional data may include calibration data describing the positions of the transmitting and receiving transducer elements, and transmit data describing the identity (or position) of transmitting transducers associated with specific echo data.


After retrieving such data, a clinician may use the data to reconstruct imaging sessions in a variety of ways while making adjustments that may not have been made during a live imaging session. For example, images of a series of 2D slices through a 3D volume may be generated and shown in succession in order to simulate a 2D transducer passing across a surface of the region of interest. Examples of these and other methods are described in Applicant's US Patent Application Publication 2014/0058266, titled, “Ultrasound Imaging System Memory Architecture” and PCT Patent Application Publication WO2016/028787, titled “Network-Based Ultrasound Imaging System.”


Some embodiments of a probe configured for imaging an entire patient's body or a substantial portion of a patient's body may comprise an array of designated point-source transmitters and designated receive elements sized and arranged to cover a substantial portion of the desired region of a patient's body. For example, a probe may be sized to cover substantially half of a patient's chest or more. Such a probe may have a maximum dimension of about 8 cm to about 10 cm.


Alternatively, a much smaller probe capable of insonifying a conically-shaped volume of, for example, ±30 degrees, can be placed on a patient's body such that an organ of interest may be included in the cone. Such a probe may be placed in more than one location to cover a larger volume of interest.


Designated Point Source Transmitters


As described above, a point-source transmitter may be approximated using a single small transducer element of a transducer array. When performing 2D ping imaging using a 1D array (an array of elements with parallel longitudinal axes, typically including a lens to focus the signal into a single imaging plane), a single element may be able to produce a ping with sufficient energy in the imaging plane to achieve imaging at a reasonable depth.


Additionally, in various embodiments, an “apparent point-source transmitter” transducer may be configured to produce a waveform that both approximates an actual point-source and has sufficient energy to produce high quality images at the desired depth. In some cases, such apparent point-source transmitters may be configured such that ultrasound power output may be limited only by safety considerations within the imaged medium.


As used herein, the phrase “point-source” refers to a point in two-dimensional (2D) space or three-dimensional (3D) space that represents a center point of a transmitted 2D or 3D ultrasound waveform, respectively. In some embodiments, such a point is ideally an infinitely small point corresponding to a produced wavefront with a consistent semi-spherical shape. In embodiments in which such a waveform is produced by a single small element, such a point may lie on the surface of the transducer element. As used herein, the terms “semi-spherical pulse” and “semi-spherical wavefront” may refer to any ultrasound wavefront with a spherical-section shape, including wavefronts with approximately spherical-section shapes greater than or less-than an ideal semi-sphere. Similarly, the terms “semi-circular pulse” and “semi-circular wavefront” may refer to any ultrasound wavefront which appears in an imaging plane to have a circular-section shape, including wavefronts with approximately circular-section shapes greater than or less-than an ideal semi-circle.


In some cases, multiple (e.g., two, three, four or more) small transducer elements from a common transmit/receive array may be excited simultaneously to produce a ping with more energy than may be produced by a single element.


In some embodiments, a designated point source transmitter may comprise a large transducer shaped and configured to produce a relatively high-power waveform that “appears” to have originated from a point-source even if the location of the apparent point source does not correspond to a physical structure responsible for producing the wavefront—in other words, an apparent point-source. When performing beamforming calculations to determine the location of reflectors based on the timing of received echoes, the location of the apparent point-source may be used as the origin of the transmitted ping wavefront. In various embodiments, an approximate location of an apparent point source may be estimated based on a physical geometry or other characteristic of a designated point source transmitter.


In some embodiments, particularly suitable shapes for designated point source transmitters may include individual elements in the shape of concave and convex spherical caps sized to deliver sufficient ultrasound power to perform real-time three-dimensional ping-based imaging. Convex spherical caps may generally be referred to herein as “dome-shaped,” while concave spherical caps may be referred to as “bowl-shaped.” Some examples of imaging probes incorporating examples of such spherical cap transducer elements (otherwise referred to as “apparent point source” transducer elements) are shown and described in U.S. Patent Application Publication No. US 2015/0080727.


Examples of Piezoelectric Materials and Manufacturing


In some cases, a designated point source transmitter may include a single structure or one or more element sub-structures arranged in a planar, convex, or concave shape. In some embodiments, each designated transmitter element or receive transducer element may be constructed from a single contiguous piece of a piezoelectric material. Such elements may be referred to herein as "bulk" elements or as being made from "bulk piezoelectric" materials. In other embodiments, transmitter and/or receiver elements may be composed of a plurality of sub-element structures, such as cut piezoelectric materials, micro-element structures (as described further below), or other structures that may be operated collectively to form a complete element.


In some embodiments, transmitter elements or receiver elements may be in the form of a thin shell of a piezoelectric material in a planar shape or in the shape of a truncated spherical cap. Such elements may be made of any material exhibiting piezoelectric properties. Many naturally occurring and synthetic materials are known to exhibit piezoelectric properties that may be of a character suitable for use in ultrasound imaging applications. In the case of ping-based multiple aperture ultrasound imaging, ultrasound ping signals may be transmitted at frequencies commonly used in diagnostic medical ultrasound, e.g., in the range of about 1 MHz to about 20 MHz or more. Thus, apparent point-source transducers with fundamental frequencies within this range may be suitable for use in ping-based multiple aperture imaging.


Naturally-occurring piezoelectric materials include quartz, topaz and tourmaline, while man-made piezoelectric materials include lead zirconate titanate (PZT), barium titanate, lead metaniobate, and polyvinylidene difluoride (PVF2, which is not naturally piezoelectric but may be made so by heating in the presence of a strong electrical field). Some man-made piezoelectric ceramic materials may be combined with non-piezoelectric polymer materials to create piezo-composites.


In the case of bulk elements made from piezoelectric materials, the thickness of a designated transmitter element, whether planar, bowl-shaped, or dome-shaped, may be directly related to the fundamental frequency of the transducer. In some cases (e.g., for some piezoelectric ceramic materials), the thickness of a transducer shell may be equal to about half a wavelength of its corresponding fundamental frequency. However, depending on the materials and/or structures used, the shell thickness may be differently related to a transducer element's fundamental frequency. Manufacturing processes may also vary depending on the piezoelectric material used and other factors.
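

As a rough, hedged illustration of that half-wavelength rule of thumb, the following sketch estimates a shell thickness from an assumed longitudinal sound speed in the ceramic; the 4000 m/s figure is an assumption and will differ between materials.

```python
# Back-of-the-envelope sketch of the half-wavelength thickness rule mentioned
# above; the sound speed in the piezoelectric material is an assumed value.
def half_wavelength_thickness(fundamental_hz, c_material_m_s=4000.0):
    """Approximate shell thickness (m) for a bulk piezoelectric element."""
    wavelength = c_material_m_s / fundamental_hz
    return wavelength / 2.0

# Example: a 3 MHz element with an assumed 4000 m/s sound speed in the material
# would be roughly 0.67 mm thick under this rule of thumb.
print(half_wavelength_thickness(3e6))   # ~6.7e-4 m
```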


For example, natural or man-made piezoelectric material may be machined using traditional techniques in order to form the desired shape directly from a block of material. Such machining may be performed using mechanical cutters, water jets or any other available machining technique. Alternatively, a block or sheet of piezoelectric material may be machined into a plurality of elements attached to a flexible substrate which may then be formed into the desired shape. For example, a plurality of concentric ring cuts 42 and radial cuts 44 may be made in a sheet of piezoelectric material, which may then be formed over a backing material with the desired shape (e.g., a spherical-cap). In such embodiments, the individual sub-element sections that make up the element may be electrically connected so as to transmit simultaneously without phasing.


In some embodiments, a desired shape may be molded (e.g., by injection molding, die casting, or other molding process) from a piezo-composite material. Examples of molding processes that may be adapted to forming spherical-cap elements are described in U.S. Pat. Nos. 5,340,510 and 5,625,149.


Ultrasound transducers may also be produced in desired shapes using additive manufacturing techniques (commonly known as 3D printing techniques). For example, US Patent Application Publication 2013/0076207 and US Patent Application Publication 2013/0088122 describe systems and methods for forming transducers in the shape of cylindrical posts. Similar techniques may also be adapted to form transducers with spherical-cap or other shapes. Additionally, other manufacturing techniques such as laser sintering, stereo lithography, chemical vapor deposition or any other suitable techniques may be used to produce transducers in the shapes and sizes described herein.


Capacitive Micromachined Ultrasound Transducer (CMUT) formation techniques may also be used to form transducers of desired shapes onto a pre-shaped substrate. WO 2012/112540 shows and describes some examples of structures and techniques that may be adapted to forming spherical-cap shaped transducers. Alternately, a transducer element may be made by forming an array of CMUT transducers on a substrate pre-formed into a desired shape (e.g., a concave or convex spherical cap as described above). In such embodiments, the CMUT elements may be electrically connected so as to transmit simultaneously without phasing.


In some embodiments, transducer elements may be made up of a plurality of micro-elements which may be made using lithographic techniques, thin-film deposition techniques, etching techniques, additive manufacturing techniques, surface micromachining, bulk micromachining, and/or other methods. For example, U.S. Pat. No. 6,222,304 to Bernstein (which is incorporated herein by reference) describes processes for making micro-shell transducer elements which include a substrate, an electro-active medium mounted on the substrate and including an arched section spaced from the substrate defining a chamber between the substrate and arched section, and a pair of electrodes mounted on the medium for either applying an electric field across the medium for flexing the arched section or sensing the electric field generated by a flexure of the medium. Other techniques may also be used to produce micro-elements of different types. For example, in some cases, the flexible “arched section” may be replaced by a flexible planar section or a flexible concave section that may be flexed upon application of an appropriate electrical signal.


In some embodiments, in order to build transducer elements into a probe, the “front” surface of each element (e.g., the inside surface of a concave element, the outside surface of a convex element or the outward-facing surface of a planar element) may be plated with an electrically conductive layer (such as a metal) so as to allow electrical connection to the element. In some embodiments, the entire inner surface and the entire outer surface of each element may be plated, thereby allowing the entire piezoelectric shell to be activated by applying an electrical signal across the two plated surfaces. In other embodiments, less than an entire shell may be activated by plating less than an entire inner and/or outer surface of the shell. In this way, a similar range of element sizes may be made functional from a single physical structure.


Using available materials and methods, transmitter and receiver elements may be made in a range of shapes, such as having a top surface that is planar, convex, concave, or a combination of these shapes. Transmitter and/or receiver elements may also have various plan-view shapes such as circular, rectangular, pentagonal, hexagonal, heptagonal, octagonal, other polygonal, or others. Elements can be electrically and logically connected to a controller so as to be dedicated transmitters, dedicated receivers, or switchable between transmit and receive functions. Elements can be arranged into various arrays, such as regular arrays, irregular arrays, sparse arrays, or arrays that may be controlled so as to be operated as regular, irregular, or sparse arrays.


Ultrasound Imaging Arrays Constructed from Micro-Elements


Ultrasound probes made up of ultrasound transmitters and receivers may be made by various techniques suitable for producing arrays of micro-elements. As used herein, the term "micro-element" refers to very small transducer elements with sizes (e.g., diameters) on the order of less than about 1,000 microns, in some cases about 10 microns to about 500 microns, in some cases between about 50 microns and about 200 microns, in some cases between about 50 and about 150 microns, in some cases between about 10 microns and about 25 microns, in some cases between about 25 microns and about 50 microns, and in some cases between about 50 microns and about 100 microns. In some cases, a micro-element may be defined as any element that has a diameter less than half a wavelength, in the imaged material, of the ultrasound frequency at which the micro-element will be used.


Typically, micro-elements may be too small to be mechanically machined from traditional bulk piezoelectric materials, and may therefore be made using lithographic techniques, thin-film deposition techniques, etching techniques, additive manufacturing techniques, surface micromachining, bulk micromachining, and/or other methods. For example, U.S. Pat. No. 6,222,304 to Bernstein (which is incorporated herein by reference) describes processes for making micro-shell transducer elements which include a substrate, an electro-active medium mounted on the substrate and including an arched section spaced from the substrate defining a chamber between the substrate and arched section, and a pair of electrodes mounted on the medium for either applying an electric field across the medium for flexing the arched section (i.e., in order to transmit an ultrasound signal) or sensing the electric field generated by a flexure of the medium (i.e., in order to receive an ultrasound signal impinging on the micro-element). Other techniques may also be used to produce micro-elements of different types. For example, in some cases, the flexible “arched section” may be replaced by a flexible planar section or a flexible concave section that may be flexed upon application of an appropriate electrical signal and/or may be sensitive to received impulses.


In some embodiments, it may be desirable to create arrays of micro-elements capable of conforming to a desired shape. In such embodiments, a pattern of micro-elements may be formed on a semiconductor substrate using one or more of the above techniques, a back-surface (i.e., opposite the surface on which micro-elements lie) of the substrate may be thinned to make the substrate flexible to a desired degree, and the substrate may then be conformed to a new shape and secured to a supporting material. In various embodiments, such thinning and re-shaping may be performed over an entire substrate surface or in localized regions.


There are generally four methods for wafer thinning common in the art of semiconductor manufacturing: mechanical grinding, chemical mechanical polishing (CMP), wet etching, and atmospheric downstream plasma (ADP) dry chemical etching (DCE).


Using such techniques (or combinations thereof), one or more sections of an array of micro-elements may be conformed so as to form entire arrays or array sections in desired shapes. For example, arrays may be shaped so as to conform to a portion of a human anatomy, such as a breast, a joint (e.g., a knee, elbow, wrist, ankle, etc.), a skull, or other anatomical structure. In other examples, arrays may be shaped so as to conform to mechanical, structural, or industrial parts. In other examples, array sections may be shaped to form three-dimensional element groups arranged to exhibit desired transmit patterns. For example, such techniques may be used to form transducer element groups configured to transmit apparent point-source waveforms.


In some embodiments, an array of micro-elements may be conformed so as to form dome-shaped sections or dish-shaped sections, such as the spherical section elements described herein. For example, convex or concave dome-shaped transmit elements optimized for near-field imaging may be formed from a plurality of micro-elements on a thinned section of substrate and conformed to a desired dome shape.


In some embodiments, a dome-shape may be optimized for near-field imaging based on transmit angles of micro-elements that will make up the dome. Each micro-element may have a transmit angle relative to a centerline of a transmitted wavefront. The transmit angle of a micro-element defines the angle at which the energy of a transmitted wavefront drops below some threshold value.


In some embodiments, a convex dome shape optimized for near-field imaging may be defined by a spherical section with a cut elevation selected such that wavefronts transmitted from micro-elements at a low region of the dome (i.e., a region of the spherical section adjacent an attachment point with the surrounding micro-element array) will not tend to impinge directly on adjacent micro-elements. The micro-elements of such a dome-shaped group may be electrically connected such that they may be operated as a single element. As in various other examples herein, the spherical center point of such an element may be used as the acoustic center point for the purpose of beamforming calculations for pings transmitted from such a spherical-section shaped transmitter element.


In some embodiments, a concave dish shape optimized for far-field imaging may be defined by a spherical section with a cut elevation selected such that wavefronts transmitted from micro-elements within the dish will not tend to impinge directly on other micro-elements within the dish-shaped element. The micro-elements of such a dish-shaped group may be electrically connected such that they may be operated together to collectively form a single transmitter and/or receiver element. As in various other examples, the spherical center point of such an element group may be used as the acoustic center point for the purpose of beamforming calculations for pings transmitted from such a spherical-section shaped transmitter element or receiver element.


In some embodiments, an overall shape of a substrate supporting an array of micro-elements may be conformed into a desired shape. For example, in some embodiments a substrate supporting an array of micro-elements may be formed into complex overall shapes. In some cases, an array of micro-elements may include one or more surfaces with inflected sections, such as to conform to an anatomical or other structure.


In various embodiments, micro-elements may be grouped to form shaped transducer elements such as those described elsewhere herein. Groups of micro-elements may be treated collectively as a single element in the various processes described herein, and may be referred to herein as “element groups” or “micro-element groups.” For example, as shown in FIG. 1 and FIG. 2, a plurality of adjacent micro-elements may be logically and electrically treated as a single unit so as to form transducer elements of a desired size and shape, and in a desired location. In some embodiments, spherical cap or dome-shaped elements may be formed from a plurality of micro-elements.


One advantage of forming transducer elements from a group of such micro-elements is that the position of each micro-element may be known very precisely based on the precision of manufacturing techniques used to produce the micro-elements. Therefore, the position of each micro-element-group may also be known very precisely.


In some embodiments, a group of micro-elements forming a single transducer element group may collectively share a single electrical ground conductor and a single electrical signal conductor. In some embodiments, each of the ground conductor and the signal conductor for each transducer element group may form a differential pair independent of signal and ground conductors for other elements of the array. In other words, micro-elements in a group of micro-elements forming an element group may share a single ground conductor that is not shared with ground conductors of micro-elements belonging to other element groups between the probe and a connector joining the probe to control electronics. Similarly, micro-elements of an element group may share a single signal conductor that is not shared with signal conductors of micro-elements of other element groups between the probe and a connector joining the probe to control electronics. In some embodiments, some micro-elements (e.g., a sub-set of micro-elements forming an element group) may be joined to signal and/or ground conductors of other micro-elements via a switch configured to allow the sub-set of micro-elements to be switchably joined to signal and/or ground conductors of other micro-elements.


Alternately, some or all micro-element groups may share a common ground conductor and/or a common signal conductor. For example, in some embodiments micro-element groups that form elements that are collectively part of a single aperture may share a common ground conductor and/or a common signal conductor. In other embodiments, one or more signal and/or ground conductors may be shared in a multiplexed arrangement, such as time-division-multiplexed communications, or other multiplexing method.


Sparse Array Probes for Real Time 3D Imaging


Ping-based multiple aperture ultrasound imaging can provide very powerful real-time three-dimensional imaging capabilities as described above. The benefits of ping-based multiple aperture ultrasound imaging may be achieved by using transducer probes with overall dimensions much larger than traditional ultrasound probes. For example, ping-based multiple aperture ultrasound imaging may be beneficially used with probes having an active imaging surface in excess of 100 cm2. Traditionally, ultrasound elements in a probe are spaced as close together as possible, typically significantly less than (and generally no more than) half a wavelength of the ultrasound frequency being used.


However, using traditional element-to-element spacing in such a large probe would require a cable far too thick to be usable. Although some techniques may be used to reduce the number of individual conductors required in a cable, a better solution is to increase the allowed spacing between elements, thereby reducing the total number of elements in an array. Use of sparse arrays with traditional scanline-based imaging methods suffers from substantial complications, artifacts, and low resolution and is therefore not generally practical. Based on research into the use of sparse arrays with scanline-based phased array techniques, one would expect the use of sparse arrays with ping-based multiple aperture ultrasound imaging techniques to suffer similar difficulties, but that is unexpectedly not the case. In fact, sparse arrays can be used quite effectively with ping-based multiple aperture ultrasound imaging techniques as described herein.


In some embodiments, sparse arrays of transducer elements may be beneficial in providing an ultrasound probe with a wide total aperture while containing a manageable number of transducer elements.


In other fields, a “sparse array” is generally defined as an array in which the majority of array positions have a zero or null value. In some cases, a similar definition may be applied to an array of ultrasound transducer elements. In the context of an array of ultrasound transducer elements, a sparse array may be defined as an array of potential element positions in which a majority of the element positions contain no active elements. For example, the inactive element positions may contain no transducers, or may contain transducers that are not active at a specified time, if ever. For example, inactive element positions may comprise transducer elements that are temporarily or permanently electrically disconnected from imaging control electronics. In other examples, inactive element positions may comprise empty spaces (or spaces filled with non-transducing material) of sizes equivalent to a transducer element.


In some cases, an ultrasound transducer array in which less than a majority of element positions contain no active elements may also be considered “sparse” if average spacing between adjacent elements exceeds a threshold distance. For example, in some cases, an ultrasound transducer array may be considered “sparse” if all (or nearly all) adjacent elements of the array are spaced from one another by a distance of at least half a wavelength of the ultrasound transmitted and/or received by the elements. In still other cases, an ultrasound transducer array may be considered “sparse” if at least a majority of elements of the array are spaced from adjacent elements by a distance of at least a threshold distance. If an array is configured to operate at more than one ultrasound frequency, then a threshold distance may be half of the maximum wavelength at which any part of the array is configured to operate.


In various embodiments, ultrasound probes comprising a sparse array of ultrasound transmitters and receivers may be made using one or more of the various micro-element or sub-element configurations described herein. For example, in some embodiments, a sparse array may be formed from a continuous array of micro-elements by electrically assigning micro-elements to element-groups where the element groups are located in a sparse arrangement. Examples of such embodiments are described below with reference to FIG. 1. In some such embodiments, micro-elements that are not assigned to an element-group may simply be electrically inactive.


In some embodiments, instead of a continuous array of micro-elements (as in FIG. 1), a sparse-array transducer probe may comprise a plurality of groups of micro-elements formed in desired locations on a substrate without necessarily forming additional micro-elements between micro-element groups. FIG. 2 illustrates an example of such a sparse array 480 made up of separated micro-element groups 481-486.


In some embodiments, ultrasound probes comprising a sparse array of ultrasound transmitters and receivers may be made by forming individual transducer elements from bulk piezoelectric material, mechanically picking up and placing each element in a precise location on a substrate, securing each element to the substrate, and making electrical connections to each element. Bulk piezoelectric elements may be made in any shape or size as described herein.


In some embodiments, a sparse array probe may be effectively used with the 3D multiple aperture imaging techniques described herein, and in some cases may be preferred to a continuous array densely populated with transducer elements, particularly when the spacing between receivers and transmitters avoids certain patterns. In certain circumstances, using a probe having a sparse two-dimensional or three-dimensional array of regularly-spaced transducer elements with the ping-based multiple aperture imaging beamforming techniques described herein may result in the production of self-reinforcing artifacts that may undesirably degrade image quality. Such artifacts may be the result of phase cancellation causing certain returning echo signals to cancel one another out before reaching some receive elements, thereby producing distortions in the form of extremely dark and/or extremely bright bands in the resulting image.


In some embodiments, such artifacts may be avoided or mitigated by positioning transducer elements at inconsistent distances from one another in a sparse array. In other words, phase cancellation artifacts may be avoided or mitigated by positioning transmit and receive elements such that no two transducer elements are an equal distance from any single third element. In some cases, a small amount of element position equivalence may be acceptable. In examples of such embodiments, a probe may be constructed such that no more than "N" receive elements receiving the same frequency of ultrasound signal are located an equal distance from a single transmitter, where "N" is an integer value between 1 and 1,000 or more, more commonly between about 1 and 100, and in some specific examples N may be 1, 2, 3, 4, 5, 10, 100 or more.
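

A layout of this kind can be checked computationally. The sketch below counts, for each transmitter in a hypothetical element layout, how many receivers lie at approximately the same distance (within an assumed tolerance) and reports whether the largest such count stays at or below N; the positions, tolerance, and N used here are illustrative assumptions.

```python
# Design check for the equidistance constraint described above.
import numpy as np
from collections import Counter

def max_equidistant_receivers(tx_positions, rx_positions, tolerance=1e-4):
    """Return the largest number of receivers equidistant (within tolerance, m)
    from any single transmitter."""
    worst = 0
    for tx in tx_positions:
        dists = np.linalg.norm(rx_positions - tx, axis=1)
        # Bucket distances by tolerance so near-equal distances count as equal.
        buckets = Counter(np.round(dists / tolerance).astype(int))
        worst = max(worst, max(buckets.values()))
    return worst

rng = np.random.default_rng(2)
tx_positions = rng.uniform(0, 0.08, size=(8, 2))    # hypothetical 8 transmitters (m)
rx_positions = rng.uniform(0, 0.08, size=(64, 2))   # hypothetical 64 receivers (m)
N = 2                                               # assumed design limit
ok = max_equidistant_receivers(tx_positions, rx_positions) <= N
print("layout satisfies N =", N, ":", ok)
```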


In some embodiments, transmit elements may be regularly-spaced relative to one another while spacing receive elements at irregular distances relative to the transmit elements. In other embodiments, both transmit elements and receive elements may be irregularly positioned in a sparse array.


In some embodiments, a two-dimensional sparse array of transducer elements positioned to mitigate phase cancellation artifacts may include elements spaced from one another at “random” distances. FIG. 3 and FIG. 4 illustrate examples of two-dimensional sparse array element positions in which a distance between any two adjacent elements is not exactly equal to any other distance between two adjacent elements. In the example of FIG. 3 and FIG. 4, the distances between adjacent elements are effectively random in the sense that there is no mathematical pattern relating the distances.



FIG. 3 illustrates a two-dimensional array 401 made up of elements 410 unevenly spaced from one another in two dimensions according to a pseudo-random pattern. In various embodiments, the array 401 illustrated in FIG. 3 may have an overall length dimension 413 and/or an overall width dimension 414 of about 3 cm to about 10 cm or more. Each element 410 of the array 401 may have a square shape, a circular shape, a polygonal shape, or any other regular or irregular shape. For example, FIG. 4 illustrates a sparse array 402 embodiment in which designated transmit elements are shown as circular elements 411 while designated receive elements are shown as square elements 412. In other embodiments, all elements may be circular, square, or otherwise-shaped elements even if some are designated transmitters and others are designated receivers.


In addition to the generally rectangular arrays shown in FIG. 3 and FIG. 4, sparse arrays 401, 402 of ultrasound elements with uneven spacing may be arranged in other shapes, such as a generally oval shaped array 403 as shown in FIG. 5, or a generally circular shaped array 404 as shown in FIG. 6.


Sparse arrays of ultrasound elements with uneven spacing may also be arranged in a generally planar configuration, or a generally concave or convex configuration 405 as shown in FIG. 7. The arrangement of transducer elements shown in FIG. 7 may be made convex by a supporting substrate layer positioned on the concave side of the pattern of elements. Similarly, supporting the elements with a substrate on the convex side of the pattern will provide a concave array. In other embodiments, sparse arrays may be arranged for use in intra-venous ultrasound probes or other specialized probes.


In some embodiments, spacing distances between elements may be non-repeating random or pseudo-random distances obtained with use of a random or pseudo-random number generation algorithm. In other examples, spacing distances between elements may be irregular values which may be based on non-repeating values from an integer sequence such as the Fibonacci sequence or any other non-repeating numeric sequence. In some cases, an algorithm may be applied to values from a numeric sequence in order to maintain element spacing distances within a desired range. For example, in some embodiments distances between adjacent elements may be constrained to a range such as 1 mm to 10 mm (or 1 mm to 20 mm or more).
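

For illustration, the sketch below derives a set of non-repeating spacing distances from the Fibonacci sequence and rescales them into an assumed 1 mm to 10 mm range; the particular mapping from sequence values to distances is an illustrative choice rather than a prescribed method.

```python
# Sketch of deriving irregular, non-repeating element spacings from the
# Fibonacci sequence, constrained to an assumed 1 mm to 10 mm range.
def fibonacci(n):
    seq, a, b = [], 1, 2
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def constrained_spacings(n, lo_mm=1.0, hi_mm=10.0):
    """Map the first n Fibonacci values into [lo_mm, hi_mm]; distinct inputs
    stay distinct, so no two spacings repeat."""
    raw = fibonacci(n)
    lo_raw, hi_raw = min(raw), max(raw)
    return [lo_mm + (v - lo_raw) * (hi_mm - lo_mm) / (hi_raw - lo_raw) for v in raw]

spacings = constrained_spacings(12)
positions = [0.0]
for s in spacings:          # cumulative element positions along one axis (mm)
    positions.append(positions[-1] + s)
```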


In various embodiments, the spacing between transducer elements need only be unequal by an amount at least as great as a manufacturing tolerance of the manufacturing method used to construct the array. For example, if an array manufacturing process is capable of positioning elements on a substrate within +/− 100 microns of an intended position, then it may be desirable to design any two spacing distances so as to differ by at least 100 microns. In other embodiments, the spacing between transducer elements need only be unequal by an amount based on a frequency of ultrasound used.


In some medical imaging applications, variations in tissues may introduce enough randomness to signals to substantially avoid or minimize most phase cancellation artifacts. Therefore, in some cases a sparse array probe may include a larger number of regularly-spaced transducer elements if other factors can be expected to minimize phase cancellation artifacts. For example, FIG. 8 illustrates an example array 302 of dedicated transmit elements 310 and receive elements 312. In various embodiments, the transmit elements 310 and/or the receive elements 312 may be replaced with any other transmit or receive elements described elsewhere herein. FIG. 15 and FIG. 16B also illustrate examples of regularly spaced elements, which may be elements of any construction such as those described herein.


Multiple Frequency Sparse Arrays


In some cases, phase cancellation artifacts may be avoided or mitigated by constructing and/or operating a sparse array probe such that only a small number of regularly-spaced elements operate at the same ultrasound frequency as one another. For example, some embodiments of sparse array probes may be constructed and/or operated such that no more than "N" receive elements operating at the same ultrasound frequency (or combination of frequencies) are equidistant to any one transmit element. In such embodiments, N may be an integer between 1 and 100; in some specific examples, N may be 1, 2, 3, 4, 5 or more. Thus, for example, a probe may contain any number of receive elements equidistant to one or more transmit elements as long as no more than N of the receivers are operated at the same ultrasound frequency (or combination of frequencies).


The ultrasound frequency (or combination of frequencies) at which any given transmit or receive element may be operated may be based on structural characteristics of the element (e.g., material, thickness, diameter, or other dimensions) and/or variable operational characteristics such as an electric voltage or signal shape applied to the element. Which and to what degree such factors may change an element's operating frequency may depend on the element material and/or the process by which it is manufactured. For example, while many transducers may have a fundamental frequency, many can also be driven (i.e., operated in transmit or receive) at frequencies other than their fundamental frequency. The range of frequencies at which any particular transducer element may be driven may depend on many factors such as the material, construction, available power, etc.


For example, in various embodiments, an operating frequency of a micro-element may be determined by a diameter of a flexible section of material, such as the "arched section" described by Bernstein referenced above or a similarly-configured flexible planar or concave section. Thus, in some embodiments, micro-element groups may be made up entirely of micro-elements configured to operate at the same frequency, or may be made up of micro-elements configured to operate at different frequencies. For example, in some embodiments an outer micro-element group (e.g., 464 in FIG. 1) may be configured to operate at a different frequency (e.g., a higher or lower frequency) than micro-elements making up a central group 466, either by being physically constructed differently or by being operated differently.



FIG. 9 illustrates an example of a group of micro-elements of different sizes arranged to form a multi-frequency transmit element. As described above, the fundamental transmit frequency of a micro-element may be a function of the size of a flexible surface member. Therefore, a multi-frequency transmit element group may be formed by providing micro-elements of varying sizes arranged so as to be controllable as a common element group. The group 500 of micro-elements in FIG. 9 includes micro-elements of three different sizes. In other embodiments, micro-elements of two, four, five, six, or more different sizes may be grouped into a transmitter or receiver group. Other patterns or numbers of micro-elements of each size may also be used, depending on the waveform characteristics desired.


The variously-sized micro-elements of FIG. 9 may be electrically connected so as to be activated simultaneously as a group, such as by providing a single electrical signal conductor and a single electrical ground conductor common to all of the micro-elements of a transmit group such as that shown in FIG. 9.


In some embodiments, a micro-element array may comprise a plurality of transmit element groups containing micro-elements of various sizes. In some embodiments, different transmit elements may be provided with different mixes of micro-element sizes in order to produce pings with different multi-frequency combinations. In this way, each transmit group may have a unique frequency signature. If different transmitters in a probe have different mixes of micro-element sizes to produce a different frequency signature, then pings transmitted from one transmitter may be distinguished from pings transmitted by a second transmitter, even if pings from the two transmitters are transmitted at the same time or during overlapping ping cycles.


In other words, when a probe is configured to include at least two transmitters configured to transmit multi-frequency waveforms and receive elements of the probe are sensitive to all of the transmitted frequencies, then echoes received by each receive element of the probe may be mapped to the transmitter that produced the echo based only on the frequency signature of the received echoes. This may be tremendously beneficial in increasing ping rates and/or frame rates well beyond the limits imposed by single-frequency imaging.
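

One conceivable way to perform such a mapping is sketched below: the energy of a received echo is measured in frequency bands associated with each transmitter's assumed signature, and the echo is attributed to the transmitter whose bands carry the most energy. The signatures, band widths, and matching rule are assumptions for illustration only.

```python
# Sketch of distinguishing overlapping pings by frequency signature.
import numpy as np

fs = 40e6
t = np.arange(2048) / fs

# Hypothetical signatures: transmitter A mixes 2 & 5 MHz, transmitter B mixes 3 & 7 MHz.
signatures = {"A": (2e6, 5e6), "B": (3e6, 7e6)}

def band_energy(signal, f_center, bw=0.5e6):
    """Spectral energy of 'signal' within +/- bw of f_center."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = (freqs > f_center - bw) & (freqs < f_center + bw)
    return np.sum(np.abs(spectrum[band]) ** 2)

def classify(echo):
    """Attribute an echo to the transmitter whose signature bands hold most energy."""
    scores = {name: sum(band_energy(echo, f) for f in bands)
              for name, bands in signatures.items()}
    return max(scores, key=scores.get)

# Simulated echo of a ping from transmitter "B" (its two tones plus noise).
echo = (np.sin(2 * np.pi * 3e6 * t) + np.sin(2 * np.pi * 7e6 * t)
        + 0.3 * np.random.randn(len(t)))
print(classify(echo))   # expected: "B"
```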


Sparse Arrays with Varied or Variable Element Sizes


In some embodiments, a size of transducer elements may be varied instead of or in addition to varying spacing between elements. For example, in some embodiments, a sparse ultrasound array may be entirely made up of transducer elements of various (and/or variable) sizes as shown for example in FIG. 1, FIG. 10A, FIG. 10B, FIG. 11A, FIG. 11B.


In some embodiments, micro-element groups may be switched between a first configuration in which a first element group includes a first group of micro-elements and a second configuration in which the first element group includes a second group of micro-elements in addition to (or subtracted from) the first group of micro-elements. In some embodiments, such switching between a first configuration and a second configuration may be performed in between ping cycles. That is, a first ping may be transmitted and echoes of the first ping may be received by the micro-elements of the first configuration of the element group. Then, a second ping may be transmitted, and echoes of the second ping may be received by the second configuration in which the first element group includes a second group of micro-elements in addition to the first group of micro-elements.


Similarly, a transmit element group may be configured to be switchable between a first configuration and a second configuration that is larger, smaller, or differently shaped than the first configuration.


Alternatively, element groups may be switched between a first configuration and a second configuration within a single ping cycle. In such embodiments, a first ping may be transmitted, a first plurality of echoes of the first ping may be received by the micro-elements of the first configuration of the element group, and the switch may then be closed so that a second plurality of echoes of the first ping may be received by the micro-elements of the second configuration of the element group in addition to or instead of the micro-elements of the first configuration. In some embodiments, the first plurality of echoes may be relatively near-field echoes that are produced by reflectors closer than a threshold distance to the receiver micro-elements, and the second plurality of echoes may be mid-field or far-field echoes that are produced by reflectors further than a threshold distance from the receiver micro-elements. Assuming an approximately constant speed of sound, the threshold may be a time value, such that the first plurality of echoes may be received with the first configuration of micro-elements until a switch-time at which a switch may activate the second configuration of micro-elements to allow the second plurality of echoes to be received after the switch-time until the end of the ping cycle.
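

The timing logic of such an intra-ping switch can be sketched as follows, where a single receive record is notionally split at an assumed switch time so that early (near-field) samples are associated with the first micro-element configuration and later samples with the second; the sampling rate, switch time, and record contents are placeholders.

```python
# Sketch of switching a receive element group from a small (near-field)
# configuration to a larger configuration partway through one ping cycle.
import numpy as np

fs = 20e6
n_samples = 4096
switch_sample = int(25e-6 * fs)   # assumed switch time of 25 microseconds

rng = np.random.default_rng(3)
small_config = rng.standard_normal(n_samples)   # stand-in echoes seen by config 1
large_config = rng.standard_normal(n_samples)   # stand-in echoes seen by config 2

# Early (near-field) samples come from the first configuration, later samples
# from the second; the beamformer is told which configuration (and therefore
# which effective element geometry) applies to each portion of the record.
record = np.concatenate([small_config[:switch_sample], large_config[switch_sample:]])
config_of_sample = np.concatenate([np.zeros(switch_sample, dtype=int),
                                   np.ones(n_samples - switch_sample, dtype=int)])
```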


In some embodiments, individual micro-elements or groups of micro-elements may be electrically switchable so as to be selectively included in or excluded from a micro-element group forming an element or an aperture. Transmit element groups or receive element groups may be configured to be switchable between a first configuration and a second configuration, where the second configuration is larger, smaller, or differently shaped than the first configuration.


For example, FIG. 1 illustrates an example section of a continuous array 450 of micro-elements 460. Some of the micro-elements 460 are indicated as having been assigned to element groups 462, 464, 466, 468, 470, and 472. FIG. 1 further illustrates an outer group of micro-elements 464 identified by stippling surrounding a central group 466 of micro-elements. In one embodiment of the illustrated example, a larger element group 468 may be used as a transmit element, and the smaller element groups 462, 464, 470, and 472 may be used as receive elements.


In some embodiments, the outer group 464 may be switchable as a group so as to selectively form a larger element in combination with the central group 466. Similar element groups may be formed with any number of adjacent micro-element groups in any desired configuration. Such a variably-sized element may be used as a transmit element, as a receive element, or as both.


Another example is provided in the groups 472 and 474 which provide for switchable configurations with different shapes and sizes. One or more switches may be provided to allow the group 474 to be included with the group 472 so that both groups 472 and 474 may operate together as a single element (e.g., as a receive element or as a transmit element). The elongated element formed by the combination of sub-groups 472 and 474 may be beneficially used in combination with the process for estimating a position of a reflector using an elongated receive element.


In some embodiments, ping-based receive beamforming calculations (as described herein) may be performed using the position of the circular center of the center micro-element (e.g., 744) of a micro-element group as the acoustic center position of the transducer element group. In embodiments in which a micro-element group is arranged such that it does not include a centrally-located micro-element, the position of the micro-element group may be defined at a center-of-mass point or a geometric center point of the group of micro-elements. In other embodiments, various calibration processes may be used to measure, determine, and/or to refine an acoustic center position for each transducer element group. Examples of suitable calibration processes are described in US Patent Application Publication US 2014/0043933 titled “Calibration of Multiple Aperture Ultrasound Probes,” U.S. Pat. No. 9,282,945 titled “Calibration of Ultrasound Probes,” and U.S. Pat. No. 9,510,806 titled “Alignment of Ultrasound Transducer Arrays and Multiple Aperture Probe Assembly,” each of which is incorporated by reference herein.
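As a simple illustration of treating a micro-element group's geometric center as its acoustic center position, the following sketch computes that point from the micro-element center coordinates. The array layout and coordinate values are assumed for illustration only.

```python
# Minimal sketch: approximate an element group's acoustic center as the mean
# of its micro-element center positions when no single central micro-element
# exists. Coordinates below are illustrative assumptions.
import numpy as np

def group_acoustic_center(micro_element_centers: np.ndarray) -> np.ndarray:
    """micro_element_centers: (N, 3) array of x, y, z positions in meters."""
    return micro_element_centers.mean(axis=0)

centers = np.array([[0.0, 0.0, 0.0],
                    [0.2e-3, 0.0, 0.0],
                    [0.0, 0.2e-3, 0.0],
                    [0.2e-3, 0.2e-3, 0.0]])
print(group_acoustic_center(centers))   # geometric center of a 2x2 micro-element group
```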



FIGS. 10A and 10B illustrate examples of radially symmetrical groups 342 of switchable concentric sub-elements 344, 346, 348 which may be operated in concert with one another. The group 342 may include a central circular receive element 344 surrounded by one or more concentric-ring receive elements 346, 348 providing the ability to obtain the benefits of elements of various sizes in a small physical footprint while maintaining a consistent acoustic center position. In various embodiments, a center element 344 may be surrounded by two, three or more rings, depending on the needs of a particular application.


In some embodiments, the space (or "kerf") between the center circular element 344 and the inner ring 346, and the kerf between the rings 346 and 348, may be made as small as possible so as to provide as seamless a transition as possible between adjacent ring elements. In one example, the inner circular element 344 may have a diameter of about 1 mm, the inner ring 346 may have a width of about 0.5 mm, and the outer ring 348 may have a width of about 0.5 mm. In such an example, the center element 344 and inner ring element 346 may be combined to mimic a circular element with a diameter of about 2 mm, and the center element 344, inner ring element 346, and outer ring element 348 may be combined to mimic a circular element with a diameter of about 3 mm.


As shown in FIG. 10A, in some embodiments, each element 344, 346, 348 may be individually connected to a separate receive channel 331, 332, 333 of a receive subsystem so as to allow echoes received with each concentric element to be stored separately. Using such an arrangement, a complete image may be formed from received echoes after echoes have been received with all receive elements of all sizes. This may allow for a “digital switching” process in which echo data received with the sub-elements may be selectively combined to improve a final image. Stored echoes may be retrieved from the memory device and combined so as to obtain an optimal image based on the timing of received echoes and/or the location of the transmitted ping.


In such embodiments, echo data received with the center element 344 alone may be used for beamforming near-field reflectors. Echo data received with the center element 344 may be coherently combined with echo data received with the inner ring element 346 for beamforming mid-field reflectors. Similarly, echo data received with the center element 344 may be coherently combined with echo data received with the inner ring element 346 and the outer ring element 348 for beamforming far-field reflectors.
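One way to picture the "digital switching" combination described above is the following sketch, which sums separately stored channel records per sample according to near-, mid-, and far-field boundaries. The array shapes, sample-index thresholds, and function names are assumptions made for illustration.

```python
# Minimal sketch of digitally combining the separately stored records of the
# concentric sub-elements of FIG. 10A; thresholds are illustrative assumptions.
import numpy as np

def combine_concentric_channels(center, inner_ring, outer_ring,
                                near_end_idx, mid_end_idx):
    """Sum separately stored echo records per sample: center element alone for
    near-field samples, center + inner ring for mid-field samples, and all
    three sub-elements for far-field samples."""
    combined = np.array(center, dtype=float)                # copy of the center record
    combined[near_end_idx:] += inner_ring[near_end_idx:]    # mid- and far-field samples
    combined[mid_end_idx:] += outer_ring[mid_end_idx:]      # far-field samples only
    return combined

rng = np.random.default_rng(0)
center, inner, outer = (rng.standard_normal(4096) for _ in range(3))
echoes = combine_concentric_channels(center, inner, outer,
                                     near_end_idx=800, mid_end_idx=2200)
```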


As with other embodiments using receivers of various sizes, the transitions between “near-field,” “mid-field,” and “far-field” (or between “near-field” and “far-field” for systems with only two sizes of receive elements) may be defined based on optimal characteristics of the intended imaging application, the particular sizes of the elements to be used, transmitted ultrasound frequencies, and other factors. One advantage of the digital switching methods described herein is that the transition between “near-field,” “mid-field,” and “far-field” may be changed and redefined after echo data has been received and stored. This may allow for iterative adjustment of such transitions for optimization of image quality or other desired characteristics.


Alternatively, as illustrated in FIG. 10B, each group of three ring sections 342 may be electrically connected to a common receive channel of a receive subsystem so as to allow for electrical switching between “small,” “medium,” and “large” receive elements. When the three ring sections 344, 346, 348 are arranged in a concentric pattern, the location of the circular center of each of the sections 344, 346, 348 will be the same, thereby simplifying beamforming operations.


As illustrated in FIG. 10B, a center circular element 344 and one or more concentric ring sections 346, 348 may be electrically connected to a common receive channel 330 via switches 352, 354. The switches 352, 354 may be any remotely operable electrical switch, and may include any suitable electromechanical, MEMS, semiconductor or other components. The sub-elements 344, 346, and 348 may be electrically connected in parallel when the switches 352, 354 are closed. In some embodiments, each sub-element 344, 346, 348 may have an independent ground conductor. In other embodiments, the sub-elements 344, 346, 348 may share a common ground conductor.


In use, a ping may be transmitted from a transmit element at a time "t0," and the transitions between "near-field," "mid-field," and "far-field" may be defined in terms of time, where "t1" is the time at which the transition from near-field to mid-field occurs and "t2" is the time at which the transition from mid-field to far-field occurs. In this example, both switches 352, 354 may be open during and immediately following transmission of a ping at time t0. Then, at time t1, the inner-ring switch 352 may be closed, thereby electrically combining the signal generated by the center circular element 344 with the signal generated by the inner-ring transducer element 346. At time t2, the outer-ring switch 354 may also be closed (with the inner-ring switch 352 remaining closed), thereby electrically combining the signal generated by the center circular element 344 with the signals generated by the inner-ring transducer element 346 and the outer-ring transducer element 348. The resulting echo data string produced by the ping transmitted at t0 will then contain near-field echoes received by the center circular element 344 alone, mid-field echoes received by the combined center circular element 344 and inner ring element 346, and far-field echoes received by all three elements 344, 346, 348 combined.
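The switch timing described above can be summarized with the following sketch, which derives t1 and t2 from chosen transition depths and reports the state of switches 352 and 354 at a given time after the ping. The depth values and assumed speed of sound are illustrative, not prescribed by this disclosure.

```python
# Minimal sketch of the two-switch schedule for the concentric elements of
# FIG. 10B; transition depths and sound speed are illustrative assumptions.

SPEED_OF_SOUND_M_S = 1540.0

def round_trip_time(depth_m: float) -> float:
    return 2.0 * depth_m / SPEED_OF_SOUND_M_S

def ring_switch_states(time_since_ping_s: float, t1_s: float, t2_s: float):
    """Return (switch_352_closed, switch_354_closed) at the given time."""
    return (time_since_ping_s >= t1_s, time_since_ping_s >= t2_s)

t1 = round_trip_time(0.02)   # near/mid transition at an assumed 2 cm depth (~26 us)
t2 = round_trip_time(0.06)   # mid/far transition at an assumed 6 cm depth (~78 us)
print(ring_switch_states(10e-6, t1, t2))   # (False, False): center element 344 alone
print(ring_switch_states(40e-6, t1, t2))   # (True, False): 344 combined with inner ring 346
print(ring_switch_states(90e-6, t1, t2))   # (True, True): all three sub-elements combined
```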


As with other embodiments described herein, receive elements of various sizes may be made using any suitable manufacturing process or processes. For example, continuous circular disc-shaped elements and ring-shaped elements may be made of a bulk PZT material, other piezoelectric materials, or from arrays of micro-elements or other sub-elements using any of the various manufacturing techniques identified herein. The transducer elements 344, 346, 348 may also have shapes other than circular, such as polygonal or amorphous shapes. Each of the elements 344, 346, 348 may also be made up of multiple micro-elements.


In some embodiments, a center element and one or more concentric ring elements may be formed from a single continuous piece of PZT (or other piezoelectric material) that is plated with an electrically conductive material to form a center element 344 and one or more distinctly-operable rings 346, 348. For example, the ring elements may be defined by rings of plated electrically conductive material in the shape of the desired ring(s) with regions of un-plated material in between adjacent ring elements or between an inner ring and a center element. In some embodiments, ring-shaped plated regions may be formed on both a top and a bottom surface of the piezoelectric material. In other embodiments, ring-shaped sections may be plated on a top surface, and a bottom surface may be continuously plated. In some embodiments using such a continuous piezoelectric structure with plated and un-plated regions, a bias voltage may be applied to an inner ring 346 and/or an outer ring 348 while receiving echoes with a center element in order to dampen unwanted oscillation of the outer region of PZT.


In some embodiments, a switchable group of sub-elements such as that illustrated in FIG. 10B may be used as a transmit element of variable size. For example, a waveform transmitted from the center element 344 alone may differ in shape or power level from a waveform transmitted simultaneously from both the center element 344 and the inner ring element 346, which will in turn differ from a waveform transmitted from all three sub-elements 344, 346, 348.


In some embodiments, the center element 344 may also be switchably connected to a receive system or a transmit system. In some embodiments, the arrangement illustrated in FIG. 10A may be combined with the arrangement in FIG. 10B. That is, each sub-element may be switchable among three states: connected to its own receive (or transmit) channel, connected in electrical parallel with the other sub-elements, or disconnected (i.e., at open circuit relative to the other sub-elements).


In some embodiments, elements of different sizes may be grouped in different constellations allowing for a different mode of operation. FIG. 11A illustrates elements of multiple sizes arranged in a group 322. In some embodiments, elements of varying sizes may be grouped together, each group 322 including a small element 324, a medium element 326, and a large element 328 positioned close together. In some embodiments, a sparse array may comprise groups of elements of different sizes in which the elements in each group may be spaced from one another by a distance of less than half a wavelength of the ultrasound transmitted and/or received by the elements. In other words, groups 322 of elements may be sparsely positioned relative to other groups, while the elements of each group may be non-sparsely spaced from one another.


In one example, the small element 324 may have a diameter of about 1 mm, the medium element 326 may have a diameter of about 2 mm, and the large element 328 may have a diameter of about 3 mm (any other sizes may also be desired, depending on the needs of a particular application). In other examples, an element constellation 322 may include receive elements of only two sizes or elements of four or more sizes. The individual and relative element sizes used and the relative positions of elements may also be varied depending on the needs of a particular application. In some embodiments, each element of a constellation group 322 may comprise concentric elements as described above with reference to FIG. 10A and/or 10B.


In some embodiments, each element 324, 326, 328 of the constellation of FIG. 11A and the constellation of FIG. 11B may be a micro-element (e.g., a micro-dome as described above or another micro-element structure) or a group of micro-elements.


As described above, smaller-diameter elements may provide optimal receive characteristics for echoes returned by relatively shallow (or near-field) reflectors, while larger-diameter elements may provide optimal receive characteristics for echoes returned by relatively deep (or far-field) reflectors. Therefore, in some embodiments, an imaging system may be configured to switch between using information from elements of various size such that small elements 324 may be used primarily for forming images from echoes received from near-field reflectors, medium-sized elements 326 may be used primarily for forming images from echoes received from mid-field reflectors, and large elements 328 may be used primarily for forming images from echoes received from far-field reflectors.


In some embodiments, this switching may be accomplished digitally by forming a complete image from received echoes after echoes have been received with all receive elements of all sizes. To achieve this digital switching in some embodiments, each of the receive elements 324, 326, 328 of the constellation of FIG. 11A may be individually electrically connected to a separate channel 331, 332, 333 of a receive subsystem. In such embodiments, echoes received by the elements of varying sizes may be digitized and stored in a memory device separately. Stored echoes may then be retrieved from the memory device and combined so as to obtain an optimal image based on the timing of received echoes and/or the location of the transmitted ping.


Because each received echo sample can be mapped to a three-dimensional position within the imaged volume, one or more threshold depths may be established in order to determine which regions of a volume (or 2D image) should be formed with echoes received by small elements, which regions should be formed with echoes received by medium-sized elements, which regions should be formed with echoes received by large elements, and which regions should be formed by combining echoes from small and medium elements or by combining echoes from medium and large elements. Such information can be stored in an imaging controller and used during live imaging or during reconstruction of an image from stored echo data. As with other embodiments described herein, receive element constellations may be grouped into receive apertures, each receive aperture having an overall size selected such that echoes received by multiple elements of a common receive aperture may be combined coherently without phase cancellation. Information obtained from two or more receive apertures may then be combined incoherently.
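As a sketch of how stored threshold depths might be applied during image formation, a voxel's depth can be mapped to the receive-element size whose echoes should primarily form that region. The specific depths and names below are assumptions, not values taken from this disclosure.

```python
# Minimal sketch, assuming two illustrative threshold depths that divide the
# imaged volume into near-, mid-, and far-field regions.

def element_size_for_depth(depth_m: float,
                           near_to_mid_m: float = 0.03,
                           mid_to_far_m: float = 0.08) -> str:
    """Select which receive-element size should primarily form a voxel."""
    if depth_m < near_to_mid_m:
        return "small"        # e.g., the ~1 mm elements
    if depth_m < mid_to_far_m:
        return "medium"       # e.g., the ~2 mm elements
    return "large"            # e.g., the ~3 mm elements

print(element_size_for_depth(0.015))  # -> small
print(element_size_for_depth(0.05))   # -> medium
print(element_size_for_depth(0.12))   # -> large
```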


The digital switching method described above relies on the echoes received by each receive element of each size being individually stored, at least temporarily. In alternative embodiments, the number of receive channels used by receive elements of various sizes may be reduced by incorporating switches. For example, as illustrated in FIG. 11B, a receive group 322 including a large element 328, a medium element 326, and a small element 324 may be electrically connected to a single receive channel 330 of a receive subsystem via switches S1, S2, S3. Each of the large 328, medium 326, and small 324 elements may be switched "on" (closed-circuit) during times when that element is expected to return usable (or beneficially-contributing) information. During times when an element of a particular size is not expected to return usable information, the element may be switched "off" (open-circuit) in favor of switching on an element of a different size.


Switching of receive elements will be described with reference to an example. Assume that a ping is transmitted from a transmit element at a time=“t0.” If the transition between “near-field,” “mid-field,” and “far-field” is defined in terms of time, where “t1” is the time at which the transition from near-field to mid-field occurs and “t2” is the time at which the transition from mid-field to far-field occurs, then a single group of three differently-sized receive elements may be switched as follows: only the small element 324 is switched on (e.g., S1 is closed, S2 and S3 are open) from time t0 to t1, only the medium element 326 is switched on (S2 is closed, S1 and S3 are open) from time t1 to t2, and only the large element 328 is switched on (S3 is closed, S1 and S2 are open) from time t2 until the next ping is transmitted (or until such time as all receivable echoes can be expected to have returned). In some embodiments, two or more of the switches S1, S2, S3 may be combined into a single multi-position switch.
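The mutually exclusive switch schedule in this example can be expressed as the following sketch, which returns the state of S1, S2, and S3 for a given time after the ping transmitted at t0. The transition times are illustrative assumptions.

```python
# Minimal sketch of the mutually exclusive switching of the small, medium,
# and large receive elements of FIG. 11B; times are illustrative assumptions.

def receive_switch_states(time_since_ping_s: float, t1_s: float, t2_s: float):
    """Return a dict of switch states; exactly one switch is closed at a time."""
    if time_since_ping_s < t1_s:
        return {"S1": True, "S2": False, "S3": False}    # small element 324 only
    if time_since_ping_s < t2_s:
        return {"S1": False, "S2": True, "S3": False}    # medium element 326 only
    return {"S1": False, "S2": False, "S3": True}        # large element 328 only

print(receive_switch_states(10e-6, t1_s=26e-6, t2_s=78e-6))  # small element active
print(receive_switch_states(50e-6, t1_s=26e-6, t2_s=78e-6))  # medium element active
```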


In some embodiments, when using differently-sized receive elements in combination with digital or electrical switching, an image formation subsystem may use a physical location of each individual element 324, 326, 328 to identify echo samples corresponding to a particular pixel or voxel location within an imaged region. In the case of digital switching, the position of the circular center of each receive element may be individually stored and associated with the corresponding receive channel for use during beamforming operations. Even in the electrical switching case, because the time at which switching occurs is known, the samples corresponding to the small, medium, and large elements can be determined based on times associated with the data samples, and the appropriate element circular center position information may be used for beamforming echoes received by the elements of different sizes.


In some embodiments, transducer elements of different sizes in a pattern such as that shown in FIG. 1, FIG. 11A, FIG. 11B, FIG. 10A, or FIG. 10B or any other pattern, may be used as transmit elements. For example, planar, concave, or convex elements of any circular, polygonal, or other shape may be provided in two, three, or more different sizes for use as dedicated transmit elements.


In various embodiments, switches used for switching individual micro-elements or groups of micro-elements may include microelectromechanical systems (MEMS) switches of any suitable type. In some embodiments, MEMS switches may be formed on the opposite side of the same substrate as the micro-elements. MEMS or other switches may be controllable by transmit subsystems, receive subsystems, or both, as appropriate for a given application.


In some embodiments, low noise amplifiers (LNAs) may also be provided on the back-side of a substrate supporting an array of micro-elements. In some embodiments, one LNA may be provided for each receive element (or group of micro-elements controlled collectively). In other cases, one LNA may be provided for each receive aperture or group of receive elements. In various embodiments, LNAs may also be controlled by one or more elements of an imaging control system, such as a transmit subsystem or a receive subsystem.


Detecting Reflector Position Based on Pattern or Shape of Receive Elements


As described herein, beamforming calculations produce a locus of possible locations for a reflector based on a known position of a transmitter element and a receiver element, and the loci obtained from multiple elements are combined to converge towards an actual location for a given reflector. Therefore, any additional information about the likely position of reflectors may be used to further enhance image quality.
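For reference, the locus referred to above is an ellipsoid whose foci are the transmit and receive element positions; the following sketch tests whether a candidate voxel lies on the locus implied by a given echo arrival time. The units, tolerance, and assumed speed of sound are illustrative assumptions.

```python
# Minimal sketch of the ping-based echo locus, assuming a constant speed of
# sound and a simple distance tolerance; values are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND_M_S = 1540.0

def on_echo_locus(tx_pos, rx_pos, voxel_pos, arrival_time_s, tol_m=0.25e-3):
    """True if voxel_pos lies (within tol_m) on the ellipsoid whose foci are
    the transmit and receive element positions and whose total path length
    corresponds to the echo arrival time."""
    tx, rx, p = (np.asarray(v, dtype=float) for v in (tx_pos, rx_pos, voxel_pos))
    path_len = np.linalg.norm(p - tx) + np.linalg.norm(rx - p)
    return abs(path_len - SPEED_OF_SOUND_M_S * arrival_time_s) < tol_m

# A reflector 30 mm below a transmitter at the origin, received 10 mm away:
tx, rx, voxel = (0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.0, 0.0, 0.03)
t = (0.03 + np.hypot(0.01, 0.03)) / SPEED_OF_SOUND_M_S
print(on_echo_locus(tx, rx, voxel, t))   # True
```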


In some embodiments, a known pattern of receiver elements may be used to determine an approximate direction from which echoes return to the receiver elements. This may be understood with reference to the illustration in FIG. 12, which shows a pattern 600 of six receive elements 602, each made up of a plurality of micro-elements. The elements 602 are shown arranged in a regular grid pattern for clarity of illustration, but may be irregularly spaced and may be asymmetrically aligned relative to one another. Various axes may be drawn through any two or more transducer elements arranged in the pattern. FIG. 12 illustrates a vertical axis 610, a horizontal axis 606, and two diagonal axes 604, 608. Several other axes could also be drawn, any of which could be used with the same methods.


For echoes of a given reflector arriving at the receive elements 602, the beamforming process will determine the locus of possible location points for the reflector based on the position of the transmitter and the positions of each of the receive elements 602. In some embodiments, the system may also compare the absolute time-of-arrival of the echoes of the given reflector at elements along one or more of the axes (604, 606, 608, 610). For example, if the upper-right element along the diagonal axis 604 receives an echo of the given reflector at an earlier time than the same echoes arrive at the center element (a time difference typically measured in nanoseconds), then it may be reasonable to conclude that the reflector is located in a portion of the region of interest closer to the upper-right quadrant of the array section.


This location estimate information may be supported or confirmed by comparing the arrival time of the given reflector echoes at the upper-right element with the arrival time of the same given reflector echoes at the lower left element. Additional elements along the same axis may also be used to further confirm an estimate of an origin of the reflector. Comparing arrival times of the same given reflector along other axes may provide further information about the approximate location of the reflector.


In some embodiments, a process of axis-based direction estimation may include: transmitting an unfocused ultrasound ping from a transmitter approximating a point source into an object to be imaged. Echoes of the transmitted ping may then be received at a first receive element and a second receive element, where a line between the first receive element and the second receive element defines an axis. The first receive element and the second receive element may be located at known positions relative to a common coordinate system (e.g., based on position data retrieved from a data store). The process may proceed by identifying a first echo sample corresponding to a first reflector received at the first element, and identifying a second echo sample corresponding to the same first reflector received at the second element. A first time-of-arrival may be determined for the time at which the first sample echo was received at the first receive element. The first time-of-arrival may be based on explicit or implicit timing information in the stored echo data. Explicit timing information may include specific clock times recorded along with each received echo sample. Implicit timing information may include a known sample rate and a sample position (or interpolated sample position) of a particular sample relative to some baseline (e.g., a start-of-ping time). A second time-of-arrival may be determined for the time at which the second sample echo was received at the second receive element. The first and second times-of-arrival may then be compared to determine which of the elements first received the echo sample corresponding to the first reflector. The element that received the first reflector's echo sample first is closest to the reflector along the axis. This information may then be used for other elements along the same axis.
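A minimal sketch of the time-of-arrival comparison step is shown below, assuming implicit timing derived from a known sample rate; the sample rate and sample indices are illustrative assumptions.

```python
# Minimal sketch of comparing times-of-arrival at two elements that define an
# axis, using implicit timing from an assumed sampling rate.

SAMPLE_RATE_HZ = 25e6    # assumed sampling rate: one sample every 40 ns

def time_of_arrival(sample_index: float, ping_start_time_s: float = 0.0) -> float:
    """Implicit timing: receive time inferred from a sample's position."""
    return ping_start_time_s + sample_index / SAMPLE_RATE_HZ

def closer_element_along_axis(sample_idx_element1: float,
                              sample_idx_element2: float) -> int:
    """Return 1 if the first element received the reflector's echo earlier
    (and is therefore closer along the axis), otherwise 2."""
    t1 = time_of_arrival(sample_idx_element1)
    t2 = time_of_arrival(sample_idx_element2)
    return 1 if t1 < t2 else 2

print(closer_element_along_axis(1012.0, 1019.5))   # -> 1 (element 1 is closer)
```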


Based on the estimated given reflector position information obtained above, echoes of the given reflector received by elements that are further away from the estimated reflector position may be weighted lower than echoes of the same reflector received by elements closer to the estimated position of the reflector. If the first reflector is closer to the first receiver element along the axis, then the echoes of the first reflector received by the first element may be weighted higher than the echoes of the first reflector received by the second element when the echoes are combined to form an image. The same information may also be used to weight echoes of the same first reflector received by other elements along the same axis. That is, echo contributions from receive elements determined to be closer to the reflector along the axis may be given more weight (i.e., a larger weighting factor) than echo contributions from receive elements further away from the reflector along the axis.
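One possible weighting scheme consistent with the above is sketched below; the exponential falloff and its scale are assumptions chosen purely for illustration, as this disclosure does not prescribe a particular weighting function.

```python
# Minimal sketch of distance-based weighting of per-element echoes toward an
# estimated reflector position; the weighting function is an assumption.
import numpy as np

def weighted_combination(echoes, element_positions, estimated_reflector_pos,
                         falloff_m=0.01):
    """echoes: (n_elements, n_samples) per-element echo records;
    element_positions: (n_elements, 3) element centers in meters."""
    distances = np.linalg.norm(element_positions - estimated_reflector_pos, axis=1)
    weights = np.exp(-distances / falloff_m)     # closer elements weigh more
    weights /= weights.sum()
    return weights @ echoes                      # (n_samples,) combined record

positions = np.array([[0.0, 0.0, 0.0], [0.02, 0.0, 0.0], [0.04, 0.0, 0.0]])
echoes = np.ones((3, 8))
combined = weighted_combination(echoes, positions, np.array([0.0, 0.0, 0.03]))
```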


In some embodiments, one or more asymmetrically-shaped receive elements may be used to estimate an approximate location of a reflector in order to improve image quality. For example, FIG. 13 illustrates receive transducer element 650 with a generally elliptical shape having a long axis 654 and a short axis 652. Echoes with strong directional components along one or both axes of such an asymmetrical element will produce recognizable phase patterns due to the asymmetry. The different phase patterns may be due to phase differences of echoes arriving predominantly along each axis.


For example, assume an echo returning to the element 650 with a strong component along the long axis 654 arrives at the top point 656 at a first time, then arrives at the center point 658 at a second time after the first time, and finally arrives at the bottom point 660 at a third time after the second time. The echoes of a single reflector arriving at the upper point 656 will be slightly out of phase with the echoes of the same reflector arriving at the middle point 658 and the lower point 660. In the same way, echoes arriving at different times at different points along the short axis 652 may also exhibit a unique phase pattern.


For a given shaped asymmetrical receive element, the phase pattern along each axis can be calibrated by transmitting pings from known transmit positions relative to the asymmetrical receive element and measuring the phase response. Then the approximate direction of each arriving echo can be estimated based on the phase pattern response of the asymmetrical receiver(s). The calibration set of various phase patterns corresponding to various origin points may be stored, and used during imaging to estimate the approximate location of reflectors by comparing echo phase patterns with the calibration set phase patterns.
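A minimal sketch of comparing a measured phase pattern against a stored calibration set follows; the data structures, matching metric, and values are illustrative assumptions.

```python
# Minimal sketch of phase-pattern matching against a calibration set built by
# transmitting from known positions; metric and values are assumptions.
import numpy as np

def estimate_echo_direction(measured_pattern, calibration_set):
    """calibration_set: dict mapping a direction label to a reference phase
    pattern (1-D arrays of the same length as measured_pattern)."""
    measured = np.asarray(measured_pattern, dtype=float)
    best_label, best_err = None, np.inf
    for label, reference in calibration_set.items():
        err = np.sum((measured - np.asarray(reference, dtype=float)) ** 2)
        if err < best_err:
            best_label, best_err = label, err
    return best_label

cal = {"long_axis_top": [0.0, 0.4, 0.8], "short_axis_left": [0.0, 0.1, 0.2]}
print(estimate_echo_direction([0.05, 0.45, 0.75], cal))   # -> "long_axis_top"
```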


In various embodiments, a wide range of asymmetrical receive element shapes may be used to estimate approximate reflector locations. For example, any shape with at least one generally long axis and one generally shorter axis may be used. Such shapes may include elongated polygons such as rectangles or rectangles with rounded corners, oblong shapes, oval shapes, or generally elongated amorphous shapes.


In various embodiments, each receive aperture may include only one, two, three, four, five, or more asymmetrical receivers. In other embodiments, an entire array of receive elements may be asymmetrical. Asymmetrical elements need not all be aligned with long axes in the same direction as one another, and in some embodiments it may be desirable to provide asymmetrical elements with long axes perpendicular to one another.


In various embodiments, asymmetrical transducer elements may be formed of any transducing material and using any manufacturing process, including those described elsewhere herein. For example, asymmetrical elements may be formed from a group of micro-elements in a micro-element array.


Although the examples of shaped receive elements above are described relative to arrays made up of micro-elements, the same techniques and principles may also be applied using elements of different constructions made by various other manufacturing processes. For example, the techniques may also be applied using machined bulk PZT elements.


Sparse Arrays with Overlapping Micro-Element Groups


In some embodiments, it may be desirable to configure a ping-based ultrasound imaging probe with dedicated transmitter elements and receive elements grouped into overlapping receive apertures in a constellation configuration. For example, FIG. 14 illustrates an array 700 of micro-elements 710 in which micro-elements are grouped into transmit element groups and receive element groups.


As shown, each transmit element group 702 (indicated by "X" hatched micro-elements) may be surrounded by a plurality of receive element groups 704 (indicated by "/" hatched micro-elements). In various embodiments, receive elements 704 may be grouped into receive apertures 706 based on their proximity to a transmit element 702. Receive apertures 706 are indicated by the lines drawn around groups of receive elements 704. As shown, some elements may participate in two or more different receive apertures 706.


As with other embodiments described herein, each receive element may be connected to a separate channel in a receive subsystem such that echoes received by each receive element may be independently stored. Therefore, receive elements may be assigned to apertures after echo data has been collected. In some cases, receive elements may be assigned to apertures based on the known positions of receive elements relative to known positions of transmit elements.


In any of the probe embodiments described in this disclosure, point source transmitters may take any suitable form, such as a single bulk piezoelectric element, a segmented piezoelectric element, a coordinated group of bulk piezoelectric elements, or a dedicated transmitter group of micro-elements which may be operated to transmit spherical waveform pings from an apparent point source at a geometric center of the transmitter group. Micro-elements making up a transmitter group may be arranged in a planar arrangement relative to one another. Alternately, micro-elements making up a transmitter group of micro-elements may be arranged on a locally concave or convex substrate so as to form an apparent-point-source transducer element.


Sparse Array Probes with Physical Gaps


In various embodiments, transmitter micro-element groups and/or receiver micro-element groups may be made in any other planar, convex, concave, concave and convex, or amorphous shape. For example, micro-element groups may be formed in approximately square shapes, approximately circular shapes, approximately polygonal shapes, approximately concentric ring shapes, etc. Overall dimensions of transmitter micro-element groups and receiver micro-element groups may be sized as described elsewhere in this disclosure. In some cases, transmitter micro-element groups may be the same size as receiver micro-element groups, e.g., with dimensions of between about 150 microns and about 0.5 mm or more.


In various embodiments, the use of a sparse array configuration may allow for other uses of regions in between transducer elements. For example, various treatment structures may be provided in spaces between transducer elements. Such treatment structures may include high intensity focused ultrasound (HIFU) transmitters for delivering targeted ablative ultrasound treatment, radiation or drug delivery elements, lasers or radio frequency (RF) delivery elements, or any other structures for treating or diagnosing a patient that may be positioned in spaces between transducer elements contributing to a unified image.


In some embodiments, vacuum ports may be provided between transducer elements for causing a tissue or other medium to be drawn into contact with a surface of the imaging probe. Vacuum ports may comprise holes and/or channels in the substrate, support structures, matching layers, lensing layers, etc.


In some embodiments, sparse array probes may be configured with openings or gaps through which instruments or tools may be inserted. For example, probes may be configured with gaps sized and configured to receive instruments or tools such as scalpels, scrapers, biopsy tools, robotic arms, needles, surgical tools, or other implements.



FIG. 15 illustrates an ultrasound probe made up of a circular array 800 of transducer elements 802 with a gap 810 in the center. The circular array 800 may be made up of array segments 812, 814, 816, 818. For example, FIG. 15 shows a probe array made up of four pie-slice segments each in the shape of a quarter-circular segment. The segments may be rigidly fixed in a housing, bracket, or other structural support configured to hold the segments in a consistent position relative to one another. The central gap 810 may be sized to allow insertion of various tools or instruments as discussed above.


Using ping-based multiple aperture imaging techniques, the region immediately under the probe gap 810 may be imaged by transmitting spherical ping signals that insonify the region under the gap, and receiving echoes with receive elements 802 near the gap 810. In some embodiments, transducer elements closer to the central gap 810 may be spaced more closely to one another so as to provide more receive elements adjacent to the gap, thereby increasing the number of receivers that may receive echoes from reflectors lying under the gap 810.


Transmit transducer elements will tend to produce spherical waveforms that propagate in all directions extending from the point-source transmitter element into the imaged object. At some angle from normal, the strength of the transmitted energy will typically drop off dramatically, thereby defining a threshold angle. Rotating this threshold angle about the point-source defines a "signal cone." Reflectors within the signal cone will tend to produce reflections with a sufficiently high signal-to-noise ratio that they may be reliably imaged, while reflectors outside of the signal cone may tend to return echoes with too little energy to provide valuable contributions to the final image. The signal cone for any particular transmit transducer element may be determined empirically by experimentation. Receive elements may have a similar signal cone of positively-contributing reflectors. As described herein, the angle of a transducer element's signal cone may be related to the size of the element, with smaller elements generally having a wider signal cone than larger elements. On the other hand, larger elements may produce more energy (or may be sensitive to weaker received signals) than smaller elements.
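The signal-cone test described above can be sketched as follows; the threshold angle, geometry, and names are illustrative assumptions, and in practice the threshold would be determined empirically as noted.

```python
# Minimal sketch: test whether a reflector falls inside an element's signal
# cone, defined by a threshold angle from the element normal (assumed value).
import numpy as np

def inside_signal_cone(element_pos, element_normal, reflector_pos,
                       threshold_angle_deg):
    """True if the reflector lies within threshold_angle_deg of the normal."""
    e, n, r = (np.asarray(v, dtype=float) for v in
               (element_pos, element_normal, reflector_pos))
    to_reflector = r - e
    cos_angle = np.dot(to_reflector, n) / (np.linalg.norm(to_reflector) *
                                           np.linalg.norm(n))
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle_deg <= threshold_angle_deg

# Reflector 30 mm deep and 10 mm lateral to an element facing straight down:
print(inside_signal_cone([0, 0, 0], [0, 0, 1], [0.01, 0, 0.03], 45.0))  # True
```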


Based on the size of the gap and the angles of transmitter signal cones and receiver signal cones, transmitter elements and receiver elements may be positioned relative to the gap 810 so as to allow effective imaging of the volume below the gap 810. For example, in some cases, transmitter elements and receiver elements with wide signal cones (also referred to as “look angles”) may be positioned in a higher density adjacent to the gap, while transmit and receive elements with narrower signal cones may be positioned further from the gap. As described in various examples herein, transmitter elements and receiver elements of various or variable sizes may be used based on the energy requirements and signal cone shapes.



FIG. 16A and FIG. 16B illustrate another example of an ultrasound probe 850 with a physical gap 852 separating array segments 854, 856. The probe 850 of FIG. 16A and FIG. 16B is generally configured to provide real-time volumetric imaging of a region below both array segments including the region below the gap 852 between the segments. The gap 852 may be used for performing various surgical, diagnostic, or interventional procedures.


The probe 850 of FIG. 16A and FIG. 16B may include a bridge handle 860 rigidly joining the two array segments 854, 856 and containing conduits for electrical connections. The bridge handle may take any shape and structure as needed in order to rigidly hold the array segments in consistent positions relative to one another. Based on the size of the gap 852 and the angles of transmitter signal cones and receiver signal cones, transmitter elements and receiver elements may be positioned relative to the gap 852 so as to allow effective imaging of the volume below the gap 852. As above, transmitter and receiver elements 862 of various or variable sizes may be used based on the energy requirements and signal cone shapes.


Multiple Aperture Ultrasound Imaging System Components


The block diagram of FIG. 17 illustrates components of an ultrasound imaging system 200 that may be used in combination with various embodiments of systems and methods as described herein. The system 200 of FIG. 17 may include several subsystems: a transmit control subsystem 204, a probe subsystem 202, a receive subsystem 210, an image generation subsystem 230, and a video subsystem 240. In various embodiments, the system 200 may also include one or more memory devices for containing various data for use during one or more ultrasound imaging steps. Such memory devices may include a raw echo data memory 220, a weighting factor memory 235, a calibration data memory 238, an image buffer 236 and/or a video memory 246. In various embodiments all data (including software and/or firmware code for executing any other process) may be stored on a single memory device. Alternatively, separate memory devices may be used for one or more data types.


The transmission of ultrasound signals from elements of the probe 202 may be controlled by a transmit control subsystem 204. In some embodiments, the transmit control subsystem 204 may include any combination of analog and digital components for controlling transducer elements of the probe 202 to transmit un-focused ultrasound pings at desired frequencies and intervals from selected transmit apertures according to a desired imaging algorithm. In some embodiments a transmit control system 204 may be configured to transmit ultrasound pings at a range of ultrasound frequencies. In some (though not all) embodiments, the transmit control subsystem may also be configured to control the probe in a phased array mode, transmitting focused (i.e., transmit beamformed) ultrasound scanline beams.


In some embodiments, a transmit control sub-system 204 may include a transmit signal definition module 206 and a transmit element control module 208. The transmit signal definition module 206 may include suitable combinations of hardware, firmware and/or software configured to define desired characteristics of a signal to be transmitted by an ultrasound probe. For example, the transmit signal definition module 206 may establish (e.g., based on user inputs or on pre-determined factors) characteristics of an ultrasound signal to be transmitted such as a pulse start time, pulse length (duration), ultrasound frequency, pulse power, pulse shape, pulse direction (if any), pulse amplitude, transmit aperture location, or any other characteristics.
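As an illustration only, a transmit signal definition might be captured as a simple record such as the following; the field names and values are assumptions and do not represent the interface of the transmit signal definition module 206.

```python
# Minimal sketch of a ping definition record; fields are illustrative
# assumptions, not the patent's API.
from dataclasses import dataclass

@dataclass
class PingDefinition:
    start_time_s: float          # pulse start time relative to frame start
    duration_s: float            # pulse length
    frequency_hz: float          # ultrasound center frequency
    power: float                 # relative pulse power
    transmit_aperture_id: int    # which transmit element group to fire

ping = PingDefinition(start_time_s=0.0, duration_s=1e-6,
                      frequency_hz=3.0e6, power=0.8, transmit_aperture_id=2)
```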


The transmit element control module 208 may then take information about the desired transmit pulse and determine the corresponding electrical signals to be sent to the appropriate transducer elements in order to produce this signal. In various embodiments, the signal definition module 206 and the transmit element control module 208 may comprise separate electronic components, or may include portions of one or more common components.


Upon receiving echoes of transmitted signals from a region of interest, the probe elements may generate time-varying electrical signals corresponding to the received ultrasound vibrations. Signals representing the received echoes may be output from the probe 202 and sent to a receive subsystem 210. In some embodiments, the receive subsystem may include multiple channels, each of which may include an analog front-end device (“AFE”) 212 and an analog-to-digital conversion device (ADC) 214. In some embodiments, each channel of the receive subsystem 210 may also include digital filters and data conditioners (not shown) after the ADC 214. In some embodiments, analog filters prior to the ADC 214 may also be provided. The output of each ADC 214 may be directed into a raw data memory device 220. In some embodiments, an independent channel of the receive subsystem 210 may be provided for each receive transducer element of the probe 202. In other embodiments, two or more transducer elements may share a common receive channel.


In some embodiments, an analog front-end device 212 (AFE) may perform certain filtering processes before passing the signal to an analog-to-digital conversion device 214 (ADC). The ADC 214 may be configured to convert received analog signals into a series of digital data points at some pre-determined sampling rate. Unlike most ultrasound systems, some embodiments of the ultrasound imaging system of FIG. 17 may then store digital data representing the timing, phase, magnitude and/or the frequency of ultrasound echo signals received by each individual receive element in a raw data memory device 220 before performing any further receive beamforming, filtering, image layer combining or other image processing.


In order to convert the captured digital samples into an image, the data may be retrieved from the raw data memory 220 by an image generation subsystem 230. As shown, the image generation subsystem 230 may include a beamforming block 232 and an image layer combining ("ILC") block 234. In some embodiments, a beamformer 232 may be in communication with a calibration memory 238 that contains probe calibration data. Probe calibration data may include information about the precise position, operational quality, and/or other information about individual probe transducer elements. The calibration memory 238 may be physically located within the probe, within the imaging system, or in a location external to both the probe and the imaging system.


In some embodiments, after passing through the image generation block 230, image data may then be stored in an image buffer memory 236 which may store beamformed and (in some embodiments) layer-combined image frames. A video processor 242 within a video subsystem 240 may then retrieve image frames from the image buffer, and may process the images into a video stream that may be displayed on a video display 244 and/or stored in a video memory 246 as a digital video clip, e.g., as referred to in the art as a “cine loop”.


In some embodiments, the AFE 212 may be configured to perform various amplification and filtering processes to a received analog signal before passing the analog signal to an analog-to-digital conversion device. For example, an AFE 212 may include amplifiers such as a low noise amplifier (LNA), a variable gain amplifier (VGA), a bandpass or lowpass/anti-aliasing filter, and/or other amplification or filtering devices. In some embodiments, an AFE device 212 may be configured to begin passing an analog signal to an ADC 214 upon receiving a trigger signal. In other embodiments, an AFE device can be “free running”, continuously passing an analog signal to an ADC.


In some embodiments, each analog-to-digital converter 214 may generally include any device configured to sample a received analog signal at some consistent, predetermined sampling rate. For example, in some embodiments, an analog-to-digital converter may be configured to record digital samples of a time-varying analog signal at 25 MHz, which is 25 million samples per second or one sample every 40 nanoseconds. Thus, data sampled by an ADC may simply include a list of data points, each of which may correspond to a signal value at a particular instant. In some embodiments, an ADC 214 may be configured to begin digitally sampling an analog signal upon receiving a trigger signal. In other embodiments, an ADC device can be “free running”, continuously sampling a received analog signal.
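For the 25 MHz example above, the mapping between a sample's index and its receive time is a simple scaling, sketched below; the function names are illustrative.

```python
# Minimal sketch of the implicit timing implied by a fixed 25 MHz sampling
# rate (one sample every 40 nanoseconds).

SAMPLE_RATE_HZ = 25e6

def sample_index_to_time(index: int) -> float:
    """Seconds elapsed since sampling began for a given sample index."""
    return index / SAMPLE_RATE_HZ

def time_to_sample_index(t_seconds: float) -> int:
    """Nearest sample index corresponding to a given time."""
    return round(t_seconds * SAMPLE_RATE_HZ)

print(sample_index_to_time(250))      # 1e-05 s, i.e., 10 microseconds
print(time_to_sample_index(40e-9))    # 1
```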


In some embodiments, the raw data memory device 220 may include any suitable volatile or non-volatile digital memory storage device. In some embodiments, the raw data memory 220 may also comprise communication electronics for transmitting raw digital ultrasound data to an external device over a wired or wireless network. In such cases, the transmitted raw echo data may be stored on the external device in any desired format. In other embodiments, the raw data memory 220 may include a combination of volatile memory, non-volatile memory and communication electronics.


In some embodiments, the raw data memory device 220 may comprise a temporary (volatile or non-volatile) memory section, and a long-term non-volatile memory section. In an example of such embodiments, the temporary memory may act as a buffer between the ADC 214 and the beamformer 232 in cases where the beamformer 232 may be unable to operate fast enough to accommodate data at the full rate from the ADC 214. In some embodiments, a long-term non-volatile memory device may be configured to receive data from a temporary memory device or directly from the ADC 214. Such a long-term memory device may be configured to store a quantity of raw echo data for subsequent processing, analysis or transmission to an external device.


In some embodiments, the beamforming block 232 and the image layer combining block 234 may each include any digital signal processing and/or computing components configured to perform the specified processes (e.g., as described below). For example, in various embodiments the beamforming 232 and image layer combining 234 may be performed by software running on a single GPU, on multiple GPUs, on one or more CPUs, on combinations of CPUs & GPUs, on single or multiple accelerator cards or modules, on a distributed processing system, or a clustered processing system. Alternatively, these or other processes may be performed by firmware running on an FPGA (Field Programmable Gate Array) architecture or one or more dedicated ASIC (Application-Specific Integrated Circuit) devices.


In some embodiments, the video processor 242 may include any video processing hardware, firmware and software components that may be configured to assemble image frames into a video stream for display and/or storage.


In any embodiment, a plurality of elements may share one or more conductors in a multiplexed arrangement, such as time-division-multiplexed communications, or other multiplexing methods. Multiplexing signals may allow for a cable size to be reduced without sacrificing the benefits of individual channels for each element.


In any embodiment, the exact acoustic location of each transmit element and each receive element may be determined by precision manufacturing, by calibration, or some combination of both. Such element location information may be stored and made available to an image-formation and/or beamforming system.


Certain Terminology

Although this invention has been disclosed in the context of certain preferred embodiments and examples, it will be understood by those skilled in the art that the present invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. Various modifications to the above embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, it is intended that the scope of the present invention herein disclosed should not be limited by the particular disclosed embodiments described above, but should be determined only by a fair reading of the claims that follow.


In particular, materials and manufacturing techniques may be employed as within the level of those with skill in the relevant art. Furthermore, reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in the appended claims, the singular forms "a," "and," "said," and "the" include plural referents unless the context clearly dictates otherwise. Also as used herein, unless explicitly stated otherwise, the term "or" is inclusive of all presented alternatives, and means essentially the same as the commonly used phrase "and/or." It is further noted that the claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as "solely," "only" and the like in connection with the recitation of claim elements, or use of a "negative" limitation. Unless defined otherwise herein, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.


Certain features that are described in this disclosure in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Although features may be described above as acting in certain combinations, one or more features from a claimed combination can, in some cases, be excised from the combination, and the combination may be claimed as any subcombination or variation of any subcombination. Further, the claims may be drafted to exclude any disclosed element. As such, the foregoing sentence is intended to serve as antecedent basis for use of such exclusive terminology as “solely,” “only” and the like in connection with the recitation of claim elements, or use of a “negative” limitation.


Moreover, while operations may be depicted in the drawings or described in the specification in a particular order, such operations need not be performed in the particular order shown or in sequential order, and all operations need not be performed, to achieve the desirable results. Other operations that are not depicted or described can be incorporated in the example methods and processes. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the described operations. Further, the operations may be rearranged or reordered in other implementations. Also, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described components and systems can generally be integrated together in a single product or packaged into multiple products. Additionally, other implementations are within the scope of this disclosure.


Some embodiments have been described in connection with the accompanying drawings. Some of the figures may be drawn to scale, but such scale should not be limiting, since dimensions and proportions other than what are shown are contemplated and are within the scope of the disclosed invention. Distances, angles, etc. are merely illustrative and do not necessarily bear an exact relationship to actual dimensions and layout of the devices illustrated. Components can be added, removed, and/or rearranged. Further, the disclosure herein of any particular feature, aspect, method, property, characteristic, quality, attribute, element, or the like in connection with various embodiments can be used in all other embodiments set forth herein. Additionally, any methods described herein may be practiced using any device suitable for performing the recited steps.

Claims
  • 1. A ping-based ultrasound transducer probe comprising: an array of ultrasound transducing micro-elements, where each micro-element has a diameter less than 500 microns; a first group of micro-elements electrically connected to a first signal conductor; a second group of micro-elements electrically connected to a second signal conductor, the second signal conductor being electrically separate from the first signal conductor; and a third group of micro-elements positioned between the first group and the second group, the third group of micro-elements being permanently disconnected from any signal conductors; wherein at least a majority of micro-elements of the array are spaced from adjacent micro-elements by a distance of half of a maximum operating wavelength of the array.
  • 2. The transducer probe of claim 1 wherein each micro-element has a diameter between 25 microns and 200 microns.
  • 3. The transducer probe of claim 1 wherein some of the micro-elements of the first group are differently sized than other micro-elements of the first group, wherein the size of a micro-element corresponds to its fundamental operating frequency.
  • 4. The transducer probe of claim 1 wherein the micro-elements of the first group are connected to a first ground conductor and the micro-elements of the second group are connected to a second ground conductor not electrically connected to the first ground conductor.
  • 5. The transducer probe of claim 1 wherein the first group of micro-elements includes more micro-elements than the second group.
  • 6. The transducer probe of claim 1 wherein the first group of micro-elements collectively forms a dedicated transmit element and the second group of micro-elements collectively forms a dedicated receive element.
  • 7. The transducer probe of claim 1 further comprising a fourth group of micro-elements electrically connected to the first signal conductor by a switch that, when closed, causes the fourth group to form a combined element with the first group.
  • 8. The transducer probe of claim 7, wherein the micro-elements of the fourth group collectively surround the micro-elements of the first group.
  • 9. The transducer probe of claim 7, wherein the fourth group of micro-elements is adjacent to the first group of micro-elements.
  • 10. The transducer probe of claim 7, wherein the combined element has a different shape than the first group alone.
  • 11. The transducer probe of claim 7, wherein the combined element has a shape that is the same as a shape of the first group but a different size.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 62/287,694, filed on Jan. 27, 2016, titled “Ultrasound Imaging Using Apparent Point-Source Transmit Transducer”, and U.S. Provisional Patent Application No. 62/310,482, filed on Mar. 18, 2016, titled “Real-Time Three-Dimensional Ultrasound Imaging” both of which are herein incorporated by reference in their entirety.

7497830 Li Mar 2009 B2
7510529 Chou et al. Mar 2009 B2
7514851 Wilser et al. Apr 2009 B2
7549962 Dreschel et al. Jun 2009 B2
7574026 Rasche et al. Aug 2009 B2
7625343 Cao et al. Dec 2009 B2
7637869 Sudol Dec 2009 B2
7668583 Fegert et al. Feb 2010 B2
7674228 Williams et al. Mar 2010 B2
7682311 Simopoulos et al. Mar 2010 B2
7699776 Walker et al. Apr 2010 B2
7722541 Cai May 2010 B2
7744532 Ustuner et al. Jun 2010 B2
7750311 Daghighian Jul 2010 B2
7764984 Desmedt et al. Jul 2010 B2
7785260 Umemura et al. Aug 2010 B2
7787680 Ahn et al. Aug 2010 B2
7806828 Stringer Oct 2010 B2
7819810 Stringer et al. Oct 2010 B2
7822250 Yao et al. Oct 2010 B2
7824337 Abe et al. Nov 2010 B2
7833163 Cai Nov 2010 B2
7837624 Hossack et al. Nov 2010 B1
7846097 Jones et al. Dec 2010 B2
7850613 Stribling Dec 2010 B2
7862508 Davies et al. Jan 2011 B2
7876945 Lötjönen Jan 2011 B2
7880154 Otto Feb 2011 B2
7887486 Ustuner et al. Feb 2011 B2
7901358 Mehi et al. Mar 2011 B2
7914451 Davies Mar 2011 B2
7919906 Cerofolini Apr 2011 B2
7926350 Kröning et al. Apr 2011 B2
7927280 Davidsen Apr 2011 B2
7972271 Johnson et al. Jul 2011 B2
7984637 Ao et al. Jul 2011 B2
7984651 Randall et al. Jul 2011 B2
8002705 Napolitano et al. Aug 2011 B1
8007439 Specht Aug 2011 B2
8057392 Hossack et al. Nov 2011 B2
8057393 Yao et al. Nov 2011 B2
8079263 Randall et al. Dec 2011 B2
8079956 Azuma et al. Dec 2011 B2
8088067 Vortman et al. Jan 2012 B2
8088068 Yao et al. Jan 2012 B2
8088071 Hwang et al. Jan 2012 B2
8105239 Specht Jan 2012 B2
8135190 Bae et al. Mar 2012 B2
8157737 Zhang et al. Apr 2012 B2
8182427 Wu et al. May 2012 B2
8202219 Luo et al. Jun 2012 B2
8265175 Barsoum et al. Sep 2012 B2
8277383 Specht Oct 2012 B2
8279705 Choi et al. Oct 2012 B2
8412307 Willis et al. Apr 2013 B2
8414564 Goldshleger et al. Apr 2013 B2
8419642 Sandrin et al. Apr 2013 B2
8473239 Specht et al. Jun 2013 B2
8478382 Burnside et al. Jul 2013 B2
8483804 Hsieh et al. Jul 2013 B2
8532951 Roy et al. Sep 2013 B2
8582848 Funka-Lea et al. Nov 2013 B2
8602993 Specht et al. Dec 2013 B2
8627724 Papadopoulos et al. Jan 2014 B2
8634615 Brabec Jan 2014 B2
8672846 Napolitano et al. Mar 2014 B2
8684936 Specht Apr 2014 B2
9036887 Fouras et al. May 2015 B2
9072495 Specht Jul 2015 B2
9146313 Specht et al. Sep 2015 B2
9152761 Bhatia et al. Oct 2015 B2
9176078 Flohr et al. Nov 2015 B2
9192355 Specht et al. Nov 2015 B2
9217660 Zlotnick et al. Dec 2015 B2
9220478 Smith et al. Dec 2015 B2
9247874 Kumar et al. Feb 2016 B2
9247926 Smith et al. Feb 2016 B2
9265484 Brewer et al. Feb 2016 B2
9268777 Lu et al. Feb 2016 B2
9271661 Moghari et al. Mar 2016 B2
9277861 Kowal et al. Mar 2016 B2
9282945 Smith et al. Mar 2016 B2
9339239 Wang et al. May 2016 B2
9339256 Specht et al. May 2016 B2
9392986 Ning et al. Jul 2016 B2
9420994 Specht Aug 2016 B2
9510806 Smith et al. Dec 2016 B2
9526475 Specht et al. Dec 2016 B2
9526485 Yang Dec 2016 B2
9572549 Belevich et al. Feb 2017 B2
9576354 Fouras et al. Feb 2017 B2
9582876 Specht Feb 2017 B2
9606206 Boernert et al. Mar 2017 B2
9659152 Mueller May 2017 B2
9706972 Ahn et al. Jul 2017 B1
9775511 Kumar et al. Oct 2017 B2
9901407 Breisacher et al. Feb 2018 B2
10342518 Specht et al. Jul 2019 B2
10380399 Call et al. Aug 2019 B2
20020035864 Paltieli et al. Mar 2002 A1
20020073781 Hashimoto Jun 2002 A1
20020087071 Schmitz et al. Jul 2002 A1
20020111568 Bukshpan Aug 2002 A1
20020138003 Bukshpan Sep 2002 A1
20020161299 Prater et al. Oct 2002 A1
20030013962 Bjaerum et al. Jan 2003 A1
20030028111 Vaezy et al. Feb 2003 A1
20030040669 Grass et al. Feb 2003 A1
20030220554 Grenon Nov 2003 A1
20030228053 Li et al. Dec 2003 A1
20040015079 Berger et al. Jan 2004 A1
20040054283 Corey et al. Mar 2004 A1
20040068184 Trahey et al. Apr 2004 A1
20040100163 Baumgartner et al. May 2004 A1
20040111028 Abe et al. Jun 2004 A1
20040122313 Moore et al. Jun 2004 A1
20040122322 Moore et al. Jun 2004 A1
20040127793 Mendlein et al. Jul 2004 A1
20040138565 Trucco Jul 2004 A1
20040144176 Yoden Jul 2004 A1
20040215075 Zagzebski et al. Oct 2004 A1
20040236217 Cerwin et al. Nov 2004 A1
20040236223 Barnes et al. Nov 2004 A1
20040267132 Podany Dec 2004 A1
20050004449 Mitschke et al. Jan 2005 A1
20050053305 Li et al. Mar 2005 A1
20050054910 Tremblay et al. Mar 2005 A1
20050061536 Proulx Mar 2005 A1
20050090743 Kawashima et al. Apr 2005 A1
20050090745 Steen Apr 2005 A1
20050111846 Steinbacher et al. May 2005 A1
20050113689 Gritzky May 2005 A1
20050113694 Haugen et al. May 2005 A1
20050124883 Hunt Jun 2005 A1
20050131300 Bakircioglu et al. Jun 2005 A1
20050147297 McLaughlin et al. Jul 2005 A1
20050165312 Knowles et al. Jul 2005 A1
20050203404 Freiburger Sep 2005 A1
20050215883 Hundley et al. Sep 2005 A1
20050240125 Makin et al. Oct 2005 A1
20050252295 Fink et al. Nov 2005 A1
20050281447 Moreau-Gobard et al. Dec 2005 A1
20050288588 Weber et al. Dec 2005 A1
20060058664 Barthe et al. Mar 2006 A1
20060062447 Rinck et al. Mar 2006 A1
20060074313 Slayton et al. Apr 2006 A1
20060074315 Liang et al. Apr 2006 A1
20060074320 Yoo et al. Apr 2006 A1
20060079759 Vaillant et al. Apr 2006 A1
20060079778 Mo et al. Apr 2006 A1
20060079782 Beach et al. Apr 2006 A1
20060094962 Clark May 2006 A1
20060111634 Wu May 2006 A1
20060122506 Davies et al. Jun 2006 A1
20060173327 Kim Aug 2006 A1
20060262961 Holsing et al. Nov 2006 A1
20060270934 Savord et al. Nov 2006 A1
20070016022 Blalock et al. Jan 2007 A1
20070016044 Blalock et al. Jan 2007 A1
20070036414 Georgescu et al. Feb 2007 A1
20070055155 Owen et al. Mar 2007 A1
20070073781 Adkins et al. Mar 2007 A1
20070078345 Mo et al. Apr 2007 A1
20070088213 Poland Apr 2007 A1
20070138157 Dane et al. Jun 2007 A1
20070161898 Hao et al. Jul 2007 A1
20070161904 Urbano Jul 2007 A1
20070167752 Proulx et al. Jul 2007 A1
20070167824 Lee et al. Jul 2007 A1
20070232914 Chen et al. Oct 2007 A1
20070238985 Smith et al. Oct 2007 A1
20070242567 Daft et al. Oct 2007 A1
20080110261 Randall et al. May 2008 A1
20080110263 Klessel et al. May 2008 A1
20080112265 Urbano et al. May 2008 A1
20080114241 Randall et al. May 2008 A1
20080114245 Randall et al. May 2008 A1
20080114246 Randall et al. May 2008 A1
20080114247 Urbano et al. May 2008 A1
20080114248 Urbano et al. May 2008 A1
20080114249 Randall et al. May 2008 A1
20080114250 Urbano et al. May 2008 A1
20080114251 Weymer et al. May 2008 A1
20080114252 Randall et al. May 2008 A1
20080114253 Randall et al. May 2008 A1
20080114255 Schwartz et al. May 2008 A1
20080125659 Wilser et al. May 2008 A1
20080181479 Yang et al. Jul 2008 A1
20080183075 Govari et al. Jul 2008 A1
20080188747 Randall et al. Aug 2008 A1
20080188750 Randall et al. Aug 2008 A1
20080194957 Hoctor et al. Aug 2008 A1
20080194958 Lee et al. Aug 2008 A1
20080194959 Wang et al. Aug 2008 A1
20080208061 Halmann Aug 2008 A1
20080242996 Hall et al. Oct 2008 A1
20080249408 Palmeri et al. Oct 2008 A1
20080255452 Entrekin Oct 2008 A1
20080269604 Boctor et al. Oct 2008 A1
20080269613 Summers et al. Oct 2008 A1
20080275344 Glide-Hurst et al. Nov 2008 A1
20080285819 Konofagou et al. Nov 2008 A1
20080287787 Sauer et al. Nov 2008 A1
20080294045 Ellington et al. Nov 2008 A1
20080294050 Shinomura et al. Nov 2008 A1
20080294052 Wilser et al. Nov 2008 A1
20080306382 Guracar et al. Dec 2008 A1
20080306386 Baba et al. Dec 2008 A1
20080319317 Kamiyama et al. Dec 2008 A1
20090010459 Garbini et al. Jan 2009 A1
20090012393 Choi Jan 2009 A1
20090015665 Willsie Jan 2009 A1
20090016163 Freeman et al. Jan 2009 A1
20090018445 Schers et al. Jan 2009 A1
20090024039 Wang et al. Jan 2009 A1
20090036780 Abraham Feb 2009 A1
20090043206 Towfiq et al. Feb 2009 A1
20090048519 Hossack et al. Feb 2009 A1
20090069681 Lundberg et al. Mar 2009 A1
20090069686 Daft et al. Mar 2009 A1
20090069692 Cooley et al. Mar 2009 A1
20090079299 Bradley et al. Mar 2009 A1
20090099483 Rybyanets Apr 2009 A1
20090112095 Daigle Apr 2009 A1
20090131797 Jeong et al. May 2009 A1
20090143680 Yao et al. Jun 2009 A1
20090148012 Altmann et al. Jun 2009 A1
20090150094 Van Velsor et al. Jun 2009 A1
20090182233 Wodnicki Jul 2009 A1
20090182237 Angelsen Jul 2009 A1
20090198134 Hashimoto et al. Aug 2009 A1
20090203997 Ustuner Aug 2009 A1
20090208080 Grau et al. Aug 2009 A1
20090259128 Stribling Oct 2009 A1
20090264760 Lazebnik et al. Oct 2009 A1
20090306510 Hashiba et al. Dec 2009 A1
20090326379 Daigle et al. Dec 2009 A1
20100010354 Skerl et al. Jan 2010 A1
20100016725 Thiele Jan 2010 A1
20100036258 Dietz et al. Feb 2010 A1
20100063397 Wagner Mar 2010 A1
20100063399 Walker et al. Mar 2010 A1
20100069751 Hazard et al. Mar 2010 A1
20100069756 Ogasawara et al. Mar 2010 A1
20100085383 Cohen et al. Apr 2010 A1
20100106431 Baba et al. Apr 2010 A1
20100109481 Buccafusca May 2010 A1
20100121193 Fukukita et al. May 2010 A1
20100121196 Hwang et al. May 2010 A1
20100130855 Lundberg et al. May 2010 A1
20100145195 Hyun Jun 2010 A1
20100168566 Bercoff et al. Jul 2010 A1
20100168578 Garson, Jr. et al. Jul 2010 A1
20100174194 Chiang et al. Jul 2010 A1
20100174198 Young et al. Jul 2010 A1
20100191110 Insana et al. Jul 2010 A1
20100217124 Cooley Aug 2010 A1
20100228126 Emery et al. Sep 2010 A1
20100240994 Zheng Sep 2010 A1
20100249570 Carson et al. Sep 2010 A1
20100249596 Magee Sep 2010 A1
20100256488 Kim et al. Oct 2010 A1
20100262013 Smith et al. Oct 2010 A1
20100266176 Masumoto et al. Oct 2010 A1
20100286525 Osumi Nov 2010 A1
20100286527 Cannon et al. Nov 2010 A1
20100310143 Rao et al. Dec 2010 A1
20100317971 Fan et al. Dec 2010 A1
20100324418 El-Aklouk et al. Dec 2010 A1
20100324423 El-Aklouk et al. Dec 2010 A1
20100329521 Beymer et al. Dec 2010 A1
20110005322 Ustuner Jan 2011 A1
20110016977 Guracar Jan 2011 A1
20110021920 Shafir et al. Jan 2011 A1
20110021923 Daft et al. Jan 2011 A1
20110033098 Richter et al. Feb 2011 A1
20110044133 Tokita Feb 2011 A1
20110066030 Yao Mar 2011 A1
20110098565 Masuzawa Apr 2011 A1
20110112400 Emery et al. May 2011 A1
20110112404 Gourevitch May 2011 A1
20110125017 Ramamurthy et al. May 2011 A1
20110178441 Tyler Jul 2011 A1
20110270088 Shiina Nov 2011 A1
20110301470 Sato et al. Dec 2011 A1
20110306886 Daft et al. Dec 2011 A1
20110319764 Okada et al. Dec 2011 A1
20120004545 Ziv-Ari et al. Jan 2012 A1
20120035482 Kim et al. Feb 2012 A1
20120036934 Kröning et al. Feb 2012 A1
20120085173 Papadopoulos et al. Apr 2012 A1
20120095347 Adam et al. Apr 2012 A1
20120101378 Lee Apr 2012 A1
20120114210 Kim et al. May 2012 A1
20120121150 Murashita May 2012 A1
20120137778 Kitazawa et al. Jun 2012 A1
20120140595 Amemiya Jun 2012 A1
20120141002 Urbano et al. Jun 2012 A1
20120165670 Shi et al. Jun 2012 A1
20120179044 Chiang et al. Jul 2012 A1
20120226201 Clark et al. Sep 2012 A1
20120235998 Smith-Casem et al. Sep 2012 A1
20120243763 Wen et al. Sep 2012 A1
20120253194 Tamura Oct 2012 A1
20120265075 Pedrizzetti et al. Oct 2012 A1
20120277585 Koenig et al. Nov 2012 A1
20130070062 Fouras et al. Mar 2013 A1
20130076207 Krohn et al. Mar 2013 A1
20130079639 Hoctor et al. Mar 2013 A1
20130083628 Qiao et al. Apr 2013 A1
20130088122 Krohn et al. Apr 2013 A1
20130116561 Rothberg et al. May 2013 A1
20130128702 Degertekin May 2013 A1
20130131516 Katsuyama May 2013 A1
20130144165 Ebbini et al. Jun 2013 A1
20130144166 Specht et al. Jun 2013 A1
20130204136 Duric et al. Aug 2013 A1
20130204137 Roy et al. Aug 2013 A1
20130253325 Call et al. Sep 2013 A1
20130258805 Hansen et al. Oct 2013 A1
20130261463 Chiang et al. Oct 2013 A1
20140058266 Call et al. Feb 2014 A1
20140073921 Specht et al. Mar 2014 A1
20140086014 Kobayashi Mar 2014 A1
20140147013 Shandas et al. May 2014 A1
20140243673 Anand et al. Aug 2014 A1
20150045668 Smith et al. Feb 2015 A1
20150080727 Specht et al. Mar 2015 A1
20150172878 Luna et al. Jun 2015 A1
20150224346 Coviello Aug 2015 A1
20160095579 Smith et al. Apr 2016 A1
20160135783 Brewer et al. May 2016 A1
20160157833 Smith et al. Jun 2016 A1
20160256134 Specht et al. Sep 2016 A1
20160354059 Specht Dec 2016 A1
20170074982 Smith et al. Mar 2017 A1
20170079621 Specht et al. Mar 2017 A1
20170119352 Anand et al. May 2017 A1
20180049717 Adam et al. Feb 2018 A1
20180153511 Specht et al. Jun 2018 A1
20180279991 Call et al. Oct 2018 A1
20190008487 Belevich et al. Jan 2019 A1
20190021697 Specht et al. Jan 2019 A1
20190083058 Specht Mar 2019 A1
20190175152 Smith et al. Jun 2019 A1
20190200961 Specht et al. Jul 2019 A1
20190328367 Specht et al. Oct 2019 A1
20190370522 Call et al. Dec 2019 A1
Foreign Referenced Citations (135)
Number Date Country
1535243 Oct 2004 CN
1781460 Jun 2006 CN
101103927 Jan 2008 CN
101116622 Feb 2008 CN
101190134 Jun 2008 CN
101453955 Jun 2009 CN
100545650 Sep 2009 CN
101609150 Dec 2009 CN
101843501 Sep 2010 CN
101912278 Dec 2010 CN
102018533 Apr 2011 CN
102112047 Jun 2011 CN
102123668 Jul 2011 CN
102599930 Jul 2012 CN
102011114333 Mar 2013 DE
1949856 Jul 2008 EP
2058796 May 2009 EP
2101191 Sep 2009 EP
2182352 May 2010 EP
2187813 May 2010 EP
2198785 Jun 2010 EP
1757955 Nov 2010 EP
2325672 May 2011 EP
1462819 Jul 2011 EP
2356941 Aug 2011 EP
1979739 Oct 2011 EP
2385391 Nov 2011 EP
2294400 Feb 2012 EP
2453256 May 2012 EP
1840594 Jun 2012 EP
2514368 Oct 2012 EP
1850743 Dec 2012 EP
1594404 Sep 2013 EP
2026280 Oct 2013 EP
2851662 Aug 2004 FR
49-11189 Jan 1974 JP
54-44375 Apr 1979 JP
S55103839 Aug 1980 JP
57-31848 Feb 1982 JP
58-223059 Dec 1983 JP
59-101143 Jun 1984 JP
59-174151 Oct 1984 JP
60-13109 Jan 1985 JP
60-68836 Apr 1985 JP
01164354 Jun 1989 JP
02501431 May 1990 JP
03015455 Jan 1991 JP
03126443 May 1991 JP
04017842 Jan 1992 JP
04067856 Mar 1992 JP
05042138 Feb 1993 JP
06125908 May 1994 JP
06254092 Sep 1994 JP
07051266 Feb 1995 JP
070204201 Aug 1995 JP
08154930 Jun 1996 JP
08252253 Oct 1996 JP
09103429 Apr 1997 JP
09201361 Aug 1997 JP
2777197 May 1998 JP
10216128 Aug 1998 JP
11089833 Apr 1999 JP
11239578 Sep 1999 JP
2001507794 Jun 2001 JP
2001245884 Sep 2001 JP
2002209894 Jul 2002 JP
2002253548 Sep 2002 JP
2002253549 Sep 2002 JP
2003235839 Aug 2003 JP
2004167092 Jun 2004 JP
2004215987 Aug 2004 JP
2004337457 Dec 2004 JP
2004340809 Dec 2004 JP
2004351214 Dec 2004 JP
2005046192 Feb 2005 JP
2005152187 Jun 2005 JP
2005523792 Aug 2005 JP
2005526539 Sep 2005 JP
2006051356 Feb 2006 JP
200661203 Mar 2006 JP
2006122657 May 2006 JP
2006130313 May 2006 JP
2006204923 Aug 2006 JP
2007325937 Dec 2007 JP
2008122209 May 2008 JP
2008513763 May 2008 JP
2008515557 May 2008 JP
2008132342 Jun 2008 JP
2008522642 Jul 2008 JP
2008259541 Oct 2008 JP
2008279274 Nov 2008 JP
2008307087 Dec 2008 JP
2009240667 Oct 2009 JP
2010-5375 Jan 2010 JP
2010124842 Jun 2010 JP
2010526626 Aug 2010 JP
2011529362 Dec 2011 JP
2013121493 Jun 2013 JP
2014087448 May 2014 JP
100715132 Apr 2007 KR
1020080044737 May 2008 KR
1020090103408 Oct 2009 KR
WO9218054 Oct 1992 WO
WO9800719 Jan 1998 WO
WO0164109 Sep 2001 WO
WO0208459 Oct 2002 WO
WO2005009245 Feb 2005 WO
WO2006114735 Nov 2006 WO
WO2007127147 Nov 2007 WO
WO2008097479 Aug 2008 WO
WO2009060182 May 2009 WO
WO2010095094 Aug 2010 WO
WO2010137453 Dec 2010 WO
WO2010139519 Dec 2010 WO
WO2011004661 Jan 2011 WO
WO2011057252 May 2011 WO
WO2011064688 Jun 2011 WO
WO2011100697 Aug 2011 WO
WO2011123529 Oct 2011 WO
WO2012028896 Mar 2012 WO
WO2012049124 Apr 2012 WO
WO2012049612 Apr 2012 WO
WO2012078639 Jun 2012 WO
WO2012091280 Jul 2012 WO
WO2012112540 Aug 2012 WO
WO2012131340 Oct 2012 WO
WO2012160541 Nov 2012 WO
WO2013059358 Apr 2013 WO
WO2013109965 Jul 2013 WO
WO2013116807 Aug 2013 WO
WO2013116809 Aug 2013 WO
WO2013116851 Aug 2013 WO
WO2013116854 Aug 2013 WO
WO2013116866 Aug 2013 WO
WO2013128301 Sep 2013 WO
Non-Patent Literature Citations (55)
Entry
Call et al.; U.S. Appl. No. 15/495,591 entitled “Systems and methods for improving ultrasound image quality by applying weighting factors,” filed Apr. 24, 2017.
Abeysekera et al.; Alignment and calibration of dual ultrasound transducers using a wedge phantom; Ultrasound in Medicine and Biology; 37(2); pp. 271-279; Feb. 2011.
Arigovindan et al.; Full motion and flow field recovery from echo doppler data; IEEE Transactions on Medical Imaging; 26(1); pp. 31-45; Jan. 2007.
Capineri et al.; A doppler system for dynamic vector velocity maps; Ultrasound in Medicine & Biology; 28(2); pp. 237-248; Feb. 28, 2002.
Carson et al.; Measurement of photoacoustic transducer position by robotic source placement and nonlinear parameter estimation; Biomedical Optics (BiOS); International Society for Optics and Photonics (9th Conf. on Biomedical Thermoacoustics, Optoacoustics, and Acousto-optics); vol. 6856; 9 pages; Feb. 28, 2008.
Chen et al.; Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field; IEEE Transactions on Signal Processing; 50(8); pp. 1843-1854; Aug. 2002.
Chen et al.; Source localization and tracking of a wideband source using a randomly distributed beamforming sensor array; International Journal of High Performance Computing Applications; 16(3); pp. 259-272; Fall 2002.
Cristianini et al.; An Introduction to Support Vector Machines; Cambridge University Press; pp. 93-111; Mar. 2000.
Dunmire et al.; A brief history of vector doppler; Medical Imaging 2001; International Society for Optics and Photonics; pp. 200-214; May 30, 2001.
Du et al.; User parameter free approaches to multistatic adaptive ultrasound imaging; 5th IEEE International Symposium; pp. 1287-1290, May 2008.
Feigenbaum, Harvey, M.D.; Echocardiography; Lippincott Williams & Wilkins; Philadelphia; 5th Ed.; pp. 482, 484; Feb. 1994.
Fernandez et al.; High resolution ultrasound beamforming using synthetic and adaptive imaging techniques; Proceedings IEEE International Symposium on Biomedical Imaging; Washington, D.C.; pp. 433-436; Jul. 7-10, 2002.
Gazor et al.; Wideband multi-source beamforming with array location calibration and direction finding; Conference on Acoustics, Speech and Signal Processing ICASSP-95; Detroit, MI; vol. 3 IEEE; pp. 1904-1907; May 9-12, 1995.
Haykin, Simon; Neural Networks: A Comprehensive Foundation (2nd Ed.); Prentice Hall; pp. 156-187; Jul. 16, 1998.
Heikkila et al.; A four-step camera calibration procedure with implicit image correction; Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition; San Juan; pp. 1106-1112; Jun. 17-19, 1997.
Hendee et al.; Medical Imaging Physics; Wiley-Liss, Inc. 4th Edition; Chap. 19-22; pp. 303-353; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) © 2002.
Hsu et al.; Real-time freehand 3D ultrasound calibration; CUED/F-INFENG/TR 565; Department of Engineering, University of Cambridge, United Kingdom; 14 pages; Sep. 2006.
Jeffs; Beamforming: a brief introduction; Brigham Young University; 14 pages; retrieved from the internet (http://ens.ewi.tudelft.nl/Education/courses/et4235/Beamforming.pdf); Oct. 2004.
Khamene et al.; A novel phantom-less spatial and temporal ultrasound calibration method; Medical Image Computing and Computer-Assisted Intervention—MICCAI (Proceedings 8th Int. Conf.); Springer Berlin Heidelberg; Palm Springs, CA; pp. 65-72; Oct. 26-29, 2005.
Kramb et al.; Considerations for using phased array ultrasonics in a fully automated inspection system. Review of Quantitative Nondestructive Evaluation, 2004 Edition, ed. D. O. Thompson and D. E. Chimenti, American Inst. of Physics, pp. 817-825, Mar. 2004.
Ledesma-Carbayo et al.; Spatio-temporal nonrigid registration for ultrasound cardiac motion estimation; IEEE Trans. On Medical Imaging; vol. 24; No. 9; Sep. 2005.
Leotta et al.; Quantitative three-dimensional echocardiography by rapid imaging . . . ; J American Society of Echocardiography; vol. 10; No. 8; pp. 830-839; Oct. 1997.
Li et al.; An efficient speckle tracking algorithm for ultrasonic imaging; 24; pp. 215-228; Oct. 1, 2002.
Morrison et al.; A probabilistic neural network based image segmentation network for magnetic resonance images; Proc. Conf. Neural Networks; Baltimore, MD; vol. 3; pp. 60-65; Jun. 1992.
Nadkarni et al.; Cardiac motion synchronization for 3D cardiac ultrasound imaging; Ph.D. Dissertation, University of Western Ontario; Jun. 2002.
Opretzka et al.; A high-frequency ultrasound imaging system combining limited-angle spatial compounding and model-based synthetic aperture focusing; IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, IEEE, US; 58(7); pp. 1355-1365; Jul. 2, 2011.
Press et al.; Cubic spline interpolation; §3.3 in “Numerical Recipes in FORTRAN: The Art of Scientific Computing”, 2nd Ed.; Cambridge, England; Cambridge University Press; pp. 107-110; Sep. 1992.
Saad et al.; Computer vision approach for ultrasound doppler angle estimation; Journal of Digital Imaging; 22(6); pp. 681-688; Dec. 1, 2009.
Sakas et al.; Preprocessing and volume rendering of 3D ultrasonic data; IEEE Computer Graphics and Applications; pp. 47-54, Jul. 1995.
Sapia et al.; Deconvolution of ultrasonic waveforms using an adaptive wiener filter; Review of Progress in Quantitative Nondestructive Evaluation; vol. 13A; Plenum Press; pp. 855-862; Jan. 1994.
Sapia et al.; Ultrasound image deconvolution using adaptive inverse filtering; 12 IEEE Symposium on Computer-Based Medical Systems, CBMS, pp. 248-253; Jun. 1999.
Sapia, Mark Angelo; Multi-dimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMs) inverse filtering; Ph.D. Dissertation; University of Connecticut; Jan. 2000.
Slavine et al.; Construction, calibration and evaluation of a tissue phantom with reproducible optical properties for investigations in light emission tomography; Engineering in Medicine and Biology Workshop; Dallas, TX; IEEE pp. 122-125; Nov. 11-12, 2007.
Smith et al.; High-speed ultrasound volumetric imaging system. 1. Transducer design and beam steering; IEEE Trans. Ultrason., Ferroelect., Freq. Contr.; vol. 38; pp. 100-108; Mar. 1991.
Specht et al.; Deconvolution techniques for digital longitudinal tomography; SPIE; vol. 454; presented at Application of Optical Instrumentation in Medicine XII; pp. 319-325; Jun. 1984.
Specht et al.; Experience with adaptive PNN and adaptive GRNN; Proc. IEEE International Joint Conf. on Neural Networks; vol. 2; pp. 1203-1208; Orlando, FL; Jun. 1994.
Specht, D.F.; A general regression neural network; IEEE Trans. On Neural Networks; vol. 2.; No. 6; Nov. 1991.
Specht, D.F.; Blind deconvolution of motion blur using LMS inverse filtering; Lockheed Independent Research (unpublished); Jun. 23, 1975.
Specht, D.F.; Enhancements to probabilistic neural networks; Proc. IEEE International Joint Conf. on Neural Networks; Baltimore, MD; Jun. 1992.
Specht, D.F.; GRNN with double clustering; Proc. IEEE International Joint Conf. Neural Networks; Vancouver, Canada; Jul. 16-21, 2006.
Specht, D.F.; Probabilistic neural networks; Pergamon Press; Neural Networks; vol. 3; pp. 109-118; Feb. 1990.
UCLA Academic Technology; SPSS learning module: How can I analyze a subset of my data; 6 pages; retrieved from the internet (http://www.ats.ucla.edu/stat/spss/modules/subset_analyze.htm) Nov. 26, 2001.
Urban et al.; Implementation of vibro-acoustography on a clinical ultrasound system; IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control; 58(6); pp. 1169-1181 (Author Manuscript, 25 pgs.); Jun. 2011.
Urban et al.; Implementation of vibro-acoustography on a clinical ultrasound system; IEEE Ultrasonics Symposium (IUS); pp. 326-329; Oct. 14, 2010.
Von Ramm et al.; High-speed ultrasound volumetric imaging system. 2. Parallel processing and image display; IEEE Trans. Ultrason., Ferroelect., Freq. Contr.; vol. 38; pp. 109-115; Mar. 1991.
Wang et al.; Photoacoustic tomography of biological tissues with high cross-section resolution: reconstruction and experiment; Medical Physics; 29(12); pp. 2799-2805; Dec. 2002.
Wells, P.N.T.; Biomedical ultrasonics; Academic Press; London, New York, San Francisco; pp. 124-125; Mar. 1977.
Widrow et al.; Adaptive signal processing; Prentice-Hall; Englewood Cliffs, NJ; pp. 99-116; Mar. 1985.
Wikipedia; Point cloud; 2 pages; retrieved Nov. 24, 2014 from the internet (https://en.wikipedia.org/w/index.php?title=Point_cloud&oldid=472583138).
Wikipedia; Curve fitting; 5 pages; retrieved from the internet (http://en.wikipedia.org/wiki/Curve_fitting) Dec. 19, 2010.
Wikipedia; Speed of sound; 17 pages; retrieved from the internet (http://en.wikipedia.org/wiki/Speed_of_sound) Feb. 15, 2011.
Yang et al.; Time-of-arrival calibration for improving the microwave breast cancer imaging; 2011 IEEE Topical Conf. on Biomedical Wireless Technologies, Networks, and sensing Systems (BioWireleSS); Phoenix, AZ; pp. 67-70; Jan. 16-19, 2011.
Zang et al.; A high-frequency high frame rate duplex ultrasound linear array imaging system for small animal imaging; IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control; 57(7); pp. 1548-1567; Jul. 2010.
Belevich et al.; U.S. Appl. No. 15/400,826 entitled “Calibration of multiple aperture ultrasound probes,” filed Jan. 6, 2017.
Call et al.; U.S. Appl. No. 15/500,933 entitled “Network-based ultrasound imaging system,” filed Feb. 1, 2017.
Related Publications (1)
Number Date Country
20170209121 A1 Jul 2017 US
Provisional Applications (2)
Number Date Country
62310482 Mar 2016 US
62287694 Jan 2016 US