The invention relates to the field of imaging and more particularly to a novel architecture and method for capturing data for forming a 3D ultrasound dataset.
Approximately 90% of trauma deaths occur in an accident zone prior to medical or surgical intervention. Historically this was the inevitable result of untreatable massive injury, but advances in medicine now allow many traumatic injuries to be treated when they are diagnosed and addressed early enough.
Unfortunately, the lack of intelligent diagnostic tools that are capable of providing rapid and accurate diagnosis of non-visible internal injuries is the major challenge facing medical personnel, especially in mass casualty situations, under-served regions, and far forward operations within the defense sector. To this day, there is no portable system that provides relevant image data and automated diagnostic tools for use at the site of a trauma. Such a system with minimal training requirements should be capable of detecting life threatening injuries within the so-called “golden hour” of trauma diagnosis. The stress, commotion and the non-specific signs—symptoms—of trauma and the variability of patient reactions to injury result in frequently unreliable physical examinations in trauma settings. This in turn has been known to lead to catastrophic results.
Ultrasound has been widely used for medical diagnosis in hospitals and doctors' offices. “Pre-hospital” ultrasound has been used in emergency ambulances and helicopters, mainly in North America and Central Europe, since the late 1990s. The use of “in field” ultrasound has also been considered for mass casualty incidents. In all cases its use is restricted to specialists or well-trained staff, who are not available in most emergency crews. Although ultrasound systems with a high degree of mobility—smart-phone-sized, PDA-based systems, e.g. VSCAN or Signos—are available, these systems have limited diagnostic utility, offer no automated diagnosis options, and cannot be used reliably by first responders. In practice, experienced first responders rely on their long-term experience and knowledge during triage. More specifically, to find a relevant internal anatomical region using 2D ultrasound imaging, placement of an ultrasonic probe and assessment of a 2D image—a B-scan image—require training and long-term experience. This is simply not possible in typical paramedic situations.
In addition to the civilian emergency care market, both the US and Canadian Armed Forces desire a field-deployable compact 4D (3D+time) ultrasound imaging system capable of providing rapid diagnosis and triage of non-visible internal injuries. The need for such a system has been identified by both the Canadian Forces Surgeon General and the US Army Director of the Combat Casualty Care Program. NATO allies, the U.K. and Germany, have issued similar requirements. Unfortunately, a portable and field-operable real-time 3D ultrasound imaging system is unavailable.
It would be advantageous to provide a portable trauma detection system for use by first responders.
In accordance with the invention there is provided a method comprising: firing a multidimensional array of ultrasound transducers comprising N transducers at a target; sensing first reflected signals with at least some of the N transducers to produce first output signals; providing the first output signals from only a first portion of the N transducers less than all the N transducers for storage and adaptive beamforming; firing the multidimensional array of ultrasound transducers comprising N transducers at the target another time to produce second reflected signals; sensing the second reflected signals with at least some of the N transducers to produce second output signals; and providing the second output signals from only a second portion of the N transducers less than all the N transducers and different from the first portion for storage and adaptive beamforming.
In some embodiments, the first reflected signals are digitized to form the first output signals and wherein the second reflected signals are digitized to form the second output signals.
In some embodiments, the method comprises: performing adaptive beamforming on data derived from the first output signals and from the second output signals to determine a sensed topography; and displaying an image of the sensed topography.
In some embodiments, the method comprises: firing the multidimensional array of ultrasound transducers comprising N transducers at the target another time; sensing third reflected signals with at least some of the N transducers to produce third output signals; and providing the third output signals from only a third portion of the N transducers less than all the N transducers and different from the first portion and different from the second portion for storage and adaptive beamforming.
In some embodiments, the first reflected signals are digitized to form the first output signals, wherein the second reflected signals are digitized to form the second output signals, and wherein the third reflected signals are digitized to form the third output signals.
In some embodiments, the method comprises: performing adaptive beamforming on data derived from the first output signals, from the second output signals, and from the third output signals to determine a sensed topography; and displaying an image of the sensed topography.
In some embodiments, firing the multidimensional array of ultrasound transducers comprises forming a same illumination pattern with the multidimensional array of ultrasound transducers each time.
In some embodiments, the illumination pattern is conical.
In some embodiments, the illumination pattern is formed by beam steering and wherein each of the output signals used in adaptive beam forming for same output image data are captured relying upon a same illumination pattern.
In some embodiments, groups of the multidimensional array of ultrasound transducers are coupled via a multiplexer and addressed simultaneously, the multiplexing allowing the entire multidimensional array of ultrasound transducers to be read in n successive operations by incrementing the multiplexer addressing between operations.
In some embodiments, each multiplexer addresses four different ultrasound transducer elements of the multidimensional array of ultrasound transducers.
In some embodiments, adaptive beam forming is performed on an image comprising data from all of the transducer elements within the multidimensional array of ultrasound transducers.
In some embodiments, adaptive beamforming is performed in reliance upon two previously captured images, each of the previously captured images captured relying on a same illumination pattern.
In some embodiments, adaptive beam forming is performed in the frequency domain.
In accordance with the invention there is provided an ultrasound system comprising: a multidimensional transducer array comprising a plurality of ultrasound transducer elements arranged in an array, each of the plurality N of ultrasound transducers arranged for transmitting a beam steered signal together, and each of the plurality of transducers coupled to a multiplexer for switching between n of the plurality of ultrasound transducers such that there are at least N/n multiplexers; the multiplexers coupled for providing an information output signal from a selected one of the n transducers coupled therewith in response to a selection signal, each multiplexer for selecting between n transducers coupled therewith such that sampling of the information output signal from the N transducers is performed in n operations.
In some embodiments, each of the plurality of ultrasound transducers is arranged for transmitting a steered signal simultaneously and only N/n ultrasound transducers are for being read simultaneously.
In some embodiments, the multiplexers are each a 4:1 multiplexer.
In some embodiments, the system comprises an analog-to-digital converter coupled to an output port of each of the at least N/n multiplexers.
In accordance with the invention there is provided a method comprising: providing a multidimensional array of ultrasound transducers comprising a plurality of ultrasound transducer elements for transmitting ultrasound signals and for sensing reflected signals, the ultrasound transducer elements addressable in parallel for providing transmit signals to support beam steering and addressable in at least three sets of ultrasound transducer elements fewer than all the ultrasound transducer elements for transferring received reflected signal information therefrom; providing at least a signal for forming a steered beam transmitted from all of the ultrasound transducer elements within the array; and providing an address for addressing one set of the at least three sets to read received signal information therefrom.
In accordance with the invention there is provided a method comprising: firing a multidimensional array of ultrasound transducers comprising N transducers at a target; sensing first reflected signals with at least some of the N transducers to produce first output signals; digitizing the first output signals to produce digitized first output signals; transmitting some of the first output signals to a first processing circuit for performing Csteer processing; transmitting others of the first output signals to a second other processing circuit for performing Csteer processing; transmitting an output signal from the Csteer from each of the first processing circuit and the second processing circuit to a third processor for performing Rsteer processing; and transmitting an output signal from the Csteer from each of the first processing circuit and the second processing circuit to a fourth other processor for performing Rsteer processing.
In accordance with the invention there is provided a method comprising: capturing first received ultrasound image information from a first set of ultrasound transducer elements less than all the ultrasound transducer elements within a multidimensional ultrasound transducer array; constructing a first superset comprising information from each of the ultrasound transducer elements within the multidimensional ultrasound transducer array and including the captured first received ultrasound image information, some of the information captured previously; performing adaptive beamforming on the first superset; capturing second received ultrasound image information from a second set of ultrasound transducer elements less than all the ultrasound transducer elements within a multidimensional ultrasound transducer array and different from the first set; constructing a second superset comprising information from each of the ultrasound transducer elements within a multidimensional ultrasound transducer array including the second received ultrasound image information, some of the information captured previously; and performing adaptive beamforming on the second superset.
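For illustration only, the superset construction just described can be sketched in a few lines; the array size, the partition into four element sets, and the buffer layout in this fragment are assumptions chosen for the example and are not limitations of the embodiment.

```python
import numpy as np

N_ELEMENTS = 1024          # assumed 32x32 planar array
N_SETS = 4                 # assumed number of receive element sets
SET_SIZE = N_ELEMENTS // N_SETS
N_SAMPLES = 2048           # assumed samples per receive event

# Rolling buffer holding the most recent capture from every element set.
superset = np.zeros((N_ELEMENTS, N_SAMPLES), dtype=np.float32)

def update_superset(set_index, new_capture):
    """Overwrite the slice belonging to one element set with fresh data.

    new_capture: (SET_SIZE, N_SAMPLES) array from the set that was just read.
    After the update the superset again holds one trace per element, some of
    it captured previously, and is ready for adaptive beamforming.
    """
    rows = slice(set_index * SET_SIZE, (set_index + 1) * SET_SIZE)
    superset[rows, :] = new_capture
    return superset
```

After each partial capture, the buffer thus holds one trace per element—the newest set together with the most recently captured remaining sets—so adaptive beamforming can be performed on a full-aperture superset each time.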
In accordance with yet another embodiment of the invention there is provided an ultrasound system comprising: a plurality of ultrasound transducers arranged in an array, each of the plurality of ultrasound transducers arranged for transmitting a signal together, and a first group of the plurality of ultrasound transducers arranged for receiving a reflected signal at a first time and a second group of the plurality of ultrasound transducers arranged for receiving a reflected signal at a second time, the first group different from the second group; a multiplexer for switching between the first group of the plurality of ultrasound transducers and the second group of the plurality of ultrasound transducers for providing information therefrom to a digitizing circuit for providing digitized sensed data; a processor configured for: assembling the digitized sensed information from the first group of the plurality of ultrasound transducers and the digitized sensed information from the second group of the plurality of ultrasound transducers to form a digitized sensed data set; processing the digitized sensed data set by performing adaptive beamforming on the digitized sensed data set to provide image data; and displaying the image data.
In some embodiments, displaying the image data is performed in real time.
In accordance with yet another embodiment of the invention there is provided a method comprising: firing an array of sensors comprising N sensors at a target; sensing reflected signals with at least some of the N sensors; digitizing the first output of at least some of the N sensors; converting the digitized first output from each of the at least some sensors into the frequency domain; providing the converted digital output from only a first portion of the N sensors less than all the N sensors for storage and adaptive beamforming; firing the array of sensors comprising N sensors at the target; sensing reflected signals with at least others of the N sensors; digitizing the second output of at least the others of the N sensors; providing the digital output from only a second portion of the N sensors less than all the N sensors and different from the first portion for storage and adaptive beamforming; performing adaptive beamforming on the data; and displaying an image of the sensed topography within the sensors' range.
In some embodiments, displaying the image data is performed in real time.
Exemplary embodiments of the invention will now be described in conjunction with the following drawings, wherein similar reference numerals denote similar elements throughout the several views, in which:
The following description is presented to enable a person skilled in the art to make and use the invention and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the embodiments disclosed but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Object under test (OUT): An object under test is any object, objects, and/or body that is being imaged using ultrasound imaging techniques.
Two-dimensional (2D): Two-dimensional implies that a result is traversable along each of two separate dimensions. Many different two-dimensional coordinate systems are known including Cartesian co-ordinates, radial co-ordinates, etc.
Three-dimensional (3D): Three-dimensional implies that a result is traversable along each of three separate dimensions. Many different three-dimensional coordinate systems are known including Cartesian co-ordinates (along each of three orthogonal vectors), cylindrical co-ordinates, conical coordinates, etc.
Four-dimensional (4D): Four-dimensional implies that a result is traversable along each of three separate dimensions and time, providing a three-dimensional image for each of a series of instances in time.
No portable ultrasound systems are presently available that provide three-dimensional imaging in sequence, thereby supporting four dimensions—three spatial dimensions and time. Further, there is no system that addresses portable, self-contained, rugged, field-operable equipment for civilian needs and also for military needs at a front line of combat operations.
Available portable ultrasound systems provide two-dimensional images. Unfortunately, two-dimensional images are quite limited in their representation of internal scanned objects and, as such, require a trained medical professional to mentally integrate multiple images to develop a three-dimensional impression of internal scanned objects. Not only does two-dimensional ultrasound imaging require significant training to be useful, but it is also time-consuming and inefficient, making it very poorly suited to use in war zones and at the scene of traumatic events. Making a system that involves less human concentration and less time is important for triage type applications. Making a system that requires less training is important for wider adoption and deployment, as well as for versatility and scalability of system utilization.
Problematically, prior art solutions to 3D volumetric imaging are slow and require extensive hardware that is not portable in nature. The resulting systems are ill suited to mobile applications such as triage, both in civilian settings and in military combat situations. Further, miniaturization of these systems, if it is possible at all, would only make them slower and even more poorly suited to battlefield use.
Three-dimensional ultrasound imaging systems have three components: an image acquisition circuit, a data reconstruction circuit for constructing a three-dimensional or four dimensional image, and a display circuit for displaying the resulting image data. The three-dimensional image reconstruction process is achievable by a mechanical scanning technique as depicted in
For mechanical scanning, a linear ultrasound transducer 110 is mounted for being moved by a motor 120 as shown in the top two images 101 and 102 of
Three-dimensional image acquisition processes for matrix planar array ultrasound probes are technically more challenging. Deployed planar arrays include a large number of transducers, typically in the range of 64×64=4096 elements; however, processing and architectural limitations result in only approximately 8×8=64 elements being used for a 3D beamformer. Thus, the array gain of a small sub-array of typically 8×8=64 elements is reduced by approximately 10×log10(64)≈18 dB compared with the full array gain that would have been provided by the full planar array of 64×64=4096 elements, had it been usable.
For the sake of simplicity and without any loss of generality, the three-dimensional ultrasound beamformer coherently processes a received signal from only 8×8=64 elements or 4×4=16 elements, which is a sub-aperture of the 64×64=4096 elements within the planar array shown at 200 in
When an active transmission is completed, the receiving 8×8=64-element sub-aperture is shifted to the left or right by a few elements. Thus, to make use of all the 4096 elements of the deployed probe, the 8×8=64-element beamforming process is repeated at least 32 to 64 times, generating numerous beams at numerous times. For a 4×4=16 sub-aperture, there are at least 4 times as many beamforming processes as for the 8×8 sub-aperture, resulting in 128-256 imaging operations.
As a result, resulting angular resolution characteristics of reconstructed image data are defined by array gain of the three-dimensional beamformer of the 8×8=64 element sub-aperture as opposed to by the 64×64=4096-element planar array.
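For clarity, the arithmetic behind the gain and repetition figures above can be checked with a few lines; the non-overlapping shift pattern assumed here is only an illustration, since the description allows various shift patterns.

```python
import math

full_elements = 64 * 64   # 4096-element planar array
sub_8x8 = 8 * 8           # 64-element receive sub-aperture
sub_4x4 = 4 * 4           # 16-element receive sub-aperture

# Array gain scales with the number of coherently combined elements, so the
# sub-aperture loses roughly this much relative to the full array:
loss_db = 10 * math.log10(full_elements / sub_8x8)
print(f"{loss_db:.1f} dB")            # ~18.1 dB, as stated above

# Non-overlapping sub-aperture positions needed to cover every element
# (the 32-64 range quoted in the text depends on the shift pattern used):
print(full_elements // sub_8x8)       # 64 positions for the 8x8 case
print(full_elements // sub_4x4)       # 256 positions for the 4x4 case
```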
The current solution does not support a mobile, light-weight, easy-to-use, easy-to-operate trauma detection and assessment system. Instead, it requires significant hardware and complexity and, owing to that complexity and its performance, is ill suited to battlefield and triage applications.
By recording a three-dimensional volumetric region with deep penetration (i.e. 24 cm) and wide 3D angle coverage of 80°×80°, relevant anatomical structures of interest will be included in the volumetric image; locating an anatomical structure therefore becomes convenient, and the probability that such a structure is missed by a paramedic with knowledge of human anatomy will be very low. Moreover, by enhancing image quality and including a temporal component in the sample volume—a time line of successive images—a rapid diagnostic assessment in remote areas becomes more reliable and robust, even without specialized training in system operation.
Alternatively, as shown in
In the present embodiment, the following elements are combined to provide a 4D digital ultrasound system.
A two-dimensional Matrix Array Probe forms part of the present embodiment. Though this is a challenging development, transducers are available, such as from Fraunhofer-IBMT (St. Ingbert, Germany), which has built a 32×32 transducer 2D planar array probe that has been integrated with an experimental prototype, detailed hereinbelow. Vermon, in France, has also built a multi-dimensional planar array with 32×32 elements. However, a 16×16 or 64×64 planar array is also suitable for the embodiment.
Preferably, the planar array incorporates the following structures in order to result in substantial miniaturization and improved simplicity of the remaining system architecture.
A multiplexer (MUX) for multiplexing planar array elements within the probe. In the preferred embodiment, as shown in
In an exemplary embodiment with a 32×32=1024 planar array, 1024 Analog to Digital Conversion (ADC) operations are performed. Because of the inclusion of a MUX for addressing transducers, the ADC operations are performed in four (4) sequential sets of operations, thereby relying upon 256 ADC circuits, each circuit performing 4 analog-to-digital conversions in series—one after another. In
The four (4) 16×16 (256) groups of transducers 701 coupled to each MUX 702 are capable of active transmission of digital active signals through 256 channels of Digital to Analog Conversion (DAC) and reception of acoustic signals through 256 channels of ADC. As a result, pre-amplification functionality provides protection for the active elements to minimize interference for the remaining reception channels of the matrix array. Referring to
Though the exemplary embodiment is described with reference to 8 connector boards, and a 4:1 MUX, other configurations are also applicable and optionally depend on the geometry of the planar array. For example, if an 8:1 MUX is used then only 4 connector boards as described hereinabove are connected. Alternatively, each connector board supports only half as many channels and a same number of connector boards are used. Further, a radial array may be best served by a different ratio of multiplexer.
Referring to
The resulting illumination from the above matrix planar array structure is as follows: A conical volumetric segment is imaged with an opening angle of 80°×80° to a maximum depth of 24 cm (sample volume) with an angular resolution of 0.5 degrees and a rate of 20 volumes per second (Vps). A 2D phased-array probe with 32×32 single elements working at a center frequency of 3.0 MHz is used; the frequency is based on the probe design and, with some probes, frequencies such as 7-9 MHz are used. All 1024 elements are active during the transmit phase, forming an illumination pattern such as that shown in
Four receive operations are necessary to acquire ultrasound responses from a whole volume of interest with all 1024 transducer elements. This is done with 4 transmit operations—transmitting from all transducer elements—each followed by a receive operation—each for receiving information from ¼ of the transducers—wherein addressing of the MUX circuits is incremented between operations. Alternatively, another order of addressing the MUX circuits is also supported.
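A minimal sketch of this transmit/receive sequencing is given below; the element-to-multiplexer mapping is an assumption chosen for clarity (any fixed 4:1 grouping covering all 1024 elements behaves the same way), and the transmit and digitizing routines are left as stand-in callables.

```python
import numpy as np

N_ELEMENTS = 1024                  # 32x32 planar array
MUX_RATIO = 4                      # 4:1 multiplexers
N_MUX = N_ELEMENTS // MUX_RATIO    # 256 multiplexers / ADC channels

# Assumed mapping: multiplexer m is wired to elements m, m+256, m+512, m+768,
# so a common MUX address selects one quarter of the array at a time.
def elements_for_address(address):
    return np.arange(N_MUX) + address * N_MUX

def acquire_full_aperture(transmit_all, read_channels):
    """Four transmit events, each followed by reading one quarter of the array.

    transmit_all():               fires all 1024 elements with the same pattern.
    read_channels(element_idx):   digitizes the 256 selected elements.
    Returns per-element receive traces covering the whole aperture.
    """
    data = {}
    for address in range(MUX_RATIO):       # increment MUX address each cycle
        transmit_all()                      # all elements active on transmit
        selected = elements_for_address(address)
        traces = read_channels(selected)    # 256 ADC conversions in parallel
        for element, trace in zip(selected, traces):
            data[int(element)] = trace
    return data
```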
Switching of the elements between transmit and receive phase and the correct choice of the relevant receive elements is done by addressing and operation of the multiplexers shown in
An image and a graphical user interface are shown on a monitor. In some embodiments, a result of automated detection of free fluid—intraperitoneal free fluid or blood—inside the sample volume is also displayed, as shown in
In U.S. Pat. Nos. 6,719,696 & 6,482,160, a 3D adaptive beamforming method is disclosed. Each of U.S. Pat. Nos. 6,719,696 & 6,482,160 is incorporated herein by reference. The references taken together define the signal processing structure of an adaptive multidimensional beamformer having instantaneous convergence for ultrasound imaging systems deploying multidimensional sensor arrays. The method provides for enhanced angular resolution of the resulting beamformer. Unfortunately, the method and system are complex and are not amenable to portability and mobile application.
An explanation of a method of digital adaptive beamforming is now presented.
Conventional (Time Delay) 3D Beamformers in Frequency Domain:
Wr,m is the (r,m)th term of the matrix W(θ,φ) comprising the weights of a three-dimensional spatial window to suppress sidelobe structures. Xr,m(f) is the (r,m)th term of the matrix X(f) expressing the Fourier transform of the sensor time series of the mth sensor on the rth circular array. D(f,θs,φs) is a steering matrix having its (r,m)th phase term for a plane wave signal expressed by dr,m(f,θs,φs) = exp(j2πf(rδz cos φs + R sin φs cos(θs − θm))/c), with R being the radius of the circular sensor array 30, δz being the distance between subsequent circular arrays in the z-direction and θm = 2πm/M, m = 0, 1, . . . , M−1 indicating the position of a sensor 8 on the circular array 30. A re-arranged form of equation (3) is expressed as follows:
dr(f,θs,φs) = exp(j2πfrδz cos φs/c) is the rth term of the steering vector
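For reference, the phase term defined above translates directly into a short function; the sound-speed default is an assumed soft-tissue value, and all geometry parameters are left as arguments.

```python
import numpy as np

def steering_phase(f, r, m, theta_s, phi_s, R, delta_z, M, c=1540.0):
    """d_{r,m}(f, theta_s, phi_s) for the cylindrical array geometry.

    r       : index of the circular array along the cylinder axis
    m       : sensor index on that circular array (theta_m = 2*pi*m/M)
    R       : radius of each circular array
    delta_z : spacing between successive circular arrays along z
    c       : assumed sound speed in m/s
    """
    theta_m = 2.0 * np.pi * m / M
    path = r * delta_z * np.cos(phi_s) + R * np.sin(phi_s) * np.cos(theta_s - theta_m)
    return np.exp(1j * 2.0 * np.pi * f * path / c)
```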
The decomposition process substantially facilitates cylindrical array beamforming. The number of mathematical operations and the amount of memory required to perform these operations are substantially reduced by expressing the cylindrical beamformer as a product of two sums, instead of a double summation as expressed by equation (3). This allows application of advanced beamforming processes for multidimensional arrays.
The circular and line array beamformers resulting from the decomposition process may be executed in parallel. This allows real time applications of an ultrasound imaging system using the architecture disclosed herein.
Decomposition processes for planar and spherical arrays are very similar to the decomposition process for the cylindrical array described above.
The beamforming process for line sensor arrays and circular sensor arrays 30, respectively, is a time delay beamforming estimator—basically a spatial filter. However, optimum beamforming relies upon beamforming filter coefficients chosen based on characteristics of noise received by the sensor array in order to optimize sensor array response. Processes for optimum beamforming using characteristics of noise received by the sensor array are called adaptive beamformers. Beamforming filter coefficients of these processes are chosen based on a covariance matrix of correlated noise received by the sensor array. However, if the knowledge of the noise's characteristic is inaccurate, performance of the adaptive beamformer will degrade significantly and may even result in cancellation of a desired signal. Therefore, it is very difficult to implement useful adaptive beamformers in real time operational systems. Furthermore, for post processing such as matched filter processing the adaptive beamformer has to provide coherent beam time series. In particular, the matched filter processing requires near-instantaneous convergence of the beamformer, producing a continuous beam time series correlating with a reference signal.
In adaptive beamforming, beamformer response is optimized to contain minimum contributions due to noise and signals arriving from directions other than a direction of a desired signal. For optimization, a linear filter vector Z is found, which is a solution to a constraint minimization problem that allows signals from a desired direction to pass with a specified gain. The solution of the minimization problem is expressed by
D is the conventional steering vector. R(fi) is a spatial correlation matrix of the received sensor time series with elements Rnm(f,δnm) = E{Xn(f)X*m(f)}, wherein E{·} denotes the expectation operator and δnm is the sensor spacing between the nth and mth sensors.
Equation (6) provides adaptive steering vectors for beamforming signals received by an N-sensor array. In the frequency domain, an adaptive beam at a steering angle θs is then defined by
B(fi,θs) = Z*(fi,θs)·X(fi),  (7)
corresponding to conventional beams.
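The constrained minimization described above leads to the well-known minimum-variance (MVDR) weight formula; the following fragment is a generic sketch of that computation rather than the specific implementation of the incorporated patents, and the diagonal loading term is an added assumption for numerical stability.

```python
import numpy as np

def adaptive_weights(R, d, loading=1e-3):
    """Minimum-variance weights Z = R^{-1} d / (d^H R^{-1} d).

    R       : (N, N) spatial correlation matrix at one frequency bin.
    d       : (N,) conventional steering vector for the look direction.
    loading : diagonal loading (assumption) to keep the inversion well conditioned.
    """
    N = R.shape[0]
    R_loaded = R + loading * np.trace(R).real / N * np.eye(N)
    Rinv_d = np.linalg.solve(R_loaded, d)
    return Rinv_d / (d.conj() @ Rinv_d)

def adaptive_beam(Z, X):
    """Frequency-domain beam output B(f, theta_s) = Z^H X (cf. equation (7))."""
    return Z.conj() @ X
```

The resulting weight vector takes the place of the conventional steering vector when forming the beam output of equation (7).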
Thus output data from a conventional beamformer in a frequency domain is known and a corresponding output ξ(ti,θs) in a time domain is then expressed as a weighted sum of steered sensor outputs
Since ξ(ti,θs) is an inverse fast Fourier transformation (IFFT) of B(f,θs), continuous beam time sequences are obtained from the output data of a frequency domain beamformer using fast Fourier transformation (FFT) and fast convolution processes.
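As a concrete illustration of this frequency-to-time conversion, the fragment below recovers a beam time series from the positive-frequency beam outputs of one segment; overlap handling between segments is omitted for brevity, and the use of a real inverse FFT assumes a real-valued beam time series.

```python
import numpy as np

def beam_time_series(B_bins, n_fft):
    """Return xi(t, theta_s) from the positive-frequency beam outputs B(f, theta_s).

    B_bins : complex beam outputs for bins 0 .. n_fft//2 of one segment.
    n_fft  : FFT length used when the sensor time series were transformed.
    Successive segments would then be concatenated, with any overlap discarded,
    to obtain a continuous beam time series as described in the text.
    """
    return np.fft.irfft(B_bins, n=n_fft)
```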
Using the beamformer output data, an expected broadband beam power B(θ) is determinable; it is defined in terms of the STCM in the time domain and is assumed to be independent of ti under stationary conditions. Supposing that Xn(fi) is the Fourier transform of the sensor time series and assuming that the sensor time series are approximately band limited, a vector of steered sensor outputs xn(ti,τn(θs)) is expressed by
T(fk,θ) is a diagonal steering matrix with elements identical to the elements of the conventional steering vector D and is expressed as follows:
The STCM follows then directly from the above equations as
wherein the index k = l, l+1, . . . , l+H refers to frequency bins in a band of interest Δf and R(fk) is the CDSM for the frequency bin fk.
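A sketch of accumulating the steered covariance matrix over the bins k = l, . . . , l+H of the band, consistent with the construction above, might look like the following; the single-snapshot estimate of R(fk) used here is a simplifying assumption made to keep the example short.

```python
import numpy as np

def steered_covariance(X_bins, steering_vectors):
    """Accumulate Phi(df, theta) over the frequency bins of the band.

    X_bins           : (H+1, N) sensor spectra, one row per frequency bin.
    steering_vectors : (H+1, N) conventional steering vector per bin, D(f_k, theta).
    Returns the (N, N) steered covariance matrix for this look direction.
    """
    n_bins, N = X_bins.shape
    Phi = np.zeros((N, N), dtype=complex)
    for k in range(n_bins):
        T = np.diag(steering_vectors[k])              # diagonal steering matrix T(f_k, theta)
        x_steered = T @ X_bins[k]                     # phase-align the sensor spectra
        Phi += np.outer(x_steered, x_steered.conj())  # snapshot estimate of R(f_k), steered
    return Phi
```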
In steered minimum variance (STMV) algorithms a spectral estimate of broadband spatial power is generally used; however, such estimates do not provide coherent beam time series because they represent the broadband beam power output of an adaptive process. Therefore, according to the invention, the estimation process of the STMV has been modified to determine complex coefficients of Φ(Δf,θs) for all frequency bins in a frequency band of interest.
Accordingly, an STMV process is used in its original form to generate an estimate of Φ(Δf,θ) for all frequency bands Δf of a received signal. Assuming stationarity across the frequency bins of a band Δf, the estimate of the STMV is considered to be approximately the same as a narrowband estimate Φ(f0,θ) for the center frequency f0 of the band Δf. Narrowband adaptive coefficients are then derived from
Phase variations across the frequency bins are modeled and using adaptive steering weights wn(Δf,θ) adaptive beams are formed.
b(ti,θs,φs)=IFFT{B(fi,θs,φs)},
wherein overlap and concatenation of segments are discarded to form a continuous beam time series.
Matrix inversion is a major issue for implementing adaptive beamformers in real time applications. Standard numerical methods for solving systems of linear equations can be applied to solve for the adaptive weights. The numerical methods include:
QR decomposition of the received vector
SVD (Singular Value Decomposition) method. The SVD method is the most stable factorization technique but requires three times more computational effort than the QR decomposition method.
For investigative studies of the beamforming process, Cholesky factorization and QR decomposition techniques have been applied. No noticeable differences in performance concerning stability have been found between these methods. Of course, for real-time applications the fastest algorithm is often preferred.
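Both factorization routes mentioned above map onto standard library calls; the sketch below applies them to the covariance matrix for brevity (the QR decomposition of the received data vector mentioned earlier is a related alternative) and is not the embodiment's actual implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def weights_via_cholesky(R, d):
    """Solve R z = d using a Cholesky factorization of the covariance matrix."""
    c, low = cho_factor(R)
    z = cho_solve((c, low), d)
    return z / (d.conj() @ z)

def weights_via_qr(R, d):
    """Solve the same system through a QR decomposition of R."""
    Q, R_upper = np.linalg.qr(R)
    z = np.linalg.solve(R_upper, Q.conj().T @ d)
    return z / (d.conj() @ z)
```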
Implementation of an adaptive beamformer with a large number of adaptive weights for a large number of sensors requires very long convergence periods, which eliminates the dynamic ability of the adaptive beamformer to track time-varying characteristics of a received signal of interest. This limitation is avoided by reducing the number of adaptive weights. A reduction of the number of adaptive weights is achieved by introducing a sub-aperture processing scheme.
A sub-aperture configuration for a line array of sensors is described. The line array is divided into a plurality of overlapping sub-arrays. In a first stage the sub-arrays are beamformed using a conventional beamformer generating a number of sets of beams equal to the number of sub-arrays for each steering direction. In a second stage adaptive beamforming is performed on each set of beams steered in a same direction in space but each beam belonging to a different sub-array. A set of beams is equivalent to a line array consisting of directional sensors steered at a same direction with sensor spacing being equal to space separation between two contiguous sub-arrays and with the number of sensors being equal to the number of sub-arrays.
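The two-stage sub-aperture scheme just described can be summarized as follows; the sub-array length, the overlap, and the use of simple phase-and-sum weights in the first stage are assumptions made for illustration.

```python
import numpy as np

def first_stage_beams(sensor_spectra, steering, n_sub, n_overlap):
    """Conventionally beamform each overlapping sub-array for one look direction.

    sensor_spectra : (N,) complex spectra of the line-array sensors at one bin.
    steering       : (N,) conventional steering phases for the look direction.
    Returns one beam value per sub-array; these act as the 'directional sensors'
    of the second stage.
    """
    N = sensor_spectra.shape[0]
    step = n_sub - n_overlap
    starts = range(0, N - n_sub + 1, step)
    return np.array([
        np.dot(steering[s:s + n_sub].conj(), sensor_spectra[s:s + n_sub])
        for s in starts
    ])

# Second stage: the sub-array beams form a short synthetic line array (e.g. G = 3
# elements) on which an adaptive beamformer such as MVDR is applied; because G is
# small, the adaptive weights converge almost instantaneously.
```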
A second stage of beamforming comprises an adaptive beamformer on a line array consisting of, for example, G=3 beam time series bg(ti,θs,φs), g=1,2, . . . , G. For a given pair of azimuth and elevation steering angles {θs,φs}, the cylindrical adaptive beamforming process is reduced to an adaptive line array beamformer. The adaptive line array beamformer comprises only three beam time series bg(ti,θs,φs), g=1,2,3 with spacing δ = [(2πR/M)² + δz²]^(1/2) between two contiguous sub-aperture cylindrical cells, wherein (2πR/M) is the sensor spacing on each ring and wherein δz is the distance between each ring along the z-axis of the cylindrical array. The adaptive line array beamformer provides one or more adaptive beam time series with steering centered on the pair of azimuth and elevation steering angles {θs,φs}.
Because of the very small number of degrees of freedom in each sub-aperture the adaptation process experiences near-instantaneous convergence. Furthermore, the multidimensional sub-aperture beamforming process according to the invention may include a wide variety of adaptive noise cancellation techniques such as MVDR and GSC.
Furthermore, the sub-aperture configuration is applicable to other multidimensional arrays such as cylindrical arrays and spherical arrays. Decomposition, sub-aperture formation as well as implementation of adaptive beamformers for cylindrical and spherical arrays are similar to corresponding steps for planar arrays.
This decomposition process provides a foundation for an efficient signal processing implementation of the 3D Adaptive Beamformer and allows for a scalable and fully digital computing architecture design in accordance with the present embodiment.
The present embodiment includes a highly parallelized computing architecture for real time ultrasound imaging systems deploying 2D and/or 3D multidimensional ultrasound transducer array probes. The probes have planar, cylindrical or spherical geometrical sensor configurations. 3D adaptive signal processing flow and computing architecture layout of the present embodiment are applicable to 3D ultrasound imaging systems deploying either matrix (planar), cylindrical or spherical array ultrasound probes as are known. Alternatively, the probes have other configurations.
Referring to
Thus, the architecture of
As detailed in
The 3D adaptive beamformer's reliance on 3 snapshots to achieve near-instantaneous convergence might be considered an impediment, in that it reduces the output rate of reconstructed volumetric images by a factor of 3. The current processing capacity of the computing architecture allows for the reconstruction of 20 volumes/second using the conventional time-delay 3D beamformer. Thus, the time interval between two snapshots (i.e. two successive volumetric images) is 50.0 ms. As a result, the time interval between two successive volumetric image outputs of the 3D adaptive beamformer will be 150.0 ms.
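The timing figures above follow directly from the stated volume rate, as the following short check shows.

```python
volumes_per_second = 20
snapshot_interval_ms = 1000 / volumes_per_second        # 50 ms between successive volumes
adaptive_output_interval_ms = 3 * snapshot_interval_ms  # 150 ms when 3 snapshots are required
print(snapshot_interval_ms, adaptive_output_interval_ms)
```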
This kind of impediment, however, is reduced by introducing, as detailed in
Referring to
The design principles of the computing architecture of
This highly parallelized architecture accommodates processing of from low to highly populated transducer arrays. For example, for a planar array probe with 16×16=256 transducers, the complexity of the computing architecture in
Numerous other embodiments may be envisaged without departing from the scope of the invention.
Number | Date | Country
---|---|---
62777321 | Dec 2018 | US