This application is a national filing of PCT application Serial No. PCT/IB2016/051124, filed Mar. 1, 2016, and published as WO2017/149352 on Sep. 8, 2017. This application claims priority to PCT application Serial No. PCT/IB2016/051124, published as WO2017/149352 on Sep. 8, 2017.
The following generally relates to ultrasound imaging and more particularly to three-dimensional (3-D) ultrasound imaging with multiple, single-element transducers and ultrasound signal propagation correction.
Ultrasound imaging can be used to provide one or more real-time images of the interior of a subject, such as an organ, tissue, blood, etc., or an object. This includes two-dimensional and/or three-dimensional images. Three-dimensional probes acquire data that can be processed to generate three-dimensional images. One such probe includes two, single-element transducers disposed on a shaft one hundred and eighty degrees apart. The shaft is configured to translate and rotate, which translates and rotates the two, single-element transducers. Concurrently translating and rotating the shaft moves the two, single-element transducers along a helical trajectory during data acquisition, collecting data for three-dimensional imaging.
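By way of non-limiting illustration, the following Python sketch computes transducer positions along such a helical trajectory for two, single-element transducers offset one hundred and eighty degrees; the rotation rate, translation speed, shaft radius, and emission rate are hypothetical values chosen for illustration only.

```python
import numpy as np

def transducer_positions(t, rot_hz, trans_mm_s, radius_mm, angle_offset_rad):
    """Position of a single-element transducer on a shaft that concurrently
    rotates and translates, i.e., a helical trajectory."""
    theta = 2.0 * np.pi * rot_hz * t + angle_offset_rad  # shaft angle at each emission
    x = radius_mm * np.cos(theta)
    y = radius_mm * np.sin(theta)
    z = trans_mm_s * t                                   # translation along the shaft axis
    return np.stack([x, y, z], axis=-1)                  # (num_emissions, 3)

# Two elements 180 degrees apart trace interleaved helices (illustrative values).
t = np.arange(0, 1.0, 1e-3)  # hypothetical 1 kHz emission rate for 1 s
p1 = transducer_positions(t, rot_hz=5.0, trans_mm_s=10.0, radius_mm=4.0, angle_offset_rad=0.0)
p2 = transducer_positions(t, rot_hz=5.0, trans_mm_s=10.0, radius_mm=4.0, angle_offset_rad=np.pi)
```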
When the two, single-element transducers are operated at two different center frequencies and focused at two different depths, a sonographer can choose whether to use the higher frequency for nearer field imaging, or the lower frequency for farther field imaging. Synthetic aperture focusing has been used to increase the image quality of images acquired with a single-element transducer. However, the mechanical movement of the two, single-element transducer probe gives rise to jitter, and jitter, along with transducer acceleration, results in a calculated transducer position that differs from the actual transducer position. Unfortunately, this difference introduces error in the delays used in delay and sum beamforming, which can degrade image quality.
Aspects of the application address the above matters, and others.
In one aspect, a method for ultrasound imaging with a first single-element transducer and a second single-element transducer is provided. The first and second single-element transducers are disposed on an ultrasound probe shaft, which has a longitudinal axis, with transducing sides disposed transverse to and facing away from the longitudinal axis. The first and second single-element transducers are angularly offset from each other on the shaft by a non-zero angle. The method includes operating the first and second single-element transducers at first and second different cutoff frequencies, and concurrently translating and rotating the shaft, moving the first and second single-element transducers along a helical path while the first and second single-element transducers acquire first and second echo signals. The method further includes receiving first electrical signals from the first single-element transducer, wherein the first electrical signals are indicative of the first echo signals, and receiving second electrical signals from the second single-element transducer, wherein the second electrical signals are indicative of the second echo signals. The method further includes delay and sum beamforming the first and second electrical signals via different processing chains with first and second adaptive synthetic aperture focusing beamformers, employing adaptive synthetic aperture focusing to produce first and second images. The method further includes combining the first and second images, creating a final image, and displaying the final image.
In another aspect, an ultrasound imaging system includes a probe with an elongate shaft, a drive assembly coupled to the elongate shaft and configured to translate and rotate the shaft, and at least first and second single-element transducers disposed at an end region of the shaft, angularly separated from each other by an angle in a range between 60 and 180 degrees. The at least first and second single-element transducers transmit and receive in a direction transverse to the elongate shaft, have different center frequencies, and respectively generate first and second electrical signals. The ultrasound imaging system further includes a console with a delay and sum beamformer configured to process the first and second electrical signals respectively through different processing chains, wherein the different processing chains respectively include first and second adaptive synthetic aperture focusing beamformers configured to employ adaptive synthetic aperture focusing to produce first and second images. The ultrasound imaging system further includes an image combiner that combines the first and second images and displays the combined image on a display.
In another aspect, an apparatus includes a delay and sum beamformer configured to process first and second electrical signals respectively through different processing chains, wherein the different processing chains respectively include first and second adaptive synthetic aperture focusing beamformers configured to employ adaptive synthetic aperture focusing to produce first and second images. The first electrical signals are received from a first single-element transducer and are indicative of first ultrasound signals. The second electrical signals are received from a second single-element transducer and are indicative of second ultrasound signals. The first and second single-element transducers are disposed on a shaft, which has a longitudinal axis, of an ultrasound imaging probe, with transducing sides disposed transverse to and facing away from the longitudinal axis. The first and second single-element transducers are angularly offset from each other on the shaft by a non-zero angle. The first and second single-element transducers are operated at first and second different cutoff frequencies. The shaft concurrently translates and rotates while the first and second single-element transducers receive the first and second ultrasound signals. The apparatus further includes an image combiner that combines the first and second images and displays the combined image on a display.
Those skilled in the art will recognize still other aspects of the present application upon reading and understanding the attached description.
The application is illustrated by way of example and not limited by the figures of the accompanying drawings, in which like references indicate similar elements and in which:
The transducer probe 102 includes an elongate tubular portion 106, a drive assembly 108, an elongate shaft 110, and a plurality of single-element transducers 1121, . . . , 112N (collectively referred to herein as single-element transducers 112), where N is a positive integer.
A first end (not visible) of the shaft 110 is coupled to the drive assembly 108, which is configured to rotate and/or translate the shaft 110. The drive assembly 108 can be in the shaft 110, the handle 206, and/or elsewhere. A second end 208 of the shaft 110 is in the first end region 202. The single-element transducer 1121 is coupled to the second end 208, with its transducing region perpendicular to and facing away from the longitudinal axis 200. With a two, single-element transducer configuration (N=2), the single-element transducer 112N is likewise coupled to the second end 208, but angularly shifted or offset relative to the single-element transducer 1121.
For instance, in the illustrated instance, the single-element transducers 1121 and 112N are diametrically opposed, or offset one hundred and eighty degrees (180°) around the shaft 110. In another instance, the single-element transducers 1121 and 112N are perpendicular, or offset ninety degrees (90°) around the shaft 110. In another instance, the single-element transducers 1121 and 112N are offset sixty degrees (60°) around the shaft 110. In general, the single-element transducers 1121 and 112N are angularly offset or separated by a single angle in a range from sixty (60) to one hundred and eighty (180) degrees. More than two, single-element transducers (i.e., N>2) are contemplated herein and can increase the frame rate. Smaller non-zero angles are also contemplated herein.
A suitable frequency range for the transducers 112 is from three (3) MHz to fifty (50) MHz, such as eight (8) to ten (10) MHz, or higher or lower, with a bandwidth of 60 to 75%. In one instance, the single-element transducers 1121 and 112N respectively transmit with center frequencies at five (5) megahertz (MHz) and fifteen (15) MHz, with no overlap in their frequency bands. In this configuration, with a bandwidth of 60%, the single-element transducers 1121 and 112N transmit in bands of 3.5 to 6.5 MHz and 10.5 to 19.5 MHz. In another instance, the frequency bands overlap less than 50%. For example, with center frequencies at 10 and 15 MHz, and a bandwidth of 70%, the single-element transducers 1121 and 112N respectively transmit in bands of 6.5 to 13.5 MHz and 9.75 to 20.25 MHz.
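The band edges above follow directly from the center frequency and the fractional bandwidth, i.e., band = center ± (bandwidth × center)/2. A minimal sketch reproducing the numbers in this paragraph:

```python
def band_edges(center_mhz, fractional_bw):
    """Lower and upper band edges for a center frequency and fractional bandwidth."""
    half_band = 0.5 * fractional_bw * center_mhz
    return center_mhz - half_band, center_mhz + half_band

print(band_edges(5.0, 0.60))   # (3.5, 6.5) MHz
print(band_edges(15.0, 0.60))  # (10.5, 19.5) MHz
print(band_edges(10.0, 0.70))  # (6.5, 13.5) MHz
print(band_edges(15.0, 0.70))  # (9.75, 20.25) MHz
```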
A non-limiting example of the probe 102 is the 20R3 Transducer, which is a product (#9052) of BK Ultrasound, a company of Analogic Corporation, which is headquartered in Peabody, Mass., USA.
A beamformer 120 is configured to beamform the signals. In one instance, this includes delaying and weighting the signals, and summing the delayed and weighted signals. As described in greater detail below, the beamformer 120, in one instance, includes a synthetic aperture beamformer configured to perform adaptive synthetic aperture focusing. In general, this beamformer corrects calculated delays, which are subject to errors due to jitter from the mechanical movement of the shaft 110, employs the corrected delays, and adaptively sums the weighted and delayed signals based on a correlation of the signals. The adaptive synthetic aperture focusing described herein can mitigate error in the position of the transducer from jitter and transducer acceleration, improving image quality.
A B-mode processor 122 processes the output of the beamformer 120. In one instance, this includes detecting the envelope of the signal and/or applying dynamic range compression (DRC) to the envelope, including thresholding. The B-mode processor 122 can process RF data and/or IQ data for generating envelope data. The DRC applied by the B-mode processor 122 can follow a linear law, a quadratic law, a logarithmic law, a μ-law, and/or another DRC algorithm. The B-mode processor 122 can log compress the envelope data into a grayscale format, downscale the compressed data, and/or otherwise process the data.
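By way of non-limiting illustration, a minimal Python sketch of such B-mode processing, assuming NumPy/SciPy and RF data arranged as axial samples by emissions; envelope detection uses the analytic signal, and the logarithmic DRC law and 60 dB dynamic range are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def bmode_from_rf(rf, dynamic_range_db=60.0):
    """Envelope detection followed by dynamic range compression (log law).
    rf: array of RF samples, axial samples along axis 0.
    For IQ data, the envelope is simply np.abs(iq)."""
    env = np.abs(hilbert(rf, axis=0))                  # analytic-signal envelope
    env = env / (env.max() + 1e-12)                    # normalize before the log law
    img_db = 20.0 * np.log10(env + 1e-12)              # logarithmic DRC
    img_db = np.clip(img_db, -dynamic_range_db, 0.0)   # thresholding
    # Map [-dynamic_range_db, 0] dB to an 8-bit grayscale format.
    return np.uint8(255.0 * (img_db + dynamic_range_db) / dynamic_range_db)
```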
An image combiner 124 combines images from the B-mode processor 122, which include an image generated with data acquired by each of the transducer elements 1121, . . . , 112N. As described in greater detail below, the image combiner 124, in one instance, frequency compounds or blends these images from the B-mode processor 122. Such compounding may first include aligning/registering the images and then combining the aligned/registered images, which are acquired at different frequencies and at different spatial positions. This reduces speckle. The effect of the operation is combined spatial and frequency compounding. The frequency compounding stems from the different bands at which the transducers operate, and the spatial compounding stems from the different positions (relative to each other) from which the two transducers scan the same tissue. A display 126 displays the compounded image.
A user interface (UI) 128 includes one or more input devices (e.g., a button, a knob, a slider, a touch pad, a mouse, a trackball, a touch screen, etc.) and/or one or more output devices (e.g., a display screen, a light, an audio generator, etc.), which allow for interaction between a user and the ultrasound imaging system 100. This includes allowing the sonographer to select adaptive synthetic aperture focusing. An example of monostatic synthetic aperture focusing is discussed at least in Andresen et al., "Synthetic aperture focusing for a single element transducer undergoing helical motion," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 58(5):935-943, May 2011.
A system controller 130 is configured to control one or more of the components of the console 104, the transducer elements 112, and/or other devices. For example, in one instance, the system controller 130 controls the transmit circuitry 114 and/or receive circuitry 116 to control the transmit angles, transmit energies, transmit frequencies, transmit and/or receive delays, weights, etc. The system controller 130 also controls the beamformer 120 to perform adaptive synthetic aperture focusing and/or frequency compounding. Such control can be based on configuration files, user input, a selected mode of operation, etc.
One or more of the components of the console 104 can be implemented via one or more processors (central processing unit (CPU), graphics processing unit (GPU), microprocessor, controller, etc.) executing one or more computer readable instructions encoded or embedded on computer readable storage medium, which is a non-transitory medium such as physical memory or other non-transitory medium, and excludes transitory medium. Additionally, or alternatively, at least one of the instructions can be carried by a carrier wave, a signal, or other transitory medium.
The ultrasound imaging system 100 can be part of a portable system on a stand with wheels, a system residing on a tabletop, and/or another system in which the transducer elements 112 are housed in a probe or the like, and the console 104 is housed in an apparatus separate therefrom, such as a standard and/or other computer. In another instance, the transducer elements 112 and the console 104 can be housed in a same apparatus, such as within a single-enclosure hand-held ultrasound scanning device.
A first processing chain 7021 processes signals from the transducer element 1121, and a second processing chain 7022 processes signals from the transducer element 1122. A 3, 4, 5, 6, etc. transducer configuration will have 3, 4, 5, 6, etc. processing chains, i.e., a different processing chain for each of the single-element transducers 112. The first processing chain 7021 is described in detail herein. It is to be appreciated that the second (and any additional) processing chain 7022 is identical to the first processing chain 7021 but processes an input signal from a different single-element transducer 112.
The first processing chain 7021 includes a pre-processor 1181, a beamformer 1201, and a B-mode processor 1221. The second processing chain 7022 includes a pre-processor 1182, a beamformer 1202, and a B-mode processor 1222. In one instance, the pre-processor 118 comprises the pre-processors 1181 and 1182, the beamformer 120 comprises the beamformers 1201 and 1202, and the B-mode processor 122 comprises the B-mode processors 1221 and 1222. Alternatively, these can be separate and distinct components. The processing chains 7021 and 7022 share the image combiner 124.
The illustrated pre-processor 1181 includes a tissue harmonic imaging (THI)/contrast enhanced imaging (CEI) processor 7041. The THI/CEI processor 7041 implements one or more existing and/or other approaches to separate harmonic frequencies, e.g., pulse inversion, amplitude modulation, and two filters, one for the fundamental and another for the harmonic frequencies. The output signal can be either a real signal or a complex (IQ) signal, centered around a frequency in the MHz range, or around 0 Hz (baseband). In general, the signal contains both magnitude and phase information for the received echoes.
The pre-processor 1181 additionally or alternatively includes a filter 7061. In this example, the filter 7061 is a bandpass/sliding filter. The bandpass/sliding filter 7061 is used to increase the signal-to-noise ratio and to separate signals with different frequency contents. The filter coefficients are updated as a function of depth to change the center frequency and the bandwidth of the filter. The output signal can be either a real signal or a complex signal. In general, the signal contains both magnitude and phase information and can be used for beamforming.
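One possible realization of such a depth-dependent filter is sketched below in Python, under the simplifying assumption that an RF line is split into depth segments, each filtered with its own FIR coefficients; practical implementations may instead interpolate the coefficients smoothly with depth.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def sliding_bandpass(rf_line, fs, centers_hz, bw_hz, num_taps=63):
    """Bandpass filter whose center frequency is updated as a function of depth.
    rf_line: 1-D array of RF samples (one line), fs: sampling rate in Hz,
    centers_hz: one center frequency per depth segment, bw_hz: filter bandwidth."""
    out = np.zeros(len(rf_line))
    edges = np.linspace(0, len(rf_line), len(centers_hz) + 1).astype(int)
    for (lo, hi), fc in zip(zip(edges[:-1], edges[1:]), centers_hz):
        taps = firwin(num_taps, [fc - bw_hz / 2.0, fc + bw_hz / 2.0],
                      pass_zero=False, fs=fs)            # bandpass FIR for this depth
        out[lo:hi] = lfilter(taps, 1.0, rf_line)[lo:hi]  # keep only this depth span
    return out
```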
The beamformer 1201 includes a synthetic aperture (SA) delay/apodization processor 7081, a beam modeler 7101, a delay corrector 7121, an estimator 7141, and a summer (adder) 7161.
The SA delay/apodization processor 7081 processes the RF signal and/or the pre-processed signal, producing delayed signals $y_n(\vec{r})$ as shown in Equation 1:

$y_n(\vec{r}) = a_n(\vec{r})\, s_n(T_n(\vec{r})),$   (Equation 1)

where $s_n(\cdot)$ is a signal recorded at emission $n$, $T_n(\vec{r})$ is a propagation time from a surface of the single-element transducer 1121 to a point $\vec{r}$ and back to the single-element transducer 1121, and $a_n(\vec{r})$ is a weight (apodization) applied on the signal, e.g., to minimize side lobes and eliminate regions that are not illuminated by the beam. The signal $s_n(t)$ is discrete, and if $T_n$ falls between samples, then $\hat{s}_n(T_n)$ is generated by interpolation. The interpolation can be linear, spline, polynomial, based on fractional delay filters, etc.
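A minimal sketch of Equation 1 for one emission, using linear interpolation when $T_n$ falls between samples; the function and variable names are illustrative.

```python
import numpy as np

def delay_and_apodize(s_n, T_n, a_n, fs):
    """Equation 1: y_n(r) = a_n(r) * s_n(T_n(r)).
    s_n: recorded samples for emission n (RF or IQ),
    T_n: propagation times in seconds for each image point,
    a_n: apodization weights for each image point, fs: sampling rate."""
    idx = T_n * fs                                             # fractional sample index
    i0 = np.clip(np.floor(idx).astype(int), 0, len(s_n) - 2)   # guard array bounds
    frac = idx - i0
    return a_n * ((1.0 - frac) * s_n[i0] + frac * s_n[i0 + 1])  # linear interpolation
```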
The beam modeler 7101 calculates the propagation times $T_n(\vec{r})$ and the weight coefficients $a_n(\vec{r})$. The calculation of $T_n$ and $a_n$ can be based on a virtual source model, a semi-analytic model, simulated or measured data, etc. A virtual source model is discussed in Andresen et al., "Synthetic aperture focusing for a single element transducer undergoing helical motion," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 58(5):935-943, May 2011, and Frazier et al., "Synthetic aperture techniques with a virtual source element," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 45(1):196-207, 1998.
A semi-analytic model is discussed in Jensen et al., "Spatial filters for focusing ultrasound images," 2001 IEEE Ultrasonics Symposium, Proceedings, An International Symposium (Cat. No. 01CH37263), vol. 2, pp. 1507-1511, IEEE, 2001; in Hansen et al., "Synthetic aperture imaging using a semi-analytic model for the transmit beams," Medical Imaging 2015: Ultrasonic Imaging and Tomography, vol. 9419, p. 94190K, March 2015; and in Nikolov et al., "Synthetic aperture imaging using a semi-analytic model for the transmit beams," 2015 IEEE International Ultrasonics Symposium (IUS), pp. 1-4, IEEE, October 2015.
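By way of non-limiting illustration, a sketch of a virtual source beam model of the kind cited above: the focus is treated as a virtual point source, the two-way propagation time follows the form of Equation 2 below, and the apodization here is a hypothetical hard cutoff at an acceptance angle (actual models use smoother windows and distinguish points before and after the focus).

```python
import numpy as np

def virtual_source_model(c_pos, v_pos, points, c=1540.0, half_angle_rad=np.deg2rad(20.0)):
    """Compute propagation times T_n and apodization a_n for image points.
    c_pos: transducer center, v_pos: focus (virtual source), both (3,);
    points: (M, 3) image points; c: speed of sound in m/s."""
    d_cv = np.linalg.norm(v_pos - c_pos)              # |v - c|
    d_rv = np.linalg.norm(points - v_pos, axis=-1)    # |r - v|
    T = 2.0 * (d_cv + d_rv) / c                       # two-way time, Equation 2 form
    beam_axis = (v_pos - c_pos) / d_cv
    cos_ang = np.abs((points - v_pos) @ beam_axis) / (d_rv + 1e-12)
    a = (cos_ang >= np.cos(half_angle_rad)).astype(float)  # inside acceptance cone only
    return T, a
```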
The SA delay/apodization processor 7081 outputs the delayed signal $y_n$ and/or the input signal $s_n$ and the delay and weight information $T_n$ and $a_n$. The delay corrector 7121 and the estimator 7141 correct the delay $T_n$. The mechanical movement of the single-element transducers 112 gives rise to imprecisions and jitter, which introduce error in the propagation times $T_n$. The delay corrector 7121 interpolates a desired sample at time instance $T_n(\vec{r}) + \delta_n(\vec{r})$, where $\delta_n(\vec{r})$ is a delay adjustment determined by the estimator 7141. An example of a suitable interpolation is discussed in Laakso et al., "Splitting the unit delay," IEEE Signal Processing Magazine, 13(1):30-60, 1996. Other interpolation approaches are also contemplated herein.
The estimator 7141 determines differences between the calculated times of flight and generates the correction value $\delta_n(\vec{r})$ based thereon. An example of generating the correction value is described next.
The propagation time from the transducer surface to a point $P$ and back is defined (the subscript $n$ for emission number is omitted for conciseness) as shown in Equation 2:

$T(\vec{r}) = \dfrac{2\left(|\vec{v} - \vec{c}| + |\vec{r} - \vec{v}|\right)}{c},$   (Equation 2)

where $c$ is the speed of sound, and $|\vec{v} - \vec{c}|$ and $|\vec{r} - \vec{v}|$ are the lengths of the line segments connecting the respective points.
The real propagation time is as shown in Equation 3:

$T'(\vec{r}) = \dfrac{2\left(|\vec{v}\,' - \vec{c}\,'| + |\vec{r} - \vec{v}\,'|\right)}{c},$   (Equation 3)

where $\vec{c}\,'$ and $\vec{v}\,'$ are the actual positions of the transducer center and focus points, respectively. The actual propagation time can be expressed as shown in Equation 4:

$T'(\vec{r}) = T(\vec{r}) + \delta_n(\vec{r}),$   (Equation 4)
where $\delta_n(\vec{r})$ is the difference in arrival time due to the jitter in the mechanical position.
The received signal can be modeled as shown in Equation 5:

$s(t) = A(t)\, e^{-j 2 \pi f_0 t},$   (Equation 5)

where $A(t)$ is an envelope function, such as a Gaussian, $f_0$ is a carrier frequency, and $t$ is time. The duration of $A(t)$ is several periods of the carrier signal. For mathematical convenience, it is often approximated with a rectangular window as shown in Equation 6:

$A(t) = \begin{cases} 1, & 0 \le t \le T_p \\ 0, & \text{otherwise}, \end{cases}$   (Equation 6)

where $T_p$ is the duration of a pulse.
After delaying the signals with a delay calculated using the beam model, the signal can be expressed as shown in Equation 7:

$y_n(\vec{r}) = a_n(\vec{r})\, A(\delta_n(\vec{r}))\, e^{j 2 \pi f_0 \delta_n(\vec{r})},$   (Equation 7)

that is, the residual delay $\delta_n(\vec{r})$ from Equation 4 remains as a phase shift of the carrier.
The signal $y_n(\cdot)$ whose (assumed) direction coincides with the line in the image that is currently beamformed (the line on which the point $\vec{r}$ is located) is used as a reference signal $y_0(\vec{r})$. This signal, for this line, is assumed not to have any jitter (all $\delta_0(\vec{r})$ are set to zero). All other signals are aligned to it.
To find the deviation in propagation, the cross correlation between the central signal $y_0(\vec{r})$ and the other signals $y_n(\vec{r})$ that are used in the synthetic aperture focusing is calculated at lag 0. The signals are first delayed according to the beam model, and then their delayed versions are correlated as shown in Equation 8:

$R_n(0, \vec{r}) = \dfrac{\langle y_0(\vec{r}),\, y_n(\vec{r}) \rangle}{\lVert y_0(\vec{r}) \rVert\, \lVert y_n(\vec{r}) \rVert},$   (Equation 8)

where $y_0$ is a central beam, $y_n$ is a beam for which a weight has been calculated, $\langle \cdot, \cdot \rangle$ is an inner product, and $\lVert \cdot \rVert$ is a norm.
The delay $\delta_n(\vec{r})$ is derived from the angle of the correlation function $R_n(0, \vec{r})$ as shown in Equation 9:

$\delta_n(\vec{r}) = \dfrac{\angle R_n(0, \vec{r})}{2 \pi f_0}.$   (Equation 9)
This estimation procedure is based on a phase-shift technique, used in color flow imaging, and discussed in Kasai et al., “Real-Time Two-Dimensional Blood Flow Imaging Using an Autocorrelation Technique,” IEEE Trans. Son. Ultrason., SU-32(3):458-464, 1985.
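A minimal sketch of Equations 8 and 9 for complex (IQ) beams that have already been delayed per the beam model; the sign convention depends on the demodulation convention and is an assumption here.

```python
import numpy as np

def estimate_delay_correction(y0, yn, f0):
    """Equation 8: normalized cross correlation at lag 0 between the central
    (reference) beam y0 and beam yn; Equation 9: residual delay from its phase."""
    R0 = np.vdot(y0, yn) / (np.linalg.norm(y0) * np.linalg.norm(yn) + 1e-12)
    delta_n = np.angle(R0) / (2.0 * np.pi * f0)   # Equation 9
    return delta_n, R0
```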
An alternative approach is to calculate the cross correlation $R_n(k, \vec{r})$ for a series of lags $k$ and search for the location of the peak of $|R_n(k, \vec{r})|$. This approach is used for combined motion compensation and motion estimation in Nikolov et al., "Velocity estimation using recursive ultrasound imaging and spatially encoded signals," 2000 IEEE Ultrasonics Symposium, Proceedings, An International Symposium (Cat. No. 00CH37121), vol. 2, pp. 1473-1477, IEEE, 2000.
The difference in the current context is that the deviations $\delta_n(\vec{r})$ are due to differences in transducer position. This means that $\delta_n(\vec{r})$ is a systematic error for a given set of acquisitions. It is possible to find the deviation in position $\vec{c}\,' - \vec{c}$ as a least squares fit from the beam model and the estimated deviations $\delta_n$. This procedure makes the estimator robust to deviations due to speckle artifacts. The procedure is further enhanced by estimating the signal-to-noise ratio (SNR), and using only the portions with high SNR in the least squares fit.
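A sketch of the alternative lag-search estimator follows; the circular shift (np.roll) is a simplification that wraps samples around the ends of the window, and the lag range is an illustrative assumption.

```python
import numpy as np

def estimate_delay_by_lag_search(y0, yn, fs, max_lag=16):
    """Evaluate the cross correlation over a series of lags and locate the
    peak of its magnitude; the peak lag (in samples) gives the deviation."""
    lags = np.arange(-max_lag, max_lag + 1)
    R = np.array([np.vdot(y0, np.roll(yn, k)) for k in lags])
    k_hat = lags[np.argmax(np.abs(R))]   # peak lag; a parabolic fit could refine it
    return k_hat / fs                    # deviation in seconds
```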
The summer 7161 adaptively adds the delay corrected signals, which reconstructs a signal $p(\vec{r})$ at a point at a location $\vec{r} = [x, y, z]^T$. In one instance, this is achieved as shown in Equation 10:

$p(\vec{r}) = \sum_{n=0}^{N-1} w_n(\vec{r})\, y_n(\vec{r}),$   (Equation 10)

where $N$ is a total number of contributing emissions, and $w_n(\vec{r})$ is a weighting coefficient. The adaptive sum ensures that the summed signals are in phase. The adaptive weight coefficient $w_n(\vec{r})$ can be computed from the magnitude of the normalized cross correlation function at lag 0 as shown in Equation 11:

$w_n(\vec{r}) = F(|R_n(0, \vec{r})|),$   (Equation 11)

where $F(\cdot)$ is a function, and $R_n(\cdot)$ is calculated using Equation 8. An example of a suitable function $F(\cdot)$ is shown in the accompanying drawings.
For highly correlated signals, $w_n$ is closer to one (1), relative to less correlated signals. The function can also be a sigmoid or another empirically determined relation. The calculated values of $w_n(\vec{r})$ are smoothed with a low pass filter or a polynomial fit prior to use in the adaptive sum, to avoid discontinuities and/or fluctuations in the image brightness.
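Putting Equations 10 and 11 and the weight smoothing together, a minimal sketch; the clipped-linear $F(\cdot)$ and the smoothing window length are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def adaptive_sum(y, R0_mag, slope=2.0, smooth_len=9):
    """y: (N, points) delay-corrected signals; R0_mag: (N, points) values of
    |R_n(0, r)| from Equation 8. Returns p(r) per Equation 10."""
    w = np.clip(slope * R0_mag, 0.0, 1.0)              # F(): Equation 11, clipped-linear
    w = uniform_filter1d(w, size=smooth_len, axis=-1)  # low pass the weights to avoid
                                                       # brightness discontinuities
    return np.sum(w * y, axis=0)                       # Equation 10
```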
The B-mode processor 1221 processes the image generated by the summer 7161. As briefly discussed herein, this may include detecting the envelope and applying dynamic range compression (DRC), including thresholding. For example, the B-mode processor 122 can use IQ data for generating envelope data by computing the amplitude of the (complex) IQ signal. In another instance, the B-mode processor 122 filters the RF data with a filter such as a finite impulse response (FIR) filter, an infinite impulse response (IIR) filter, or another filter. The B-mode processor 122 then runs the filtered RF data through an envelope detector.
The image combiner 124 generates a final image by (non-coherently) compounding the images from the B-mode processors 1221 and 1222. The images are misaligned due to the mechanical motion of the probe 102. The image combiner 124 aligns the images (e.g., via registration) and then adds/blends the images to create the final image. Examples of suitable compounding techniques are discussed in Gehlbach et al., "Frequency diversity speckle processing," Ultrasonic Imaging, 9(2):92-105, April 1987, and Magnin et al., "Frequency compounding for speckle contrast reduction in phased array images," Ultrasonic Imaging, 4(3):267-281, July 1982.
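A minimal sketch of the align-then-blend step on the two B-mode (envelope) images, assuming the misalignment has already been estimated by a registration step; the offset handling and the equal blend weights are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift

def compound_images(img1, img2, offset_rc):
    """Non-coherent compounding of two B-mode images.
    offset_rc: (row, col) misalignment of img2 relative to img1."""
    img2_aligned = shift(img2.astype(float), offset_rc, order=1, mode='nearest')
    return 0.5 * (img1.astype(float) + img2_aligned)   # equal-weight blend
```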
The image is displayed via the display 126. The approach described herein, in one instance, can achieve a uniform image before and after the focus point in the transverse plane, improve focusing in the sagittal plane, and/or reduce speckle noise. In general, this is achieved using an adaptive synthetic aperture focusing algorithm and frequency/spatial compounding. The approach is described in detail for N=2 single-element transducers, and can be extended to more, such as three, four (as described below), five, etc. single-element transducers 112.
Variations are described next.
In one variation, the delay correction and adaptive summation are omitted, and the signal at a point is reconstructed with the non-adaptive synthetic aperture sum:

$p(\vec{r}) = \sum_{n=0}^{N-1} a_n(\vec{r})\, s_n(T_n(\vec{r})).$
In this example, the transducer probe 102 includes the single-element transducers 1121 and 1122 and single-element transducers 1123 and 1124. The single-element transducers 1121 and 1122 are disposed 180° apart, similar to the configuration described above.
In the two, single-element transducer version described herein, the two, single-element transducers operate at two different frequencies f1 and f2 and are focused at two different depths z1 and z2. In other words, there are two pairs of frequency and depth, (f1, z1) and (f2, z2). In the four, single-element transducer configuration, there are four pairs of frequency and depth parameters, (f1, z1), (f2, z1), (f1, z2), and (f2, z2). The spatial and frequency separation makes it possible to acquire two or four simultaneous images with the same transmit event. The different focus depths and the different frequencies give different realizations of the speckle. The non-coherent summation of the images results in speckle reduction.
An alternative configuration is to use separate and distinct probes, each having one or more single-element transducers 112, where probes that operate at the same frequency are placed at angles of 180°.
It is to be understood that the following acts are provided for explanatory purposes and are not limiting. As such, one or more of the acts may be omitted, one or more acts may be added, one or more acts may occur in a different order (including simultaneously with another act), etc.
At 1202, ultrasound signals from at least two, single-element transducers are received. As described herein, the at least two, single-element transducers operate at different center frequencies with field of views at different spatial positions (e.g., 180° apart).
At 1204, the ultrasound signals from the at least two, single-element transducers are input to respective processing chains.
At 1206, the ultrasound signals are pre-processed in their respective processing chains. In a variation, this act is omitted.
At 1208, the ultrasound or pre-processed ultrasound signals are beamformed using adaptive synthetic aperture focusing, as described herein and/or otherwise.
At 1210, the beamformed data is processed via a B-mode processor, as described herein and/or otherwise.
At 1212, the output of the B-mode processor is combined to form a final image using frequency and spatial compounding, as described herein and/or otherwise.
At 1214, the final image is displayed.
At least a portion of one or more of the methods discussed herein may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium (which excludes transitory medium), which, when executed by a computer processor(s), causes the processor(s) to carry out the described acts. Additionally or alternatively, at least one of the computer readable instructions is carried by a signal, carrier wave or other transitory medium.
The embodiments disclosed herein can be used in applications such as pelvic, prostate and/or other imaging.
The application has been described with reference to various embodiments. Modifications and alterations will occur to others upon reading the application. It is intended that the invention be construed as including all such modifications and alterations, including insofar as they come within the scope of the appended claims and the equivalents thereof.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2016/051124 | 3/1/2016 | WO | 00

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2017/149352 | 9/8/2017 | WO | A
Number | Name | Date | Kind |
---|---|---|---
6120455 | Teo | Sep 2000 | A |
6123670 | Mo | Sep 2000 | A |
6159153 | Dubberstein et al. | Dec 2000 | A |
20040225221 | Olsson | Nov 2004 | A1 |
20060245601 | Michaud | Nov 2006 | A1 |
20080110263 | Klessel | May 2008 | A1 |
20090067699 | Clark | Mar 2009 | A1 |
20090304246 | Walker | Dec 2009 | A1 |
20100152590 | Moore et al. | Jun 2010 | A1 |
20130102865 | Man | Apr 2013 | A1 |
20140288426 | Ebisawa | Sep 2014 | A1 |
20160143614 | Huang | May 2016 | A1 |
20160157828 | Sumi | Jun 2016 | A1 |
20180303545 | Lupotti | Oct 2018 | A1 |
20180310915 | Maruyama | Nov 2018 | A1 |
Number | Date | Country |
---|---|---
2008039793 | Apr 2008 | WO |
Entry |
---
Andresen, Henrik & Nikolov, S.I. & Jensen, Jørgen. (2011). Synthetic Aperture Focusing for a Single-Element Transducer Undergoing Helical Motion. IEEE transactions on ultrasonics, ferroelectrics, and frequency control. 58. 935-43. 10.1109/TUFFC.2011.1894. (Year: 2011). |
International Search Report for PCT/IB2016/051124 published as WO2017149352 on Sep. 8, 2017. |
Henrik Andresen et al. Synthetic aperture focusing for a single-element transducer undergoing helical motion, IEEE Trans. on Ultrasonics, Ferroelectrics and Frequency Control, vol. 58, No. 5, pp. 935-943, May 1, 2011. |
B-K Medical: 20R3 Transducer—User Guide, pp. 1-26, XP055321000, Retrieved from the internet: URL: www.okultrasound.com/filedepot_download/965/307, Aug. 2015. |
Henrik Andresen, Synthetic Aperture Beamforming in Ultrasound using Moving Arrays, Dissertation, May 2009. |
Bastien Denarie, Real-time 3-D echocardiography: challenges of parallel transmission and acquisition, Thesis for the degree of Philosophiae Doctor, Trondheim, Nov. 2013. |
Jacob Kortbek, Synthetic Aperture Sequential Beamforming and other Beamforming Techniques in Ultrasound Imaging 2008. |
Andresen, Henrik Stensby, et al., Synthetic Aperture Focusing for a Single Element Transducer undergoing Helix Motion, IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 2011, Downloaded from orbit.dtu.dk on: Jan. 6, 2016. |
Number | Date | Country
---|---|---
20190072671 A1 | Mar 2019 | US |