METHOD AND APPARATUS FOR ADAPTIVE BEAMFORMING

Information

  • Patent Application
  • Publication Number
    20220155439
  • Date Filed
    July 10, 2020
  • Date Published
    May 19, 2022
  • Inventors
    • RINDAL; Ole Marius Hoel
    • AUSTENG; Andreas
    • RODRIGUEZ-MOLARES; Alfonso
Abstract
In a method of imaging, a first transmission is carried out in a first direction. The reflected signals are received using a plurality of receiving devices. For each device, a two/three dimensional data set is formed. The first dimension (26b) represents the depth or range and the second dimension (26a) represents lateral distance. The optional third dimension (26c) represents an orthogonal lateral distance. The data set is formed by calculating times of flight for each pixel within a grid. The data value at the corresponding receive time is then assigned to each pixel. A data set is generated for each receiver, which results in a three/four dimensional data set from the first transmission of signals. A second transmission of signals is made in a different direction or from a different position. The signals from the second transmission are received in the same way as those from the first transmission. The signals are first summed across the transmit dimension to form a single data set, so that the data from the various transmissions is combined. Adaptive beamforming is then carried out on this data set, resulting in a single adaptive image.
Description
BACKGROUND OF THE INVENTION

This invention relates to data combination and adaptive beamforming. Although this invention is not limited to the field of ultrasound, and is relevant also in a number of other fields, such as radar and sonar imaging, the currently known imaging techniques can be most easily explained with reference to ultrasound imaging.


It is known in conventional ultrasound imaging to form an image by transmitting a focused wave into a region, time-shift the back-scattered signals which are received and then sum these results—this is often referred to as the conventional “delay and sum” approach.


An alternative method of ultrasound imaging is known as adaptive beamforming. There exists a myriad of adaptive beamforming methods, which can roughly be grouped into two main categories: adaptive element weighting and adaptive image weighting. Adaptive beamforming using adaptive element weighting aims at calculating element weights so as to increase the signal strength to/from a certain direction and minimize interference and noise. Adaptive element weights are applied to the signals received from the individual elements, before the signals are combined into an intensity value to be displayed as an image. The adaptive beamforming weights are often calculated based on a covariance matrix, as in Minimum Variance beamforming.


The van Cittert-Zernike (VCZ) theorem forms the basis of various other adaptive beamforming methods, which utilize the coherence between received signals to create a coherence image: an adaptive weight per pixel in the image. The theorem predicts a similarity, or coherence, between the focused signals received by an array of spatially separated transducers. Noise is assumed to have low coherence, while wanted signals have high coherence. There are many proposed methods for calculating the predicted coherence between the signals received on different transducers. The quality of an ultrasound image can be improved by multiplying each image pixel, the backscatter value, with the corresponding calculated coherence, thus suppressing pixels dominated by low-coherence noise. Alternatively, the coherence information alone can be used as an image, as in short-lag spatial coherence imaging (SLSC).


Generally, 2D ultrasound imaging will provide raw data which is four-dimensional. For example, in Coherent Plane Wave Compounding (CPWC) plane waves are transmitted across an entire region at a number of different angles using a transmission array, and are then received by a number of different receivers. For a plane wave transmitted at a particular angle, the data received by a particular receiver can be represented as a two-dimensional grid with coordinates of Z (depth or range) and X (lateral). Since a 2D image is produced by each receiver, stacking these images gives, for each transmission angle, a data cube, where the third dimension is known as the receive dimension (RX). A data cube is produced for each angle of transmission of a plane wave; for example, transmissions at five different angles will give five different cubes. The dimension “between” these cubes is known as the transmit dimension (TX). In addition, time can also be included, forming a fifth dimension to the data. In a similar manner, 3D ultrasound imaging will provide raw data that is five-dimensional, where time can be included as a sixth dimension.
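For illustration only (the array name and sizes below are not from the application), this raw CPWC data maps naturally onto a single multi-dimensional array:

```python
import numpy as np

Na, M = 5, 128      # number of plane-wave transmit angles, number of receive elements
Nz, Nx = 256, 192   # pixels in depth (z) and lateral (x) directions

# One 2-D (z, x) grid per receive element and per transmit: each transmit
# angle yields a data cube, and the cubes stack along the transmit dimension.
data = np.zeros((Na, M, Nz, Nx))    # dimensions: [Tx, Rx, z, x]

# Including time as a further dimension would give [time, Tx, Rx, z, x].
```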


It is known for this data to be combined by conventional delay-and-sum, or adaptive, beamforming, across the receive dimension (RX), as described above. For 2D imaging, this provides a low quality 2D image for each transmission angle, and similarly for 3D imaging this provides a low quality 3D image for each transmission angle. These images are then combined coherently, or in a further adaptive beamforming step.


The Applicant has recognised a problem with this known approach: conventional methods of producing these images by delay-and-sum beamforming provide low quality images. The image quality can be improved by carrying out adaptive beamforming at both stages of the data processing; however, this is time consuming and computationally expensive.


The present invention seeks to address these shortcomings.


SUMMARY OF THE INVENTION

From a first aspect, the invention provides a method of imaging a target region, the method comprising:


i) carrying out a first transmission of signals in a first direction into the target region using one or a plurality of transmitting devices located at a first position;


ii) receiving the signals reflected from the target region using a plurality of receiving devices;


iii) for each receiving device, forming a data set made up of the received signals wherein the data set has at least two dimensions, wherein the first dimension represents the depth or range within the region and the second dimension represents lateral distance within the region and optionally wherein the data comprises a third dimension and wherein the third dimension represents an orthogonal lateral distance within the region,

    • wherein the data set is formed by first calculating times of flight for each pixel within a two-dimensional grid, or optionally a three-dimensional grid, and then assigning to each pixel in the grid the data value of the corresponding time of the received signal, thereby generating a two-dimensional, or optionally three-dimensional, data set for each receiver and therefore a three-dimensional, or optionally four-dimensional, data set resulting from the first transmission of signals;
    • iv) making a second transmission of signals into the region, wherein the second transmission is in a second direction and/or made from a second position, distinct from the first direction or first position;
    • v) repeating steps ii) and iii) for the signals received from the second transmission;
    • vi) for each receiving device, summing the data acquired from each of the at least two transmissions, thereby producing a two-dimensional, or optionally three-dimensional receiving device data set corresponding to each receiving device;
    • vii) forming a three-dimensional, or optionally four-dimensional, data set made up of receiving device data sets, and subsequently carrying out adaptive beamforming on said three-dimensional, or four-dimensional, data set to combine the receiving device data sets so as to produce a single adaptive two-dimensional, or optionally three-dimensional, image of the region;
    • viii) and storing or displaying said image.


From a second aspect, the invention provides an imaging device, comprising:

    • one or a plurality of transmitting devices, for carrying out a first transmission of signals in a first direction into a target region from a first position and for carrying out a second transmission of signals in a second direction from a second position, wherein the second direction is distinct from the first direction and/or the second position is distinct from the first position, into the target region;
    • a plurality of receiving devices, for receiving the signals reflected from a target region using a plurality of receiving devices;
    • a processing unit, configured to form a first data set made up of the received signals from the first transmission, and a second data set made up of the received signals from the second transmission, wherein the first data set and the second data set comprise two dimensions, wherein the first dimension represents the depth or range within the region and the second dimension represents lateral distance within the region, and optionally wherein the first data set and the second data set each comprise a third dimension and wherein the third dimension represents an orthogonal lateral distance within the region;
    • wherein the data set is formed by first calculating times of flight for each pixel within a two-dimensional, or optionally three-dimensional, grid and then assigning to each pixel in the grid the data value of the corresponding time of the received signal, thereby generating a two-dimensional, or optionally three-dimensional, data set for each receiver and therefore a three-dimensional, or optionally four-dimensional, data set resulting from the first transmission of signals;
    • the processing unit further configured, for each receiving device, to sum the data acquired from each of the at least two transmissions, thereby producing a two-dimensional, or optionally three-dimensional receiving device data set corresponding to each receiving device;
    • the processing unit further configured to form a three-dimensional, or optionally four-dimensional, data set made up of receiving device data sets, and subsequently carry out adaptive beamforming on said three-dimensional, or optionally four dimensional, data set to combine the receiving device data sets so as to produce a single adaptive image of the region; and
    • a storage unit, for storing or displaying said image.


Thus it will be seen that, in accordance with the invention, by first summing across the transmit dimension to form a single data cube, the data from various transmissions may be combined into a single data cube of higher quality, providing synthetic focusing of the received signals. In accordance with the invention adaptive beamforming is then carried out on this data cube, resulting in a single adaptive image. The adaptive image may optionally be a high-quality image. This invention allows an adaptive image, either a high-quality image or an image showing different information than a conventional DAS image, to be produced whilst carrying out the computationally expensive adaptive beamforming processing stage only once. The term “high-quality” is used herein to mean that the image is of better quality than an image produced using conventional “delay-and-sum” imaging techniques on the same data set. The adaptive image could, for example, be a coherence image produced using short lag spatial coherence imaging, as explained herein. This method and the advantages which it offers are equally applicable in the case of two-dimensional and three-dimensional imaging and image processing.


It will be understood by the skilled person that the first dimension can be either a “depth” within a region or a “range” within a region, depending on the type of imaging. For example, ultrasound imaging is often used to image an internal region of a human or animal body, so the transducer will generally be placed directly above the region of the body to be imaged, and the first dimension would therefore be referred to as the depth within that region of the body. Alternatively, other imaging methods such as sonar and radar often image the outer surface of an object, e.g. satellite images taken of the earth. In this case a satellite may be positioned at a certain height above the closest point on the surface of the earth, but it may be imaging a region at a different point on the curved surface of the earth, which is therefore further away because of the curvature of the surface. This distance, the first dimension, is referred to herein by the term “range”.


In some embodiments, the adaptive beamforming is carried out by software. In some embodiments, the stage of summing the data acquired from each of the at least two transmissions for each receiving device, at stage vi, is carried out by software. The Applicant has appreciated that using software beamforming opens up the possibility of storing the signals received on individual transducer elements (the channel data) for multiple different transmissions, allowing real time access to all ultrasound channel-data simultaneously, and therefore allowing data to be processed differently to techniques which are known in the art.


Flexible software implementations have introduced a number of “adaptive beamformers”. Such adaptive beamformers aim at improving the quality of a final ultrasound image by exploiting information derived from the channel-data when combining the data into the final image pixel. They are therefore to be contrasted with the conventional ‘beamforming’ in prior art systems where the raw individual signals from the transducer elements are summed early in the processing chain, often in specialized hardware, so that such individual signals are not available for analysis. Two examples of adaptive beamformers are Capon's Minimum Variance beamformer and the Short Lag Spatial Coherence, which are explained in detail below.


At stage vi) data acquired from at least two transmissions for each receiving device is summed. In some embodiments, the method further comprises carrying out stages iv) and v) more than once i.e. such that overall the stages of transmission, reception and forming of a data set are carried out a total of at least three times. Thus in some embodiments, the method further comprises making at least one further transmission of signals into the region, wherein the at least one further transmission is in a different direction and/or made from a different position, distinct from the direction or position of the previous transmissions. In such embodiments, the method further comprises repeating steps ii) and iii) for the signals received from the at least one further transmission. In some embodiments, the method comprises making at least five transmissions, each in different directions or from different positions.


In some embodiments the transmitted signal is a sound wave, preferably an ultrasound wave. In other embodiments, the transmitted signal is an electromagnetic wave. This is the case, for example, in a radar imaging system using a method according to the present invention.


In some embodiments the first transmission and/or the second transmission is carried out using a majority of the plurality of transmitting devices, for example as is done in Coherent Plane Wave Compounding. In this example, one or multiple planar transmit beams are transmitted into a domain at different transmit angles α.


In some embodiments the plurality of transmitters are arranged so that the first transmission and/or the second transmission originates from a virtual source located behind the transmitters. This is often referred to as a diverging wave waveform.


In some embodiments the first transmission and/or the second transmission is a focused-wave waveform.


In some embodiments the first transmission and/or the second transmission is an omni-directional wave. In this case the first transmission and the second transmission are made from different positions.


The second direction of the second transmission may be at a distinct angle to the first direction of the first transmission. Optionally said distinct angles are each in a range from a minimum angle value −αmax to a maximum angle value αmax. Optionally the value of αmax is determined to be αmax ≈ 1/(2f#), wherein f# is a selected ratio between the depth of a pixel and the size of a receiving aperture.
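As a simple numerical illustration of this relation (the f-number value here is arbitrary, not from the application):

```python
import math

f_number = 1.75                      # example f#: ratio of pixel depth to receive aperture size
alpha_max = 1.0 / (2.0 * f_number)   # alpha_max ≈ 1/(2 f#), in radians
print(math.degrees(alpha_max))       # ~16.4 degrees; transmit angles span [-alpha_max, alpha_max]
```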


Features of any aspect or embodiment described herein may, wherever appropriate, be applied to any other aspect or embodiment described herein. Where reference is made to different embodiments or sets of embodiments, it should be understood that these are not necessarily distinct but may overlap.





BRIEF DESCRIPTION OF THE DRAWINGS

Certain preferred embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:



FIG. 1 is a schematic representation of an imaging system;



FIG. 2 is a schematic representation of a prior art method of data processing;



FIG. 3 is a schematic representation of a method in accordance with the present invention;



FIGS. 4a-4d show images created by processing data using various known data processing techniques;



FIG. 4e shows an image created by processing data using a method in accordance with the present invention;



FIG. 5 is a graph showing the computational time required for the various data processing techniques used to produce the images of FIGS. 4a-4e;



FIG. 6 is an ultrasound image of an experimental phantom created by processing ultrasound data using a conventional SLSC adaptive beamforming algorithm; and



FIG. 7 is an ultrasound image constructed from the same data as was used to produce the image shown in FIG. 6, but processed using a method in accordance with the invention.





DETAILED DESCRIPTION

The following examples are given with reference to a two-dimensional imaging system and method, however it will be readily understood by the skilled person that the same disclosure and teaching applies equally in the context of three-dimensional imaging systems and methods.



FIG. 1 shows schematically a linear array 2 made up of a number (M) of individual transducer elements 4, arranged along an x-axis 6. The transducer elements may, for example, be ultrasonic transducers. The linear array 2 irradiates a region with a transmit beam, which can be either planar, diverging or converging, and which can travel, for example, in the direction of the z-axis 8.


A particular transducer element m receives a signal h_{m,a}(t) following a particular transmit a. In the example case of FIG. 1, used for explanatory purposes, the signal is reflected from a particular point 10, \vec{x} = (z, x). This signal has travelled a first distance, denoted T, from the origin o of the transmitted wave to this point, and a second distance, denoted R, from the point 10 to the receiving transducer element m. Assuming the signal travels in a medium with sound speed c_0, the delay from transmission to reception of the signal shown in FIG. 1 is given by:

\Delta t = (T + R) / c_0


The receive distance R is independent of the type of transmit which is made. It is calculated as:

R(z, x, m) = \sqrt{z^2 + (x - x_m)^2}

where x_m is the lateral position of element m.


The transmit distance T depends on the type of beam which is transmitted from the array 2. Some specific examples are considered below, for explanatory purposes:


In the case of Coherent Plane Wave Compounding one or multiple planar transmit beams are transmitted into a domain at different transmit angles α. In this case the transmit distance T is:

T(z, x, α) = z \cos(α) + x \sin(α)


In the case of a Diverging Wave, originating from a virtual source (z_s, x_s) behind the transducer, the transmit distance T is given by:

T(z, x, z_s, x_s) = \sqrt{(x - x_s)^2 + (z - z_s)^2}


In the case of a focused wave, which can be either converging or diverging, one can calculate the transmit distance T assuming a spherical virtual source model. In this model a virtual source \vec{v}_s = (z_s, x_s) is placed at the focus of the transmission, with the centre of the transmission originating from \vec{p}_c = (z_c, x_c), and the transmit distance T is given by:

T(z, x, z_s, x_s) = |\vec{v}_s - \vec{p}_c| + |\vec{x} - \vec{v}_s|
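The three transmit-distance models above translate directly into code. A minimal sketch in NumPy, with function names chosen for illustration and all coordinates assumed to be in consistent units:

```python
import numpy as np

def t_plane_wave(z, x, alpha):
    """CPWC plane wave at angle alpha: T = z*cos(alpha) + x*sin(alpha)."""
    return z * np.cos(alpha) + x * np.sin(alpha)

def t_diverging_wave(z, x, z_s, x_s):
    """Diverging wave from a virtual source (z_s, x_s) behind the transducer."""
    return np.sqrt((x - x_s) ** 2 + (z - z_s) ** 2)

def t_focused_wave(z, x, z_s, x_s, z_c, x_c):
    """Spherical virtual-source model: |v_s - p_c| + |x_vec - v_s|."""
    return np.hypot(z_s - z_c, x_s - x_c) + np.hypot(z - z_s, x - x_s)
```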


This “time of flight” value, Δt, is used to assign a signal value s_{m,a}, also known as a pixel value, to a particular pixel corresponding to a point in the imaged region, for a particular transducer element m and a particular transmission a, so that:

s_{m,a} = h_{m,a}(t)\big|_{t = \Delta t}

Conventionally, these pixel values for different transducer elements and for different transmissions are combined using “Delay-and-Sum” beamforming. In this approach an image b_{DAS} is formed by coherently combining the pixel values as received by all elements M from all transmits N_a, giving:







b_{DAS} = \sum_{a=0}^{N_a - 1} \sum_{m=0}^{M-1} w_a^{Tx}[z, x] \; w_m^{Rx}[z, x] \; s_{m,a}[z, x]

Here w_m^{Rx} is the receive apodization with dimensions [N_z, N_x, M], while w_a^{Tx} is the transmit apodization with dimensions [N_z, N_x, N_a]. An apodization function, or tapering function (also known as a window function), is a mathematical function that is zero-valued outside of some chosen interval, normally symmetric around the middle of the interval, usually near a maximum in the middle, and usually tapering away from the middle. One such window function is known as a Boxcar window. For a uniform Boxcar window the receive apodization w_m^{Rx} can be calculated by








w_m^{Rx}[z, x, x_m] = \begin{cases} 1, & \text{if } |x - x_m| \le \dfrac{z}{2 f_{\#}} \\ 0, & \text{otherwise} \end{cases}
where (z, x) is the pixel position, x_m is the position of the receiving element, and f_{\#} is the selected ratio between the pixel depth and the size of the receiving aperture. Other window functions such as, but not limited to, Hamming and Tukey can also be used. For the transmit apodization w_a^{Tx}, the apodization depends on the type of transmitted wave and the area into which the wave is transmitted.
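A sketch of this uniform Boxcar receive window (an expanding aperture controlled by the f-number):

```python
import numpy as np

def boxcar_rx_apodization(z, x, x_m, f_number):
    """w_m^Rx[z, x] = 1 where |x - x_m| <= z / (2 f#), and 0 otherwise.

    z, x: pixel coordinate grids of shape [Nz, Nx]; x_m: receive element position.
    """
    return (np.abs(x - x_m) <= z / (2.0 * f_number)).astype(float)
```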


For simplicity the spatial coordinates can be dropped; the sum over each transmit a can be defined as the sum over the transmit dimension Tx, and the sum over each transducer element m as the sum over the receive dimension Rx:







b_{DAS} = \sum_{a=0}^{N_a - 1} \sum_{m=0}^{M-1} w_a^{Tx} \, w_m^{Rx} \, s_{m,a} = \sum_{Tx} w_a^{Tx} \sum_{Rx} w_m^{Rx} \, s_{m,a}
This sum shows the conventional way of implementing “Delay-and-Sum”: once data is received for all of the M elements, the sum over the receive dimension Rx is carried out during the imaging process, whilst further transmissions are being made.


In particular, in the above equation, the data for each transmission a can be considered as a three-dimensional cube, with dimensions of z, x, and M (the number of receive elements). Thus the pixel values s_{m,a} have dimensions [N_z, N_x, M], and there will be N_a of these data cubes, one for each transmit a. This is shown in FIG. 2, which represents the conventional processing of these data cubes.



FIG. 2 represents schematically five data cubes. Each cube (20a, 20b, 20c, 20d, 20e) corresponds to data from a particular transmission; in this example there have been five different transmissions. Each data cube has two spatial dimensions, an x-dimension 26a and a z-dimension 26b (i.e. the depth or range of the pixel), and also has a third dimension, referred to as the receive dimension 26c, since each two-dimensional data set corresponds to data received on a particular transducer element m.


In the “Delay-and-Sum” approach described in the above equation, the sum over the receive dimension 26c is carried out first, producing a single transmission image (22a, 22b, 22c, 22d, 22e) corresponding to each transmission. The images produced from each transmission are then summed to produce a final image 24, referred to above as bDAS.
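In array terms, the conventional ordering of FIG. 2 is two successive reductions of the delayed [Tx, Rx, z, x] data, Rx first and Tx second. A minimal sketch with unit apodization weights (the weights w are omitted for brevity):

```python
import numpy as np

def das_conventional(s):
    """s: delayed data of shape [Na, M, Nz, Nx], i.e. (Tx, Rx, z, x)."""
    per_transmit_images = s.sum(axis=1)     # sum over Rx: one low-quality image per transmit
    return per_transmit_images.sum(axis=0)  # coherent compounding over Tx: b_DAS, [Nz, Nx]
```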


However, the Applicant has appreciated that using software beamforming opens up the possibility of storing the signals received on individual transducer elements (the channel data) for multiple different transmissions, and therefore of processing the data differently.


Considering again the equation:

b_{DAS} = \sum_{Tx} w_a^{Tx} \sum_{Rx} w_m^{Rx} \, s_{m,a}

we can write

b_a^{\overline{RxDAS}} = \sum_{Rx} w_m^{Rx} \, s_{m,a}

where b_a^{\overline{RxDAS}} is the result of the coherent combination of the signals over the receive elements M.


Therefore

b_{DAS} = \sum_{Tx} w_a^{Tx} \, b_a^{\overline{RxDAS}} = b^{\overline{TxDAS}\,\overline{RxDAS}}

where the overline \overline{TxDAS} likewise denotes the coherent combination of the signal over the transmit dimension.


This equation therefore shows the process as represented in FIG. 2: the data is first summed over the receive dimension 26c, and then summed over the different transmissions, the transmit dimension Tx.


A sum is commutative, and therefore

\hat{b}_{DAS} = b^{\overline{RxDAS}\,\overline{TxDAS}} = b^{\overline{TxDAS}\,\overline{RxDAS}} = b_{DAS}

A key insight of the present invention is therefore to carry out this sum in a different order: to sum first over the transmit dimension Tx, to arrive at a three-dimensional data set, and then to sum over the receive dimension Rx, to arrive at a final image. This method is represented schematically in FIG. 3. The equation for the final produced image is therefore:








\hat{b}_{DAS} = \sum_{Rx} w_m^{Rx} \, b_m^{\overline{TxDAS}}
This process is represented schematically in FIG. 3. Similarly to FIG. 2, each of the data cubes 30a, 30b, 30c, 30d, 30e has two spatial dimensions, an x-dimension 36a and a z-dimension 36b. Each two-dimensional image in the cube corresponds to a particular receive element 4, giving each data cube a third, receive dimension 36c. Each cube 30a, 30b, 30c, 30d and 30e corresponds to a particular transmission; this gives the transmit dimension Tx. In accordance with the present invention the data is summed first across the different transmissions, giving a single data cube with the two spatial dimensions (z, x) and a third dimension, the receive dimension 38. This data can then be summed to give a single high-quality image 34.


Thus the described embodiment of the present invention processes received data in two stages: the first stage 40 sums the data across the transmit dimension, whilst the second stage 42 sums, or combines, the data cube produced by the first stage across the receive dimension, giving a single adaptive image, which may be a “high-quality” image. Thus far the only method of summing the received data that has been described is the conventional delay-and-sum approach, as denoted by the subscript ‘DAS’. However, it is known in the art to process such received data using a method known as “adaptive beamforming”.
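Because the two sums commute, the reordered pipeline of FIG. 3 yields an identical DAS image while exposing a single [Rx, z, x] cube on which a single adaptive beamforming pass can act. A sketch with unit weights, checking the equality numerically on random data:

```python
import numpy as np

def das_reordered(s):
    """FIG. 3 ordering: sum over Tx first (stage 40), then over Rx (stage 42)."""
    cube = s.sum(axis=0)     # single [M, Nz, Nx] cube, keeping the receive dimension 38
    return cube.sum(axis=0)  # final image 34

rng = np.random.default_rng(0)
s = rng.standard_normal((5, 16, 32, 24))             # [Na, M, Nz, Nx]
assert np.allclose(das_reordered(s), s.sum(axis=1).sum(axis=0))
```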


There are many different adaptive beamformers which are known in the art. The dimension reduction discussed above is general and can be implemented using most known adaptive beamformers. For illustration purposes the dimension reduction strategy will be considered in more detail below using two of the most popular adaptive beamformers—Capon's Minimum Variance beamformer and the Short Lag Spatial Coherence.


Capon's Minimum Variance (MV)

Capon's Minimum Variance (MV) technique calculates a data dependent set of weights w while maintaining unity gain in the steering direction. This is posed as a minimization problem by





\min_w E\{|b|^2\} = w^H R w \quad \text{subject to} \quad w^H a = 1

where R \equiv E\{t t^H\} is the spatial covariance matrix, E is the expected value operator, and the steering vector is a = 1, because it is assumed that all signals are already delayed.


This equation can be solved using the method of Lagrange multipliers. This gives:







w_{MV} = \frac{R^{-1} a}{a^H R^{-1} a}
The spatial covariance matrix R is unknown, but assuming a linear array it can be estimated for point (z,x) by:









\hat{R}(z, x) = \frac{\sum_{k=-K}^{K} \sum_{l=0}^{M-L} \bar{t}_l(z - k, x) \, \bar{t}_l^H(z - k, x)}{(2K + 1)(M - L + 1)}
where (2K+1) is the number of axial samples, L is the length of the subarray, and







\bar{t}_l(z, x) = [\, t_l(z, x) \;\; t_{l+1}(z, x) \;\; \ldots \;\; t_{l+L-1}(z, x) \,]^T


The subarray averaging improves robustness. To further improve robustness and numerical stability, diagonal loading is added to the estimated covariance matrix by \tilde{R}(z, x) = \hat{R}(z, x) + ϵI, where I is the identity matrix and






ϵ = \frac{Δ}{L} \, \mathrm{tr}\{\hat{R}(z, x)\}

where tr{ } is the trace operator.


The adaptive weights are then applied as







b_{MV} = \frac{1}{M - L + 1} \sum_{l=0}^{M-L} w_{MV}^H \, \bar{t}_l
Conventionally, the minimum variance (MV) weight set is calculated for the signals received on the M elements from one transmit a, and thus the t in the equation for \hat{R}(z, x) above is t = s_{m,a}.
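A compact sketch of the MV estimate for a single pixel, following the equations above; the real-valued data, subarray length L, 2K+1 axial samples and loading factor delta are illustrative parameters:

```python
import numpy as np

def mv_pixel(t, iz, L, K, delta):
    """Minimum Variance amplitude for the pixel at depth index iz.

    t: delayed channel data for one lateral position, shape [Nz, M].
    """
    M = t.shape[1]
    n_sub = M - L + 1
    R = np.zeros((L, L), dtype=t.dtype)
    for k in range(-K, K + 1):               # axial averaging over 2K+1 samples
        for l in range(n_sub):               # subarray averaging
            t_l = t[iz - k, l:l + L]
            R += np.outer(t_l, t_l.conj())
    R /= (2 * K + 1) * n_sub
    R += (delta / L) * np.trace(R) * np.eye(L)   # diagonal loading, eps = (delta/L) tr{R}
    a = np.ones(L)                           # steering vector a = 1 (data already delayed)
    Ri_a = np.linalg.solve(R, a)             # R^-1 a
    w = Ri_a / (a.conj() @ Ri_a)             # w_MV = R^-1 a / (a^H R^-1 a)
    return np.mean([w.conj() @ t[iz, l:l + L] for l in range(n_sub)])  # b_MV
```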


This means that we are calculating the w_m^{Rx} weight set from the Delay-and-Sum beamforming equation given above,







b_{DAS} = \sum_{a=0}^{N_a - 1} \sum_{m=0}^{M-1} w_a^{Tx}[z, x] \; w_m^{Rx}[z, x] \; s_{m,a}[z, x]
and substituting

b_a^{\overline{RxDAS}} = \sum_{Rx} w_m^{Rx} \, s_{m,a}

as defined in the case of Delay-and-Sum above, with b_{MV} as given above.


The resulting images, where the MV was applied over the Rx dimension, are denoted by b_a^{\overline{RxMV}}. Notice that we will have one such image for each transmit a, and thus we can do conventional coherent compounding over the transmit dimension:









b^{\overline{TxDAS}\,\overline{RxMV}} = \sum_{Tx} w_a^{Tx} \, b_a^{\overline{RxMV}}

One can also first do a conventional Delay-and-Sum over the receive elements as described above, and then set t = b_a^{\overline{RxDAS}} in the equation for \hat{R}(z, x) above. This results in an adaptive w_a^{Tx} weight set, and one can substitute







b_{DAS} = \sum_{Tx} w_a^{Tx} \, b_a^{\overline{RxDAS}} = b^{\overline{TxDAS}\,\overline{RxDAS}}

with the equation for b_{MV} as given above. Let's denote the resulting image, where the MV was applied over the Tx dimension of multiple b_a^{\overline{RxDAS}} images, as

b^{\overline{TxMV}\,\overline{RxDAS}}
For the CPWC case, this means that the coherent compounding is adaptive.


Alternatively, one can do both—meaning that we apply MV first over the receive dimension, and then over the transmit dimension. Thus we are substituting both








b_a^{\overline{RxDAS}} = \sum_{Rx} w_m^{Rx} \, s_{m,a}

and

b_{DAS} = \sum_{Tx} w_a^{Tx} \, b_a^{\overline{RxDAS}} = b^{\overline{TxDAS}\,\overline{RxDAS}}

with the equation for b_{MV} as given above. Let's denote the resulting image as






b^{\overline{TxMV}\,\overline{RxMV}}

For the CPWC case, this approach has elsewhere been referred to as Double Adaptive Plane-Wave Imaging.


All of the above-described methods are ways in which adaptive beamforming can be implemented within conventional data processing as described with reference to FIG. 2.


However, a significant development provided in accordance with the present invention is the realization that the novel data processing method in which data is summed first on the transmit dimension and then combined, or summed, on the receive dimension (as shown in FIG. 3), is particularly advantageous when used in combination with adaptive beamforming. Thus, conventional Delay-and-Sum processing can be done over Tx, but then








\hat{b}_{DAS} = \sum_{Rx} w_m^{Rx} \, b_m^{\overline{TxDAS}}

can be substituted with the equation for b_{MV} as given above, so that minimum variance adaptive beamforming is done on the Rx dimension. Let's denote the resulting image as






b^{\overline{RxMV}\,\overline{TxDAS}}
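Putting the pieces together, a sketch of this proposed pipeline: one DAS sum over Tx, then MV once over the remaining Rx dimension (reusing the illustrative mv_pixel routine sketched above), so that the expensive adaptive step runs once rather than once per transmit:

```python
import numpy as np

def b_rxmv_txdas(s, L=32, K=2, delta=0.01):
    """s: delayed data [Na, M, Nz, Nx]; returns a single adaptive image [Nz, Nx]."""
    cube = s.sum(axis=0)                # DAS over Tx: one synthetically focused [M, Nz, Nx] cube
    _, M, Nz, Nx = s.shape
    image = np.zeros((Nz, Nx))
    for ix in range(Nx):
        t = cube[:, :, ix].T            # [Nz, M] channel data for one lateral line
        for iz in range(K, Nz - K):
            image[iz, ix] = mv_pixel(t, iz, L, K, delta)   # MV over the Rx dimension
    return image
```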

FIGS. 4a-4e show images of the same region, processed using the various methods discussed above. The data was created by carrying out Coherent Plane Wave Compounding imaging of a particular region using an array of ultrasound transducers, with five different transmissions (N_a = 5), each at a different angle. The images are simulated images of an image quality benchmarking phantom including a hypoechoic cyst, a linear intensity gradient to evaluate contrast, and point scatterers to evaluate resolution.



FIG. 4a shows an image produced according to conventional “Delay-and-Sum” processing, denoted above as b^{\overline{TxDAS}\,\overline{RxDAS}}.





FIG. 4b shows an image produced by first summing over the receive dimension using Minimum Variance adaptive beamforming, then summing over the transmit dimension using conventional “Delay-and-Sum”. This method is described above, and an image processed in such a way is denoted as b^{\overline{TxDAS}\,\overline{RxMV}}.





FIG. 4c shows an image produced by first summing over the receive dimension using conventional “Delay-and-Sum”, then carrying out minimum variance adaptive beamforming over the transmit dimension, as described above. This image is denoted b^{\overline{TxMV}\,\overline{RxDAS}}.





FIG. 4d shows an image produced by summing first over the receive dimension, and then over the transmit dimension, both using minimum variance adaptive beamforming. This method has been referred to as Double Adaptive Plane-Wave Imaging. Such an image is denoted as b^{\overline{TxMV}\,\overline{RxMV}}.










FIG. 4e shows an image produced according to a method embodying the present invention, denoted as b^{\overline{RxMV}\,\overline{TxDAS}}.




These Figures clearly show an improvement in image resolution for all of the images (FIGS. 4b-4e) which use minimum variance beamforming, compared to conventional Delay-and-Sum (FIG. 4a).



FIG. 5 shows the computational time in minutes (on axis 52) required to produce each of the images shown in FIGS. 4a-4e. Bar 50a is the computational time required for the image in FIG. 4a, and the remaining bars correspond respectively to FIGS. 4b-4e.


Interestingly, the computation time for FIG. 4e is significantly less than for FIG. 4b and FIG. 4d, whilst still resulting in the same, and in fact somewhat better, image quality in terms of resolution and contrast. This reduction in computational time is a result of the principles of the present invention, and in particular of reducing the number of times the adaptive beamforming process is carried out on the different data sets.


Short Lag Spatial Coherence

The invention may also be applied to the short lag spatial coherence (SLSC) algorithm. The spatial correlation can be calculated as









\hat{R}(m) = \frac{1}{M - m} \sum_{i=1}^{M-m} \frac{\sum_{n=n_1}^{n_2} p_i(n) \, p_{i+m}(n)}{\sqrt{\sum_{n=n_1}^{n_2} p_i^2(n) \; \sum_{n=n_1}^{n_2} p_{i+m}^2(n)}}
where p is the delayed signal, n is the depth sample index, and m is the distance, or lag, in number of elements between two points on the aperture. The sum over n results in a correlation over a given kernel size of n_2 − n_1 pixels. The short lag spatial coherence is calculated as the sum over the first M lags:







b_{SLSC} = \sum_{m=1}^{M} \hat{R}(m)
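A sketch of this coherence computation for one pixel, following the normalized-correlation definition above; the kernel bounds and maximum lag are free parameters:

```python
import numpy as np

def slsc_pixel(p, n1, n2, max_lag):
    """Short lag spatial coherence for the kernel spanning depth samples n1..n2.

    p: delayed signals, shape [Nz, M].
    """
    M = p.shape[1]
    kernel = p[n1:n2 + 1, :]                    # axial correlation kernel
    b = 0.0
    for m in range(1, max_lag + 1):             # sum R^(m) over the short lags
        num = np.sum(kernel[:, :M - m] * kernel[:, m:], axis=0)
        den = np.sqrt(np.sum(kernel[:, :M - m] ** 2, axis=0)
                      * np.sum(kernel[:, m:] ** 2, axis=0))
        b += np.mean(num / den)                 # average over element pairs i = 1..M-m
    return b
```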
Thus, notice that b_{SLSC} is an image of the coherence and not of the backscattered signal amplitude, as with DAS and MV. The SLSC image is a visualization of the spatial coherence of backscattered ultrasound waves, building upon the theoretical prediction of the van Cittert-Zernike (VCZ) theorem. Thus, the SLSC is applied on the Rx dimension. Conventionally, this is done by setting p in the equation above to p = s_{m,a}, so that we get one SLSC image






b_a^{\overline{RxSLSC}}

from each transmit a, which again can be coherently compounded so that we get the final SLSC image:






b^{\overline{TxDAS}\,\overline{RxSLSC}} = \sum_{Tx} w_a^{Tx} \, b_a^{\overline{RxSLSC}}
We can also exploit the invention: we can sum over the Tx dimension first, and then do SLSC over the Rx dimension. Thus, we set






p = b_m^{\overline{TxDAS}}
resulting in the final image







b^{\overline{RxSLSC}\,\overline{TxDAS}}.
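A sketch of this variant, assuming the illustrative slsc_pixel routine above: the cubes are summed over Tx once, and SLSC is then run on the single resulting cube:

```python
import numpy as np

def b_rxslsc_txdas(s, kernel_half=2, max_lag=10):
    """s: delayed data [Na, M, Nz, Nx]; returns a coherence image [Nz, Nx]."""
    cube = s.sum(axis=0)                        # p = b_m over TxDAS: one [M, Nz, Nx] cube
    _, M, Nz, Nx = s.shape
    image = np.zeros((Nz, Nx))
    for ix in range(Nx):
        p = cube[:, :, ix].T                    # [Nz, M]
        for iz in range(kernel_half, Nz - kernel_half):
            image[iz, ix] = slsc_pixel(p, iz - kernel_half, iz + kernel_half, max_lag)
    return image
```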




FIG. 6 illustrates an ultrasound image of an experimental phantom, produced using what can be regarded as a conventional implementation of the SLSC adaptive beamforming algorithm, denoted as b_{SLSC} above. The image is from a focused transmit. The x- and y-axes represent, respectively, the x and z distances in mm.



FIG. 7 illustrates an ultrasound image constructed from the same data as was used to produce the image shown in FIG. 6, but processed using an imaging method in accordance with the invention. Specifically, the data is processed using

b^{\overline{RxSLSC}\,\overline{TxDAS}}

as described above. It can be appreciated that the region of high coherence from the speckle background extends beyond the focal region, which is the only region of high coherence in FIG. 6.


The invention is, however, not limited to the adaptive beamforming techniques of SLSC and MV.


Although the present invention has been described in the context of ultrasound imaging it is also suitable for use in other imaging systems, such as radar and sonar imaging systems.


In synthetic sonar and radar, the transmitter is often simpler than in the case of Coherent Plane Wave Compounding imaging. The transmitter often consists of one or a few elements, while the receiver is an array comprising many receiver elements.


In synthetic sonar and radar, a small number of transmitters (or possibly a single transmitter) is used to transmit a signal at a first location, and a large number of receivers are then used to receive the reflected signal. The small number of transmitters is then moved a short distance, and used to carry out a second transmission, which is again received by the array of receivers. This effectively forms what is known as a “synthetic long array” which is able to image a large region, as though a large array of transmitters were used, by using only a small number of transmitters but moving them many times. The above description is somewhat simplified, since typically in this kind of imaging the arrays are moved during the transmission and reception processes, not simply between each transmission-reception cycle.


The long receiver array can be treated mathematically as though it is a long array of transceivers. It can be treated as though a small region of the transceiver array, a “sub-array”, has been used to make a first transmission, and has then been used to receive the reflected signal. It can then be treated as though a second region of the transceiver has been used to make a second transmission into the same region (this corresponds to the transmission by the transmitters after they have been moved to a different position), and this data has been received with a different array of receivers (an array of the same size as the array used for the first transmission).


This process is then repeated at many transmitter locations (which can be treated as many different “sub-arrays”). Of course there is not really a long transceiver array, so these “sub-arrays” are just mathematical tools. In synthetic aperture processing (SAR and SAS), they might better be termed a “translated receiver array”. Notably, the shorter arrays into which the synthetic receiver array is divided do not need to be the same size as the physical array: a “sub-array” could be shorter than the physical array (i.e. it could divide the physical array into two), or it could be longer (i.e. it could comprise two consecutive transmit-receive locations of the physical receiver array, dividing the synthetic array into sections twice as long as the physical array). This is likewise true for the technique used in the context of ultrasound imaging: in the ultrasound method there is a physical transceiver array, but nonetheless in processing the image the physical array can be artificially divided into shorter sub-arrays, and the data cubes from these can be summed and combined.


Ordinarily a transmission from a single position into a large region, using an unfocused beam, would form a low quality image. Adaptive beamforming algorithms, specifically adaptive beamforming based on adaptive element weightings, do not produce very good results when applied to such images. The technique described herein is useful as it allows data from a number of transmissions to be summed, and then synthetically formed into a focused transmit beam. Adaptive beamforming algorithms can then be applied very effectively to the image data.


Instead of just summing all of the received data together, as if it were a long (synthetic) array, cubes are formed for each transmission (or for every two transmissions/receptions, etc.) and summed, as described above with reference to ultrasound imaging, to give a single cube. Summing the acquired cube along the receiver dimension gives a high-resolution image, with resolution of the order of, for example, centimetres or decimetres, as if conventional DAS beamforming had been used. Alternatively, adaptive beamforming along the receiver dimension can be applied to form an adaptive image of the same scene.


It will be appreciated by those skilled in the art that the invention has been illustrated by describing one or more specific embodiments thereof, but is not limited to these embodiments; many variations and modifications are possible, within the scope of the accompanying claims.

Claims
  • 1. A method of imaging a target region, the method comprising: i) carrying out a first transmission of signals in a first direction into the target region using one or a plurality of transmitting devices located at a first position; ii) receiving the signals reflected from the target region using a plurality of receiving devices; iii) for each receiving device, forming a data set made up of the received signals wherein the data set has at least two dimensions, wherein the first dimension represents the depth or range within the region and the second dimension represents lateral distance within the region and optionally wherein the data comprises a third dimension and wherein the third dimension represents an orthogonal lateral distance within the region, wherein the data set is formed by first calculating times of flight for each pixel within a two-dimensional grid, or optionally a three-dimensional grid, and then assigning to each pixel in the grid the data value of the corresponding time of the received signal, thereby generating a two-dimensional, or optionally three-dimensional, data set for each receiver and therefore a three-dimensional, or optionally four-dimensional, data set resulting from the first transmission of signals; iv) making a second transmission of signals into the region, wherein the second transmission is in a second direction and/or made from a second position, distinct from the first direction or first position; v) repeating steps ii) and iii) for the signals received from the second transmission; vi) for each receiving device, summing the data acquired from each of the at least two transmissions, thereby producing a two-dimensional, or optionally three-dimensional, receiving device data set corresponding to each receiving device; vii) forming a three-dimensional, or optionally four-dimensional, data set made up of receiving device data sets, and subsequently carrying out adaptive beamforming on said three-dimensional, or four-dimensional, data set to combine the receiving device data sets so as to produce a single adaptive two-dimensional, or optionally three-dimensional, image of the region; viii) and storing or displaying said image.
  • 2. A method as claimed in claim 1, wherein the transmitted signal is a sound wave, optionally an ultrasound wave.
  • 3. A method as claimed in claim 1, wherein the transmitted signal is an electromagnetic wave.
  • 4. A method as claimed in claim 1, wherein the first transmission and/or the second transmission is carried out using a majority of the plurality of transmitting devices.
  • 5. A method as claimed in claim 1, wherein the plurality of transmitters are arranged so that the first transmission and/or the second transmission originates from a virtual source located behind the transmitters.
  • 6. A method as claimed in claim 1, wherein the first transmission and/or the second transmission is a focused-wave waveform.
  • 7. A method as claimed in claim 1, wherein the first transmission and/or the second transmission is an omni-directional wave and wherein the first transmission and the second transmission are made from different positions.
  • 8. A method as claimed in claim 1, wherein the second direction of the second transmission is at a distinct angle to the first direction of the first transmission, wherein said distinct angles are each in a range from a minimum angle value −αmax to a maximum angle value αmax, and the value of αmax is determined to be αmax ≈ 1/(2f#), wherein f# is a selected ratio between the depth of a pixel and the size of a receiving aperture.
  • 9. An imaging device, comprising: one or a plurality of transmitting devices, for carrying out a first transmission of signals in a first direction into a target region from a first position and for carrying out a second transmission of signals in a second direction from a second position, wherein the second direction is distinct from the first direction and/or the second position is distinct from the first position, into the target region; a plurality of receiving devices, for receiving the signals reflected from a target region using a plurality of receiving devices; a processing unit, configured to form a first data set made up of the received signals from the first transmission, and a second data set made up of the received signals from the second transmission, wherein the first data set and the second data set comprise two dimensions, wherein the first dimension represents the depth or range within the region and the second dimension represents lateral distance within the region, and optionally wherein the first data set and the second data set each comprise a third dimension and wherein the third dimension represents an orthogonal lateral distance within the region; wherein the data set is formed by first calculating times of flight for each pixel within a two-dimensional, or optionally three-dimensional, grid and then assigning to each pixel in the grid the data value of the corresponding time of the received signal, thereby generating a two-dimensional, or optionally three-dimensional, data set for each receiver and therefore a three-dimensional, or optionally four-dimensional, data set resulting from the first transmission of signals; the processing unit further configured, for each receiving device, to sum the data acquired from each of the at least two transmissions, thereby producing a two-dimensional, or optionally three-dimensional, receiving device data set corresponding to each receiving device; the processing unit further configured to form a three-dimensional, or optionally four-dimensional, data set made up of receiving device data sets, and subsequently carry out adaptive beamforming on said three-dimensional, or optionally four-dimensional, data set to combine the receiving device data sets so as to produce a single adaptive image of the region; and a storage unit, for storing or displaying said image.
  • 10. An imaging device as claimed in claim 9, wherein the transmitted signal is a sound wave, optionally an ultrasound wave, or wherein the transmitted signal is an electromagnetic wave.
  • 11. An imaging device as claimed in claim 9, wherein the first transmission and/or the second transmission is carried out using a majority of the plurality of transmitting devices.
  • 12. An imaging device as claimed in claim 9, wherein the plurality of transmitters are arranged so that the first transmission and/or the second transmission originates from a virtual source located behind the transmitters.
  • 13. An imaging device as claimed in claim 9, wherein the first transmission and/or the second transmission is a focused-wave waveform.
  • 14. An imaging device as claimed in claim 9, wherein the first transmission and/or the second transmission is an omni-directional wave and wherein the first transmission and the second transmission are made from different positions.
  • 15. An imaging device as claimed in claim 9, wherein the second direction of the second transmission is at a distinct angle to the first direction of the first transmission, wherein said distinct angles are each in a range from a minimum angle value −αmax to a maximum angle value αmax, and the value of αmax is determined to be αmax ≈ 1/(2f#), wherein f# is a selected ratio between the depth of a pixel and the size of a receiving aperture.
Priority Claims (1)
Number Date Country Kind
1910043.7 Jul 2019 GB national
PCT Information
Filing Document Filing Date Country Kind
PCT/GB2020/051671 7/10/2020 WO 00