RADAR-IMAGING OF A SCENE IN THE FAR-FIELD OF A ONE- OR TWO-DIMENSIONAL RADAR ARRAY

Abstract
A method of radar-imaging a scene in the far-field of a one-dimensional radar array comprises providing an array of backscatter data D(fm, x′n) of the scene, these backscatter data being associated to a plurality of positions x′n, n=0 . . . N−1, N>1, that are regularly spaced along an axis of the radar array. The backscatter data for each radar array position x′n are sampled in the frequency domain, at different frequencies fm, m=0 . . . M−1, M>1, defined by fm=fc−B/2+m·Δf, where fc represents the center frequency, B the bandwidth and Δf the frequency step of the sampling. A radar reflectivity image I(αm′, βn′) is computed in a pseudo-polar coordinate system based upon formula (2) with formula (3), where j represents the imaginary unit, f̂m=−B/2+m·Δf is the baseband frequency, FFT2D denotes the 2D Fast Fourier Transform operator, αm′, m′=0 . . . M−1, and βn′, n′=0 . . . N−1 represent a regular grid in the pseudo-polar coordinate system, and Pmax is chosen ≥0 depending on a predefined accuracy to be achieved. A corresponding method of radar-imaging a scene in the far-field of a two-dimensional radar array is also proposed.
Description
TECHNICAL FIELD

The present invention generally relates to radar-imaging a scene located in the far-field of a radar aperture, and more particularly to a method for computing a radar image from raw radar data.


BACKGROUND

The present invention is applicable for image reconstruction from radar data acquired by a synthetic aperture radar (SAR) or by a physical radar array. SAR is a well-known and well-developed technique to produce high-resolution images. A large number of imaging algorithms are operationally used in different civilian and military application domains. A common requirement for such algorithms is that of producing imagery with the highest possible resolution. It is well known that the limits of the resolution in range and cross-range are dictated, respectively, by the frequency bandwidth and the physical dimension of the radar aperture. In practice, a criterion to assess the optimality of a SAR system is to compare the achieved cross-range resolution in the imagery with the physical dimension of the radar's antenna. As an example, in strip-map SAR, the cross-range resolution cannot be finer than half of the physical antenna aperture size.

A major part of the radar imaging algorithms presently in use have been conceived for SAR systems with an optimal aperture length. To date, the interest in radar imaging systems with sub-optimal aperture lengths has been very limited. The focus of this invention is on the problem of implementing a fast and accurate imaging algorithm with a highly sub-optimal aperture length, e.g. in a radar system having an aperture length of a few meters and illuminating an image scene spanning a few square kilometers located within the far-field of the radar aperture. This scenario is quite different from those of space-borne and air-borne SAR. In particular, the imaging algorithms used with optimal aperture lengths (typically a few tens of kilometers long in the case of space-borne SAR), such as the range migration and chirp-scaling algorithms, do not satisfy certain requirements encountered with a sub-optimal radar aperture. The polar format or range-Doppler algorithm was also discarded because of the geometric distortion it causes in the imagery. This algorithm can only be used with image extents much smaller than the range to the center of the scene and is therefore not appropriate for all scenarios of interest.

Averbuch et al. disclose a method for manipulating the Fourier transform in polar coordinates, which uses as a central tool a so-called pseudo-polar FFT, where the evaluation frequencies lie in an oversampled set of non-angularly equispaced points. The pseudo-polar transform plays the role of a nearly-polar system from which conversion to polar coordinates uses processes relying only on 1D FFTs and interpolation operations.


An example application field for a sub-optimal imaging radar is that of ground-based SAR (GB-SAR), which is presently used to monitor the displacement of landslides with sub-millimeter accuracy. In the last ten years, the Joint Research Centre of the European Commission has been a pioneer of this technology and has carried out a vast number of field campaigns that have demonstrated its operational use. This activity has resulted in a massive archive of GB-SAR data with more than 300,000 sets of raw data collected at various sites. Typically, a site monitored on a permanent basis with one of our GB-SAR instruments produces a total of 35,000-40,000 sets of raw data in an entire year. A motivation for this work comes from the need for a computationally efficient and accurate GB-SAR processing chain to cope with this huge volume of data.


BRIEF SUMMARY

The present invention provides a method of radar-imaging a scene, which can be implemented in a computationally efficient way.


The method according to the invention comes in two variants, directed to the "two-dimensional" ("2D") case and the "three-dimensional" ("3D") case, respectively. Both variants comprise the computation of a radar reflectivity image based upon an image series expansion. In the first variant, the raw radar data stem from a one-dimensional radar array, whereas, in the second variant, they stem from a two-dimensional radar array. In both variants of the method, the radar array may be a synthetic aperture radar array or a physical radar array. In both variants, the scene of interest to be imaged lies in the far-field of the radar array. This may be expressed mathematically as:










ρ>2Lx²/λc and ρ>2Ly²/λc,  (1)







where ρ denotes the distance from the centre of the radar array to an arbitrary point within the scene of interest, Lx the length of the radar array along its first axis and Ly the length of the radar array along its second axis (in case of a 2D radar array), and λc the centre wavelength of the radar.


Turning to the first variant, a method of radar-imaging a scene in the far-field of a one-dimensional radar array, comprises providing an array of backscatter data D(fm, x′n) of the scene, these backscatter data being associated to a plurality of positions x′n, n=0 . . . N−1, N>1, that are regularly spaced along an axis of the radar array. The backscatter data for each radar array position x′n are sampled in frequency domain, at different frequencies fm, m=0 . . . M−1, M>1, defined by fm=fc−B/2+m·Δf, where fc represents the center frequency, B the bandwidth and Δf the frequency step of the sampling. According to the present variant of the invention, a radar reflectivity image I(αm′, βn′) is computed in a pseudo-polar coordinate system based upon the formula:











$$I(\alpha_{m'},\beta_{n'}) = \sum_{p=0}^{P_{\max}} I_p(\alpha_{m'},\beta_{n'}), \qquad (2)$$






with









$$I_p(\alpha_{m'},\beta_{n'}) = \frac{1}{p!}\left[\frac{-j\,2\pi\,\beta_{n'}}{f_c}\right]^{p}\,\mathrm{FFT}_{2D}\!\left[\,D(f_m,x'_n)\,(\hat{f}_m\,x'_n)^{p}\,\right], \qquad (3)$$







where

    • j represents the imaginary unit,
    • f̂m=−B/2+m·Δf is the baseband frequency,
    • FFT2D denotes the 2D Fast Fourier Transform operator,
    • αm′, m′=0 . . . M−1, and βn′, n′=0 . . . N−1 represent a regular grid in the pseudo-polar coordinate system,
    • and Pmax is chosen ≥0 depending on a predefined accuracy to be achieved;


      or any mathematically equivalent formula.


Those skilled in the art will appreciate that the present invention uses a series expansion for approximating the reflectivity image in a pseudo-polar coordinate system, i.e. the different terms of the series are computed up to the predefined order Pmax and these terms are then summed up (if Pmax≥1). In the following, the method is going to be referred to in brief as the far-field pseudo-polar format algorithm, abbreviated FPFA. In practice, a zeroth order series may be sufficient for obtaining a good approximation of the reflectivity on the pseudo-polar grid. In this case, Pmax=0 and thus





I(αm′n′)≈FFT2D[D(fm,x′n)]  (4)


In the particular case of using just the zeroth order series expansion, the computational cost of the FPFA is O(N M log2M), which is the lowest possible value one could expect for such an imaging algorithm. As an example, with N=M=2048, the FPFA has a computational cost three orders of magnitude lower than that of the time-domain back-propagation algorithm (TDBA), and six orders of magnitude lower than that of the frequency-domain back-propagation algorithm (FDBA). The benefit of using the FPFA is thus evident. It shall be noted that the addition of more terms in the series expansion, all of them being evaluated with 2D FFT transforms, is straightforward and does not increase the computational cost significantly.
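As an illustration only, the zeroth order image of eq. (4) can be sketched numerically as follows (numpy assumed; the choice of forward/inverse transform per axis and the fftshift follow the exp[+j2π(fm·α−x′n·β)] kernel discussed in the detailed description, while constant phase factors and half-sample grid offsets are ignored, which does not affect the magnitude image; all names are illustrative, not part of the invention):

    import numpy as np

    def fpfa_zeroth_order(D):
        """Zeroth order FPFA image of eq. (4): one 2D FFT of the raw data matrix.

        D : (M, N) complex array of backscatter data D(f_m, x'_n);
            axis 0 is the frequency index m, axis 1 the aperture position index n.
        """
        img = np.fft.ifft(D, axis=0)                               # frequency -> alpha
        img = np.fft.fftshift(np.fft.fft(img, axis=1), axes=1)     # position  -> beta (centred on 0)
        return img                                                 # I(alpha_m', beta_n')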


According to an advantageous implementation of the FPFA, the individual terms of the image series are evaluated concurrently and separately (e.g. using a parallel multi-processor system).


The radar array positions are preferably defined by x′n=−Lx/2+n·Δx′, where Lx represents a length of the radar array and Δx′ the spacing between the radar array positions.


The pseudo-polar coordinate system is implicitly determined by the Fourier transform, the array positions x′n and the frequencies fm. With an appropriate choice of the origin of the pseudo-polar coordinate system, the points αm′, m′=0 . . . M−1, and βn′, n′=0 . . . N−1, of the regular grid in the pseudo-polar coordinate system may, for instance, be expressed by αm′=m′/B, m′=0 . . . M−1, and βn′=n′/Lx−(N−1)/(2Lx), n′=0 . . . N−1. More details on the concept of pseudo-polar grids, as well as a more recent technique that implements a 2D polar FFT using a pseudo-polar grid, can be found in the literature.
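For illustration only, the regular pseudo-polar grid just described can be tabulated as in the following short sketch (numpy assumed; names are illustrative):

    import numpy as np

    def pseudo_polar_grid(M, N, B, Lx):
        """Regular grid alpha_m' = m'/B and beta_n' = n'/Lx - (N-1)/(2*Lx)."""
        alpha = np.arange(M) / B                          # range-like coordinate, step 1/B
        beta = np.arange(N) / Lx - (N - 1) / (2.0 * Lx)   # angle-like coordinate, centred on 0
        return alpha, beta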


The order Pmax of the series expansion is preferably chosen depending on the ratio of radar array length Lx to range resolution. Range resolution δr is given by δr=c/(2B), where c denotes the speed of light. As a rule of thumb, the larger the ratio Lx/δr, the more terms one preferably uses in the series to guarantee the accuracy of the reflectivity image. An interesting observation is that the number of terms does not depend on the center frequency of the radar.


Preferably, subsequent computation steps using the reflectivity image are carried out in the pseudo-polar coordinate system, e.g. the computation of a coherence image and/or a 2D phase interferogram. Most preferably, a transformation of the reflectivity image, the coherence image and/or the 2D phase interferogram into a coordinate system that is more convenient for visualizing the information, e.g. a polar or Cartesian coordinate system, is carried out only after the substantial computations (of the reflectivity image, the coherence image and/or the 2D phase interferogram) have been achieved in the pseudo-polar coordinate system. In this way, errors introduced into the data through the mapping of the computed image onto a polar or Cartesian grid by interpolation or any other suitable procedure only affect the visualization but not the substantial calculations. A further advantage of the present method is that, in contrast to previous radar imaging methods, no computationally costly interpolation is required before any Fourier transform. This represents an important advantage, for instance, over the so-called range-migration algorithm, which uses a matched filter and Stolt interpolation to represent the radar backscatter data on a regular grid in the spatial frequency domain, before these are Fourier transformed directly into a reflectivity image on a regular grid in a Cartesian coordinate system.
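As a hedged illustration of such subsequent computation steps, the sketch below forms a 2D phase interferogram and a coherence image from two co-registered complex reflectivity images kept on the pseudo-polar grid; it relies on numpy and scipy.ndimage and uses one common boxcar-windowed coherence estimator, which is an assumption of the sketch rather than a definition taken from the text:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def interferogram_and_coherence(img1, img2, win=5):
        """img1, img2: (M, N) complex reflectivity images on the same pseudo-polar grid."""
        cross = img1 * np.conj(img2)
        interferogram = np.angle(cross)                           # 2D phase interferogram
        # Boxcar-averaged coherence magnitude (one common estimator, assumed here)
        num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
        p1 = uniform_filter(np.abs(img1) ** 2, win)
        p2 = uniform_filter(np.abs(img2) ** 2, win)
        coherence = np.abs(num) / np.sqrt(p1 * p2 + 1e-30)
        return interferogram, coherence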


Turning to the second variant of the invention, a method of radar-imaging a scene in the far-field of a two-dimensional radar array, comprises providing an array of backscatter data D(fm, x′n, y′k) of the scene, these backscatter data being associated to a plurality of positions (x′n, y′k) n=0 . . . N−1, N>1, k=0 . . . K−1, K>1, regularly spaced along a first and a second axis of the radar array. The backscatter data for each radar array position (x′n, y′k) are sampled in frequency domain, at different frequencies fm, defined by fm=fc−B/2+m·Δf, where fc represents again the center frequency, B the bandwidth and Δf the frequency step of the sampling. A radar reflectivity image I(αm′, βn′, γk′) is computed in a pseudo-spherical coordinate system according to the formula:
















$$I(\alpha_{m'},\beta_{n'},\gamma_{k'}) = \sum_{p=0}^{P_{\max}} I_p(\alpha_{m'},\beta_{n'},\gamma_{k'}), \qquad (5)$$











with









$$I_p(\alpha_{m'},\beta_{n'},\gamma_{k'}) = \left[\frac{-j\,2\pi}{f_c}\right]^{p} \sum_{q=0}^{p} \frac{\beta_{n'}^{\,q}\,\gamma_{k'}^{\,p-q}}{q!\,(p-q)!}\,\mathrm{FFT}_{3D}\!\left[\,D(f_m,x'_n,y'_k)\,\hat{f}_m^{\,p}\,x_n'^{\,q}\,y_k'^{\,p-q}\,\right], \qquad (6)$$







where

    • j represents the imaginary unit,
    • f̂m=−B/2+m·Δf,
    • FFT3D denotes a 3D Fast Fourier Transform operator,
    • αm′, m′=0 . . . M−1, βn′, n′=0 . . . N−1 and γk′, k′=0 . . . K−1, represent a regular grid in the pseudo-spherical coordinate system,
    • and Pmax is chosen ≥0 depending on a predefined accuracy to be achieved;


      or any mathematically equivalent formula.


As in the previous variant, a series expansion is used for approximating the reflectivity image. However, this time the reflectivity image is computed in a pseudo-spherical coordinate system. The different terms of the series are computed up to the predefined order Pmax and these terms are then summed up (if Pmax≧1). The method is going to be referred to as the far-field pseudo-spherical format algorithm, also abbreviated FPFA since it will be clear from the context whether a pseudo-polar or a pseudo-spherical coordinate system is considered. In practice, a zeroth order series may be sufficient for obtaining a good approximation of the reflectivity on the pseudo-spherical grid. In this case, Pmax=0 and thus





I(αm′,βn′,γk′)≈FFT3D[D(fm,x′n,y′k)].  (7)


The individual terms of the image series may be evaluated concurrently and separately (e.g. using a parallel multi-processor system).


The radar array positions are preferably defined by x′n=−Lx/2+n·Δx′ along the first axis, where Lx represents a length of the radar array along the first axis and Δx′ the spacing between the radar array positions along the first axis, and by y′k=−Ly/2+k·Δy′ along the second axis, where Ly represents a length of the radar array along the second axis and Δy′ the spacing between the radar array positions along the second axis. The pseudo-spherical coordinate system is implicitly determined by the 3D Fourier transform, the radar array positions (x′n, y′k) and the frequencies fm. With an appropriate choice of the origin of the pseudo-spherical coordinate system, the points αm′, m′=0 . . . M−1, βn′, n′=0 . . . N−1, and γk′, k′=0 . . . K−1, of the regular grid in the pseudo-spherical coordinate system may, for instance, be expressed by αm′=m′/B, m′=0 . . . M−1, βn′=n′/Lx−(N−1)/(2Lx), n′=0 . . . N−1, and γk′=k′/Ly−(K−1)/(2Ly), k′=0 . . . K−1.
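A minimal sketch of the zeroth order pseudo-spherical image of eq. (7), analogous to the 2D sketch given earlier (numpy assumed; the transform directions and shifts are implementation assumptions, and only the magnitude image is independent of these conventions):

    import numpy as np

    def fpfa3d_zeroth_order(D):
        """D : (M, N, K) complex cube D(f_m, x'_n, y'_k); axes = (frequency, x, y)."""
        img = np.fft.ifft(D, axis=0)                               # frequency -> alpha
        img = np.fft.fftshift(np.fft.fft(img, axis=1), axes=1)     # x'        -> beta
        img = np.fft.fftshift(np.fft.fft(img, axis=2), axes=2)     # y'        -> gamma
        return img                                                 # I(alpha, beta, gamma)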


The order Pmax of the series expansion is preferably chosen depending on the ratios of the radar array lengths Lx and Ly to range resolution: the larger the ratios Lxr and Lyr, the more terms one preferably uses in the series to guarantee the accuracy of the reflectivity image. As for the previously discussed variant, the number of terms does not depend on the center frequency of the radar.


Preferably, subsequent computation steps using the reflectivity image are carried out in the pseudo-spherical coordinate system, e.g. the computation of a coherence image and/or a 3D phase interferogram. Most preferably, a transformation of the reflectivity image, the coherence image and/or the 3D phase interferogram into a coordinate system that is more convenient for visualizing the information, e.g. a spherical or Cartesian coordinate system, is carried out only after the substantial computations (of the reflectivity image, the coherence image and/or the 3D phase interferogram) have been achieved in the pseudo-spherical coordinate system. In this way, errors introduced into the data through the mapping of the computed image onto a spherical or Cartesian grid by interpolation or any other suitable procedure only affect the visualization but not the substantial calculations. No computationally costly interpolation is required before any Fourier transform.


Both variants of the invention may be applied for computing a radar reflectivity image in or nearly in real time.


An aspect of the invention concerns a computer program product for controlling a data processing apparatus, e.g. a computer, a microprocessor, a parallel multiple-processor system, and the like, comprising instructions causing the data processing apparatus to carry out the FPFA when executed on the data processing apparatus.





BRIEF DESCRIPTION OF THE DRAWINGS

Further details and advantages of the present invention will be apparent from the following detailed description of not limiting embodiments with reference to the attached drawings, wherein:



FIG. 1 is a schematic side view of an SAR;



FIG. 2 is a schematic side view of a physical radar array;



FIG. 3 is a front view of a 2D SAR;



FIG. 4 is a schematic front view of a 2D physical radar array;



FIG. 5 is a schematic front view of a 2D MIMO radar array;



FIG. 6 is a top schematic view of a situation when a scene is radar-imaged;



FIG. 7 is a schematic perspective view of a situation when a scene is imaged with a 2D radar array;



FIG. 8 shows a comparison of images, in polar coordinates, obtained with the TDBA and the zeroth order FPFA, respectively;



FIG. 9 is a diagram showing the preferred order of the image series expansion as a function of the aperture length to range resolution ratio.





DETAILED DESCRIPTION

The far-field pseudo-polar format algorithm can be used with a short imaging radar array having an array length Lx. This radar array 10, 10′ can be either synthetic or physical, as shown in FIGS. 1 and 2. The aperture synthesis can be achieved through the controlled linear motion (indicated by the dashed arrow 13) of a single radar element 12 comprising a transmit antenna 14 and a receive antenna 16 (or a single antenna for both transmission and reception) connected to a radar transceiver 18. Alternatively, a physical radar aperture 10′ can be provided in the form of an array of transmit/receive antennas 14, 16 and a multiplexer unit 17 switching electronically through these antennas 14, 16.


While FIGS. 1 and 2 show one-dimensional radar arrays, FIGS. 3-5 illustrate the case of a two-dimensional radar array. FIG. 3 shows a 2D-synthetic radar array 10 with a single radar element comprising a transmit antenna 14 and a receive antenna 16 (or a single antenna for both transmission and reception). During operation of the radar array, the radar element moves along a predefined path 19 and radar backscatter measurements are carried out at a plurality of positions 20, which are regularly distributed on the aperture area so as to define a regular grid. These radar array positions 20 are regularly spaced along a first axis (“x-axis”) and a second axis (“y-axis”), which are perpendicular to one another. The spacings in direction of the first axis and the second axis are Δx′ and Δy′, respectively. The measurement points, i.e. the radar array positions 20, correspond to the phase centers of the moving transmit/receive antenna pair.


A first alternative to the synthetic 2D radar array of FIG. 3 is the physical radar array 10′ of FIG. 4. A plurality of radar elements each having a transmitting 14 and a receiving 16 antenna or a single antenna for both transmission and reception, are arranged along the first and second array axes, with spacings Δx′ and Δy′, respectively. In a measurement with the radar array of FIG. 4, one records the radar echo sequentially with every radar array element using a multiplexer or a switching device. Due to the number of radar elements, a physical radar array is normally more expensive than a synthetic one. However, a physical radar array has the advantage of much shorter acquisition times.


A second alternative to the synthetic 2D radar array of FIG. 3 is the 2D MIMO (multiple input multiple output) radar array 10″ of FIG. 5. A set of STX transmitting antennas 14 and a set of SRX receiving antennas 16 are arranged so that the phase centers of all possible combinations of transmitting and receiving antennas are regularly distributed in the aperture area. With a MIMO radar array, one fires sequentially with all the transmitting antennas 14 and, for each transmitting antenna in this sequence, one records the radar echo with some or all receiving antennas 16. One thus has a total of maximally STX·SRX measurements associated to the different phase centers (which are thus the measurement points 20, i.e. the radar array positions) on the aperture area. This array configuration comes at reasonable cost and complexity and offers short acquisition times.
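The following short sketch (numpy assumed, names illustrative) computes such phase centers under the usual midpoint approximation, i.e. the phase center of a transmit/receive pair is taken halfway between the two antennas; this approximation is an assumption of the sketch, not a statement from the text:

    import numpy as np

    def mimo_phase_centers(tx_xy, rx_xy):
        """tx_xy: (S_TX, 2), rx_xy: (S_RX, 2) antenna coordinates in the aperture plane.
        Returns the (S_TX*S_RX, 2) midpoints, i.e. one measurement point per TX/RX pair."""
        return 0.5 * (tx_xy[:, None, :] + rx_xy[None, :, :]).reshape(-1, 2)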


In the following, the FPFA is first going to be discussed for the two-dimensional case with reference to FIG. 6 (which is not to scale).


The scene of interest 22 to be imaged is assumed to be located within the far-field of the radar array (which is represented here, for illustration, as an SAR 10), i.e.:









ρ, ρ′>2Lx²/λc  (8)







where ρ′ denotes the range distance from a radar array position x=x′ to an arbitrary point P within the scene 22. Similarly, ρ denotes the range distance from the center of the radar aperture to the same point P. The proposed imaging technique requires the image scene to be located in the far-field. However, provided this condition is satisfied, the extent of the image scene (i.e. its widths Wx and Wz in x- and z-directions, respectively) is only limited by the field of view 24 of the individual antenna elements. The resulting cross-range resolution is expected to be highly sub-optimal. This is a point that distinguishes the proposed FPFA from the polar format or range-Doppler imaging algorithm, which can only be used with image extents much smaller than the range to the center of the scene.


To introduce the formulation of the FPFA algorithm, the use of a stepped-frequency radar will be assumed. Note that this choice is made without loss of generality and the presented formulation could also be used with a frequency-modulated continuous wave (CW) radar. Thus, we consider a CW signal radiated by a radar array element 12 located at array position x=x′, which has a beam-width sufficiently large to irradiate the entire image area of interest. The backscatter signal is received at substantially the same position. For the sake of the explanation of the algorithm, the scene is assumed to consist of a single point scatterer located at a point P, with polar coordinates (ρ, θ), as shown in FIG. 6. The point's coordinates referred to the position of the radar element are (ρ′, θ′). The radar array acquires the backscatter signal D(f, x′) as a function of two parameters: the frequency f of the CW signal, and the position x′ of the radar element on the array. The backscatter data are assumed to be sampled uniformly both in the frequency domain and in the space domain along the axis of the array. Thus, a measurement with this radar will give as a result the following two-dimensional matrix of complex values D(fm, x′n) with:






fm=fc−B/2+mΔf  (9)

x′n=−Lx/2+nΔx′  (10)


where m=0, 1, . . . , M−1, n=0, 1, . . . , N−1, fc is the center frequency, B is the frequency bandwidth swept in the measurement, Δf is the frequency step, M is the number of frequencies measured, Δx′ is the spacing between the radar array positions (i.e. the spacing between the physical radar array elements in case of a physical radar array, the phase centers of the different transmitting/receiving antenna combinations in case of a MIMO radar array, or the movement step used in the linear scan in the case of a synthetic array) and N is the number of measurement points along the radar aperture. As in any imaging algorithm based on a 2D Fourier transform, the steps in the frequency and radar position will have to be fine enough in order to avoid ambiguities both in range and cross-range directions.
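For a scene made of ideal point scatterers, such a data matrix can be simulated directly from the two-way phase 4π fm ρ′n/c of eqs. (11)-(12) further below; the sketch that follows does this with numpy under the exp(+j 2π f t) time dependence mentioned in the next paragraph (so a target at range ρ′ contributes exp(−j 4π f ρ′/c)). The grids, sampling steps and scatterer list are illustrative choices, not values taken from the text.

    import numpy as np

    def simulate_backscatter(f, xp, scatterers):
        """Build D(f_m, x'_n) for ideal point scatterers.

        f : (M,) frequencies f_m, xp : (N,) aperture positions x'_n,
        scatterers : list of (rho [m], theta [rad], amplitude) tuples.
        """
        c = 299792458.0
        D = np.zeros((f.size, xp.size), dtype=complex)
        for rho, theta, amp in scatterers:
            rho_n = np.sqrt((rho * np.sin(theta) - xp) ** 2 + (rho * np.cos(theta)) ** 2)  # eq. (12)
            D += amp * np.exp(-1j * 4.0 * np.pi * np.outer(f, rho_n) / c)
        return D

    # Illustrative grids in the spirit of eqs. (9)-(10)
    M, N, fc, B, Lx = 256, 64, 17.05e9, 100e6, 2.0
    f = fc - B / 2 + np.arange(M) * (B / M)       # f_m with an assumed step B/M
    xp = -Lx / 2 + np.arange(N) * (Lx / N)        # x'_n with an assumed step Lx/N
    D = simulate_backscatter(f, xp, [(1000.0, 0.2, 1.0)])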


The synthesis of a radar image can be achieved by coherently summing the signal contributions relative to different radar positions and CW frequencies. This technique is known as frequency domain wavefront back-propagation. Thus, with the imaging geometry of FIG. 6, the radar reflectivity at the point P can be estimated as follows in the case of an exp(+j 2π f t) time dependence:










$$I(\rho,\theta) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} D(f_m,x'_n)\,\exp\!\left[+j\,\frac{4\pi f_m}{c}\,\rho'_n\right] \qquad (11)$$







where c is the speed of light and





ρ′n=√((ρ sin θ−x′n)²+ρ² cos²θ)  (12)


The synthesis of an entire reflectivity image using eq. (11) entails a high computational cost, which is O(MNM′N′), where M′ and N′ denote the number of pixels in the x and z directions, respectively. The algorithm of eq. (11) is known as the frequency-domain back-propagation algorithm (FDBA). The back-propagation algorithm can also be formulated in the time domain. The associated computational cost of this algorithm is O(N N′ M log2M), which is significantly lower than that of its frequency domain implementation. In practice, the time domain implementation is the one most commonly used with highly sub-optimal aperture lengths. The formulation of the time-domain back-propagation algorithm (TDBA) can be written as:










$$I(\rho,\theta) = \sum_{n=0}^{N-1} D_t\!\left(t=\frac{2\rho'_n}{c},\,x'_n\right) \qquad (13)$$







where Dt(t, x′) denotes the frequency-to-time Fourier transform of the frequency domain backscatter data D(f, x′). The TDBA requires a 1D interpolation prior to the azimuth compression. Typically, an FFT with zero-padding (to increase substantially the sampling frequency in the time domain) and a Lagrange interpolation are used. Later we will consider the resulting imagery obtained with the TDBA of eq. (13) as the reference to assess the quality of the FPFA imagery.
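For reference, the frequency-domain back-propagation of eq. (11), with the exact ranges of eq. (12), can also be written down directly; the brute-force sketch below (numpy assumed, names illustrative) has the O(M·N·M′·N′) cost mentioned above and is only meant as a check against which faster algorithms such as the FPFA can be compared:

    import numpy as np

    def fdba_image(D, f, xp, rho_grid, theta_grid):
        """Direct evaluation of eq. (11) on a (rho, theta) output grid.

        D : (M, N) backscatter matrix, f : (M,) frequencies, xp : (N,) positions.
        """
        c = 299792458.0
        img = np.zeros((rho_grid.size, theta_grid.size), dtype=complex)
        for i, rho in enumerate(rho_grid):
            for k, theta in enumerate(theta_grid):
                rho_n = np.sqrt((rho * np.sin(theta) - xp) ** 2 + (rho * np.cos(theta)) ** 2)  # eq. (12)
                img[i, k] = np.sum(D * np.exp(+1j * 4.0 * np.pi * np.outer(f, rho_n) / c))
        return img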


Since the point P is in the far-field of the radar aperture, we can approximate the range distance ρ′ using the binomial expansion as follows:










$$\rho' \approx \rho - x'\sin\theta + \frac{x'^2\cos^2\theta}{2\rho} + \frac{x'^3\sin\theta\cos^2\theta}{2\rho^2} + \cdots \qquad (14)$$







whose higher order terms become less significant provided ρ>>Lx. At this point, we will make use of the first-order far-field approximation of a dipole radiator, which is well known in antenna measurements:





ρ′≈ρ−x′ sin θ  (15)


Thus, the radar reflectivity at point P in eq. (11) can now be re-written as:










$$I(\rho,\theta) \approx \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} D(f_m,x'_n)\,\exp\!\left[+j\,\frac{4\pi f_m}{c}\,(\rho - x'_n\sin\theta)\right] \qquad (16)$$







which, considering that






fm=fc+f̂m  (17)





where






f̂m=−B/2+mΔf  (18)


is the baseband frequency term, can be re-written as:










$$I(\rho,\theta) \approx \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} D(f_m,x'_n)\,\exp\!\left[+j\,2\pi\!\left(f_m\,\frac{2\rho}{c} - x'_n\,\frac{2\sin\theta}{\lambda_c}\right)\right] \exp\!\left[-j\,2\pi\,x'_n\,\hat{f}_m\,\frac{2\sin\theta}{c}\right] \qquad (19)$$







wherein the first exponential is the kernel of a 2D Fourier transform. Examining the behavior of the second exponential, one finds the bounds:












$$\Psi_{mn} = 2\pi\,x'_n\,\hat{f}_m\,\frac{2\sin\theta}{c} \;\leq\; \frac{\pi\,L\,\sin\theta}{2\,\delta_r} \qquad (20)$$







where δr=c/(2B) denotes the range resolution. The maximum values of |x′n| and |f̂m| used in eq. (20) are L/2 and B/2, respectively. Because of the presence of the sin θ factor and the fact that the mean values <x′n> and <f̂m> are both null, the effective bounds of Ψmn are in practice much smaller than those given in eq. (20). A Taylor expansion of the last exponential in eq. (19) yields:










$$\exp\left[-j\,\Psi_{mn}\right] = \sum_{p=0}^{\infty}\frac{1}{p!}\left[-j\,\frac{4\pi\,x'_n\,\hat{f}_m\,\sin\theta}{c}\right]^{p} \qquad (21)$$







Since Ψmn is a range-independent phase modulation term, we can predict that any truncation error in the above expansion will result in a range-independent image blurring effect increasing with increasing θ. No blurring is observed at θ=0. To reformulate eq. (19) with 2D FFTs, the pseudo-polar coordinate system determined by the kernel of the 2D Fourier transform in eq. (19) is used. The pseudo-polar coordinate system is defined with the two variables:










α=2ρ/c

β=2 sin θ/λc  (22)







which clearly resemble a polar coordinate system with ρ and θ as the radial and angular variable, respectively. The α coordinate is directly proportional to the range coordinate of a polar grid. The β coordinate is a sinusoidal function of the polar angle θ with amplitude 2/λc. For a narrow field of view of the radar (i.e. if θ<<1), β≈2θ/λc and thus also becomes proportional to the polar coordinate θ. The reverse transformation from the pseudo-polar to the polar coordinate system is straightforward and can be formulated as follows:










ρ=(c/2)α

θ=arcsin[(λc/2)β]  (23)







and the corresponding ground-range and cross-range coordinates in the Cartesian grid are, respectively:





x=ρ sin θ





y=ρ cos θ  (24)


An example of 2D grid uniformly sampled in α and β (λc=5 cm) with the corresponding pseudo-polar and Cartesian grids is shown in FIG. 7. As can be seen, the uniform grid in the pseudo-polar coordinate system highly resembles a polar grid. However, an important advantage of the suggested pseudo-polar format is that the resulting images will show an invariant resolution within the entire image scene. These resolutions are δα=1/B and δβ=1/Lx, respectively. Invariant resolution is not given in a polar formatted image, where the azimuth resolution is a decreasing function of θ, except when the image is resampled by appropriate interpolation at the price of introducing interpolation errors. Consequently, it is a good practice to use the pseudo-polar format at all stages of the processing chain but the last one where the image has to be geo-located and/or displayed in a coordinate system more convenient for visualization. Products such as the radar reflectivity image, coherence images, and 2D phase interferograms can also be computed in the pseudo-polar format.


Regarding the transformation from the pseudo-polar to either polar or Cartesian grids, it can be implemented using any suitable technique, e.g. a 2D Lagrange interpolation. Such transformations are well known and need not be explained here; details concerning their implementation can be found in the relevant literature.
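A minimal sketch of this last step, using eqs. (23) and (24) to attach polar and Cartesian coordinates to every pseudo-polar pixel, is given below (numpy assumed; the subsequent resampling onto a regular Cartesian grid, e.g. with scipy.interpolate.griddata or a 2D Lagrange interpolation, is left out and is an implementation choice):

    import numpy as np

    def pseudo_polar_to_cartesian(alpha, beta, lam_c):
        """Map the (alpha, beta) grid to (rho, theta) and (x, y) per eqs. (23)-(24)."""
        c = 299792458.0
        A, Bg = np.meshgrid(alpha, beta, indexing="ij")
        rho = 0.5 * c * A                        # eq. (23)
        theta = np.arcsin(0.5 * lam_c * Bg)      # eq. (23); |lam_c*beta/2| <= 1 on a valid grid
        x = rho * np.sin(theta)                  # eq. (24)
        y = rho * np.cos(theta)
        return rho, theta, x, y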


Using the above results, the radar reflectivity at the point P of eq. (19), expressed in the pseudo-polar coordinate system, becomes:










$$I(\alpha,\beta) = \sum_{p=0}^{\infty}\frac{1}{p!}\left[\frac{-j\,2\pi\beta}{f_c}\right]^{p} \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} D(f_m,x'_n)\,(\hat{f}_m\,x'_n)^{p}\,\exp\!\left[+j\,2\pi\,(f_m\,\alpha - x'_n\,\beta)\right] \qquad (25)$$







To simplify the notation, the following 2D Fourier transform pair is introduced:





Hp(αm′,βn′) ⟷ D(fm,x′n)(f̂m x′n)^p  (26)


where the double-headed arrow ⟷ denotes the 2D FFT operator, with m, m′=0, . . . , M−1, and n, n′=0, . . . , N−1. Finally, the reflectivity image can be expressed as a series expansion of the function Hp(αm′, βn′) as follows:











$$I(\alpha_{m'},\beta_{n'}) = \sum_{p=0}^{\infty} I_p(\alpha_{m'},\beta_{n'}) \qquad (27)$$

with

$$I_p(\alpha_{m'},\beta_{n'}) = \frac{1}{p!}\left[\frac{-j\,2\pi\,\beta_{n'}}{f_c}\right]^{p} H_p(\alpha_{m'},\beta_{n'}). \qquad (28)$$







In practice, only a limited number of terms (Pmax+1) can be summed, which yields equations (2) and (3). When the radar aperture has a dimension comparable to the range resolution (i.e. Lx≈δr), which is quite a common scenario, a zeroth order series expansion in eq. (27) suffices (i.e. Pmax=0 in eq. (2)), and an excellent estimate of the image reflectivity in the pseudo-polar grid can be obtained through a single 2D FFT, yielding eq. (4):






I(αm′,βn′)≈I0(αm′,βn′)=H0(αm′,βn′)=FFT2D[D(fm,x′n)]


The addition of more terms in the series expansion, all of them being evaluated with 2D FFTs, is straightforward and does not significantly increase the computational cost. Furthermore, a separate and concurrent evaluation of every single term of the image series is perfectly possible (e.g. using parallel multi-processor systems). The rule is that the larger the ratio Lx/δr is, the more terms in the series should be used to guarantee the convergence of the FPFA series expansion. The diagram of FIG. 9 indicates the preferred cutoff order Pmax of the series expansion when the image is to be evaluated numerically, as a function of the aperture length to range resolution ratio Lx/δr. Pmax can be determined e.g. using FIG. 9, a look-up table (containing the values of FIG. 9) or by evaluating a fit function (e.g. f(x)=0.0318x²+2.554x+5.3251, where x stands here for Lx/δr) and rounding up or down to the next integer.
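As a hedged illustration, the cutoff-order rule and the series summation of eqs. (2)-(3) (equivalently eqs. (27)-(28)) can be sketched as follows; numpy is assumed, the FFT sign/shift conventions and the rounding of the fit are implementation assumptions, constant output-index phase factors are omitted, and only the magnitude of the summed image is independent of these choices:

    import math
    import numpy as np

    def pmax_from_ratio(ratio):
        """Cutoff order from the example fit f(x)=0.0318x^2+2.554x+5.3251, rounded."""
        return int(round(0.0318 * ratio ** 2 + 2.554 * ratio + 5.3251))

    def fpfa_series_image(D, fhat, xp, beta, fc, Pmax):
        """Sum the series of eqs. (2)-(3) up to order Pmax.

        D    : (M, N) matrix D(f_m, x'_n); fhat : (M,) baseband frequencies f^_m;
        xp   : (N,) aperture positions x'_n; beta : (N,) pseudo-polar coordinate beta_n'
               of the (shifted) output columns.
        """
        FH, XP = np.meshgrid(fhat, xp, indexing="ij")
        img = np.zeros(D.shape, dtype=complex)
        for p in range(Pmax + 1):
            Hp = np.fft.ifft(D * (FH * XP) ** p, axis=0)             # H_p of eq. (26):
            Hp = np.fft.fftshift(np.fft.fft(Hp, axis=1), axes=1)     # "2D FFT" of the weighted data
            img += ((-1j * 2.0 * np.pi * beta[np.newaxis, :] / fc) ** p / math.factorial(p)) * Hp
        return img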


For the sake of completeness, we give an alternative form of eq. (27) that is obtained exploiting the derivative property of the Fourier transform:










$$I(\alpha_{m'},\beta_{n'}) = \sum_{p=0}^{\infty}\left[\frac{-j\,\beta_{n'}}{2\pi f_c}\right]^{p}\frac{\partial^{p}}{\partial\beta_{n'}^{\,p}}\sum_{k=0}^{p}\binom{p}{k}\,(-f_c)^{k}\,\frac{\partial^{\,p-k} I_0(\alpha_{m'},\beta_{n'})}{\partial\alpha_{m'}^{\,p-k}} \qquad (29)$$







which is not used in practice, but is useful to illustrate the fact that the series terms Ip(αm′,βn′) with p≥1 are partial derivatives of the first term (i.e. the zeroth order term) of the series expansion. Under the condition of having a range resolution comparable to the aperture length, these additional terms show in general reflectivities much lower than those of the zeroth order term (typically 30-40 dB below) and thus do not introduce any noticeable artifacts into the imagery. If, however, the ratio of aperture length to range resolution Lx/δr is large, additional terms in the series may be needed to obtain a more precise estimate of the reflectivity image.


The above derivation of the FPFA for a linear radar array can, mutatis mutandis, be applied to the case of a 2D radar array. A sketch of that imaging scenario is shown in FIG. 7. The radar array lies in the xy-plane and has the lengths Lx and Ly in x-direction and y-direction, respectively. The measurement points 20 (radar array positions) are indicated by the circles in the radar aperture 10. The spacings of the measurement points along the x and y axes are Δx′ and Δy′, respectively. At each measurement point 20, backscatter data are sampled in the frequency domain with a frequency spacing of Δf. A measurement with such a radar yields a 3D matrix of complex-valued backscatter data D(fm, x′n, y′k) with:






fm=fc−B/2+mΔf

x′n=−Lx/2+nΔx′

y′k=−Ly/2+kΔy′  (30)


where m=0, 1, . . . , M−1, n=0, 1, . . . , N−1, k=0, 1, . . . K−1, fc is the center frequency, B is the frequency bandwidth swept in the measurement, M is the number of frequencies measured, and N and K are the number of radar array positions in x and y directions, respectively. As in any imaging algorithm based on a 3D Fourier transform, the steps in the frequency domain and the two radar axes have to be fine enough to avoid ambiguities in range and cross-range directions.


Under the assumption of having a 3D image scene entirely in the far-field of the 2D radar array (i.e. ρ>>2Lx2c and ρ>>2Ly2c), the 3D reflectivity image in the pseudo-spherical coordinate system can be expressed as:











$$I(\alpha_{m'},\beta_{n'},\gamma_{k'}) = \sum_{p=0}^{P_{\max}}\left[\frac{-j\,2\pi}{f_c}\right]^{p}\sum_{q=0}^{p}\frac{\beta_{n'}^{\,q}\,\gamma_{k'}^{\,p-q}}{q!\,(p-q)!}\,\mathrm{FFT}_{3D}\!\left[\,D(f_m,x'_n,y'_k)\,\hat{f}_m^{\,p}\,x_n'^{\,q}\,y_k'^{\,p-q}\,\right], \qquad (31)$$







where f̂m=fm−fc is the baseband sampling frequency.


Instead of a pseudo-polar coordinate system one has here a pseudo-spherical coordinate system, with variables defined as










α=2ρ/c

β=2x/(λcρ)

γ=2y/(λcρ)  (32)







Every term of the image series expansion in eq. (31) is formatted on a 3D uniform grid along the α, β and γ coordinates. The transformation from this coordinate system to a spherical or Cartesian coordinate system can be achieved using the following expressions:










ρ=(c/2)α

x=(λcρ/2)β

y=(λcρ/2)γ

z=√(ρ²−x²−y²)  (33)







As concerns the preferred cutoff order Pmax of the series expansion in equation (31), it may be determined in a similar way as described with respect to the 2D case. Pmax depends on the aperture length to range resolution ratios Lx/δr and Ly/δr. The preferred value of Pmax can be determined by selecting the larger of Lx/δr and Ly/δr and using, e.g., FIG. 9 or a look-up table (containing the values of FIG. 9), or by evaluating a fit function (e.g. f(x)=0.0318x²+2.554x+5.3251, where x stands here for the larger of Lx/δr and Ly/δr) and rounding up or down to the next integer.
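For completeness, the coordinate mapping of eq. (33) from the pseudo-spherical grid to Cartesian coordinates can be sketched as follows (numpy assumed, names illustrative; points outside the valid cone ρ² ≥ x²+y² are clipped to z=0 here, which is an implementation choice):

    import numpy as np

    def pseudo_spherical_to_cartesian(alpha, beta, gamma, lam_c):
        """Map the (alpha, beta, gamma) grid to Cartesian (x, y, z) per eq. (33)."""
        c = 299792458.0
        A, Bg, G = np.meshgrid(alpha, beta, gamma, indexing="ij")
        rho = 0.5 * c * A
        x = 0.5 * lam_c * rho * Bg
        y = 0.5 * lam_c * rho * G
        z = np.sqrt(np.maximum(rho ** 2 - x ** 2 - y ** 2, 0.0))
        return x, y, z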


EXAMPLES
A. Numerical Simulations

The FPFA has been validated with a series of numerically simulated scenarios (in the 2D case). As a first example, a scene consisting of 5×5 point scatterers uniformly distributed in range and azimuth, with 500 m<ρ<1500 m and −60 deg<θ<60 deg, was generated. The range distance to the center of the scene was set to 1 km. The scatterer spacings in range and azimuth are 250 m and 30 deg, respectively. The radar's center frequency was assumed to be 17.05 GHz (i.e. in the Ku-Band) and the bandwidth 100 MHz. The radar aperture was assumed to be 2 m long (Lx=2 m). The aperture length to range resolution ratio is Lx/δr≈1.3. Prior to the formation of the images, a four-term Blackman-Harris window was applied both along the frequency and the linear coordinate of the radar aperture. The reflectivity images were computed using the FPFA and, for comparison, the TDBA. In the FPFA, the first four terms of the image series in eq. (27), i.e. Ip(α, β) with 0≤p≤3, were evaluated. The dynamic range of the images was 100 dB. From the results, it clearly followed that the first term of the image series, the zeroth order expansion, is already an excellent approximation of the true reflectivity. In fact, it was found that the second term in the series has reflectivity values, on a pixel by pixel basis, at least 25 dB below those of the first term. Similarly, the third term showed values at least 41 dB below those of the first term, which indicates that one can obtain excellent imagery using the zeroth order series expansion. Comparing these results with those from the TDBA, it was again confirmed that the truncation error is negligible, and the zeroth order FPFA image is extremely close to that of the TDBA.


In a second simulated scenario the radar was assumed to have a much larger bandwidth, here B=1 GHz. The center frequency of the radar was chosen to be 5.5 GHz (i.e. in the C-Band). The aperture length was set to 3 m. This radar has a relative bandwidth of about 20% and is therefore fully classifiable as ultra-wide band according to the US FCC. The image scene this time was assumed to consist of seven point scatterers uniformly distributed in azimuth within −45 deg<θ<45 deg. The angular distance among scatterers is 15 deg and all of them were given the same reflectivity. The range distance to the scatterers was fixed to 600 m for all of them. As in the previous simulation, a four-term Blackman-Harris window was used both in the frequency and radar aperture domains. In this example the aperture length to range resolution ratio is exactly Lx/δr=20. It was expected that a larger number of terms in the image series expansion of eq. (27) would have to be used to guarantee convergence of the series. The FPFA reflectivity image was computed for a number of terms ranging from 1 (Pmax=0, zeroth order expansion) to 65 (Pmax=64). It could be observed that the alternating series of eq. (21) converges rapidly to a very precise reflectivity image when the order of the expansion is above Pmax≈50. With smaller values of Pmax (Pmax≤48), some artifacts located at large off-boresight angles were noted, but these artifacts disappeared as the number of terms was further increased. It was noted that the zeroth order image series already gives a reasonably good result. Images obtained with Pmax≥57 were found to be in almost perfect agreement with those obtained with the TDBA.


B. GB-SAR Measurements

A first GB-SAR (Ground-based SAR) data set was collected in the framework of a field campaign at the avalanche test site of the Swiss Federal Institute for Snow and Avalanche Research (SLF-Davos), located in Vallée de la Sionne (Switzerland). A LISA (Linear SAR) instrument of the JRC was deployed to monitor the avalanche activity and assess the possible operational use of the GB-SAR technology. The center frequency used was 5.83 GHz, in the C-Band, with a frequency bandwidth of 60 MHz. The radar was based on a PNA network analyzer from Agilent Technologies working in the stepped-frequency mode. The radar had two separate transmit and receive antennas and measured the VV polarization. The length of the synthetic radar aperture was 3.5 m. The typical avalanche path length on this site is 2.5 km, starting at an altitude of about 2650 m above sea level and ending at about 1450 m. The LISA instrument was positioned on the other side of the valley at an altitude of 1800 m. The average slope within the image scene was about 27 deg. The range distance to the image scene went from 700 to 2100 m. The span of the image scene in azimuth angle was 90 deg. The aperture length to range resolution ratio in this case was Lx/δr≈1.4, which indicated that a zeroth order expansion of the image series should suffice for obtaining good accuracy. The numbers of frequency points and radar positions along the aperture, fixed to guarantee an image scene free of any ambiguity, are M=1601 and N=251, respectively. In this campaign, the total measurement time needed for a single image acquisition was 9 minutes. The backscatter data were converted into reflectivity images using the FPFA and, for comparison, the TDBA. A four-term Blackman-Harris window both in the frequency and radar aperture domains was used. FIG. 8 shows the image resulting from the TDBA (left-hand side) and from the zeroth order FPFA (right-hand side) in polar coordinates. As can be seen, the image obtained with the zeroth order FPFA is indistinguishable from that obtained using the TDBA. The second term of the image series has also been evaluated. It was found that it has, on a pixel by pixel basis, a reflectivity at least 39 dB below that of the first term.


A second field campaign was carried out with a GB-SAR instrument deployed in a ski resort located in Alagna Valsesia (Italy, Piedmont Region). The area monitored was a very steep slope with 30 to 50 degrees of inclination, at an altitude ranging from 2300 to 2700 m. The bottom part of the image scene corresponds to the Olen Valley, where a ski track passes through, putting skiers at risk when snow avalanches come down. The goal of this campaign was that of automatically detecting any avalanche event occurring within the field of view of the GB-SAR instrument. The extent of the image scene was about one square km, and it was located at range distances ranging from 750 to 1500 m from the radar array. The radar used was again based on a PNA network analyzer from Agilent Technologies working in the stepped-frequency mode. The radar bandwidth used in this field campaign was 250 MHz, with a center frequency of 13.25 GHz (i.e. in the Ku-Band). The radar had two separate transmit and receive antennas and measured the VV polarization. The length of the aperture was 1.9 m. The aperture length to range resolution ratio is in this case Lx/δr≈3.1, which is larger than in the previous example. The numbers of frequency points and radar positions along the aperture are M=3201 and N=301, respectively. In this campaign, the total measurement time needed for a single image acquisition was 6 minutes. It was observed again that the zeroth order FPFA gave basically the same reflectivity image as the TDBA. This is because, from the second term onwards, the FPFA images in the series showed a very low reflectivity. On a pixel by pixel basis, the second term in the image series showed reflectivities at least 28 dB below those of the first term. This value is smaller than in the previous example, as was expected because of the larger aperture length to range resolution ratio.


It is worthwhile noting that the FPFA images computed in pseudo-polar or pseudo-spherical format can be interpolated onto a digital terrain model (DTM) of the image area of interest. For instance, two images collected immediately before and after an event (e.g. an avalanche) may be combined in the pseudo-polar or pseudo-spherical format into a coherence image, which may thereafter be interpolated onto a DTM with texture (e.g. an orthophoto) using the coordinate transformations given in eqs. (23) and (24). Such a coherence image makes it possible to readily identify changes caused by the event (e.g. the extent of an avalanche) because of the low coherence values in the affected area or areas.


Regarding practical use of the technique of the present invention, the FPFA may be implemented using any suitable software or hardware. So far the inventor has implemented and tested it using a number of commercial software packages including Matlab™ (The Mathworks, MA, USA), LabView™ (National Instruments, TX, USA), and IDL™ (ITT Visual Solutions, Boulder, Colo., USA), all of them giving excellent results. Of particular interest is the combination of these implementations with the FFTW library (i.e. the "fastest Fourier transform in the west" package developed at MIT by Frigo and Johnson). During tests with massive amounts of images it was concluded that the disk read (raw data) and write (radar image) operations were more time-consuming than the FPFA itself. Typical processing times for a single image (excluding read and write operations) were found to be on the order of a few tens of ms on an Intel Xeon™ 5160-3 GHz workstation.

Claims
  • 1.-14. (canceled)
  • 15. Method of radar-imaging a scene in a far-field of a one-dimensional radar array, comprising providing an array of backscatter data of said scene, said backscatter data being herein denoted by D(fm, x′n), said backscatter data being associated to a plurality of radar array positions, herein denoted by x′n, n=0 . . . N−1, N>1, regularly spaced along an axis of said radar array;the backscatter data being sampled, for each radar array position x′n, at different frequencies, herein denoted by fm, m=0 . . . M−1, M>1, defined by fm=fc−B/2+m·Δf, where fc represents a center frequency, B a bandwidth and Δf a frequency step;computing a radar reflectivity image, herein denoted by I(αm′, βn′), in a pseudo-polar coordinate system, in which coordinates of a point of said scene are expressible by equations:
  • 16. The method as claimed in claim 15, wherein said radar array positions are defined by x′n=−Lx/2+n·Δx′, where Lx represents a length of the radar array and Δx′ a spacing between said radar array positions.
  • 17. The method as claimed in claim 15, wherein Pmax is chosen depending on a ratio of radar array length to range resolution.
  • 18. The method as claimed in claim 15, wherein said radar reflectivity image in said pseudo-polar coordinate system is mapped into at least one of a polar coordinate system and a Cartesian coordinate system.
  • 19. The method as claimed in claim 15, wherein at least one of a coherence image and a 2D-phase interferogram is computed based upon said radar reflectivity image in said pseudo-polar coordinate system.
  • 20. The method as claimed in claim 19, wherein said at least one of a coherence image and a 2D-phase interferogram is mapped into at least one of a polar coordinate system and a Cartesian coordinate system.
  • 21. Method of radar-imaging a scene in a far-field of a two-dimensional radar array, comprising providing an array of backscatter data of said scene, said backscatter data being herein denoted by D(fm, x′n, y′k), said backscatter data being associated to a plurality of radar array positions, herein denoted by (x′n,y′k) n=0 . . . N−1, N>1, k=0 . . . K−1, K>1, regularly spaced along a first and a second axis of said radar array;the backscatter data being sampled, for each radar array position (x′n,y′k) at different frequencies, herein denoted by fm, m=0 . . . M−1, M>1, defined by fm=fc−B/2+m·Δf, where fc represents a center frequency, B a bandwidth and Δf a frequency step;computing a radar reflectivity image, herein denoted by I(αm′, βn′, γk′), in a pseudo-spherical coordinate system, in which coordinates of a point of said scene are expressible by equations:
  • 22. The method as claimed in claim 21, wherein said radar array positions are defined by x′n=−Lx/2+n·Δx′ along said first axis, where Lx represents a length of the radar array along said first axis and Δx′ a spacing between said radar array positions along said first axis, and by y′k=−Ly/2+k·Δy′ along said second axis, where Ly represents a length of the radar array along said second axis and Δy′ a spacing between said radar array positions along said second axis.
  • 23. The method as claimed in claim 21, wherein Pmax is chosen depending on ratios of radar array lengths along said first and said second axis to range resolution.
  • 24. The method as claimed in claim 21, wherein said radar reflectivity image in said pseudo-spherical coordinate system is mapped into at least one of a spherical coordinate system and a Cartesian coordinate system.
  • 25. The method as claimed in claim 21, wherein at least one of a coherence image and a 3D-phase interferogram is computed based upon said radar reflectivity image in said pseudo-spherical coordinate system.
  • 26. The method as claimed in claim 25, wherein said at least one of a coherence image and a 3D-phase interferogram is mapped into at least one of a spherical coordinate system and a Cartesian coordinate system.
  • 27. The method as claimed in claim 15, wherein said reflectivity image is computed in or nearly in real time.
  • 28. The method as claimed in claim 21, wherein said reflectivity image is computed in or nearly in real time.
  • 29. A computer program product for controlling a data processing apparatus, comprising instructions causing said data processing apparatus to carry out the method as claimed in claim 15, when executed on said data processing apparatus.
  • 30. A computer program product for controlling a data processing apparatus, comprising instructions causing said data processing apparatus to carry out the method as claimed in claim 21, when executed on said data processing apparatus.
Priority Claims (1)
Number Date Country Kind
08156238.1 May 2008 EP regional
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/EP09/55359 5/4/2009 WO 00 2/14/2011