This invention belongs to the field of signal processing technology, specifically relating to a segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar.
With the extensive application of remote sensing technology in civil fields such as map surveying and mapping, surface deformation detection, and weather observation, the miniaturization of remote sensing equipment has become an inevitable trend. Since the 1990s, owing to their small size, portability, flexibility, and ease of operation, unmanned aerial vehicles have gradually become an important platform for synthetic aperture radar, and many scholars have conducted extensive research on unmanned aerial vehicle-borne synthetic aperture radar systems from both the system design and the imaging algorithm perspectives. In recent years, with the development of unmanned aerial vehicle navigation and control technologies, various models of multi-rotor unmanned aerial vehicles have entered the market. However, they have not been widely used in the field of remote sensing. The primary reason is that traditional airborne imaging algorithms, which are constrained by the performance of the inertial equipment carried on board, are unstable when applied to synthetic aperture radar mounted on multi-rotor unmanned aerial vehicles. Therefore, researching imaging algorithms for multi-rotor unmanned aerial vehicle-borne synthetic aperture radar that do not depend on inertial equipment is of significant importance.
Currently, research institutions at home and abroad primarily rely on airborne imaging algorithms that depend on data obtained from high-precision inertial navigation systems for motion compensation and imaging. In addition, because large platforms carry servo systems that compensate in real time for the effects of changes in platform attitude angles, variations in platform attitude need not be considered. Such algorithms therefore fail to address three issues present in multi-rotor unmanned aerial vehicle-borne synthetic aperture radar systems: 1) the trajectory of a multi-rotor platform is usually very unstable because it is easily affected by environmental factors such as wind, possibly leading to severe jitter and abrupt turns; 2) the unique flight principle of multi-rotors introduces high-frequency errors and incidence angle errors due to rotor vibration and fuselage tilting manoeuvres; 3) the combination of highly manoeuvrable trajectories with low-cost inertial navigation systems yields rather poor motion state and attitude angle data for motion compensation. Moreover, owing to the limited payload capacity of such unmanned aerial vehicles, high-precision inertial navigation devices cannot be carried, which reduces the positioning accuracy of their navigation systems. If the vehicle's trajectory can be reconstructed from the imaging results during flight, a basis can be provided for its navigation system. Therefore, studying the imaging and positioning algorithms of multi-rotor unmanned aerial vehicle-borne synthetic aperture radar is of great significance.
This invention is targeted at the shortcomings and deficiencies of existing airborne imaging algorithms. Its objective is to provide a segmented aperture imaging (SAI) and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar. The method is suitable for synthetic aperture radar systems without inertial navigation equipment or with only low-precision inertial navigation equipment, and aims to enhance the focusing quality and imaging efficiency of radar imaging. The invention offers a SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles. The method performs progressive imaging based on the echo signals of the synthetic aperture radar and is characterized by the following steps (a minimal code sketch of this pipeline is given after this list):
Step 1, perform range pulse compression on the raw echo signal s(t, η) to obtain a range pulse compression signal sRC(t, η), where t is the fast time in range dimension and η is the slow time in azimuth dimension. On the basis of the phase history φ(η) of strong scattering points in the range pulse compression signal sRC(t, η), calculate the estimated velocity v̂ and the estimated squint angle of the beam center θ̂ of the manoeuvring platform;
Step 2, on the basis of the direction of the estimated velocity v̂, segment the range pulse compression signal sRC(t, η) into N segments, with each segment corresponding to the segmented pulse compression signal sRC,i(t, η), where i=1 . . . N;
Step 3, based on the estimated velocity v̂ and the estimated squint angle of the beam center θ̂, calculate the phase compensation amount φm,i of each segmented pulse compression signal sRC,i(t, η). Multiply the segmented pulse compression signal sRC,i(t, η) by the motion error compensation filter HMC,i=exp(−jφm,i), where the imaginary unit j=√(−1), obtaining the N compensated echo signals of the segments, denoted sMC,i(t, η);
Step 4, perform a two-dimensional Fourier transform on the compensated signal sMC,i(t, η) to obtain the two-dimensional spectrum sMC,i(f, fd). Utilize the series inversion method to decompose the two-dimensional spectrum sMC,i(f, fd), constructing the azimuth compression filter HAC,i, where f represents the frequency corresponding to the fast time t in range dimension and fd represents the Doppler frequency corresponding to the slow time η in azimuth dimension;
Step 5, multiply the two-dimensional spectrum sMC,i(f, fd) by the azimuth compression filter HAC,i, and then perform a two-dimensional inverse Fourier transform to obtain the N imaging results of the segments, denoted sIMG,i(t, η);
Step 6, sequentially, for the overlapping areas in the imaging results sIMG,i(t, η) of adjacent segments, align and coherently integrate the envelopes in the range dimension where the strong focus points are located. For the non-overlapping areas, perform splicing to obtain the final imaging result, denoted Sall.
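The patent contains no source code; the following is a minimal orchestration sketch of Steps 1 to 6, assuming NumPy and a set of hypothetical helper callables (range_compress, estimate_motion, segment, motion_filter, azimuth_filter and splice are illustrative names, not part of the disclosure). Concrete sketches of the individual helpers appear alongside the corresponding steps below.

```python
import numpy as np

def sai_pipeline(raw_echo, helpers):
    """Sketch of the six-step SAI flow; `helpers` bundles hypothetical per-step routines."""
    s_rc = helpers["range_compress"](raw_echo)                    # Step 1: range pulse compression
    v_hat, theta_hat, r_hat = helpers["estimate_motion"](s_rc)    # Step 1: v, theta from the phase history
    segments = helpers["segment"](s_rc, v_hat)                    # Step 2: split along the velocity direction
    images = []
    for s_rc_i in segments:
        h_mc_i = helpers["motion_filter"](s_rc_i, v_hat, theta_hat, r_hat)        # Step 3: H_MC,i = exp(-j*phi_m,i)
        spec_i = np.fft.fft2(s_rc_i * h_mc_i)                                     # Step 4: two-dimensional spectrum
        h_ac_i = helpers["azimuth_filter"](spec_i.shape, v_hat, theta_hat, r_hat) # Step 4: series-inversion filter
        images.append(np.fft.ifft2(spec_i * h_ac_i))                              # Step 5: per-segment image
    return helpers["splice"](images)                              # Step 6: overlap integration and stitching
```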
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics: Within Step 1, for the strong scattering points in the range pulse compression signal sRC(t, η), perform a second-order fitting on the phase history φ(η) to obtain the phase history of the strong scattering points over the slow time in azimuth dimension as φ(η)=βη²+αη+φ0+o(η), where t is the fast time in range dimension, η is the slow time in azimuth dimension, o(η) represents higher-order phase errors and φ0 is a constant phase term. On the basis of the coefficients of the second-order term β and the first-order term α, the estimated velocity of the manoeuvring platform is calculated as
and the estimated squint angle of the beam center is calculated as
where λ is the wavelength of the system transmitted signal and R̂ is the estimated value of the system reference range.
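The closed-form estimators themselves appear only in the patent drawings. For orientation, a reconstruction that is consistent with the quadratic fit φ(η)=βη²+αη+φ0 and with the target-space phase model used later in the text (φ(η)=−(4π/λ)R(η); this sign convention is an assumption) would be:

```latex
% Assumed reconstruction, not the patent's figure-borne formulas.
% Matching phi(eta) = -(4*pi/lambda) [ R0 - v*sin(theta)*eta + v^2*cos^2(theta)*eta^2/(2*R0) ]
% against beta*eta^2 + alpha*eta + phi_0 gives
\alpha = \frac{4\pi v\sin\theta}{\lambda}, \qquad
\beta  = -\frac{2\pi v^{2}\cos^{2}\theta}{\lambda R_{0}},
% so that, with R_0 replaced by the reference-range estimate \hat{R},
\hat{v} = \sqrt{\left(\frac{\alpha\lambda}{4\pi}\right)^{2} - \frac{\beta\lambda\hat{R}}{2\pi}},
\qquad
\hat{\theta} = \arcsin\!\left(\frac{\alpha\lambda}{4\pi\,\hat{v}}\right).
```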
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics:
Within Step 2, on the basis of the direction of the estimated velocity v̂, the range pulse compression signal sRC(t, η) is sequentially divided into N segments with consistent velocity directions. Then, it is determined whether the length of each segment is less than one synthetic aperture length. If it is less, the segment is extended on both sides to one synthetic aperture length. Finally, N segmental echo signals are obtained, denoted sRC,i(t, η), where i=1 . . . N.
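As an illustration of this segmentation rule, a minimal sketch is given below (assumptions: the estimated velocity is available per azimuth pulse and segments are delimited by sign changes of the along-track velocity; the function and variable names are hypothetical).

```python
import numpy as np

def segment_by_velocity(s_rc, v_hat, aperture_len):
    """Sketch of Step 2: cut the range-compressed data s_RC(t, eta) into azimuth
    blocks over which the sign of the estimated velocity stays constant, then
    extend any block shorter than one synthetic aperture length.

    s_rc         : 2-D array, range bins x azimuth pulses
    v_hat        : 1-D array, estimated velocity per azimuth pulse
    aperture_len : synthetic aperture length in azimuth samples
    """
    sign_change = np.flatnonzero(np.diff(np.sign(v_hat)) != 0) + 1
    bounds = [0, *sign_change, s_rc.shape[1]]
    segments = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        if hi - lo < aperture_len:                     # extend short segments on both sides
            pad = (aperture_len - (hi - lo) + 1) // 2
            lo, hi = max(0, lo - pad), min(s_rc.shape[1], hi + pad)
        segments.append(s_rc[:, lo:hi])
    return segments
```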
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics: Within Step 3, the phase compensation amount of each segment is derived as
where R0 is the mean value of the estimated reference range R̂ of each segment, and θ0 is the mean value of the estimated squint angle of the beam center θ̂ of each segment.
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics: Within Step 4, the azimuth compression filter of each segment is defined as
where f corresponds to the frequency associated with the fast time t in range dimension, fd is the Doppler frequency corresponding to the slow time η in azimuth dimension, fc is the carrier frequency of the system transmitted signal and c is the speed of light.
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics: Within Step 6, firstly, geometric corrections are applied to the imaging results sIMG,i(t, η) of the adjacent segments, obtaining the corrected imaging result sIMG,i^GC. Subsequently, the corrected imaging result sIMG,i^GC is rotated by θ̂−θ0 degrees, obtaining a corrected imaging result whose slant-range direction is perpendicular to the trajectory of the manoeuvring platform. Then, the consecutive overlapping areas of the corrected imaging results sIMG,i^GC of adjacent segments are aligned along the range-dimension envelopes where the strong focus points are located, and coherent integration is performed. For the non-overlapping areas, splicing is executed, obtaining the final imaging result Sall.
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics. Within Step 6, the geometric correction of the imaging result sIMG,i is performed as follows in sub-steps: Step 6-1, perform the Fourier transform in the azimuth dimension on the imaging result sIMG,i(t, η). This yields the result of each segment in the range-Doppler domain, denoted sIMG,i(t, fd); Step 6-2, on the basis of the characteristics of the Fourier transform and the geometric structure of the target space, construct the expression for a tilt correction filter to correct the tilt of the image:
where l represents the range scale of a typical building; Step 6-3, multiply the range-Doppler domain image sIMG,i(t, fd) by the tilt correction filter HGC-1, obtaining the tilt-corrected frequency domain image sIMG,i^GC-1(t, fd); Step 6-4, perform the inverse Fourier transform in the azimuth dimension on the tilt-corrected frequency domain image sIMG,i^GC-1(t, fd), obtaining the tilt-corrected time domain image sIMG,i^GC-1(t, η); Step 6-5, on the basis of the geometric structure of the target space, obtain the expression for the stretch/compression factor:
Step 6-6, substitute the stretch/compression factor expression into the already tilt-corrected time domain image sIMG,i^GC-1(t, η), obtaining the deformation-corrected time domain image sIMG,i^GC-2(t, η); Step 6-7, perform the Fourier transform in the range dimension on the deformation-corrected time domain image sIMG,i^GC-2(t, η), obtaining the deformation-corrected frequency domain image sIMG,i^GC-2(f, η); Step 6-8, on the basis of the characteristics of the Fourier transform and the geometric structure of the target space, construct the expression for a secondary position correction filter to correct image translation:
Step 6-9, multiply the deformation-corrected frequency domain image sIMG,i^GC-2(f, η) by the position correction filter HGC-3, to obtain the geometrically corrected frequency domain image sIMG,i^GC-3(f, η); Step 6-10, perform the inverse Fourier transform in the range dimension on the geometrically corrected frequency domain image of each segment sIMG,i^GC-3(f, η), to obtain the geometrically corrected time-domain image of each segment, denoted sIMG,i^GC(t, η); Step 6-11, rotate the geometrically corrected time-domain image sIMG,i^GC(t, η) counterclockwise by θ̂−θ0 degrees to obtain the to-be-spliced time-domain image sIMG,i^GC-p(t, η), whose slant-range direction is perpendicular to the trajectory of the manoeuvring platform; Step 6-12, sequentially align the envelopes in the range dimension where the strong focus points are located in the overlap regions of the adjacent to-be-spliced time-domain images sIMG,i^GC-p(t, η); Step 6-13, perform coherent integration in the overlapping regions and sequentially connect the non-overlapping regions, completing the sub-aperture splicing and obtaining the full-aperture imaging result Sall.
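A compact sketch of this correction chain (Steps 6-1 to 6-11) is given below. The closed forms of the tilt correction filter, the stretch/compression factor and the secondary position correction filter are carried only in the patent drawings, so they enter the sketch as opaque inputs; SciPy is assumed for resampling and rotation, and applying the stretch along the range axis is an assumption.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def geometric_correct(img_i, h_gc1, stretch, h_gc3, theta_hat_deg, theta0_deg):
    """Sketch of Steps 6-1 .. 6-11 for one segment (range = axis 0, azimuth = axis 1).
    h_gc1, h_gc3 and stretch are taken as given; h_gc3 must match the resampled size."""
    rd = np.fft.fft(img_i, axis=1)                 # 6-1: azimuth FFT -> range-Doppler domain
    rd = rd * h_gc1                                # 6-3: tilt correction
    img = np.fft.ifft(rd, axis=1)                  # 6-4: back to the time domain
    img = (zoom(img.real, (stretch, 1.0))          # 6-6: deformation (stretch/compression) correction,
           + 1j * zoom(img.imag, (stretch, 1.0)))  #      applied along range here as an assumption
    spec = np.fft.fft(img, axis=0)                 # 6-7: range FFT
    spec = spec * h_gc3                            # 6-9: secondary position correction
    img = np.fft.ifft(spec, axis=0)                # 6-10: geometrically corrected image
    ang = theta_hat_deg - theta0_deg               # 6-11: rotate perpendicular to the trajectory
    return (rotate(img.real, ang, reshape=False)
            + 1j * rotate(img.imag, ang, reshape=False))
```

The envelope alignment and coherent integration of Steps 6-12 and 6-13 are sketched after the detailed embodiment below.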
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics. In Step 1, the estimated velocity v̂ and the estimated squint angle of the beam center θ̂ are calculated as follows: Step 1-1, based on the definitions of the Doppler frequency modulation slope Ka and the Doppler center fdc, the expressions for calculating Ka and fdc are obtained as follows:
where
denotes the derivative of (⋅) with respect to the slow time in azimuth η; Step 1-2, designate the space constituted by the echo signal as the signal space. For the range pulse compression signal sRC(t, η), perform a second-order fitting on the phase history φ(η) of the strong scattering points, obtaining the expression of such strong scatterers in the signal space as follows: φ(η)=βη²+αη+φ0+o(η), where β is the coefficient of the second-order term, α is the coefficient of the first-order term, φ0 is the constant phase term and o(η) represents the higher-order phase error; Step 1-3, substitute the expression of the phase history φ(η) into the calculation formulas of Ka and fdc in Step 1-1, respectively, obtaining the expressions of Ka and fdc in the signal space as follows:
Step 1-4, define the space constructed by the actual positions of the target and the manoeuvring platform as the target space. On the basis of spatial geometric relationships, the expression for the phase history φ(η) in the target space is obtained as follows:
where v represents the velocity of the manoeuvring platform, θ represents the squint angle of the beam center caused by the movement of the manoeuvring platform, R0 represents the closest range between the target and the manoeuvring platform and λ represents the wavelength of the system transmitted signal; Step 1-5, Substitute the expression for the phase history φ(η) from the target space into the calculations of Ka and fdc in Step 1-1, to obtain the expressions for Ka and fdc in the target space as follows:
Step 1-6, by comparing the right side of the expression of Ka in the signal space with that in the target space, and comparing the right side of the expression of fdc in the signal space with that in the target space, the estimated velocity, estimated squint angle of the beam center, and estimated range are obtained as follows:
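The resulting expressions are reproduced only in the patent drawings. Under the commonly used definitions Ka = (1/2π)·d²φ/dη² and fdc = (1/2π)·dφ/dη at η=0 (these sign conventions are an assumption), the comparison described in Steps 1-3 to 1-6 reads:

```latex
% Signal space (from the quadratic fit):
K_a = \frac{\beta}{\pi}, \qquad f_{dc} = \frac{\alpha}{2\pi}.
% Target space, from phi(eta) = -(4*pi/lambda) [ R0 - v sin(theta) eta + v^2 cos^2(theta) eta^2 / (2 R0) ]:
K_a = -\frac{2v^{2}\cos^{2}\theta}{\lambda R_{0}}, \qquad f_{dc} = \frac{2v\sin\theta}{\lambda}.
% Equating the two spaces (with R_0 \approx \hat{R}) yields
v^{2}\cos^{2}\theta = -\frac{\beta\lambda\hat{R}}{2\pi}, \qquad
v\sin\theta = \frac{\alpha\lambda}{4\pi},
% from which \hat{v} and \hat{\theta} follow as in the reconstruction given earlier.
```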
The SAI method for synthetic aperture radar carried by multi-rotor unmanned aerial vehicles provided by this invention can also have the following characteristics. In Step 4, the process of decomposing the two-dimensional spectrum sMC,i(f, fd) of each segment to obtain the expression of the azimuth compression filter HAC,i of the corresponding segment using the series inversion method is performed as follows in sub-steps: Step 4-2-1, on the basis of the stationary phase method, the expression of the two-dimensional spectrum sMC,i(f, fd) of each segment is obtained as:
where fc represents the carrier frequency of the system transmitted signal and c represents the speed of light; Step 4-2-2, using the series inversion method, decompose the expression of the two-dimensional spectrum sMC,i(f, fd) in Step 4-2-1 to obtain the two-dimensional spectrum expression of each segment that eliminates the coupling terms between f and fd:
Step 4-2-3, on the basis of the two-dimensional spectrum expression obtained in Step 4-2-2, derive the expression of the ideal phase filter:
Step 4-2-4, substitute the estimated velocity v̂, the estimated squint angle of the beam center θ̂ and the estimated reference range R̂ into the expression of the ideal phase filter to obtain the azimuth compression filter expression of each segment
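The spectrum and filter expressions are carried only in the patent drawings. For orientation, the well-known broadside two-dimensional spectrum obtained with the principle of stationary phase for a hyperbolic range history R(η)=√(R0²+v²η²) has the phase shown below; the series inversion (power-series reversion) of the stationary-phase equation expands this phase as a polynomial in f, which isolates the range-azimuth coupling terms so that they can be removed by a phase multiplication instead of interpolation. A squinted geometry adds a θ̂-dependent linear Doppler term; the exact filter of this invention is the figure-borne expression of Step 4-2-4.

```latex
% Standard broadside reference form (an illustration, not the patent's filter):
S_{MC,i}(f, f_d) \;\propto\;
\exp\!\left[-j\,\frac{4\pi \hat{R}}{c}\sqrt{(f+f_c)^{2}-\left(\frac{c\,f_d}{2\hat{v}}\right)^{2}}\,\right],
\qquad
H_{AC,i}(f, f_d) \;=\;
\exp\!\left[+j\,\frac{4\pi \hat{R}}{c}\sqrt{(f+f_c)^{2}-\left(\frac{c\,f_d}{2\hat{v}}\right)^{2}}\,\right].
```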
The present invention also provides a positioning method for an unmanned aerial vehicle-borne synthetic aperture radar. This method is used to calculate the flight trajectory of the unmanned aerial vehicle platform based on the raw echo signal from its synthetic aperture radar. It is characterized by the following steps (a code sketch of the trajectory splicing is given after this list):
Step S1, perform range pulse compression on the raw echo signal s(t, η) to obtain the range pulse compression signal sRC(t, η), and based on the phase history φ(η) of strong scattering points in sRC(t, η), calculate the estimated velocity v̂ and the estimated squint angle of the beam center θ̂;
Step S2, on the basis of the direction of the estimated velocity v̂, segment the range pulse compression signal sRC(t, η) into N segments, with each segment corresponding to the segmented pulse compression signal sRC,i(t, η), where i=1 . . . N. Additionally, calculate the platform trajectory coordinates [X_k^i, Y_k^i, Z_k^i] for the i-th segment based on the estimated velocity v̂ and the estimated squint angle of the beam center θ̂, where k=1 . . . M and M is the length of each segment in the azimuth direction. The coordinates are calculated as X_k^i=∫v̂ dη, Y_k^i=0, Z_k^i=R̂ cosθ̂ cosθin, where θin represents the angle between the beam direction of the synthetic aperture radar and the normal direction of the ground plane;
Step S3, in the adjacent regions between the i-th and (i−1)-th segments, extract three strong scattering points. The coordinates of the strong scattering points in the i-th segment are denoted [Q_1^i, Q_2^i, Q_3^i], and the coordinates of the strong scattering points in the (i−1)-th segment are denoted [Q_1^(i−1), Q_2^(i−1), Q_3^(i−1)];
Step S4, calculate the rotation matrix γ for the i-th and (i−1)-th segments based on the coordinates of the strong scattering points [Q_1^(i−1), Q_2^(i−1), Q_3^(i−1)] and [Q_1^i, Q_2^i, Q_3^i]. The rotation matrix γ is computed as γ=[Q_1^(i−1), Q_2^(i−1), Q_3^(i−1)]·[Q_1^i, Q_2^i, Q_3^i]^(−1);
Step S5, use the platform trajectory coordinates [X_k^(i−1), Y_k^(i−1), Z_k^(i−1)] of the (i−1)-th segment as the reference to rotate the platform trajectory coordinates of the i-th segment. This rotation aligns the platform trajectory coordinates of adjacent segments, ensuring that [X_k^(i−1), Y_k^(i−1), Z_k^(i−1)]=γ·[X_k^i, Y_k^i, Z_k^i];
Step S6, perform coherent integration of the platform trajectory coordinates in the overlapping region of the i-th and (i−1)-th segments, and concatenate the platform trajectory coordinates in the non-overlapping region to obtain the concatenated trajectory coordinates [P_x^i, P_y^i, P_z^i]=[X_k^i, Y_k^i, Z_k^i]+[X_k^(i−1), Y_k^(i−1), Z_k^(i−1)];
Step S7, repeat Steps S3 through S6 until the spliced trajectory coordinates [Px, Py, Pz] for all segments are obtained, so as to obtain the final trajectory coordinates of the platform [P_x^all, P_y^all, P_z^all].
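A minimal numerical sketch of Steps S4 to S6 is given below (assumptions: NumPy is available, the 3x3 tie-point matrices hold the three strong-scatterer coordinates as columns, and the overlap samples are averaged here, whereas the text writes a plain sum).

```python
import numpy as np

def splice_trajectory(prev_traj, cur_traj, q_prev, q_cur, overlap):
    """Sketch of Steps S4-S6.  A least-squares or orthogonalised fit would be
    more robust than the plain matrix inverse used here.

    prev_traj, cur_traj : 3 x M arrays of [X; Y; Z] coordinates per segment
    q_prev, q_cur       : 3 x 3 matrices whose columns are the three tie points
    overlap             : number of trajectory samples shared by the segments
    """
    gamma = q_prev @ np.linalg.inv(q_cur)        # Step S4: rotation between segments
    cur_rot = gamma @ cur_traj                   # Step S5: align the i-th segment to the (i-1)-th
    # Step S6: average (coherently integrate) the overlap, then concatenate the rest
    fused = 0.5 * (prev_traj[:, -overlap:] + cur_rot[:, :overlap])
    return np.concatenate([prev_traj[:, :-overlap], fused, cur_rot[:, overlap:]], axis=1)
```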
The segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar disclosed in this invention utilizes the estimation of platform motion parameters from the phase history of the echo signals, together with the segmentation of the echo signals, to precisely compensate for platform motion errors. This results in superior imaging focus and a higher imaging success rate. It enhances the efficiency of data acquisition on multi-rotor unmanned aerial vehicle platforms and enables high-resolution imaging for synthetic aperture radar systems mounted on such platforms. In addition, during imaging the platform trajectory can be effectively estimated to obtain the coordinate information of the platform relative to the measurement area, achieving the effect of platform positioning.
Compared to prior art, the present invention has the following advantages:
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention considers the relationship between platform motion and signal phase. As a result, it is suitable for synthetic aperture radar imaging systems on multi-rotor unmanned aerial vehicle platforms that lack inertial navigation systems or are equipped with only low-precision inertial navigation devices.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention considers the slant effects caused by variations in the attitude angles of the unmanned aerial vehicle platform and compensates for the coupled phase between range and azimuth, thereby improving image focusing quality. Consequently, this method is suitable for synthetic aperture radar imaging systems on unmanned aerial vehicle platforms that do not employ antenna servo mechanisms.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention utilizes the method of phase filter multiplication instead of interpolation. This approach enhances the imaging speed for each segment, thus improving the efficiency of single-pass imaging within each segment.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention images each segment in parallel and then splices the segments into a complete image, which improves the imaging speed for the complete image.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention can estimate the position of the platform while imaging, achieving the effect of platform positioning and facilitating the establishment of an automated, intelligent, and integrated unmanned aerial vehicle detection system.
In order to make the technical means, creative features, objectives, and effects of this invention easy to understand, the following provides a detailed explanation of a segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, in conjunction with examples and accompanying figures.
In the present invention, after the unmanned aerial vehicle takes off, the unmanned aerial vehicle-borne synthetic aperture radar transmits a linear frequency-modulated signal with a carrier frequency of fc through a transmitting antenna. The transmitted signal, after being scattered by a target, is received by the radar through a receiving antenna, resulting in the raw echo signal s(t, η), where t is the fast time related to the range dimension, and η is the slow time related to the azimuth dimension. The synthetic aperture length Ls is given by Ls=RθBW, where R is the reference range and θBW is the azimuth beam width. The invention provides a segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar. This method is used for segmented aperture imaging based on the raw echo signal of an unmanned aerial vehicle-carried synthetic aperture radar, and includes the following steps:
Step 1, perform range pulse compression on the raw echo signal s(t, η) to obtain a range pulse compression signal sRC(t, η), where t is the fast time in range dimension, and η is the slow time in azimuth dimension. On the basis of the phase history φ(η) of strong scattering points in the range pulse compression signal sRC(t, η), the estimated velocity v̂ and the estimated squint angle of the beam center θ̂ are calculated;
Step 2, on the basis of the direction of the estimated velocity v̂, segment the range pulse compression signal sRC(t, η) into N segments, with each segment corresponding to the echo signal sRC,i(t, η), where i=1 . . . N;
Step 3, based on the estimated velocity v̂ and the estimated squint angle of the beam center θ̂, calculate the phase compensation amount φm,i for the echo signal sRC,i(t, η) of each segment. Multiply the echo signal sRC,i(t, η) of each segment by the motion error compensation filter HMC,i=exp(−jφm,i), where the imaginary unit j=√(−1), obtaining the N compensated echo signals of the segments, denoted sMC,i(t, η);
Step 4, perform a two-dimensional Fourier transform on the compensated echo signal sMC,i(t, η) of each segment to obtain the two-dimensional spectrum sMC,i(f, fd) of each segment. Utilize the series inversion method to decompose the two-dimensional spectrum sMC,i(f, fd) of each segment, constructing the azimuth compression filter HAC,i of each segment, where f represents the frequency corresponding to the fast time t in range dimension and fd represents the Doppler frequency corresponding to the slow time η in azimuth dimension;
Step 5, multiply the two-dimensional spectrum sMC,i(f, fd) of each segment by the azimuth compression filter HAC,i of each segment, and then perform a two-dimensional inverse Fourier transform to obtain the N imaging results of the segments, denoted sIMG,i(t, η);
Step 6, sequentially, for the overlapping areas in the imaging results sIMG,i(t, η) of adjacent segments, align and coherently integrate the envelopes in the range dimension where the strong focus points are located. For the non-overlapping areas, perform splicing to obtain the final full-aperture imaging result, denoted Sall.
Within Step 1, for the strong scattering points in the range pulse compression signal sRC(t, η), perform a second-order fitting on the phase history φ(η) to obtain the phase history of the strong scattering points over the slow time in azimuth dimension as φ(η)=βη²+αη+φ0+o(η), where t is the fast time in range dimension, η is the slow time in azimuth dimension, o(η) represents higher-order phase errors and φ0 is a constant phase term. On the basis of the coefficients of the second-order term β and the first-order term α, the estimated velocity of the manoeuvring platform is calculated as
and the estimated squint angle of the beam center is calculated as
where λ is the wavelength of the transmitted signal and R̂ is the estimated value of the reference range.
Within Step 2, on the basis of the direction of the estimated velocity v̂, the range pulse compression signal sRC(t, η) is sequentially divided into N segments with consistent velocity directions. Then, it is determined whether the length of each segment is less than one synthetic aperture length. If it is less, the segment is extended on both sides to one synthetic aperture length. Finally, N segmented pulse compression signals are obtained, denoted sRC,i(t, η), where i=1 . . . N.
Within Step 3, the phase compensation amount of each segment is defined as
where R0 is the mean value of the estimated reference range R̂ of each segment, and θ0 is the mean value of the estimated squint angle of the beam center θ̂ of each segment.
Within Step 4, the azimuth compression filter of each segment is defined as
where f corresponds to the frequency associated with the fast time t in range dimension, fd is the Doppler frequency corresponding to the slow time η in azimuth dimension, fc is the carrier frequency of the system transmitted signal and c is the speed of light.
Within Step 6, firstly, geometric corrections are applied to the imaging results sIMG,i(t, η) of the adjacent segments, obtaining the corrected imaging result sIMG,i^GC. Subsequently, the corrected imaging result sIMG,i^GC is rotated by θ̂−θ0 degrees, obtaining a corrected imaging result whose slant-range direction is perpendicular to the trajectory of the manoeuvring platform. Then, the consecutive overlapping areas of the corrected imaging results sIMG,i^GC of adjacent segments are aligned along the range-dimension envelopes where the strong focus points are located, and coherent integration is performed. For the non-overlapping areas, splicing is executed, obtaining the final full-aperture imaging result Sall.
The invention also provides a segmented aperture positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar. This method is used to calculate the flight trajectory of the unmanned aerial vehicle platform based on the echo signal from the synthetic aperture radar. It is characterized by the following steps:
Step S1, perform range pulse compression on the raw echo signal s(t, η) to obtain the range pulse compression signal sRC(t, η), and based on the phase history φ(η) of strong scattering points in sRC(t, η), calculate the estimated velocity v̂ and the estimated squint angle of the beam center θ̂;
Step S2, on the basis of the direction of the estimated velocity v̂, segment the range pulse compression signal sRC(t, η) into N segments, with each segment corresponding to the segmented pulse compression signal sRC,i(t, η), where i=1 . . . N. And calculate the platform trajectory coordinates [X_k^i, Y_k^i, Z_k^i] for the i-th segment based on the estimated velocity v̂ and the estimated squint angle of the beam center θ̂, where k=1 . . . M and M is the length of each segment in the azimuth direction. The coordinates are calculated as follows:
where θin represents the angle between the beam direction of synthetic aperture radar and the normal direction of the ground plane;
Step S3, in the adjacent regions between the i-th and (i−1)-th segments, extract three strong scattering points. The coordinates of the strong scattering points in the i-th segment are denoted [Q_1^i, Q_2^i, Q_3^i], and the coordinates of the strong scattering points in the (i−1)-th segment are denoted [Q_1^(i−1), Q_2^(i−1), Q_3^(i−1)];
Step S4, calculate the rotation matrix γ for the i-th and (i−1)-th segments based on the coordinates of the strong scattering points [Q_1^(i−1), Q_2^(i−1), Q_3^(i−1)] and [Q_1^i, Q_2^i, Q_3^i]. The rotation matrix γ is computed as follows:
Step S5, use the platform trajectory coordinates [X_k^(i−1), Y_k^(i−1), Z_k^(i−1)] of the (i−1)-th segment as the reference to rotate the platform trajectory coordinates of the i-th segment. This rotation aligns the platform trajectory coordinates of adjacent segments, which are given by [X_k^(i−1), Y_k^(i−1), Z_k^(i−1)]=γ·[X_k^i, Y_k^i, Z_k^i];
Step S6, perform coherent integration of the platform trajectory coordinates in the overlapping region of the i-th and (i−1)-th segments, and concatenate the platform trajectory coordinates in the non-overlapping region to obtain the concatenated trajectory coordinates, which are calculated by [P_x^i, P_y^i, P_z^i]=[X_k^i, Y_k^i, Z_k^i]+[X_k^(i−1), Y_k^(i−1), Z_k^(i−1)];
Step S7, repeat Steps S3 through S6 until the spliced trajectory coordinates [Px, Py, Pz] for all segments are obtained, so as to obtain the final trajectory coordinates [P_x^all, P_y^all, P_z^all] of the platform.
In this embodiment, the manoeuvring platform refers to a multi-rotor unmanned aerial vehicle, and the onboard synthetic aperture radar operates in the Ku band as a linear frequency-modulated continuous-wave radar. The target refers to the ground area that requires synthetic aperture imaging.
As shown in
Step 1, after the unmanned aerial vehicle takes off, the unmanned aerial vehicle-borne synthetic aperture radar transmits a linear frequency-modulated signal with a carrier frequency of fc through a transmitting antenna. The transmitted signal, after being scattered by a target, is received by the radar through a receiving antenna, resulting in the raw echo signal s(t, η), where t is the fast time in range dimension, and η is the slow time in azimuth dimension. Perform range pulse compression on the raw echo signal s(t, η) to obtain a range pulse compression signal sRC(t, η). On the basis of the phase history φ(η) of strong scattering points in the range pulse compression signal sRC(t, η), the estimated velocity v̂ and the estimated squint angle of the beam center θ̂ are calculated.
This is performed specifically in the following sub-steps:
Step 1-1, based on the definitions of the Doppler frequency modulation slope Ka and the Doppler center fdc, the expressions for calculating Ka and fdc are obtained as follows:
where
denotes the derivative of (⋅) with respect to the slow time in azimuth η, η is the slow time in azimuth dimension and φ(η) is the phase history of any strong scattering point in the ground area;
Step 1-2, designate the space constituted by the echo signal as the signal space.
For the range pulse compression signal sRC(t, η), where t is the fast time in range dimension, perform a second-order fitting on the phase history φ(η) of the strong scattering points, obtaining the expression of such strong scatterers in the signal space as follows:
where β is the coefficient of the second-order term, α is the coefficient of the first-order term, φ0 is the constant phase term and o(η) represents the higher-order phase error;
Step 1-3, substitute the above Eq. <3> into Eqs. <1> and <2>, obtaining the expressions of Ka and fdc in the signal space as follows:
Step 1-4, define the space constructed by the actual positions of the target and the manoeuvring platform as the target space. On the basis of spatial geometric relationships and law of cosines, the calculation formula for the range R between the target and the platform is obtained as follows:
where v represents the velocity of the manoeuvring platform, θ represents the squint angle of the beam center caused by the movement of the manoeuvring platform and R0 represents the closest range between the target and the manoeuvring platform.
Perform a Taylor expansion of the above Eq. <6> at η=0 and retain terms up to the second order; the expression of R is obtained:
On the basis of the phase calculation formula and definition, the expression for the phase history φ(η) in the target space is obtained as follows:
where λ represents the wavelength of the system transmitted signal; Step 1-5, substitute the above Eq. <8> into Eqs. <1> and <2>, respectively, obtaining the expressions of Ka and fdc in the target space as follows:
Step 1-6, by comparing the right side of the above Eqs. <4> and <9>, and comparing the right side of the above Eqs. <5> and <10>, the estimated velocity, estimated squint angle of the beam center, and estimated range are obtained as follows:
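The closed-form results of this comparison are shown only in the patent drawings. A minimal numerical sketch of the whole Step 1 estimation, using the reconstruction discussed above (an assumption, not the figure-borne formulas) and a second-order polynomial fit of the unwrapped phase, is:

```python
import numpy as np

def estimate_motion_params(phase_history, eta, wavelength, r_hat):
    """Sketch of Steps 1-1 .. 1-6: fit phi(eta) ~ beta*eta^2 + alpha*eta + phi0
    to the unwrapped phase of a strong scatterer and map (alpha, beta) to the
    estimates (v_hat, theta_hat)."""
    phi = np.unwrap(np.angle(phase_history))       # unwrapped phase of the strong scatterer
    beta, alpha, _ = np.polyfit(eta, phi, 2)       # second-order fit: beta*eta^2 + alpha*eta + phi0
    v_sin = alpha * wavelength / (4.0 * np.pi)     # v * sin(theta)
    v_cos_sq = -beta * wavelength * r_hat / (2.0 * np.pi)   # v^2 * cos^2(theta)
    v_hat = np.sqrt(v_sin**2 + max(v_cos_sq, 0.0))
    theta_hat = np.arcsin(v_sin / v_hat)
    return v_hat, theta_hat
```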
Step 2, on the basis of the direction of the estimated velocity v̂, segment the range pulse compression signal sRC(t, η) into N segments, with each segment corresponding to the segmented pulse compression signal sRC,i(t, η), where i=1 . . . N. This is performed specifically as follows:
On the basis of the positive or negative direction of the estimated velocity v̂, the range pulse compression signal sRC(t, η) is preliminarily divided into segments consistent with the velocity direction. Next, check whether the length of each segment is less than one synthetic aperture length, which is given by Ls=R×θBW. If it is less, extend the segment on both sides to one synthetic aperture length; otherwise, do not process it. Finally, the range pulse compression signal sRC(t, η) is divided into N segments, and the echo signal corresponding to each segment is represented as sRC,i(t, η), where i=1 . . . N. Step 3, on the basis of the estimated velocity v̂ and the estimated squint angle of the beam center θ̂, the phase compensation amount φm,i for the segmented pulse compression signal sRC,i(t, η) is calculated. Multiply the segmented pulse compression signal sRC,i(t, η) of each segment by the motion error compensation filter HMC,i=exp(−jφm,i), where j=√(−1), obtaining the N compensated echo signals of the segments, denoted sMC,i(t, η). This realizes the motion compensation of the segmented pulse compression signal sRC,i(t, η). This is performed specifically as follows:
Step 3-1, the influence on the phase of the slant range change δR caused by the change in the platform motion state is calculated as follows:
where v0 is the mean value of the estimated velocity v̂ and θ0 is the mean value of the estimated squint angle of the beam center θ̂;
Step 3-2, the expression for the phase compensation amount corresponding to the segmented pulse compression signal sRC,i(t, η) of each segment is derived as follows:
Step 3-3, the expression for the motion error compensation filter is derived as follows:
where j is the imaginary unit, given by j=√(−1);
Step 3-4, multiply the segmented pulse compression signal sRC,i(t, η) of each segment by the motion error compensation filter as described in Eq. <14>, obtaining the compensated echo signal sMC,i(t, η) of each segment.
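A minimal sketch of Steps 3-2 to 3-4 is given below, under the usual narrow-band assumption that a slant-range deviation δR(η) maps to a phase of 4πδR/λ; the exact expression for φm,i is figure-borne, so this is an illustration rather than the patent's formula.

```python
import numpy as np

def motion_compensate(s_rc_i, delta_r, wavelength):
    """Sketch of Steps 3-2 .. 3-4 for one segment.

    s_rc_i  : 2-D array, range bins x azimuth pulses of one segment
    delta_r : 1-D array, slant-range deviation per azimuth pulse (metres)
    """
    phi_m = 4.0 * np.pi * delta_r / wavelength          # Step 3-2: phase compensation amount
    h_mc = np.exp(-1j * phi_m)[np.newaxis, :]           # Step 3-3: H_MC,i = exp(-j*phi_m,i)
    return s_rc_i * h_mc                                # Step 3-4: compensated signal s_MC,i
```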
Step 4, perform a two-dimensional Fourier transform on the compensated echo signal sMC,i(t, η) to obtain the two-dimensional spectrum sMC,i(f, fd). Utilize the series inversion method to decompose the two-dimensional spectrum sMC,i(f, fd), constructing the azimuth compression filter HAC,i, where f represents the frequency corresponding to the fast time t in range dimension and fd represents the Doppler frequency corresponding to the slow time η in azimuth dimension. This is performed specifically as follows:
Step 4-1, perform a two-dimensional Fourier transform on the compensated echo signal sMC,i(t, η) to obtain the two-dimensional spectrum sMC,i(f, fd), where f represents the frequency corresponding to the fast time t in range dimension and fd represents the Doppler frequency corresponding to the slow time η in azimuth dimension;
Step 4-2-1, on the basis of the stationary phase method, the expression of the two-dimensional spectrum sMC,i(f, fd) corresponding to the segment is obtained as:
where fc represents the carrier frequency of the system transmitted signal and c represents the speed of light;
Step 4-2-2, using the series inversion method, decompose the above Eq. <17> to obtain the two-dimensional spectrum expression of each segment that eliminates the coupling terms between f and fd:
Step 4-2-3, on the basis of the above Eq. <18>, derive the expression of the ideal phase filter for each segment:
Step 4-2-4, substitute the estimated velocity v̂, the estimated squint angle of the beam center θ̂ and the estimated reference range R̂ into the above Eq. <19> to obtain the azimuth compression filter expression of each segment:
Step 5, multiply the two-dimensional spectrum sMC,i(f, fd) by the azimuth compression filter HAC,i of each segment, and then perform a two-dimensional inverse Fourier transform to obtain the N imaging results of the segments, denoted sIMG,i(t, η).
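A minimal sketch of Steps 4 and 5 is given below, using the broadside stationary-phase form quoted earlier as a stand-in for the figure-borne filter (a squinted geometry would add a θ̂-dependent linear Doppler term, and r_hat is treated as a single reference range, so the depth of focus is limited).

```python
import numpy as np

def azimuth_compress(s_mc_i, fs_range, prf, fc, v_hat, r_hat):
    """Sketch of Steps 4-5 for one segment.

    s_mc_i   : 2-D array, range bins (fast time) x azimuth pulses (slow time)
    fs_range : range sampling rate (Hz); prf: pulse repetition frequency (Hz)
    fc       : carrier frequency (Hz); v_hat, r_hat: estimated velocity / reference range
    """
    c = 299_792_458.0
    n_r, n_a = s_mc_i.shape
    f = np.fft.fftfreq(n_r, d=1.0 / fs_range)[:, None]   # range frequency axis
    fd = np.fft.fftfreq(n_a, d=1.0 / prf)[None, :]       # Doppler frequency axis
    root = np.sqrt(np.maximum((f + fc) ** 2 - (c * fd / (2.0 * v_hat)) ** 2, 0.0))
    h_ac = np.exp(1j * 4.0 * np.pi * r_hat / c * root)   # conjugate of the spectrum phase
    spec = np.fft.fft2(s_mc_i)                           # Step 4: two-dimensional spectrum
    return np.fft.ifft2(spec * h_ac)                     # Step 5: imaging result s_IMG,i
```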
Step 6, referring to the accompanying figure, perform geometric correction and splicing on the imaging result of each segment in the following sub-steps: Step 6-1, perform the Fourier transform in the azimuth dimension on the imaging result sIMG,i(t, η), yielding the result of each segment in the range-Doppler domain, denoted sIMG,i(t, fd);
Step 6-2, on the basis of the characteristics of Fourier transformation and the geometric structure of the target space, construct the expression for a tilt correction filter to correct the tilt of the image:
where l represents the range scale of a typical building.
Step 6-3, multiply the range-Doppler domain image sIMG,i(t, fd) by the tilt correction filter HGC-1, obtaining the tilt-corrected frequency domain image sIMG,i^GC-1(t, fd);
Step 6-4, perform the inverse Fourier transform in the azimuth dimension on the tilt-corrected frequency domain image sIMG,i^GC-1(t, fd), obtaining the tilt-corrected time domain image sIMG,i^GC-1(t, η);
Step 6-5, on the basis of the geometric structure of the target space, obtain the expression for the stretch/compression factor:
Step 6-6, substitute the aforementioned Eq. <22> into the tilt-corrected time domain image sIMG,i^GC-1(t, η), obtaining the deformation-corrected time domain image sIMG,i^GC-2(t, η);
Step 6-7, perform the Fourier transform in the range dimension on the deformation-corrected time domain image sIMG,i^GC-2(t, η), obtaining the deformation-corrected frequency domain image sIMG,i^GC-2(f, η);
Step 6-8, on the basis of the characteristics of the Fourier transform and the geometric structure of the target space, construct the expression for a secondary position correction filter to correct image translation:
Step 6-9, multiply the deformation-corrected frequency domain image sIMG,i^GC-2(f, η) by the position correction filter HGC-3, as described in Eq. <23>, to obtain the geometrically corrected frequency domain image sIMG,i^GC-3(f, η);
Step 6-10, perform the inverse Fourier transform in the range dimension on the geometrically corrected frequency domain image sIMG,i^GC-3(f, η), to obtain the geometrically corrected time-domain image of each segment, denoted sIMG,i^GC(t, η);
Step 6-11, rotate the geometrically corrected time-domain image sIMG,i^GC(t, η) counterclockwise by θ̂−θ0 degrees to obtain the to-be-spliced time-domain image sIMG,i^GC-p(t, η), whose slant-range direction is perpendicular to the trajectory of the manoeuvring platform;
Step 6-12, sequentially align the envelopes in the range dimension where the strong focus points are located in the overlap regions of the adjacent to-be-spliced time-domain images sIMG,i^GC-p(t, η);
Step 6-13, perform coherent integration in the overlapping regions and sequentially connect the non-overlapping regions, completing the sub-aperture splicing and obtaining the full-aperture imaging result Sall.
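A minimal sketch of Steps 6-12 and 6-13 for two adjacent corrected sub-images is given below (assumptions: range = axis 0, azimuth = axis 1, integer-bin alignment only; a real implementation would refine this to sub-pixel accuracy).

```python
import numpy as np

def splice_subapertures(img_a, img_b, overlap):
    """Sketch of Steps 6-12 / 6-13: estimate the range shift between the
    overlapping azimuth bins of two adjacent sub-images from the peak of their
    range-envelope cross-correlation, then coherently add the overlap and
    concatenate the rest."""
    env_a = np.abs(img_a[:, -overlap:]).sum(axis=1)      # range envelope of the overlap (segment i-1)
    env_b = np.abs(img_b[:, :overlap]).sum(axis=1)       # range envelope of the overlap (segment i)
    corr = np.correlate(env_a, env_b, mode="full")
    shift = int(np.argmax(corr)) - (len(env_b) - 1)      # integer range offset of the strong points
    img_b = np.roll(img_b, shift, axis=0)                # 6-12: align the range envelopes
    fused = img_a[:, -overlap:] + img_b[:, :overlap]     # 6-13: coherent integration of the overlap
    return np.concatenate([img_a[:, :-overlap], fused, img_b[:, overlap:]], axis=1)
```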
The positioning part of the segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar includes the following steps:
Step S1, perform range pulse compression on the raw echo signal s(t, η) to obtain the range pulse compression signal sRC(t, η), and based on the phase history φ(η) of strong scattering points in sRC(t, η), calculate the estimated velocity v̂ and the estimated squint angle of the beam center θ̂;
This step is identical to Step 1 in the part concerning the imaging method, and thus, it is not reiterated here.
Step S2, on the basis of the direction of the estimated velocity v̂, segment the range pulse compression signal sRC(t, η) into N segments, with each segment corresponding to the echo signal sRC,i(t, η), where i=1 . . . N. Additionally, calculate the platform trajectory coordinates [X_k^i, Y_k^i, Z_k^i] for the i-th segment based on the estimated velocity v̂ and the estimated squint angle of the beam center θ̂, where k=1 . . . M and M is the length of each segment in the azimuth direction. The coordinates are calculated as follows:
where θin represents the angle between the beam direction of synthetic aperture radar and the normal direction of the ground plane;
Step S3, in the adjacent regions between the i-th and (i−1)-th segments, extract three strong scattering points. The coordinates of the strong scattering points in the i-th segment are denoted [Q_1^i, Q_2^i, Q_3^i], and the coordinates of the strong scattering points in the (i−1)-th segment are denoted [Q_1^(i−1), Q_2^(i−1), Q_3^(i−1)];
Step S4, calculate the rotation matrix γ for the i-th and (i−1)-th segments based on the coordinates of the strong scattering points [Q_1^(i−1), Q_2^(i−1), Q_3^(i−1)] and [Q_1^i, Q_2^i, Q_3^i]. The rotation matrix γ is computed as follows:
Step S5, use the platform trajectory coordinates [X_k^(i−1), Y_k^(i−1), Z_k^(i−1)] of the (i−1)-th segment as the reference to rotate the platform trajectory coordinates of the i-th segment. This rotation aligns the platform trajectory coordinates of adjacent segments, which are as follows:
Step S6, perform coherent integration of the platform trajectory coordinates in the overlapping region of the i-th and (i−1)-th segments, and concatenate the platform trajectory coordinates in the non-overlapping region to obtain the concatenated trajectory coordinates [P_x^i, P_y^i, P_z^i], which are as follows:
Step S7, repeat Steps S3 through S6 until the spliced trajectory coordinates [Px, Py, Pz] for all segments are obtained, so as to obtain the final trajectory coordinates [P_x^all, P_y^all, P_z^all] of the platform.
The advantages of this method are further illustrated below in conjunction with specific comparative verification results.
As listed in Table 1:
Case 1: Under the conditions listed in Table 1, the echo signal sr(t, η) of a grid-point target is generated from both a simulated trajectory and a measured trajectory. The measured trajectory comes from the actual data recorded by the inertial navigation system in an experiment, and its trajectory is shown in
The grid-point target consists of 7×7 points with range and azimuth intervals of 25 m each. The spatial relationship between the grid-point target and the platform trajectory is shown in
After range pulse compression on sr(t, η), sRD(t, η) is obtained. The segmented aperture imaging (SAI) algorithm and traditional imaging methods are respectively used for an imaging comparison on sRD(t, η), and the results are shown in
Case 2: Under the conditions listed in Table 1, the velocity of the platform is changed to 10 m/s, and the flight height of the platform is altered to 350 m. These modifications match the conditions of the field experiments. The platform used was a multi-rotor unmanned aerial vehicle, the KWT-65, equipped with a Ku-band miniaturized frequency-modulated continuous-wave synthetic aperture radar. This platform was flown approximately 100 times, each flight covering a route length of about 1 km, collecting multiple sets of raw echo data from a specific ground area. Imaging was performed on these data sets using both traditional imaging methods and the SAI method of this invention. The comparison of focusing effects is shown in
Case 3: By processing the experimental data of Case 2, images were obtained and, concurrently, the trajectory of the platform could be estimated. The comparison between the trajectory estimated using the SAI method of this invention and the trajectory collected using the inertial navigation equipment is shown in
According to the segmented aperture imaging and positioning method of a multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, the method primarily comprises: 1) on the basis of an unmanned aerial vehicle-borne synthetic aperture radar system, acquiring a target echo; 2) on the basis of the echo signal, estimating the motion state of the manoeuvring platform; 3) on the basis of the motion state of the platform, segmenting the echo signal; 4) on the basis of the motion state of the platform, performing motion compensation on each echo signal segment; 5) performing a two-dimensional Fourier transform on each compensated echo signal segment to obtain a two-dimensional spectrum, and using the series inversion method to decompose the two-dimensional spectrum to obtain a phase filter of each segment; 6) multiplying the two-dimensional spectrum of each segment by the phase filter corresponding to said segment, and then performing a two-dimensional inverse Fourier transform to obtain an image of each segment; 7) performing geometric correction on the image of each segment, and then splicing the images of the segments to obtain a full-aperture imaging result; and 8) splicing the trajectory of each segment of the platform to obtain complete trajectory coordinates of the platform. In this embodiment, the platform motion parameters are estimated from the phase history of the echo signal and the echo signal is segmented, which allows precise compensation of platform motion errors, resulting in enhanced imaging focus and a high success rate in image acquisition. It improves the efficiency of data collection for multi-rotor unmanned aerial vehicle platforms, enabling effective high-resolution imaging for synthetic aperture radar systems on such platforms. Simultaneously, this method can calculate the three-dimensional coordinates of the platform trajectory during the imaging process, achieving effective platform positioning. This has applications in drone navigation, providing possibilities for the future development of intelligent, integrated detection systems.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention considers the relationship between platform motion and signal phase. As a result, it is suitable for synthetic aperture radar imaging systems on multi-rotor unmanned aerial vehicle platforms that lack inertial navigation systems or are equipped with only low-precision inertial navigation devices.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention considers the slant effects caused by variations in the attitude angles of the unmanned aerial vehicle platform and compensates for the coupled phase between range and azimuth, thereby improving image focusing quality. Consequently, this method is suitable for synthetic aperture radar imaging systems on unmanned aerial vehicle platforms that do not employ antenna servo mechanisms.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention utilizes the method of phase filter multiplication instead of interpolation. This approach enhances the imaging speed for each segment, thus improving the efficiency of single-pass imaging within each segment.
Compared to traditional airborne synthetic aperture radar imaging algorithms, this invention images each segment in parallel and then splices the segments into a complete image, which improves the imaging speed for the complete image.
Furthermore, experimental validation has demonstrated that the embodiment of the present invention, which proposes a segmented aperture imaging (SAI) method for multi-rotor unmanned aerial vehicle-borne synthetic aperture radar, achieves high imaging resolution and fast computational speeds. Concurrently, it enables the estimation of platform trajectory, reducing the algorithm's dependence on hardware equipment, thus indicating that this invention can be effectively applied to synthetic aperture radar imaging systems on small and maneuverable platforms.
In this embodiment, a detailed derivation of the relationship between platform motion parameters and echo phase has been carried out. This allows for motion compensation and imaging without reliance on inertial navigation equipment by considering the second-order expansion of the two-dimensional spectrum under squint. Through simulations, the point spread function curves of each segment, of the complete image, and of the traditional imaging algorithm were compared in this embodiment. Experimental validations contrasting the imaging results of the proposed method and traditional algorithms have been conducted, proving that this embodiment can effectively achieve high-resolution imaging for multi-rotor unmanned aerial vehicle-borne synthetic aperture radar systems.
The aforementioned embodiments are preferred examples of the present invention and are not intended to limit the scope of protection of the invention.
Priority application: 202110929813.9, filed Aug 2021, CN (national).
International filing: PCT/CN2021/115882, filed 9/1/2021 (WO).