This invention relates generally to methods for correcting radar antenna beam patterns and, more particularly, to a method for correcting synthetic aperture radar (SAR) antenna beam patterns.
Synthetic Aperture Radar (SAR) is a type of radar that uses the relative motion between a target and a sensor to produce high resolution images of that target. This relative motion can include a stationary target and a moving antenna or a moving target and a stationary antenna. The case of a moving target and a stationary antenna is referred to as inverse synthetic aperture radar (ISAR). The distance the SAR device travels over a target in the time taken for the radar pulses to return to the antenna creates a synthetic antenna aperture with finer resolution than the real beam aperture of the antenna. To create a SAR image, successive pulses of radio waves are transmitted to “illuminate” a target scene, and the echo of each pulse is received and recorded. The pulses are transmitted and the echoes received using a single beam-forming antenna, with wavelengths ranging from a meter down to several millimeters. As a SAR device on board a platform moves, the antenna location relative to the target changes with time. Signal processing of the successive recorded radar echoes allows the recordings from these multiple antenna positions to be combined. This process forms a synthetic antenna aperture and allows the creation of high-resolution images.
One of the challenges of increasing an imaging radar system's instantaneous radio frequency (RF) bandwidth is maintaining a constant antenna beam pattern. For a fixed physical aperture area of an antenna, by definition, the directivity, and hence the pattern, vary as a function of frequency. The pattern changes are significant for close-range instrumentation radars that accurately measure the radar cross section (RCS) of objects over 2-18 GHz.
When collecting data in an ISAR collection mode, where the radar is stationary and the object is rotated as measurements are collected, the beam pattern effects can be calibrated with a reference measurement of a known object beforehand. This approach increases cost as the object to be measured increases in size. In some cases, a large, specialized facility is necessary to rotate the object. Forward operating areas with less infrastructure have need for a cost-effective instrumentation radar to scan an area of an aircraft, for example, to evaluate if maintenance is needed for its radar reducing features. For this case the radar needs to move, not the object, to locate and measure RCS.
Existing beam pattern correction methods that approximate the beam pattern as a polynomial cannot compensate for the observed frequency variations, and they assume the target response is present at all frequencies and all measurement locations. Frequency-dependent pattern corrections have been developed for range migration processing over a 2 GHz bandwidth. Many radar systems use a uniform clutter field to estimate an antenna pattern, but that approach requires both a uniform clutter field and a constant gain over frequency, which may be difficult to obtain or maintain. At present, there is no satisfactory method or system that addresses the wide range of frequencies or the wide range of angles that occur with an instrumentation radar.
The present invention provides a method for correcting a synthetic aperture radar (SAR) image by collecting SAR data, including phase history and wave number domain data, from an object; forming an uncorrected image Iuc of the object from the collected SAR data using an invertible image formation algorithm; isolating a pixel value Iuc(x,y) from the uncorrected image Iuc; inserting that value into another image, I″uc, of the same size as Iuc with zero values for all other pixels; performing an inverse image formation on the image I″uc; and converting the image into a phase history X″v that represents only the isolated pixel value Iuc(x,y). The relative location of the isolated pixel value Iuc(x,y) is calculated between every radar measurement point and the actual isolated pixel location (s′x, s′y).
Range loss corrections are computed for the isolated pixel value Iuc(x,y) based on the distance to the actual pixel location (s′x, s′y). Antenna beam pattern corrections for the isolated pixel value Iuc(x,y) are computed based on frequency and angle to a measurement location at every SAR sampling position. Phase corrections for the isolated pixel value Iuc(x,y) are calculated using an image formation algorithm. The range loss corrections, antenna beam pattern corrections, and phase corrections are interpolated in the phase history domain as X″corr, according to the image formation algorithm. The interpolated phase history X″corr is applied to the phase history X″v, forming a corrected phase history X‴v.
The corrected phase history X‴v is converted back into an image I″c. The corresponding uncorrected pixel value Iuc(x,y) in the uncorrected image Iuc is replaced with the corrected isolated pixel value I″c(x,y). The above steps are repeated for each uncorrected pixel value until all uncorrected pixel values in the uncorrected image Iuc have been replaced with corrected pixel values from I″c, thereby providing a corrected SAR image of the object.
An advantage of the present invention is the ability to rapidly make corrections to a radar image to bring a radar cross-section image measurement closer to true radar cross-section.
Another advantage is the ability to use any invertible image formation algorithm to form an uncorrected image and to use that image formation algorithm to calculate corrections.
While the following description details the preferred embodiments of the present invention, it is to be understood that the invention is not limited in its application to the details of arrangement of the parts as described and shown in the figures disclosed herein, since the invention is capable of other embodiments and of being practiced in various ways.
The present invention provides a method for implementing a frequency and spatially variant beam pattern correction for an instrumentation radar in a strip map SAR collection mode, wherein the correction relies solely upon the measured antenna patterns and not upon any particular image content to correct the image. The steps of the method are illustrated in the flow chart of
For linear frequency modulated waveforms only, the method includes defining a value, R_line, as the distance between the center of the image/target area and the position of the radar at the center of the collection rail (rail SAR). The method further includes defining a vector, R_point, of the distances between the center of the image/target area and the location of the radar for each pulse collected. The range difference, defined as delR=R_line−R_point, is then calculated, and the following correction is applied on a pulse-by-pulse basis as the range difference changes for each pulse:
where Xv is the stripmap-formatted phase history data and Xv′ is the spotlight-formatted phase history data.
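The correction equation itself is not reproduced in this text. A minimal sketch of the pulse-by-pulse operation follows, assuming the standard two-way range-difference phase term exp(−j·4π·f·delR/c); the function name and array layout are illustrative, not part of the patent:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def stripmap_to_spotlight(Xv, freqs, R_line, R_point):
    """Apply a per-pulse range-difference phase correction (assumed form).

    Xv      : (n_pulses, n_freqs) stripmap-formatted phase history data
    freqs   : (n_freqs,) transmit frequencies in Hz
    R_line  : scalar distance from scene center to the radar at the
              center of the collection rail
    R_point : (n_pulses,) distance from scene center to the radar for
              each pulse
    Returns the spotlight-formatted phase history Xv'.
    """
    delR = R_line - np.asarray(R_point)  # per-pulse range difference
    # Two-way propagation phase over the range difference, one row per pulse.
    phase = np.exp(-1j * 4.0 * np.pi * np.outer(delR, freqs) / C)
    return Xv * phase
```

When delR is zero for a pulse (radar at the rail center), the correction leaves that pulse unchanged, as expected.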
An initial uncorrected image Iuc of the object is formed using a polar format algorithm, or another invertible image formation algorithm (Step 5). A beam correction is applied using any invertible image formation algorithm; the polar format algorithm is used here as an example. Invertible means that the image and the collected data (i.e., the phase history) can be transformed back and forth between the image and data domains. The transformation between image and phase history is a forward or reverse transformation.
The forward transformation converts phase history to an image. The reverse transformation converts the image to a phase history. An expression describing the forward transformation from the phase history X′v to an uncorrected image is Iuc = F{X′v}.
Steps 6-14 are performed for each uncorrected pixel value Iuc(x,y) in the initial uncorrected image Iuc. A new image, I″uc, is created of the same size as Iuc that contains all zero values except for the pixel isolated from the uncorrected image Iuc; an inverse image formation is then performed, converting the isolated pixel image I″uc into a phase history X″v (Step 6). Step 6 can be implemented in a loop over every image pixel in Iuc; other implementations process all image pixels at the same time, in parallel. The steps for each pixel in the uncorrected image are: selecting a pixel, zero-valuing the remainder of the image pixels, and creating a single-pixel-value image, I″uc, of the same size as the uncorrected image Iuc; then reverse transforming the image I″uc into a phase history, X″v, that represents only that pixel location:
X″v = F⁻¹{I″uc}.
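As a concrete sketch of the pixel-isolation step, the following uses a 2-D FFT pair as a stand-in for the invertible forward/reverse transform F and F⁻¹; the actual polar format algorithm is not reproduced here, and all function names are illustrative:

```python
import numpy as np

def forward_transform(phase_history):
    """F: phase history -> image (2-D inverse FFT used as a stand-in
    for an invertible image formation algorithm such as polar format)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(phase_history)))

def reverse_transform(image):
    """F^-1: image -> phase history (exact inverse of forward_transform)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

def isolate_pixel(Iuc, x, y):
    """Step 6: build the single-pixel image I''uc and its phase
    history X''v, which represents only that pixel location."""
    I2uc = np.zeros_like(Iuc)
    I2uc[y, x] = Iuc[y, x]          # keep one pixel, zero the rest
    return I2uc, reverse_transform(I2uc)
```

Because the transform pair is exactly invertible, forward-transforming X″v reproduces the single-pixel image, which is the property the method relies on.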
The actual location of each pixel in the uncorrected image is calculated based on the detected pixel location (Step 7). The pixel location is relative to a reference point, which can be defined anywhere; it is convenient to define the center of the image as the reference point. The pixel location is calculated as the number of pixels from the reference point multiplied by the pixel spacing, giving the pixel location relative to the reference point.
In many image formation algorithms, and depending on how the data are collected, the image pixel location (sx, sy) is distorted from its true spatial location due to the radar's wavefront curvature (See Doerry, A W, “Wavefront curvature limitations and compensation to polar format processing for synthetic aperture radar images,” SAND2007-0046, January 2007, doi: 10.2172/902879, which is incorporated herein by reference). Step 7 estimates the actual spatial position (s′x, s′y) of that pixel location. Specifically, for the polar format algorithm in a close-range radar system, the true pixel location is a linear transform that requires calculating Δr and α. The distance from the radar data collection locations to the reference point (in this case the center of the image) is Rpoint,
where Rpoint(midAperture) is the closest distance the radar approaches the reference point (this calculation assumes that the image is formed at this location and orientation):
The actual pixel location (s′x, s′y) is then calculated as:
s′y = sy − Δr
s′x = tan(α)·(Rpoint(midAperture) + sy).
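The two location equations above can be sketched directly in code. The Δr and α formulas themselves (from the Doerry reference) are not reproduced here, so they enter as inputs; the helper names and the image-center reference convention are illustrative assumptions:

```python
import math

def pixel_offset(ix, iy, n_cols, n_rows, dx, dy):
    """Nominal pixel location relative to an image-center reference
    point: pixel count from the center times the pixel spacing."""
    return (ix - n_cols // 2) * dx, (iy - n_rows // 2) * dy

def true_pixel_location(sx, sy, delta_r, alpha, r_mid):
    """Step 7: estimate the true spatial position (s'x, s'y) of a
    pixel at nominal location (sx, sy), given delta_r and alpha from
    the polar-format wavefront-curvature geometry (see Doerry) and
    r_mid = Rpoint(midAperture), the radar's closest approach to the
    reference point."""
    s_y = sy - delta_r
    s_x = math.tan(alpha) * (r_mid + sy)
    return s_x, s_y
```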
The range loss corrections are computed based on the range to the actual pixel location (Step 8). Range loss is calculated using a radar range equation, based on the Friis transmission equation (en.wikipedia.org/wiki/Friis_transmission_equation). In the radar range equation, range loss appears as a term in the denominator, to the fourth power. The radar collects data based on the range to scene center. When the scene size is large relative to the distance to the radar, the R⁴ term varies significantly over the scene. The assumed range used thus far in processing must be removed before the actual range to this pixel location is used to adjust the received power level. Fundamentally, targets that are closer in range return more power than the radar expects; this correction adjusts for that difference.
The range loss correction is made on a pulse-by-pulse basis. For each position at which the radar collected data, the distance to the reference point is calculated as Rngref. With the distance between the actual pixel location (s′x, s′y) and each radar collection position calculated as Rngtarget, the correction factor for range loss can be calculated as
Antenna beam pattern corrections are computed as a matrix based on frequency and the angle between the target's true location and each radar measurement location, at every sample position (Step 9). Some radars use the same antenna for both transmit and receive. One method to calculate this angle is to first define a normal vector that represents the relative position and pointing direction of the transmit and receive antenna(s) with respect to the radar measurement locations. The radar measurement positions are then used with the relative antenna location to calculate an absolute location for each transmit and receive antenna at all radar measurement locations. Another vector, from the transmit and receive antenna to the actual pixel location, is calculated. The angle between these two vectors is then calculated; a dot product is one way to compute it. The computed angle at each location is then used to interpolate the antenna gain from a stored set of antenna pattern measurement data; these antenna gains are GTx and GRx.
Antenna pattern measurement data have two dimensions expressing antenna gain: angle and frequency. The antenna pattern angle and frequency data can be measured independently of the radar system to characterize the antennas. The frequency points of the antenna pattern measurement data can be interpolated to match the frequency support points and span of the phase history data.
An antenna gain correction factor, Ampfac, is calculated and applied on a per-pulse basis. Each pulse has a unique angle from the antenna to the true pixel location, calculated as described in the above paragraph. The collected data may or may not be calibrated to a specific RCS value at a specific location; if the data have been calibrated, the calibration must be removed before applying this antenna correction. The reference antenna gain is calculated in the same way as for the true pixel location, except that the reference point is used instead of the true pixel location, giving GrefTx and GrefRx.
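The dot-product angle calculation and the pattern lookup of Steps 9-10 can be sketched as follows, assuming the pattern table has already been resampled in frequency to the phase-history frequency points as described above; the function names, the 2-D geometry, and the dB convention are illustrative assumptions:

```python
import numpy as np

def angle_to_target(radar_positions, boresight, target_point):
    """Angle between the antenna boresight vector and the vector from
    each radar measurement position to the target, via a dot product."""
    pos = np.asarray(radar_positions, dtype=float)
    to_tgt = np.asarray(target_point, dtype=float) - pos
    to_tgt /= np.linalg.norm(to_tgt, axis=1, keepdims=True)
    b = np.asarray(boresight, dtype=float)
    b = b / np.linalg.norm(b)
    return np.arccos(np.clip(to_tgt @ b, -1.0, 1.0))   # radians, per pulse

def interp_gain(pattern_db, pattern_angles, angles):
    """Interpolate a measured antenna pattern (n_angles x n_freqs,
    frequency axis already matched to the phase history) at each
    per-pulse angle.  Returns an (n_pulses, n_freqs) gain matrix,
    e.g. GTx or GRx."""
    n_freqs = pattern_db.shape[1]
    gain = np.empty((len(angles), n_freqs))
    for k in range(n_freqs):          # 1-D linear interp per frequency column
        gain[:, k] = np.interp(angles, pattern_angles, pattern_db[:, k])
    return gain
```

The same interp_gain call with the angle to the reference point instead of the true pixel location yields GrefTx and GrefRx for removing any prior calibration.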
Phase corrections are calculated based on a suitable image formation algorithm known in the art (Step 10). For an example, see Doerry, “Wavefront Curvature Limitations and Compensation to Polar Format Processing for Synthetic Aperture Radar Images”, page 12, equations 114-124. The phase corrections are expressed as Phasefac.
Range loss corrections, antenna beam pattern corrections, and phase corrections are interpolated in the wave number domain according to the image formation method (Step 11). The corrections are calculated in the phase history domain based on the radar position as data are collected. The processes used in image formation are applied to convert the correction data to the same state as the reverse-transformed image. Specifically, for use with a polar format algorithm, this process is a data resampling in the slow-time dimension. In the present implementation, correction data are created on resampled grid coordinates in the fast-time dimension of the phase history. This is accomplished by resampling the antenna pattern data in the frequency dimension to correspond to the frequency points used in polar format, providing an interpolated phase history X″corr:
The interpolated corrections from Step 11 are applied to the phase history of the single-pixel image I″uc (Step 12). Step 12 is a multiplication of each element of the phase history X″v with the corresponding frequency and angle element of the interpolated phase history X″corr of the single pixel:
X‴v = X″v ∘ X″corr.
The interpolated wave number domain data are converted back into an image (Step 13). Step 13 is a forward transformation, reversing Step 6; it creates a single-pixel image with a corrected amplitude:
I″c = F{X‴v}.
The pixel value in the uncorrected image is replaced with the corresponding pixel value, I″c(x,y), from the image of Step 13. Proceeding to the next pixel in the uncorrected image, Steps 6-14 are repeated until all pixels are corrected (Step 14).
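The per-pixel loop of Steps 6-14 can be sketched end-to-end as follows, again using a 2-D FFT pair as a stand-in for the invertible transform and leaving the construction of each pixel's correction matrix X″corr (range loss, antenna pattern, and phase) as a supplied function; all names are illustrative:

```python
import numpy as np

def fwd(X):
    """F: phase history -> image (FFT stand-in for polar format)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(X)))

def rev(I):
    """F^-1: image -> phase history."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(I)))

def correct_image(Iuc, correction_for_pixel):
    """Steps 6-14: isolate each pixel, correct its phase history, and
    rebuild the corrected image pixel by pixel.

    correction_for_pixel(x, y) must return the interpolated correction
    matrix X''corr for that pixel, same shape as the phase history.
    """
    Ic = np.empty_like(Iuc)
    ny, nx = Iuc.shape
    for y in range(ny):
        for x in range(nx):
            I2 = np.zeros_like(Iuc)
            I2[y, x] = Iuc[y, x]                  # single-pixel image I''uc
            X2 = rev(I2)                          # phase history X''v (Step 6)
            X3 = X2 * correction_for_pixel(x, y)  # Hadamard product (Step 12)
            Ic[y, x] = fwd(X3)[y, x]              # corrected pixel I''c(x,y)
    return Ic
```

With an all-ones correction the loop reproduces the input image exactly, which is a useful sanity check before supplying real corrections.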
In an alternate embodiment, corrections are made just to the antenna beam pattern, which is particularly useful for long range SAR systems where the range differences between near and far edges of the SAR image do not have a large variance in RCS due to relative change in distance.
A rail SAR system was computer simulated to generate SAR data from a set of objects to test the precision and accuracy of the method of the present invention.
Rail SAR Computer Simulations
A test case of five 1.5-inch diameter metal spheres was created using V-Lox electromagnetic software (IERUS Technologies, Inc., Huntsville, Alabama, www.ierustech.com). V-Lox is a computational electromagnetics prediction software product based on the method of moments that leverages advanced matrix compression and GPU acceleration to output high-quality solutions quickly. The sphere targets were positioned at a near corner 16, near center, center 15, far center, and far corner 17 of a 5 ft-by-5 ft area centered at a point 10 ft from the radar (center location 15 in
The computer simulations and SAR measurements show that the radar antenna beam pattern correction method of this invention improves the accuracy of measured RCS values by bringing the measured RCS values closer to true RCS values.
The computing device 30 additionally includes a data store 34 that is accessible by the processor 31 by way of the system bus 33. The data store 34 may include executable instructions, operating parameters, etc. The computing device 30 also includes an input interface 35 that allows external devices to communicate with the computing device 30. For instance, the input interface 35 may be used to receive instructions from an external computer device, from a user, etc. The computing device 30 also includes an output interface 36 that interfaces the computing device 30 with one or more external devices. For example, the computing device 30 may display text, images, etc. by way of the output interface 36.
Additionally, while illustrated as a single system, it is to be understood that the computing device 30 may be a distributed system. Thus, for example, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 30.
The foregoing description illustrates and describes the disclosure. Additionally, the disclosure shows and describes only the preferred embodiments, but it is to be understood that the preferred embodiments are capable of being formed in various other combinations, modifications, and environments and are capable of changes or modifications within the scope of the invention concepts as expressed herein, commensurate with the above teachings and/or the skill or knowledge of the relevant art. The embodiments described hereinabove are further intended to explain the best modes known by applicant and to enable others skilled in the art to utilize the disclosure in such, or other, embodiments and with the various modifications required by the particular applications or uses thereof. Accordingly, the description is not intended to limit the invention to the form disclosed herein. Also, it is intended that the appended claims be construed to include alternative embodiments. It will be further understood that various changes in the details, materials, and arrangements of the parts which have been described and illustrated above in order to explain the nature of this invention may be made by those skilled in the art without departing from the principle and scope of the invention as recited in the following claims.
Number | Date | Country | |
---|---|---|---|
20230135348 A1 | May 2023 | US |