The present specification generally relates to systems for and methods of increasing location accuracy (e.g., geographic location accuracy). More particularly, the present specification relates to increasing location and/or targeting accuracy using an image product.
It is desirable to increase location and/or targeting accuracy in a variety of systems. An example of a system that could benefit from increased location and/or targeting accuracy is a system that uses a synthetic aperture radar or other battlefield image products.
Military planners and campaigners heretofore have not achieved the full capability of airborne synthetic aperture radar (SAR) implemented on unmanned aerial vehicles (UAVs). The large advantages of UAV SAR systems in a tactical environment are their relatively low cost, flexibility of deployment, useful resolution (˜0.25 m), and all-weather, day/night imaging capability. A significant disadvantage of UAV SAR systems is the lower geographic location accuracy of targets within their field of view.
One measure of munition capability is the circular error probable (CEP), the radius about the mean impact location within which the munition lands 50% of the time. R95 is a factor proportional to CEP and represents the radius within which the munition lands 95% of the time. These are measures of munition repeatability. The kill radius (Rk) sets the combined munition/target area within which the designated target is destroyed if the munition lands there. For hard targets (tanks, bunkers, etc.), Rk is a strong function of position on the target. Target ‘soft spots’ have larger Rk values, while missing the target aim point by even a small amount leads to a zero kill radius (Rk=0).
For a soft target (unarmored vehicle, standard building, antenna, etc), Rk is a very weak function of position.
The plot of curve 1 is largely linear, with Rt/R95 ˜ Rk/R95 − 0.83 for Rk/R95 > 1.2. Since soft targets are usually destroyed by energetic shrapnel whose energy incident on a unit area scales as ˜1/R^2, the minimum munition weight scales as ˜Rk^2.
Munitions designed for attacking hard targets generally penetrate the target to some depth before exploding. To accomplish this, the munition must land within some distance (Rs) of the soft spot. Increasing the explosive power does not appreciably increase Rs. In this case, the maximum allowed targeting error (Rt) relative to R95 is still given by curve 1 in
If targeting capability limits the ability to achieve high kill probabilities, multiple munitions per target can be launched. However, launching multiple munitions per target is costly. The targeting probability can be taken as uniform within a circle of radius Rt centered on the aim point and the size of the target soft spot is Rs. Plots of curves 3, 4, 5 and 6 in
Submeter accuracy today is generally achieved by active man-in-the-loop continuous adjustment (“joy sticking”) of the munition trajectory. This requires personnel and aircraft to remain in the proximity of the munition drop off point. Achieving high Pk's (kill probabilities) in this manner places high value assets (aircraft and personnel) at greater risk than attacking from a larger standoff distance and immediately leaving the munition drop off point. A further drawback of joy sticking for accuracy is that a single munition requires the full attention of the bombardier for the duration of the drop (˜1 minute). This precludes attacking multiple targets in parallel with the same high precision.
Thus, there is a need to increase targeting and/or location accuracy. There is also a need to increase targeting accuracy for a UAV. There is also a need for a UAV SAR with greater geographic accuracy. There is further a need for precision target geographic location utilizing a lower accuracy immediate image product. There is a further need for improving aircraft navigational fix using an image product for location determination. There is further a need for a system and method of increasing location or targeting accuracy using a remapping technique.
An exemplary embodiment relates to a method of mapping a target region image to a reference image. The method includes steps of acquiring the target region image and acquiring the reference image overlapping the target region image. The method further includes determining a number of common subregions in an intersection of the reference image and the target region image, determining offsets between the common subregions, computing a distortion map of the target region image over the intersection, and remapping the target region image to match the reference image.
Another exemplary embodiment relates to a method for improving target accuracy of a GPS guided munition. The method includes acquiring a firing station image and precisely locating firing emplacement locations using a remapped image. The method further includes acquiring GPS locations of the firing emplacements, and using the difference between the firing emplacement locations and the GPS locations to correct targeting.
Yet another exemplary embodiment relates to a method of improving a navigational position fix for an aerial platform. The method includes acquiring a ground image using sensors embedded on the aerial platform. The method also includes recording navigational telemetry and other flight information including a GPS signal, locating the ground image using a remapped image, and locating the aerial platform using the ground image. The method also includes correcting the in situ navigational data of the aerial platform using information gained in the locating steps.
Still another exemplary embodiment relates to a method of precisely targeting a munition. The method includes acquiring a firing station image, precisely locating firing emplacement locations using a remapped image technique, transmitting the firing emplacement locations to firing control, acquiring a target image, and locating targets within the target image using the remapped image technique. The method also includes transmitting the target locations to firing control and releasing munitions utilizing the emplacement locations and the target locations.
Exemplary embodiments will be described hereafter with reference to the accompanying drawings, wherein like numerals denote like elements, and:
By improving targeting and/or location accuracy, the method and system disclosed herein enable the advantageous tactics of multiple targeting at greater standoff distances with less time spent in high-danger zones in one embodiment. Although described below with respect to UAV and targeting applications, the methods and systems described herein can be utilized in a variety of targeting, location, navigational, and control applications without departing from the scope of the invention. The methods disclosed herein can be implemented using a computer system or other electronic hardware implementing the steps disclosed herein as routines, instructions, or other computer commands.
Taken by themselves, typical SAR errors can be ˜5 m, which is generally not useful for most hard targets (Rs ˜ 1 m) and, for soft targets, requires larger munitions with associated collateral damage. A remapping technique for target location that significantly reduces these geographic location errors is disclosed herein in one embodiment. In one embodiment, the consequences of such a technique include increasing the effective range of inertially guided munitions and improving the performance of seeker munitions by narrowing their required field of regard, thus reducing terminal target misidentification/confusion.
While errors in UAV SAR imagery are large for many targeting purposes, the ability to transmit timely, all condition, engagement theatre imagery provides reason to use UAV SAR systems. To usefully improve the accuracy of SAR imagery, subregions of the SAR image are correlated with a reference image of known accuracy in one embodiment. The SAR image is shifted and possibly undistorted to match the reference image in one embodiment. The resulting remapped SAR image shares the same accuracy as the reference image to within the limits of remapping in one embodiment.
With reference to
Acquire Target Region Image
Method 100 includes an acquire target region step 301. At step 301, the SAR image product (target image) of the region where there are potential targets is acquired from a UAV, satellite, or other platform. The image can be received and stored in a computer system. The target image may contain known geometric distortions which are removed either before or as part of the subsequent processing in one embodiment.
In the case of spotlight SAR imagery, this distortion includes keystoning and remapping from the native slant plane to the ground plane (local tangent plane to the earth). For satellite-based SAR maps with known antenna pointing errors, these errors would be corrected in this step in one embodiment, while for optical imagery, standard perspective corrections are applied in one embodiment. For other image acquisition technologies, the appropriate correction can be applied. After transforming to the ground plane (local tangent to the earth surface), an image is provided which is described by a generalized amplitude, A(ie,in), defined over a usually polygonal region (support region), with ie, in the east and north direction pixel indices representing points dpix (meters) apart and running over the ranges ie=1:ne, in=1:nn, representing an image with ne*nn total pixels according to one embodiment.
The meaning of the generalized amplitude depends on the acquisition technology, being the return amplitude (presumably range corrected) for a SAR system, a grayscale or R/G/B color intensity in an optical image, a grayscale intensity in an infrared system, etc. The target image possibly contains target objects (targets) that are intended to be attacked and requires prompt evaluation and processing (typically on the order of minutes) to be of major tactical utility. Thus, for targeted buildings or structures (static targets), the friend or foe presence dictates the time line for offensive action, while vehicles, airplanes, armor, and IED placement actions all represent transient targets that can possibly shift or move before action can unfold. In addition to the generalized image amplitude (referred to simply as amplitude when no confusion can arise as to its meaning) and the support region, a nominal geographic location (latitude, longitude) is also conveyed by the imaging platform (UAV, satellite, etc.) in one embodiment.
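As a concrete illustration, the ground-plane image product described above can be held in a simple container; this is a hedged sketch only, and the class and field names are illustrative rather than taken from the specification:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class GroundImage:
    """Ground-plane image product: generalized amplitude plus metadata."""
    A: np.ndarray   # generalized amplitude A(ie, in), shape (ne, nn)
    dpix: float     # pixel spacing in meters
    lat: float      # nominal geographic latitude conveyed by the platform
    lon: float      # nominal geographic longitude conveyed by the platform


# Example: a 512 x 512 pixel target image at the ˜0.25 m SAR resolution.
img = GroundImage(A=np.zeros((512, 512)), dpix=0.25, lat=0.0, lon=0.0)
```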
Depending on the nature of the reference image, the pixel amplitudes, A, may be rescaled to improve the quality of the subsequent image matching steps. With reference to
With reference to
Acquire Reference Image
In step 302 of method 100, the reference image is acquired. The reference image is generally from a pre-stored image atlas encompassing the theatre of action and is quickly fetched based on the target image's nominal geographic location and extent. The reference image can be received and stored in a computer system. Again, the reference image consists of a generalized amplitude A′(ie′=1:ne′, in′=1:nn′), with support over a well defined region, with pixel size dpix′, and a geographic location that is preferably the same as the target geographic location but could be different. The reference image preferably significantly overlaps the target image over the region of interest (ROI) and preferably completely encompasses it. The reference image is preferably in the ground plane, but if there are well defined geometrical distortions (vide supra) from a ground plane map, the distortions can be taken out as part of the acquisition process in one embodiment. A goal is targeting accuracy in the pixel (dpix) to sub-pixel (<dpix) range, so the pixel size of this image (dpix′) need be no smaller than dpix in one embodiment. This means either the image atlas can be stored in a greatly reduced data volume or, before transmitting to the processor, the reference image can be down sampled (typically using a tapered local average with interpolation) to the required resolution in one embodiment. This provides an enormous savings in data transmission time. When the pixel size in the image atlas is dpix′>dpix (typical of commercial satellite imagery with dpix′=0.41 m, ref 1), no such down sampling is required. In this case, the reference image could be interpolated to the dpix level or the target image down sampled to the dpix′ level. Generally, when dpix<dpix′<2-4*dpix=dupix, the reference image is interpolated to dpix-sized pixels using a quadratic spline over each dpix′ pixel that preserves the total amplitude within each pixel and is continuous at the edges in one embodiment.
Other interpolation schemes are possible and can be carried out either at a computer processor collocated with the image atlas site or with the local processor as part of step 302. When dupix<=dpix′, the target image is down sampled and interpolated to match the reference image pixel size dpix′.
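A minimal sketch of the down-sampling step, using a plain block average as a simple stand-in for the tapered local average with interpolation described above (the function name is illustrative):

```python
import numpy as np


def downsample(A, factor):
    """Down-sample amplitude image A by an integer factor using a local
    block average (a simple stand-in for the tapered local average with
    interpolation described in the text)."""
    ne, nn = A.shape
    ne2, nn2 = ne // factor, nn // factor
    A = A[:ne2 * factor, :nn2 * factor]   # trim to a whole number of blocks
    return A.reshape(ne2, factor, nn2, factor).mean(axis=(1, 3))
```

For example, down-sampling a 0.25 m target image by a factor of 2 yields 0.5 m pixels while preserving the mean amplitude in each block.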
As an example,
At step 303 of
(dx,dy)(ie′,in′) = (tx + dx/die′*(ie′−<ie′>) + dx/din′*(in′−<in′>), ty + dy/die′*(ie′−<ie′>) + dy/din′*(in′−<in′>)) (eq 1)
where tx, ty are the mean (translational) offsets, dx/die′, dx/din′, dy/die′, and dy/din′ are the constant rates of change of the offsets with pixel index, and <ie′>, <in′> denote the mean pixel indices.
From the residuals (differences between the measured offsets and eq 1), an updated estimate for nu is obtained which can be used in choosing subregion sizes, provided the subregions are offset according to eq 1 in one embodiment. Proceeding in this manner, it is clear that the value of nu and the size of the subregions evolve over the course of the calculation in one embodiment.
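The linear model of eq 1 can be fit to the measured subregion offsets by least squares. A minimal numpy sketch under the same notation (the function name is illustrative):

```python
import numpy as np


def fit_linear_offsets(ie, in_, dx, dy):
    """Least-squares fit of measured offsets (dx, dy) at subregion centers
    (ie, in_) to the linear model of eq 1: a translation (tx, ty) plus
    constant gradients taken about the mean pixel indices."""
    X = np.column_stack([np.ones_like(ie), ie - ie.mean(), in_ - in_.mean()])
    cx, *_ = np.linalg.lstsq(X, dx, rcond=None)   # tx, dx/die', dx/din'
    cy, *_ = np.linalg.lstsq(X, dy, rcond=None)   # ty, dy/die', dy/din'
    return cx, cy
```

The residuals dx − X @ cx and dy − X @ cy are then available for the updated estimate of nu described above.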
The subregions need not be square and aligned with the east-north axes; this is mainly for illustration. More typically, the situation illustrated in
Determine Offsets Between Subregions
In step 304 (
R^2(dx,dy) = <(A(x+xc,y+yc)−<A>)*(A′(x+xc+sx+dx,y+yc+sy+dy)−<A′>)>^2 / (σ^2_A σ^2_A′) (eq 2)
The oversized (reference) image is shifted to maximize the number of offsets computed in one embodiment. Eq 2 could also be implemented with the roles of A and A′ reversed in one embodiment.
Eq 2 involves a number of discrete convolutions which can be accomplished using fast Fourier transforms (FFT's). At any given stage in the calculation (determined by the confidence in estimated offsets sx, sy and the corresponding size of subregion), this portion of the computation can be parallelized in one embodiment. In
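Eq 2 can be evaluated over all shifts at once with FFTs, as noted above. The sketch below assumes circular shifts of equal-size subregions (a production version would zero-pad and restrict to the valid shift range; the function name is illustrative). Note that after removing the means, the normalization <ab>^2/(σ^2_A σ^2_A′) reduces to (Σab)^2/(Σa^2 Σb^2):

```python
import numpy as np


def r2_map(A, Ap):
    """R^2(dx, dy) of eq 2 between a target subregion A and an equal-size
    reference subregion Ap, for all circular shifts at once, via FFTs."""
    a = A - A.mean()
    b = Ap - Ap.mean()
    # Cross-correlation c[k] = sum_x a[x] * b[x+k] computed in the Fourier domain.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    return corr**2 / (np.sum(a**2) * np.sum(b**2))
```

The peak of the returned map gives the subregion offset; R^2 = 1 indicates a perfect match.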
After determining R^2 as a function of shift (dx, dy) (
This step may involve looping back to step 303 to determine new subregions based on the shifts determined up to that point in the calculation.
At the end of step 304, a list of subregion offsets or distortions (
Compute Distortion Map
At a step 305, all of the previously determined distortions are gathered and synthesized into a distortion map (e.g., in a computer system). Initially this amounts to assigning a weight, w, to each subregion offset (dxs, dys). Weights could be based on peak R^2 values. More significantly, whatever R^2-based weighting is chosen, a different weight can be assigned to the projection of (dxs, dys) along each of the two directions of principal curvature. A typical weighting function using the principal curvature C (units of 1/pixels) is:
w ˜ −C*(1+C) for −C < 1; 0 for −C >= 1 (eq 3)
which produces a non-negative weight since C is always non-positive at a maximum point. If a section of straight highway or railroad runs through a subimage, this formula minimizes the contribution of directions where there is little discrimination (parallel to the highway) and maximizes the contribution of directions with higher discrimination (perpendicular to the highway).
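The weighting of eq 3 can be written directly (a trivial sketch; the function name is illustrative):

```python
def curvature_weight(C):
    """Weight of eq 3 from the principal curvature C (units of 1/pixels).
    C is non-positive at a correlation maximum, so w ~ -C*(1+C) is
    non-negative for -C < 1; for -C >= 1 the weight is set to 0."""
    return -C * (1 + C) if -C < 1 else 0.0
```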
Another method for assessing quality is to compute further subregion offsets for additionally defined subregions that are displaced from a given subregion by amounts typically in the range of 1 or 2 pixels. These substantially overlapping regions should have substantially the same computed offset. The variance, σ^2, of this offset from the mean is then a metric for the weight (something like w ˜ 1/(σ^2 + b)).
So far, offsets and corresponding confidences (expressed as weights) have been computed at a number of discrete points in the intersection. For the subsequent remapping step, this map is extended to cover all points. A number of approaches are applicable. The simplest is a linear fit such as that in eq 1, with the residuals interpolated from their Delaunay triangulation in one embodiment. For points outside the Delaunay mesh, only the linear fit is used. Similarly, instead of a linear fit, a higher order polynomial fit could be used in combination with a Delaunay mesh. Another approach directly uses an over-determined, 2-dimensional polynomial fit to the data in one embodiment. As an additional exemplary refinement, points which are more than k (typically ˜3) standard deviations away from the fit are discarded (weights set to 0) and the process repeated (possibly more than twice) in one embodiment. A further refinement fits successively higher order polynomials until there is no longer a significant reduction in the variance of the residuals in one embodiment.
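The over-determined 2-D polynomial fit with k-sigma outlier rejection described above can be sketched in numpy; the names and the fixed iteration count are illustrative assumptions:

```python
import numpy as np


def fit_distortion(pts, vals, order=2, k=3.0, iters=2):
    """Fit offset samples vals at positions pts (N x 2) with a 2-D
    polynomial of the given order, discarding points more than k standard
    deviations from the fit and refitting, per the refinement above."""
    x, y = pts[:, 0], pts[:, 1]
    # Monomial columns x^i * y^j with i + j <= order.
    X = np.column_stack([x**i * y**j
                         for i in range(order + 1)
                         for j in range(order + 1 - i)])
    keep = np.ones(len(vals), dtype=bool)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(X[keep], vals[keep], rcond=None)
        r = vals - X @ coef
        sigma = r[keep].std()
        if sigma == 0:
            break
        keep = np.abs(r) <= k * sigma   # drop k-sigma outliers, then refit
    return coef
```

A single gross outlier in the offset data is rejected on the first pass, after which the remaining points determine the polynomial.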
In fitting SAR data in regions with significant orography, layover effects can be considered. Layover effects are generally not removed as part of step 301 in one embodiment. In deeply undulating terrain, the layover error can show up as rapidly changing offsets as a function of position that are not well modeled by a low order polynomial. If the reference image atlas can also provide simple orographic data, such as the standard deviation or 95th percentile of the land slope within the reference image, then from the data acquisition geometry that accompanies SAR imagery (block 301) the rapidity of changes in subregion distortions due to the layover effect can be estimated. With such an estimate, a determination can be made of whether this aspect of the layover effect is negligible or whether the degree of fitting needs to be increased in one embodiment. Note that in the process of correction, automatic removal of orographic layover effects can be achieved in one embodiment.
Remap Target Region
Having determined what distortion or error exists in the target image, at step 306 the target image can be remapped by shifting target pixels in the original image according to the determined distortion and appropriately interpolating. More directly, though, any targets to be engaged can at this point be identified by a human (or possibly automatically), designated for attack, and the remapped positions transmitted to the appropriate fire control authority.
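A minimal sketch of the remapping itself, sampling the target image at pixel positions shifted by the distortion map with bilinear interpolation (the function name is illustrative; edges are clamped):

```python
import numpy as np


def remap(A, dx_map, dy_map):
    """Remap image A by sampling it at (ie + dx_map, in + dy_map) with
    bilinear interpolation, i.e. shifting pixels according to the
    computed distortion map."""
    ne, nn = A.shape
    ie, in_ = np.meshgrid(np.arange(ne), np.arange(nn), indexing='ij')
    # Source coordinates for each output pixel, clamped to the image.
    x = np.clip(ie + dx_map, 0.0, ne - 1.0)
    y = np.clip(in_ + dy_map, 0.0, nn - 1.0)
    x0 = np.clip(np.floor(x).astype(int), 0, ne - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, nn - 2)
    fx, fy = x - x0, y - y0
    return (A[x0, y0] * (1 - fx) * (1 - fy)
            + A[x0 + 1, y0] * fx * (1 - fy)
            + A[x0, y0 + 1] * (1 - fx) * fy
            + A[x0 + 1, y0 + 1] * fx * fy)
```

With a zero distortion map the image is returned unchanged; a constant unit shift reproduces the neighboring row or column, as expected.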
Method 100 can be modified. In an alternative embodiment, method 100 can follow the same steps as shown in
Under normal circumstances, global positioning systems (GPS) have accuracies in the range of a few cm (sophisticated land surveying systems using the maximum number of satellites) to 20 m (ref 2). Targeting of GPS weapons can be greatly improved by employing method 100. In one embodiment, if the platform (a UAV SAR, for example) en route to a targeting region images one or several friendly firing stations (e.g., missile batteries) or otherwise acquires an image of the firing station, then by correlating the resulting SAR image with a satellite optical image, the precise launching locations of said missile batteries are determined. This information is saved on the platform and/or transmitted to firing control and used to correct any errors in the initial launch position of GPS or inertially guided munitions.
With reference to
Against well matched adversaries, degradation of GPS capability is expected due to enemy anti-satellite activity and jamming/deception measures. With ineffective or unknown GPS disposition, accurate targeting of autonomous munition systems (especially inertial guidance controlled) becomes paramount. Under these circumstances, the double look method 1600 of
Either the reference image, the target image, or both could be from satellite based SAR systems (resolution permitting). Furthermore, the reference image could consist of several discrete images that are stitched together upon use.
A satellite or other high altitude (stratospheric) platform could also acquire target images containing moving targets (cruise missiles, aircraft, helicopters) in one embodiment. These would typically be optical or infrared images. Using method 100 of
Rectangular pixels can be used instead of square pixels in one embodiment. In another embodiment, the reference image can be stitched together. The reference image can also be reduced to range×range coordinates.
The calculations discussed herein can be implemented on dedicated electronic hardware, a processor or computer.
In an alternative embodiment, instead of acting directly on the target coordinates transmitted to fire control, fire control can add an additional offset to account for known errors in the reference image atlas. If the reference atlas is commercial satellite imagery (unclassified and widely available), it may not provide sufficient targeting accuracy for the intended munition/target combination. In that case, a second conversion table that corrects for known errors in the reference image atlas would supply the necessary correction. This partitioning of accuracy may be necessary to efficiently meet mandated information security protocols.
It is understood that while the detailed drawings, specific examples, equations, steps, and particular values given provide exemplary embodiments of the present invention, the exemplary embodiments are for the purpose of illustration only. The method and apparatus of the invention is not limited to the precise details and conditions disclosed. For example, although specific types of images and mathematical operations are mentioned, other image data and algorithms can be utilized. Various changes may be made to the details disclosed without departing from the spirit of the invention which is defined by the following claim.
The present application is a continuation of U.S. patent application Ser. No. 13/443,684, filed on Apr. 10, 2012, now U.S. Pat. No. 9,074,848, which claims the benefit of and priority to U.S. Provisional Application Ser. No. 61/517,141, filed Apr. 13, 2011, both of which are incorporated by reference in their entireties and for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
3680086 | Valstar | Jul 1972 | A |
3737120 | Green | Jun 1973 | A |
3740747 | Hance et al. | Jun 1973 | A |
3808596 | Kazel | Apr 1974 | A |
4133004 | Fitts | Jan 1979 | A |
4162775 | Voles | Jul 1979 | A |
4476494 | Tugaye | Oct 1984 | A |
4490719 | Botwin et al. | Dec 1984 | A |
4771287 | Mims | Sep 1988 | A |
4975704 | Gabriel et al. | Dec 1990 | A |
4993662 | Barnes et al. | Feb 1991 | A |
5018218 | Peregrim et al. | May 1991 | A |
5213281 | McWilliams et al. | May 1993 | A |
5289993 | McWilliams et al. | Mar 1994 | A |
5309522 | Dye | May 1994 | A |
5606627 | Kuo | Feb 1997 | A |
5626311 | Smith et al. | May 1997 | A |
5644386 | Jenkins et al. | Jul 1997 | A |
5647015 | Choate et al. | Jul 1997 | A |
5884219 | Curtwright et al. | Mar 1999 | A |
6031568 | Wakitani | Feb 2000 | A |
6654690 | Rahmes et al. | Nov 2003 | B2 |
6707464 | Ham et al. | Mar 2004 | B2 |
6898332 | Matsuhira | May 2005 | B2 |
7301568 | Smith | Nov 2007 | B2 |
7408629 | Qwarfort et al. | Aug 2008 | B2 |
7567694 | Lu et al. | Jul 2009 | B2 |
8345979 | Davis | Jan 2013 | B2 |
9074848 | Hunter, Jr. | Jul 2015 | B1 |
20040041999 | Hogan | Mar 2004 | A1 |
20050018904 | Davis | Jan 2005 | A1 |
20060072843 | Johnston | Apr 2006 | A1 |
20060098861 | See | May 2006 | A1 |
20060293854 | Chiou | Dec 2006 | A1 |
20080177427 | Marty et al. | Jul 2008 | A1 |
20080314234 | Boyd | Dec 2008 | A1 |
20100254612 | Oldroyd | Oct 2010 | A1 |
20110044543 | Nakamura | Feb 2011 | A1 |
Entry |
---|
The GPS System, Kowoma.de, Apr. 19, 2009, 2 pages. |
Number | Date | Country | |
---|---|---|---|
61517141 | Apr 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13443684 | Apr 2012 | US |
Child | 14791054 | US |