The present disclosure relates to the determination of a target location and the determination of misalignment between the orientation of a sensor and the orientation of the aircraft on which the sensor is arranged.
Passive geo-location of ground target emitters is performed by surveillance aircraft using Line-of-Sight (LOS) angle measurements provided by a sensor mounted on the aircraft. In order to completely specify the LOS from the sensor to the target, the LOS must be resolved into two angles, such as an azimuth angle and an elevation angle. For some sensors, such as a linear antenna array, only azimuth is determined. In this case, it is common to refer to the azimuth angle as Angle of Arrival (AOA) and to perform geo-location using Direction of Arrival (DOA), which is a transformation of AOA into a plane tangent to the Earth’s surface at some specified point within the area of operation. Geo-location performance improves when both azimuth and elevation are provided.
In order to use LOS angles for geo-location, the angles must be transformed from a coordinate frame attached to the sensor to a coordinate frame attached to the Earth. Many passive sensors do not contain a navigation system and therefore are not able to directly measure their orientation relative to the Earth. In this case, the transformation of LOS angles requires that a coordinate frame defined by the aircraft’s navigation system, such as the aircraft body frame, be used as an intermediate frame between the sensor and the Earth. The aircraft’s navigation system typically provides an accurate measurement of the orientation of the aircraft body relative to the Earth. However, the orientation of the sensor relative to the aircraft body may not be well known. Any errors in the assumed sensor mounting angles on the aircraft will produce errors in the transformed LOS angles used for geo-location. It is not difficult to envision a scenario where sensor misalignment angles of several degrees exist, which would result in significant error in the target location estimate.
Example embodiments of the techniques of the present disclosure provide for methods, apparatuses and computer executable mediums that implement operations providing for the simultaneous computation of sensor attitude and target location, using only line of sight angle measurements. According to example embodiments, data indicative of a line of sight to a target is obtained from a sensor mounted on an aircraft. Data indicative of an orientation of the aircraft relative to the Earth is obtained from a navigation system associated with the aircraft. An expression is generated that couples a first variable indicative of misalignment of an orientation of the sensor and the orientation of the aircraft and a second variable indicative of the location of the target relative to the Earth. The first variable and second variable are determined using the data indicative of the line of sight from the sensor to the target and the expression coupling the first variable and the second variable. The misalignment of the orientation of the sensor and the orientation of the aircraft is compensated for using the first variable. The location of the target relative to the Earth is determined using the second variable.
Provided for herein are techniques for simultaneous estimation of target location and sensor misalignment angles, as illustrated with reference to the accompanying figures.
Sensor 105 may be embodied as an antenna array, an optical sensor, an infrared sensor, or other types of sensors known to the skilled artisan. The attitude (orientation) of a sensor pod attached to an aircraft, such as sensor 105, is rarely known exactly. For example, it may be assumed that the sensor coordinate frame is aligned with the aircraft body forward-right-down frame, but the actual sensor yaw, pitch, and roll angles may differ from those of the aircraft by several degrees. Even if the sensor pod mounting angles are known exactly at some point, they may change slightly each time the pod is removed and reinstalled on the aircraft. These misalignments may degrade performance if the sensor provides Direction Finding (DF) angle measurements for geo-location. The typical result of these misalignments is not an increase in the size of the error ellipse, but an increase in miss distance and a lack of containment by the error ellipse.
Accordingly, when sensor 105 is used to determine an azimuthal angle φ to target 115, any misalignment angles α, β and γ between the assumed orientation of sensor 105 and its actual orientation on aircraft 110 will introduce errors into the transformed LOS angles and, in turn, into the estimated location of target 115.
According to related art techniques, the misalignment angles α, β and γ may be determined through test flights of aircraft 110 in which sensor 105 is used to detect a target 115 with a known location. If the location and orientation of aircraft 110 are known and the location of target 115 is known, then the misalignment angles α, β and γ may be determined. According to other related art techniques, optical techniques are used to align the orientation of sensor 105 with that of aircraft 110 while aircraft 110 is on the ground. Both of these techniques are costly in terms of the time and money required for the test flights and alignment processes. The techniques of the present disclosure may improve upon these related art techniques by eliminating the need to perform such test flights or on-ground alignment procedures.
According to the techniques of the present disclosure, the misalignment of a sensor orientation is calculated using the same expression as that used to determine the location of a target from the data acquired from the sensor. Using the example described above, the misalignment angles α, β and γ may be determined simultaneously with the location of target 115.
An example process for implementing the techniques of the present disclosure is illustrated in flowchart 300. In operation 305, data indicative of a line of sight to a target is obtained from a sensor mounted on an aircraft.
In operation 310, data indicative of the orientation of the aircraft relative to the Earth may be acquired from a navigation system associated with the aircraft. For example, the location and orientation of the aircraft may be well known according to a Global Positioning System (GPS) device and an inertial measurement unit (IMU) associated with the aircraft, or another system, such as a radar tracking system that monitors the location of the aircraft. Accordingly, this location and orientation data may be obtained from such GPS, IMU, and/or aircraft tracking systems. Aircraft location and orientation data may be obtained that corresponds to the line of sight data obtained in operation 305. For example, each time the sensor acquires line of sight data to the target in operation 305, corresponding aircraft orientation and location data may be acquired in operation 310.
In operation 315, an expression is generated in which a first variable indicative of misalignment of an orientation of the sensor relative to the orientation of the aircraft is coupled to a second variable indicative of the location of the target relative to the Earth. According to example embodiments, the expression generated in operation 315 may be embodied as an expression generated from transforming a location of the target relative to a coordinate system centered on the aircraft to an Earth-centered and/or Earth-fixed coordinate system. According to the even more specific example embodiments described below, the expression generated in operation 315 may be embodied as an objective function minimized using numerical methods, such as an Iterated Least-Squares (ILS) method or a Kalman filter method. Additionally, while operation 315 recites a single first variable and a single second variable, it is understood by the skilled artisan that the expression generated in operation 315 may be embodied using multiple variables. For example, the expression generated in operation 315 may include multiple variables that are indicative of misalignment of the orientation of the sensor and the orientation of the aircraft, such as the misalignment variables α, β and γ described above, as well as multiple variables indicative of the location of the target relative to the Earth.
In operation 320, the first variable and the second variable are determined using the data indicative of the line of sight from the sensor to the target and the expression coupling the first variable and the second variable. As suggested in the discussion above with respect to operation 315, operation 320 may include determining a plurality of variables indicative of misalignment of the orientation of the sensor and the orientation of the aircraft and a plurality of variables indicative of the location of the target relative to the Earth. As also noted above, operation 320 may be carried out using an ILS method or a Kalman filter method.
In operation 325, the misalignment of the orientation of the sensor and the orientation of the aircraft is compensated for using the first variable. For example, the locations of subsequent targets may be determined using the results of operation 320 so that the process of solving for the misalignment of the orientation of the sensor and the orientation of the aircraft does not need to be repeated.
Finally, in operation 330, the location of the target is determined relative to the Earth. For example, the value of the second variable and/or other variables determined in operation 320 may be used to determine the location of the target relative to the Earth. According to specific example embodiments, operation 330 may determine the location of the target according to WGS84 longitude and geodetic latitude values.
A specific example embodiment of the techniques of the present disclosure will now be described in detail, beginning with the World Geodetic System 1984 (WGS84) coordinate system within which the example embodiment operates.
WGS84 is an Earth-centered, Earth-fixed terrestrial reference system and geodetic datum. WGS84 is based on a consistent set of constants and model parameters that describe the Earth’s size, shape, and gravity and geomagnetic fields. WGS84 is the standard U.S. Department of Defense definition of a global reference system for geospatial information and is the reference system for GPS. It is compatible with the International Terrestrial Reference System (ITRS). The current realization, WGS84 (G1762), follows the criteria outlined in International Earth Rotation Service (IERS) Technical Note 21 (TN 21). The reference values for the WGS84 system, as illustrated by coordinate system 400, are as follows.
WGS84 identifies four defining parameters. These are the semi-major axis of the WGS84 ellipsoid, the flattening factor of the Earth, the nominal mean angular velocity of the Earth, and the geocentric gravitational constant, as specified below in Table 1.
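For reference, the standard published WGS84 values of these four defining parameters (the original Table 1 is not reproduced here, but these values are well established) are: semi-major axis a = 6378137.0 m; flattening factor of the Earth 1/f = 298.257223563; nominal mean angular velocity of the Earth ω = 7.292115 × 10⁻⁵ rad/s; and geocentric gravitational constant GM = 3.986004418 × 10¹⁴ m³/s².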
According to the present example embodiment, it is assumed that both azimuth and elevation angles are available for geo-location of the target, and the sensor misalignment is defined by a 3-dimensional Euler rotation sequence. Geo-location is performed by simultaneously estimating the target’s WGS84 longitude and geodetic latitude and the 3 sensor misalignment angles. Maximum A Posteriori (MAP) estimation may be used so that limits on the misalignment angle magnitudes may be incorporated. In other words, it may be assumed that there is an upper limit to the misalignment of the sensor, which places limits on the possible values to be calculated during the simultaneous calculation of the target’s WGS84 longitude and geodetic latitude and the 3 sensor misalignment angles. For example, it may be assumed that when the sensor is physically attached to the aircraft, its misalignment will be under a particular threshold value.
In addition to the WGS84 coordinate system illustrated above, the present example embodiment makes use of several additional coordinate frames: an Earth-Centered Earth-Fixed (ECEF) frame E, a local North-East-Down (NED) frame L, an aircraft body frame B, an assumed sensor frame A, and an actual sensor frame C.
As noted above, the ECEF frame E has its z axis through the North Pole, its x axis through the intersection of the equator and the Greenwich Meridian, and its y axis oriented to create a right handed coordinate system. The NED frame L has its x axis directed north along the local longitude line, its y axis directed east, and its z axis directed down along the local vertical. The aircraft body frame B has its x axis directed forward out of the nose of the aircraft, its y axis directed to the right from the pilot’s perspective, and its z axis directed down out of the bottom of the aircraft.
If the vector EP is the ECEF position for any object in frame E, then this vector is defined as follows in the WGS84 coordinate system:
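$$ {}^{E}P = \begin{bmatrix} (r_E + a)\cos\theta\cos\psi \\ (r_E + a)\cos\theta\sin\psi \\ \left(r_E\left(1 - \varepsilon^{2}\right) + a\right)\sin\theta \end{bmatrix} \tag{1} $$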
where θ is the object’s geodetic latitude, ψ is its longitude, a is its altitude, and ε, which equals 0.08181919, is the Earth’s eccentricity. The term rE used above is the Earth’s transverse radius of curvature, defined by:
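$$ r_E = \frac{r_{eq}}{\sqrt{1 - \varepsilon^{2}\sin^{2}\theta}} \tag{2} $$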
where req = 6378137 is the Earth’s equatorial radius in meters. If (θ, ψ, a) are known, then the components of EP may be determined simply by using (1) and (2). However, if the components of EP are known and (θ, ψ, a) must be determined, then (1) and (2) must be inverted. This can be done numerically or by using a closed-form solution.
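As a minimal sketch, the forward conversion of equations (1) and (2) may be implemented as follows (Python with NumPy; the function and variable names are illustrative, not from the disclosure):

```python
import numpy as np

# WGS84 constants as given in the text: equatorial radius (m) and eccentricity.
R_EQ = 6378137.0
ECC = 0.08181919

def transverse_radius(lat_rad):
    """Earth's transverse radius of curvature r_E at geodetic latitude theta, per equation (2)."""
    return R_EQ / np.sqrt(1.0 - (ECC * np.sin(lat_rad)) ** 2)

def geodetic_to_ecef(lat_rad, lon_rad, alt_m):
    """Geodetic (theta, psi, a) to ECEF position E_P, per equation (1)."""
    r_e = transverse_radius(lat_rad)
    return np.array([
        (r_e + alt_m) * np.cos(lat_rad) * np.cos(lon_rad),
        (r_e + alt_m) * np.cos(lat_rad) * np.sin(lon_rad),
        (r_e * (1.0 - ECC ** 2) + alt_m) * np.sin(lat_rad),
    ])
```

The inverse conversion (ECEF to geodetic) may then be performed numerically, for example by iterating on latitude, or by a closed-form method as noted above.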
The matrices associated with a rotation δ about the x, y, or z axis of a coordinate frame are:
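(Sign conventions for elementary rotation matrices vary; the following frame-rotation convention is assumed throughout the reconstructions in this section.)

$$ R(\delta, x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\delta & \sin\delta \\ 0 & -\sin\delta & \cos\delta \end{bmatrix}, \quad R(\delta, y) = \begin{bmatrix} \cos\delta & 0 & -\sin\delta \\ 0 & 1 & 0 \\ \sin\delta & 0 & \cos\delta \end{bmatrix}, \quad R(\delta, z) = \begin{bmatrix} \cos\delta & \sin\delta & 0 \\ -\sin\delta & \cos\delta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3-5} $$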
Accordingly, the relative orientation of frames E (the ECEF coordinate frame) and L (the NED coordinate frame) is defined in terms of aircraft longitude ψ and geodetic latitude θ using the following rotation matrix:
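(The standard NED-to-ECEF direction cosine matrix, whose columns are the north, east, and down unit vectors expressed in frame E, is reconstructed here.)

$$ T_{EL} = \begin{bmatrix} -\sin\theta\cos\psi & -\sin\psi & -\cos\theta\cos\psi \\ -\sin\theta\sin\psi & \cos\psi & -\cos\theta\sin\psi \\ \cos\theta & 0 & -\sin\theta \end{bmatrix} \tag{6} $$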
where the “TEL” notation is interpreted to mean “the transformation from frame L to frame E.” In other words, EP = TEL • LP; i.e., the rotation matrix TEL converts a point in the NED coordinate frame to a point in the ECEF coordinate frame. The relative orientation of any two coordinate frames may be found by multiplying the appropriate rotation matrices. For example, the relative orientation of frames E (the ECEF coordinate frame) and B (the aircraft body coordinate frame) is given by TEB = TEL • TLB. Furthermore, the inverse of any rotation matrix is given by its transpose. For example, TLE = [TEL]T.
The relative orientation of frames L (the local NED coordinate frame) and B (the aircraft body coordinate frame) is defined by the following rotation matrix:
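(A standard form, assuming aircraft Euler angles of yaw, pitch, and roll and the elementary-rotation convention of equations (3)-(5) as reconstructed above.)

$$ T_{LB} = \left[ R(\mathrm{roll}, x)\, R(\mathrm{pitch}, y)\, R(\mathrm{yaw}, z) \right]^{T} \tag{7} $$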
The relative orientation of frames B (the aircraft body coordinate frame) and A (the assumed sensor coordinate frame) is defined by the rotation matrix TBA, which is constructed using the assumed mounting angles of the sensor.
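A minimal sketch of these rotation constructions follows (Python; the sign conventions match the reconstructions above and are assumptions, not verbatim reproductions of the disclosure's equations):

```python
import numpy as np

def rot(delta, axis):
    """Elementary frame-rotation matrix about one axis (convention of eqs. (3)-(5) above)."""
    c, s = np.cos(delta), np.sin(delta)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    if axis == 'y':
        return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])  # axis == 'z'

def t_el(lon_rad, lat_rad):
    """NED-to-ECEF rotation T_EL; columns are the N, E, D axes in frame E."""
    sl, cl = np.sin(lat_rad), np.cos(lat_rad)
    sp, cp = np.sin(lon_rad), np.cos(lon_rad)
    return np.array([[-sl * cp, -sp, -cl * cp],
                     [-sl * sp,  cp, -cl * sp],
                     [ cl,      0.0, -sl     ]])

def t_lb(yaw, pitch, roll):
    """Body-to-NED rotation T_LB; NED-to-body is rot(roll,x) @ rot(pitch,y) @ rot(yaw,z)."""
    return (rot(roll, 'x') @ rot(pitch, 'y') @ rot(yaw, 'z')).T

# Frame composition as described in the text, e.g., T_EB = T_EL @ T_LB,
# and inverses by transposition, e.g., T_LE = t_el(lon, lat).T
```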
The sensor misalignment, i.e., the relative orientation of frames A (the assumed sensor coordinate frame) and C (the actual sensor coordinate frame) is represented by the rotation matrix:
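(The exact Euler sequence of equation (8) is not recoverable here; the following representative yaw-pitch-roll sequence, consistent with equation (70) below in which only the z-axis rotation α survives, is assumed.)

$$ T_{CA} = R(\gamma, x)\, R(\beta, y)\, R(\alpha, z) \tag{8} $$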
where, as noted above, α, β and γ are the sensor misalignment angles that the techniques of the present example embodiment will estimate in conjunction with location of a target sensed by the sensor.
As explained above, the sensor provides LOS angle measurements in the form of an azimuth angle φ and an elevation angle η. The unit LOS vector in frame C (the actual sensor coordinate frame) is constructed from these angles as follows:
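(Reconstructed with the sign conventions described in the next sentence.)

$$ {}^{C}u = \begin{bmatrix} \cos\eta\cos\varphi \\ \cos\eta\sin\varphi \\ -\sin\eta \end{bmatrix} \tag{9} $$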
This definition of the LOS angles is such that if frame C (the actual sensor coordinate frame) is aligned with frame B (the aircraft body coordinate frame), then a positive azimuth indicates that the target is to the right of the pilot, and a positive elevation indicates that the target is above the pilot. According to the techniques of the present example embodiment, a relationship between the LOS angles measured by the sensor and the ECEF coordinate frame is determined.
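Tying the frame chain together, the following is a sketch of the measurement prediction underlying the function h introduced below (Python; it builds on the sketches above, and the misalignment Euler sequence and zero target altitude are illustrative assumptions):

```python
import numpy as np

def predicted_angles(q, ac_geo, ac_ypr, T_BA):
    """Predicted azimuth/elevation in frame C for one aircraft state.

    q      : (target_lat, target_lon, alpha, beta, gamma)
    ac_geo : aircraft (lat, lon, alt); ac_ypr: aircraft (yaw, pitch, roll)
    T_BA   : assumed sensor mounting rotation (frame A to frame B)
    """
    tgt_lat, tgt_lon, alpha, beta, gamma = q
    # Relative position in ECEF; a target altitude of 0 is an illustrative assumption.
    p_e = geodetic_to_ecef(tgt_lat, tgt_lon, 0.0) - geodetic_to_ecef(*ac_geo)
    T_LE = t_el(ac_geo[1], ac_geo[0]).T        # ECEF -> NED
    T_BL = t_lb(*ac_ypr).T                     # NED -> body
    T_AB = T_BA.T                              # body -> assumed sensor
    T_CA = rot(gamma, 'x') @ rot(beta, 'y') @ rot(alpha, 'z')  # assumed eq. (8) sequence
    u_c = T_CA @ T_AB @ T_BL @ T_LE @ p_e
    u_c = u_c / np.linalg.norm(u_c)            # LOS unit vector in frame C, eq. (9)
    return np.arctan2(u_c[1], u_c[0]), -np.arcsin(u_c[2])  # azimuth, elevation
```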
With this background in place, the estimation problem may now be described. Specifically, what is unknown is the location of the target in the ECEF coordinate frame and the sensor misalignment relative to the orientation of the aircraft. Accordingly, if ψ and θ represent the target’s WGS84 or ECEF longitude and geodetic latitude, respectively, then the 5 × 1 parameter vector of the unknown values to be estimated is:
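(The element ordering follows the enumeration in the next sentence.)

$$ q = \begin{bmatrix} \theta & \psi & \alpha & \beta & \gamma \end{bmatrix}^{T} \tag{10} $$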
where α, β and γ are the sensor misalignment angles defined in equation (8), above. In other words, equation (10) defines a 5 × 1 vector of the values that will be simultaneously solved for using the techniques of the present disclosure: the WGS84 geodetic latitude θ of the target, the WGS84 longitude ψ of the target, and the three sensor misalignment values, α, β, and γ.
To summarize the calculations that follow, the five values to be estimated are the elements of the vector q shown in equation (10), above. The sensor measurements used to estimate these quantities are azimuth and elevation angles determined by the sensor and are the components of the vector z shown in equation (11), below. Equation (12), below, gives the mathematical relationship between sensor measurements and the quantities to be estimated through the vector function h, which is shown implicitly in the derivatives of equations (36)-(39).
More specifically, the problem to be solved is to determine the vector q that best fits the measurements z while accounting for the function h and the statistics of the measurement errors. The optimal value of q is the one that minimizes the objective function shown in equation (18), below. This is sometimes referred to as an “inverse problem” since the objective is to determine q as a function of z, and the given function h is z as a function of q.
The technique used to solve this problem in the following example is Iterated Least-Squares (ILS), which is described with reference to equations (29)-(31), below. ILS is a “batch processing” numerical method where all measurements are processed simultaneously to compute a single solution. The solution consists of an estimate q̂ of q (shown in equation (29)), and the estimation error covariance matrix P (shown in equation (31)). This covariance matrix gives the uncertainty in the estimate q̂.
To solve for the values in q of equation (10), n different sensor values will be used. According to the specific example embodiment, the n different sensor values are embodied as n pairs of azimuth/elevation measurements (φi, ηi) of sensor values for LOS to a target. The n pairs of sensor values may be stored in the following 2n × 1 vector:
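(One consistent interleaved ordering is shown; the original ordering of azimuth and elevation entries is not recoverable.)

$$ z = \begin{bmatrix} \varphi_{1} & \eta_{1} & \varphi_{2} & \eta_{2} & \cdots & \varphi_{n} & \eta_{n} \end{bmatrix}^{T} \tag{11} $$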
Let h(q): ℝ⁵ → ℝ²ⁿ be a function that gives the true values of these quantities. Then
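$$ z = h(q) + \epsilon \tag{12} $$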
where ε is a 2n × 1 vector of measurement errors. For purposes of the present example embodiment, it is assumed that the measurement errors are Gaussian, zero-mean, and uncorrelated. As a result:
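$$ \epsilon \sim \mathcal{N}(0, R) \tag{13} $$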
where R is a known 2n × 2n diagonal positive definite measurement error covariance matrix having the following form:
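$$ R = \operatorname{diag}\left(\sigma_{\varphi_{1}}^{2},\, \sigma_{\eta_{1}}^{2},\, \ldots,\, \sigma_{\varphi_{n}}^{2},\, \sigma_{\eta_{n}}^{2}\right) \tag{14} $$

(The ordering of the diagonal entries assumes the interleaved measurement ordering of equation (11), above.)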
According to the techniques of the present example embodiment, the unknown parameter vector q is not treated as a constant, but as a random variable with a known a priori distribution:
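$$ q \sim \mathcal{N}(u, Q) \tag{15} $$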
where Q is a diagonal positive definite matrix. Also according to the techniques of the present example embodiment, q and the measurement error vector ε are uncorrelated, so that
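$$ E\left[\left(q - u\right)\epsilon^{T}\right] = 0 \tag{16} $$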
where E[·] is the statistical expectation operator. As understood by the skilled artisan, the Maximum A Posteriori (MAP) estimate of q is given by:
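$$ \hat{q} = \underset{q}{\operatorname{arg\,min}}\ g(q) \tag{17} $$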
where:
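$$ g(q) = \left(z - h(q)\right)^{T} R^{-1}\left(z - h(q)\right) + \left(q - u\right)^{T} Q^{-1}\left(q - u\right) \tag{18} $$

(This is the standard MAP weighted least-squares objective; its first term matches the weighted residual used in the grid search discussion below.)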
This function g(q) is the objective function to be minimized using the numerical techniques of the present example embodiment.
As indicated above, the objective function g(q) of equation (18) is a function of q. As also noted above in equation (15), q may have a Gaussian or normal distribution. It will now be described how the a priori distribution of equation (15) of the parameter vector q given in equation (10) is constructed. According to the present example embodiment, the a priori mean and covariance of q are provided as follows:
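(A construction consistent with the description below is shown; the misalignment angles have zero a priori mean, and the target coordinates θ₀ and ψ₀ in u are arbitrary initial values, since the target location carries no a priori information.)

$$ u = \begin{bmatrix} \theta_{0} & \psi_{0} & 0 & 0 & 0 \end{bmatrix}^{T}, \qquad Q = \operatorname{diag}\left(\sigma_{\theta}^{2},\, \sigma_{\psi}^{2},\, \sigma_{\alpha}^{2},\, \sigma_{\beta}^{2},\, \sigma_{\gamma}^{2}\right) $$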
For purposes of the present example embodiment, it is assumed that the magnitude of each misalignment value, α, β, and γ, may lie anywhere between zero and some known maximum possible value, such as 5°. Since each misalignment value is assumed to have a Gaussian distribution, it is assumed that the maximum possible value of each misalignment is “3-sigma.” Accordingly:
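(Writing m for the maximum possible misalignment magnitude, e.g., m = 5°; the symbol m is introduced here for illustration.)

$$ \sigma_{\alpha} = \sigma_{\beta} = \sigma_{\gamma} = \frac{m}{3} $$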
In general, there will be no a priori information on the target location. For MAP estimation, this may be modelled as:
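$$ \sigma_{\theta}^{2} \rightarrow \infty, \qquad \sigma_{\psi}^{2} \rightarrow \infty $$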
As will be described in detail below, the numerical method used to solve the estimation problem of the present example embodiment does not require Q, but rather Q⁻¹. Therefore, the a priori distribution for q will be defined using:
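(With m as introduced above, a consistent form is the following; the zero entries encode the absence of a priori target-location information.)

$$ Q^{-1} = \operatorname{diag}\left(0,\; 0,\; \frac{9}{m^{2}},\; \frac{9}{m^{2}},\; \frac{9}{m^{2}}\right) $$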
With u and Q⁻¹ determined as set forth above, an ILS method is used to determine the solution for equations (17) and (18) above, and this solution is provided by:
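(A standard MAP Gauss-Newton iteration consistent with the surrounding description is shown.)

$$ \hat{q}_{k+1} = \hat{q}_{k} + P_{k}\left[ H\left(\hat{q}_{k}\right)^{T} R^{-1}\left(z - h\left(\hat{q}_{k}\right)\right) - Q^{-1}\left(\hat{q}_{k} - u\right) \right] \tag{29} $$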
where H(q̂k) is the 2n × 5 gradient matrix of the function h, provided as follows:
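$$ H(q) = \frac{\partial h(q)}{\partial q} \tag{30} $$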
The estimation error covariance matrix at the kth step is:
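$$ P_{k} = \left[ H\left(\hat{q}_{k}\right)^{T} R^{-1} H\left(\hat{q}_{k}\right) + Q^{-1} \right]^{-1} \tag{31} $$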
The gradient matrix H needed for the ILS method may now be determined through the development below. Each row of the gradient matrix H is the derivative of either an azimuth measurement φ or an elevation measurement η (the values determined by the sensor) with respect to the parameter vector q. These derivatives may be partitioned as follows:
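(A partition consistent with equations (36)-(39), below, separating the target-location derivatives from the misalignment derivatives.)

$$ \frac{\partial \varphi_{i}}{\partial q} = \begin{bmatrix} \dfrac{\partial \varphi_{i}}{\partial (\theta, \psi)} & \dfrac{\partial \varphi_{i}}{\partial (\alpha, \beta, \gamma)} \end{bmatrix}, \qquad \frac{\partial \eta_{i}}{\partial q} = \begin{bmatrix} \dfrac{\partial \eta_{i}}{\partial (\theta, \psi)} & \dfrac{\partial \eta_{i}}{\partial (\alpha, \beta, \gamma)} \end{bmatrix} \tag{32, 33} $$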
To compute these derivatives, the sensor and target positions in the ECEF coordinate frame are given by pS and pT, respectively. Accordingly, the relative position vector from the sensor to the target is:
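$$ p = p_{T} - p_{S} $$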
The unit vector from the sensor to the target (i.e., the LOS from the sensor to the target) is determined as follows:
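$$ u = \frac{p}{\left\lVert p \right\rVert} $$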
The representation of this unit vector using LOS angles in frame C (the actual sensor coordinate frame) was given in equation (9). The chain rule of differentiation may be used to construct the terms in equations (32) and (33), above, as follows:
The above derivatives of equations (36)-(39) are computed using only the current sensor position and the current estimate of the parameter vector q. The LOS angle measurements are not used in these computations. These derivatives are needed to construct the gradient matrix H shown in equation (30), which in turn is required by the ILS algorithm used to compute the parameter vector q shown in equation (10).
What follows is a determination of the terms needed for equations (36)-(39).
First, the definition of cu given in equation (9) provides the following:
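(Consistent with the reconstruction of equation (9), above.)

$$ \varphi = \tan^{-1}\!\left(\frac{{}^{C}u_{2}}{{}^{C}u_{1}}\right), \qquad \eta = -\sin^{-1}\!\left({}^{C}u_{3}\right) \tag{40, 41} $$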
Equations (40) and (41), in turn, give the following:
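$$ \frac{\partial \varphi}{\partial\, {}^{C}u} = \begin{bmatrix} \dfrac{-{}^{C}u_{2}}{{}^{C}u_{1}^{2} + {}^{C}u_{2}^{2}} & \dfrac{{}^{C}u_{1}}{{}^{C}u_{1}^{2} + {}^{C}u_{2}^{2}} & 0 \end{bmatrix}, \qquad \frac{\partial \eta}{\partial\, {}^{C}u} = \begin{bmatrix} 0 & 0 & \dfrac{-1}{\sqrt{1 - \left({}^{C}u_{3}\right)^{2}}} \end{bmatrix} \tag{42, 43} $$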
The expressions of equations (42) and (43) are the first terms in equations (36)-(37) and (38)-(39), respectively.
As would be understood by the skilled artisan, if:
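$$ u = \frac{p}{\left\lVert p \right\rVert} \tag{44} $$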
then, for any nonzero vector p:
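$$ \frac{\partial u}{\partial p} = \frac{I - u\,u^{T}}{\left\lVert p \right\rVert} \tag{45} $$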
where I is the 3 × 3 identity matrix.
Therefore, in frame C:
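$$ \frac{\partial\, {}^{C}u}{\partial\, {}^{C}p} = \frac{I - {}^{C}u\, {}^{C}u^{T}}{\left\lVert {}^{C}p \right\rVert} \tag{46} $$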
where ||cp|| is the range from the sensor to the target. Equation (46) gives the second term in equations (36) and (38).
In order to determine the position vectors, we know from above that a position in coordinate frame C (the actual sensor frame) is given by the transformation TCE applied to the point in the coordinate frame E (the ECEF frame):
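$$ {}^{C}p = T_{CE}\, {}^{E}p \tag{47} $$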
Therefore:
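(Applying equation (47) to the relative position vector, ${}^{C}p = T_{CE}\left({}^{E}p_{T} - {}^{E}p_{S}\right)$.)

$$ \frac{\partial\, {}^{C}p}{\partial\, {}^{E}p_{T}} = T_{CE} \tag{48} $$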
And:
which gives that:
Equations (48) and (50) give the third and fourth terms in equations (36) and (38).
To simplify the differentiation of equation (1) above, the following equation may be used:
where:
The derivative needed in equations (36) and (38) is:
The 3 × 2 matrix shown in equation (54) is the fifth term in equations (36) and (38). Using equations (2) and (51)-(53) gives:
where:
The derivative needed in (37) and (39) is:
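(A form consistent with the development below, whose columns are given by equations (64)-(66).)

$$ \frac{\partial\, {}^{C}u}{\partial\left(\alpha, \beta, \gamma\right)} = \begin{bmatrix} \dfrac{\partial\, {}^{C}u}{\partial \alpha} & \dfrac{\partial\, {}^{C}u}{\partial \beta} & \dfrac{\partial\, {}^{C}u}{\partial \gamma} \end{bmatrix} \tag{61} $$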
This 3 × 3 matrix is the second term in equations (37) and (39). By definition:
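(A relation consistent with the substitution described next, with ${}^{A}u$ the LOS unit vector expressed in the assumed sensor frame A.)

$$ {}^{C}u = T_{CA}\, {}^{A}u \tag{62} $$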
Substituting equation (8) into equation (62) gives:
Differentiating equation (63) gives:
From equations (3)-(5) we have the following derivatives needed in (64)-(66):
According to specific scenarios, there may be situations where only azimuth φ measurements are available from the sensor. In such situations, it may be difficult to estimate the misalignment angles β and γ. Accordingly, when only azimuth φ data is available, it may be beneficial to simplify equation (8) to:
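$$ T_{CA} = R(\alpha, z) \tag{70} $$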
It may also be beneficial to simplify equation (9) to:
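$$ {}^{C}u = \begin{bmatrix} \cos\varphi \\ \sin\varphi \\ 0 \end{bmatrix} \tag{71} $$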
Using these simplifications, equations (70) and (71) give that:
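(Under the rotation convention assumed above; the sign of α depends on that convention.)

$$ {}^{A}u = T_{AC}\, {}^{C}u = \begin{bmatrix} \cos(\varphi + \alpha) \\ \sin(\varphi + \alpha) \\ 0 \end{bmatrix} \tag{72} $$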
Therefore:
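$$ {}^{A}\varphi = \varphi + \alpha \tag{73} $$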
where Aφ is the azimuth angle measurement in frame A. In this situation, the misalignment angle α is equivalent to an azimuth measurement bias.
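As a minimal sketch of the ILS solution described by equations (29)-(31) (Python; all names are illustrative, and the Jacobian is formed here by finite differences rather than by the analytic derivatives of equations (36)-(69)):

```python
import numpy as np

def ils_map(z, h, u, R_inv, Q_inv, q0, n_iter=10):
    """MAP Iterated Least-Squares estimate of q (sketch of eqs. (29)-(31)).

    z: 2n measurement vector; h(q): predicted measurements; u, Q_inv: a priori
    mean and inverse covariance; R_inv: inverse measurement covariance.
    """
    def jacobian(q, eps=1e-7):
        # Central-difference stand-in for the analytic gradient matrix H.
        H = np.zeros((z.size, q.size))
        for j in range(q.size):
            dq = np.zeros_like(q)
            dq[j] = eps
            H[:, j] = (h(q + dq) - h(q - dq)) / (2.0 * eps)
        return H

    q = q0.astype(float).copy()
    for _ in range(n_iter):
        H = jacobian(q)
        P = np.linalg.inv(H.T @ R_inv @ H + Q_inv)                # eq. (31) form
        q = q + P @ (H.T @ R_inv @ (z - h(q)) - Q_inv @ (q - u))  # eq. (29) form
    return q, P  # estimate and estimation error covariance
```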
As noted above, the previous example solved for the target location and sensor misalignment values using an ILS method. An alternative method utilizing a Kalman filter method will now be described.
A Kalman filter provides a solution that is statistically equivalent to the one computed by ILS, but it is a recursive algorithm. Each measurement is processed individually to produce a sequence of solutions, where each filter output is a refinement of the previous output. A Kalman filter is simply a recursive implementation of ILS. The advantages of a Kalman filter when compared to ILS are that it requires less data storage and the code developed for implementation is simpler and faster. The advantages of ILS are that it does not require a value of the covariance matrix P for initialization and is often more stable.
As with the ILS example above, the present Kalman filter example embodiment utilizes n azimuth and n elevation sensor measurements, so the vector z in equations (11) and (12) has 2n components.
A Kalman filter solution is computed using the following three steps for each of k = 1,2, ... , 2n:
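(A standard scalar-update recursion consistent with the explanation in the following paragraphs, with Hk a 1 × 5 gradient row and σk² the measurement error variance.)

$$ K_{k} = \frac{P_{k-1} H_{k}^{T}}{H_{k} P_{k-1} H_{k}^{T} + \sigma_{k}^{2}} \tag{74} $$

$$ \hat{q}_{k} = \hat{q}_{k-1} + K_{k}\left(z_{k} - h_{k}\left(\hat{q}_{k-1}\right)\right) \tag{75} $$

$$ P_{k} = \left(I - K_{k} H_{k}\right) P_{k-1} \tag{76} $$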
This recursion requires initial values q̂0 and P0. The final outputs from the Kalman filter after processing all 2n measurements are q̂2n and P2n. These are the values to be compared with those computed by ILS.
In equation (74) above, Hk is equivalent to row k of the matrix H shown in equation (30), above, and σk² is the measurement error variance in position (k, k) of the matrix R shown in equation (14), above. In equation (75), zk is element k of the vector z in equations (11) and (12), above, and hk is the function in row k of the vector function h(q) shown in equation (12), above. In equation (76), I is the 5 × 5 identity matrix.
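A minimal sketch of this recursion follows (Python; names are illustrative, and the scalar-update form follows the reconstruction of equations (74)-(76) above):

```python
import numpy as np

def kalman_map(z, h_k, H_k, r_diag, q0, P0):
    """Recursive (Kalman) counterpart of the ILS solution.

    z: 2n measurements processed one at a time; h_k(k, q): predicted k-th
    measurement; H_k(k, q): 1x5 gradient row; r_diag[k]: variance R(k, k).
    """
    q, P = q0.astype(float).copy(), P0.copy()
    I = np.eye(q.size)
    for k in range(z.size):
        Hk = H_k(k, q).reshape(1, -1)
        S = float(Hk @ P @ Hk.T) + r_diag[k]      # innovation variance
        K = (P @ Hk.T) / S                        # 5x1 gain, eq. (74) form
        q = q + (K * (z[k] - h_k(k, q))).ravel()  # state update, eq. (75) form
        P = (I - K @ Hk) @ P                      # covariance update, eq. (76) form
    return q, P  # compare with the ILS outputs q_hat_2n and P_2n
```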
In addition to the ILS and Kalman filter techniques described above, a grid search method may be used to implement the techniques of the present disclosure. Grid search is a form of exhaustive optimization: it searches through a manually specified, discrete subset of the parameter space. One specific grid search example embodiment of the techniques of the present disclosure may be implemented through the following operations.
First, a set of target locations and sensor misalignment angles are defined for consideration. These are the components of the 5×1 vector q shown in equation (10). Each grid point is an estimate q̂ of q, as discussed above.
Second, at each grid point, the expected azimuth and elevation angle values are determined. These are the elements of the 2n×1 vector h(q̂) shown in equation (29), where the function h was introduced in equation (12).
Third, at each grid point, the quantity (z - h(q̂))T R-1(z - h(q̂)) is calculated where z is the 2n×1 vector of measurements shown in equation (11) and R is the 2n×2n measurement error covariance matrix shown in equation (14). Finally, the grid point having the smallest value of this quantity is selected. The selected grid point gives the estimated target location and sensor misalignment angles.
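A minimal sketch of this grid search follows (Python; the grids and names are illustrative):

```python
import numpy as np
from itertools import product

def grid_search(z, h, R_inv, grids):
    """Exhaustive search over candidate parameter vectors q.

    grids: five 1-D arrays of candidate values for (theta, psi, alpha, beta,
    gamma). Returns the grid point minimizing (z - h(q))^T R^{-1} (z - h(q)).
    """
    best_q, best_cost = None, np.inf
    for cand in product(*grids):
        q = np.asarray(cand, dtype=float)
        r = z - h(q)                      # 2n-vector of measurement residuals
        cost = float(r @ R_inv @ r)       # weighted residual from the text
        if cost < best_cost:
            best_q, best_cost = q, cost
    return best_q, best_cost
```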
Advantages of a grid search algorithm when compared to ILS or Kalman filter methods may include that it requires no initial estimate, that it cannot diverge, and that it searches the entire specified parameter space rather than converging to a single local minimum.
On the other hand, the grid search set may not contain the true parameter values and the computation time needed to implement a grid search method may be long for finely-spaced grid points, which are required for accurate parameter estimation.
Simulation results illustrating the techniques of the present disclosure will now be described.
The parameters defining the scenario being simulated are given in Table 2, below.
The aircraft-target geometry is illustrated in the accompanying figures. According to the simulation, the sensor is an antenna array mounted on the left side of the aircraft with an assumed orientation defined by TBA = R(-π/2, z).
The values of the misalignment angles α, β, and γ, the maximum misalignment angle value, and the angle measurement parameters are given by Table 3, below.
The true azimuth and elevation angle values in frame C (i.e., the actual sensor coordinate frame) are illustrated in the accompanying figures.
Given these values, geo-location of the target was performed according to the ILS techniques of the present disclosure and compared with geo-location performed without using the misalignment estimation techniques of the present disclosure (i.e., geo-location was performed with only target longitude and latitude in the parameter vector q of equation (10), above; α, β, and γ were not included).
Geo-location and misalignment angle estimation performance values are given in Table 4, below.
With reference now made to the accompanying figure, a computing device 800 will now be described. Computing device 800 may be configured to implement the techniques of the present disclosure.
As depicted, the device 800 includes a bus 812, which provides communications between computer processor(s) 814, memory 816, persistent storage 818, communications unit 820, and input/output (I/O) interface(s) 822. Bus 812 can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, bus 812 can be implemented with one or more buses. I/O interfaces 822 may be configured to receive data from external devices 828. Examples of such external devices may include the sensors and/or aircraft navigation systems described above.
Memory 816 and persistent storage 818 are computer readable storage media. In the depicted embodiment, memory 816 includes random access memory (RAM) 824 and cache memory 826. In general, memory 816 can include any suitable volatile or non-volatile computer readable storage media. Instructions for the geo-location and misalignment estimation techniques of the present disclosure may be stored in memory 816 or persistent storage 818 for execution by processor(s) 814. The control logic stored in memory 816 or persistent storage 818 may implement these techniques. Additionally, memory 816 and/or persistent storage 818 may store the data received from, for example, the sensors and/or aircraft navigation systems described above.
One or more programs may be stored in persistent storage 818 for execution by one or more of the respective computer processors 814 via one or more memories of memory 816. The persistent storage 818 may be a magnetic hard disk drive, a solid state hard drive, a semiconductor storage device, read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or any other computer readable storage media that is capable of storing program instructions or digital information.
The media used by persistent storage 818 may also be removable. For example, a removable hard drive may be used for persistent storage 818. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer readable storage medium that is also part of persistent storage 818.
Communications unit 820, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 820 includes one or more network interface cards. Communications unit 820 may provide communications through the use of either or both physical and wireless communications links. Finally, computing device 800 may include an optional display 830.
The above description is intended by way of example only. Although the techniques are illustrated and described herein as embodied in one or more specific examples, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made within the scope and range of equivalents of the claims.