In recent years, unmanned aircraft systems (“UAS”) (a.k.a. “unmanned aerial vehicles,” “UAVs,” “drones,” etc.) have come into widespread use across a broad array of military and civilian applications.
UAS technologies have also continued to develop. One difficulty that has been encountered is navigation of fixed-wing UASs in environments where absolute location information, e.g., from a satellite positioning system (“SPS”), is unavailable, partially unavailable, or degraded. What is needed is an improved system and method for navigation of fixed-wing UASs in a GPS-denied or GPS-degraded environment.
A UAS may use a front-end module to estimate relative pose change, and may periodically publish the relative pose change to a back-end module for use in generating, modifying, updating, and optimizing a global pose model. In one embodiment, the front-end relative pose model may be an extended Kalman filter that may be iteratively updated based on ground features identified and tracked by the front end. Based on a reset criterion (e.g., time, accumulated covariance, etc.), the front end may periodically determine to reset by publishing the relative pose model's delta pose and associated covariance to the back end for incorporation into the global pose model as a pose graph edge, and additionally zeroing out its delta pose and covariance.
Two UASs may share range information to improve back-end pose graph optimization. A first UAS may obtain a range measurement to a second UAS, may share the range measurement with the second UAS, and both UASs may then perform a coordinated reset as described above. As part of the coordinated reset, and in addition to adding a delta-pose edge to the respective global odometry pose graphs, each UAS may add an edge and node to incorporate the range measurement as a constraint. In one embodiment, each UAS may add at least as much of the other UAS's pose graph as may be necessary to provide a global anchor/linking point. Sharing range information may improve pose graph optimization and associated global pose estimates.
This Application claims priority to U.S. Provisional Application No. 63/008,462, filed on Apr. 10, 2020, the first inventor of which is Gary Ellingson, and which is titled “Cooperative Relative Navigation of Multiple Aircraft in GPS-Denied/Degraded Environment.”
An improved system and method for navigation of fixed-wing UASs in GPS-denied or GPS-degraded environments is disclosed.
Table of Reference Numbers from Drawings:
The following table is for convenience only, and should not be construed to supersede any potentially inconsistent disclosure herein.
Relative Navigation
Sensor module 130 may comprise a camera 132 and inertial measurement unit (“IMU”) 134. IMUs for aircraft are known in the art. IMU 134 may comprise several inertial measurement sensors, e.g., accelerometers and/or rate gyroscopes. Camera 132 may supplement IMU 134 by measuring motion of UAS 100 relative to UAS 100's surroundings, usually features on the ground. Even using IMU 134 in conjunction with camera 132, global position and yaw angle are unobservable, and UAS 100's estimate of global position will therefore eventually diverge from its actual global position.
Relative navigation is an approach that uses a combination of local estimation and global estimation. Local estimation may be referred to as “front-end,” and global estimation may be referred to as “back-end.” For example, an exemplary relative navigation scheme may comprise an extended Kalman filter (“EKF”) for front-end estimation relative to the local environment and a back-end optimization that incorporates the relative front-end information to estimate global position/variables.
The front-end system estimates changes in position and attitude with respect to a local frame where the states can remain observable and the Gaussian distribution can accurately represent uncertainty, thus enabling the computational advantage of an EKF to be utilized. The back-end system uses a pose graph (or other modeling paradigm known in the art, e.g., a nonlinear unconstrained optimization problem) that can accurately represent nonlinearities in position and heading and can be robustly optimized when given additional constraints. The front-end system may be an EKF, an MSCKF (as described herein below), or any batch or similar system that delivers estimates of pose change and associated uncertainty to a back end. By pre-marginalizing data gathered in the front end (which could be quite significant for captured images and other captured sensor data), the front-end data is packaged as a small estimate of pose change and covariance, which is then more easily shared (because of the modest bandwidth requirements) with various components and modules, or across networks.
Extended Kalman filters are well-known in the art for navigation of robots and other autonomous or partially autonomous vehicles.
Camera
In one embodiment, camera 132 may be a monocular camera that provides no direct information about the distance to observed features. Many cameras are known in the art and may be capable of taking and providing visual image captures as described herein below. Non-visual cameras could also be used. For example, a thermal (infrared (IR)) or depth (e.g., lidar) imager could be used. In general, it may be possible to use any camera that is able to detect static features in consecutive images.
Relative Navigation
The relative pose (i.e., change in x position, change in y position, and yaw angle) of a UAS may be updated using an extended Kalman filter (“EKF”) approach. This approach is known in the art. E.g., Thrun S, Burgard W, and Fox D, Probabilistic Robotics. MIT Press, 2005; Leishman R C, Macdonald J C, Beard R W, and McLain T W, “Quadrotors and Accelerometers: State Estimation with an Improved Dynamic Model,” IEEE Control Systems, vol. 34, no. 1, 2014, pp. 28-41; Beard R W and McLain T W, Small Unmanned Aircraft: Theory and Practice. Princeton University Press, 2012. MSCKF is one example of such a system/approach. A. I. Mourikis and S. I. Roumeliotis, A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation, Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007, pp. 3565-3572.
In one embodiment, the EKF may be a Multi-State Constraint Kalman Filter (“MSCKF”), which is a visual-inertial navigation approach that blends traditional filtering with a batch-like optimization. In MSCKF, the Kalman filter tracks visual features as they move through the camera's field of view, adding as states the vehicle or camera pose when each image is taken. The MSCKF then uses the time history of images of the same feature to improve estimates of the vehicle's current pose. As each feature is lost from view, all prior observations and vehicle/camera pose estimates at that time are used in a least-squares fashion to solve for an estimated feature position. Then, based on the estimated feature position and the estimated poses, the expected camera measurements to the feature at each of the prior poses are calculated, and the differences between expected and actual image measurements to the feature are then used in a specialized Kalman update (one that uses a null space approach to avoid double counting information) to improve current state estimates. If past state/camera poses no longer have active features remaining (features still in the current field of view), those poses are then removed from the state vector. The filter continues to propagate state estimates, add new features, track the other features, add pose states to the filter each time a new image is taken, and update filter states whenever another feature exits the field of view. In this way the MSCKF significantly bounds the error growth in the filter's position and attitude.
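The null-space step at the heart of this specialized update can be illustrated in a short sketch. This is a simplified illustration rather than an actual MSCKF implementation; the matrix shapes and the function name are assumptions. Projecting the stacked residual onto the left null space of the feature Jacobian removes the dependence on the unknown feature-position error, which is how the update avoids double counting information:

```python
import numpy as np

def nullspace_project(r, H_x, H_f):
    """Sketch of the MSCKF null-space trick: the stacked feature residual is
    modeled as r = H_x @ dx + H_f @ df + noise. Projecting r onto the left
    null space of H_f eliminates the unknown feature-position error df,
    leaving a measurement that depends only on the state error dx.
    Illustrative shapes/names; see Mourikis & Roumeliotis (2007)."""
    # Columns of U beyond rank(H_f) span the left null space of H_f.
    U, s, _ = np.linalg.svd(H_f, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    A = U[:, rank:]                  # left null-space basis: A.T @ H_f == 0
    return A.T @ r, A.T @ H_x        # projected residual and Jacobian

# A residual caused purely by feature-position error projects to ~zero.
H_f = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=float)
H_x = np.eye(6)
r_feat = H_f @ np.array([1.0, 2.0, 3.0])
r_proj, H_proj = nullspace_project(r_feat, H_x, H_f)
```

Here `r_proj` is (numerically) zero because the simulated residual came entirely from feature error, confirming that the projected measurement carries no feature-position information.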
VIO Motion Estimation
Using camera 132 and IMU 134, navigation system 110 may use a visual-inertial odometry (“VIO”) approach to estimate motion of UAS 100. A VIO approach is well known in the art and is described in detail in many sources. E.g., M. Li, A. Mourikis, High-precision consistent EKF-based visual-inertial odometry, The International Journal of Robotics Research, vol. 32, no. 6, 2013, pp. 690-711; Forster C, Zhang Z, Gassner M, Werlberger M, and Scaramuzza D, SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems, IEEE Transactions on Robotics, vol. 33, no. 2, 2017, pp. 249-265. In a VIO approach, camera 132 is pointed toward the ground at an angle that is fixed relative to UAS 100 and IMU 134. Using image processing and pattern recognition techniques that are well known in the art, relative VIO module 122 may receive image data from camera 132 and may process the received image data to identify features on the ground. E.g., Carlo Tomasi and Takeo Kanade, Detection and Tracking of Point Features, Carnegie Mellon University Technical Report CMU-CS-91-132, April 1991; Jianbo Shi and Carlo Tomasi, Good Features to Track, IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, 1994. Image data received from camera 132 may comprise a time-sequenced set of image frames, wherein one frame comprises the visual data captured by camera 132 at a specific time, i.e., an image snapshot. Once a feature on the ground has been identified, relative VIO module 122 may track that feature in subsequently received image frames. By tracking the pixel position (position within the frame capture) of one or more features over subsequent image capture frames, and by combining the change in pixel position with IMU data (e.g., accelerations, angular rates), relative VIO module 122 may be able to estimate relative movement over a time period.
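The core of the feature-tracking step, estimating image-plane motion from the pixel positions of tracked features, can be sketched as follows. This is an illustrative stand-in using hypothetical feature tracks; the real pipeline operates on camera frames and fuses the resulting motion with IMU data in the filter:

```python
import numpy as np

def mean_pixel_displacement(tracks):
    """Estimate the mean per-frame pixel displacement of tracked features.
    `tracks` maps a feature id to a list of (u, v) pixel positions, one per
    frame in which the feature was observed. Illustrative only; not the
    actual VIO front end."""
    displacements = []
    for positions in tracks.values():
        pts = np.asarray(positions, dtype=float)
        if len(pts) >= 2:
            # Average frame-to-frame motion of this feature in the image.
            displacements.append(np.diff(pts, axis=0).mean(axis=0))
    return np.mean(displacements, axis=0)

# Two hypothetical features drifting ~3 pixels right and ~1 pixel down per
# frame, consistent with the camera translating over static ground features.
tracks = {
    "f1": [(100, 50), (103, 51), (106, 52)],
    "f2": [(200, 80), (203, 81), (206, 82)],
}
du, dv = mean_pixel_displacement(tracks)
```

Combined with the camera's known orientation and IMU-derived motion, a per-frame displacement like `(du, dv)` is the kind of image-plane observation the EKF uses to constrain relative movement.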
As described herein below, some image capture frames may be characterized as “keyframes.” When a feature is identified in a keyframe, and its location within and relative to that keyframe has been determined, the feature's position within subsequent image frames leading up to a subsequent keyframe may be used, in conjunction with other inputs, to estimate the UAS's relative pose change (change in x position, change in y position, and change in yaw angle) from the first keyframe to the second keyframe.
A given feature is tracked across a sequence of images and, once it leaves the camera field of view, the feature track is residualized as a single measurement-update step.
For example, by tracking the frame-position of nine ground features over a 3.0 second period (in one embodiment a camera may capture 30 frames/images per second), and based on acceleration, pitch, and roll data from IMU 134 over the same period, relative VIO module 122 may determine that UAS 100 has moved, during the 3.0 second period, to a location that is 150.0 feet to the north and 30.0 feet to the east of UAS 100's location at the beginning of the 3.0 second period, and that, at the end of the 3.0 second period, UAS 100 is moving in a direction described as 45 degrees to the east of direct north.
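The displacement in this example can be converted to a straight-line distance and bearing with elementary trigonometry. This is a worked check of the numbers above; note that the 45-degree figure is the direction of motion at the end of the window, not the bearing of the net displacement:

```python
import math

# Displacement over the 3.0 s window from the example above:
# 150.0 ft north and 30.0 ft east of the starting location.
north, east = 150.0, 30.0

distance = math.hypot(north, east)               # straight-line distance, ft
bearing = math.degrees(math.atan2(east, north))  # degrees east of due north
```

This gives a net displacement of roughly 153 ft at a bearing of roughly 11.3 degrees east of north, distinct from the 45-degree instantaneous direction of travel at the end of the window, since the vehicle's direction of motion changes along the path.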
Relative VIO module 122 may make this determination using an extended Kalman filter (“EKF”). E.g., Thrun S, Burgard W, and Fox D, Probabilistic Robotics. MIT Press, 2005. EKFs utilize a linear Gaussian representation of the state belief to take advantage of the computational convenience of a Kalman update but maintain the nonlinearity of the process propagation. This combination of properties performs well when errors remain small, such as when the availability of GPS measurements is used to regularly remove drift errors, or when the EKF is periodically reset. The nonlinear nature of the process, however, causes the Gaussian representation of the belief to become inconsistent when errors are large due to estimates drifting from the true value.
Relative Reset Step
Because of this drift phenomenon for EKFs, it may be necessary to periodically reset an EKF. Otherwise sensor errors will accumulate, thereby exacerbating drift. Resetting the EKF and thereby adding a new edge and node to the back-end pose graph—where the new edge includes delta pose and associated uncertainty (often represented by a covariance matrix)—allows for avoiding excessive EKF drift and also allows for adding constraint information (in the form of an edge) to the back-end pose graph.
At a reset, as UAS 100 travels from the current origin, the front-end EKF resets the EKF origin by setting the EKF origin (change in x position, change in y position, change in yaw, covariance) to zeros, where the reset coincides with the declaration of a keyframe image. The front-end states then continue to evolve with respect to the newly-declared reference frame. The state from just prior to the reset then forms a transformation from one reset to the next and, together with the associated covariance, is provided to the back end. The transformations form a directed pose graph, where each reset origin is a node (or node frame) and each transformation is an edge. Because the EKF operates only with respect to a local origin, it is observable, as well as consistent, by construction. The uncertainty is regularly removed from the filter while a Gaussian is still able to accurately represent it, and nonlinearities are handled appropriately in the back-end graph.
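The reset-and-publish cycle may be sketched as follows. This is a minimal illustration: the class structure, method names, and the covariance-trace reset criterion are assumptions of the sketch, not details of any particular embodiment:

```python
import numpy as np

class BackEnd:
    """Collects published delta-pose edges for the global pose graph."""
    def __init__(self):
        self.edges = []

    def add_edge(self, delta_pose, cov):
        self.edges.append((delta_pose, cov))

class FrontEnd:
    """Minimal sketch of a front-end relative pose filter that resets by
    publishing its accumulated delta pose and covariance to a back end."""
    def __init__(self, back_end, cov_threshold=1.0):
        self.back_end = back_end
        self.cov_threshold = cov_threshold  # reset criterion (assumed here)
        self.delta_pose = np.zeros(3)       # [dx, dy, dyaw] since last reset
        self.cov = np.zeros((3, 3))

    def propagate(self, odom_increment, step_cov):
        # Stand-in for the EKF prediction/update cycle driven by IMU data
        # and tracked ground features.
        self.delta_pose += odom_increment
        self.cov += step_cov

    def maybe_reset(self):
        # Reset criterion used in this sketch: trace of accumulated covariance.
        if np.trace(self.cov) >= self.cov_threshold:
            self.back_end.add_edge(self.delta_pose.copy(), self.cov.copy())
            self.delta_pose[:] = 0.0        # zero out delta pose ...
            self.cov[:] = 0.0               # ... and covariance

be = BackEnd()
fe = FrontEnd(be, cov_threshold=0.05)
fe.propagate(np.array([1.0, 0.0, 0.1]), 0.01 * np.eye(3))
fe.maybe_reset()                            # trace 0.03: no reset yet
fe.propagate(np.array([1.0, 0.0, 0.0]), 0.01 * np.eye(3))
fe.maybe_reset()                            # trace 0.06: publish edge, zero out
```

After the second call, the back end holds one delta-pose edge and the front end's delta pose and covariance have been zeroed, matching the reset behavior described above.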
Many different criteria may be used for determining when to perform a reset step, e.g., change in position, change in yaw angle, time since previous reset, accumulated uncertainty, time since previous global constraint input, a combination of these, or any other useful criteria. In one embodiment, the determination of when to do a reset may depend on the number of features in the current frame that are persistent since the most recent keyframe. For example, when a threshold number of features identified in the most recent keyframe are no longer identified in the current frame, then front-end 120 may determine to do a reset step and mark the current frame as a keyframe. In one embodiment, the threshold number may be nine, and front-end 120 may be configured to perform a reset step when the number of features in the current frame that are common with the features in the most recent keyframe drops below nine.
The following table provides an example of the frames at which an example front-end 120 may declare a keyframe and perform a reset step. The captured frame Fn at time n may be a keyframe at which front-end 120 may perform a reset step. Front-end 120 may identify 11 features (f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11) in Fn. At time n+1, front-end 120 may identify the same 11 features in Fn+1 as in Fn. In Fn+2, front-end 120 may identify new feature f12. At time n+2, front-end 120 is still tracking 11 features that were identified in keyframe Fn.
At time n+3, front-end 120 is no longer able to identify f6, and is therefore tracking only 10 features that were also identified in keyframe Fn. In Fn+4, front-end 120 may identify new feature f13. At times n+5 and n+6, front-end 120 may identify the same features as at time n+4. At time n+7, front-end 120 is no longer able to identify f9, and is therefore tracking only nine features that were also identified in keyframe Fn. At times n+8 and n+9, front-end 120 may identify the same features as at time n+7. At time n+10, front-end 120 may identify new feature f14 in Fn+10. At time n+11, front-end 120 may identify the same features as at time n+10. At time n+12, front-end 120 is no longer able to identify f2, and is therefore tracking only eight features that were also identified in keyframe Fn. Because the number of features in common with the most recent keyframe Fn has dropped below the threshold of nine, front-end 120 declares Fn+12 as a keyframe and performs a reset step. At times n+13 and n+14, front-end 120 may identify the same features as at time n+12. At both n+13 and n+14, frames Fn+13 and Fn+14 each share 11 features in common with the most recent keyframe Fn+12. At time n+15, front-end 120 may identify new feature f15 in Fn+15.
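The keyframe logic walked through above can be replayed in a few lines. This is an illustrative sketch in which feature ids are represented as integers and frames as sets of feature ids:

```python
def declare_keyframes(frames, threshold=9):
    """Replay the keyframe/reset logic described above: a frame becomes a
    new keyframe when the number of its features also present in the most
    recent keyframe drops below `threshold`. The first frame is always a
    keyframe. Returns the indices of declared keyframes."""
    keyframes = [0]
    key_features = frames[0]
    for i, features in enumerate(frames[1:], start=1):
        if len(features & key_features) < threshold:
            keyframes.append(i)
            key_features = features
    return keyframes

# Reconstruct the example sequence: f1..f11 in keyframe Fn, features gained
# and lost exactly as in the walkthrough above (indices 0..15 are n..n+15).
base = set(range(1, 12))                             # f1..f11 in Fn
frames = [base.copy()]
frames.append(frames[-1].copy())                     # Fn+1: same features
frames.append(frames[-1] | {12})                     # Fn+2: new feature f12
frames.append(frames[-1] - {6})                      # Fn+3: f6 lost
frames.append(frames[-1] | {13})                     # Fn+4: new feature f13
frames += [frames[-1].copy(), frames[-1].copy()]     # Fn+5, Fn+6
frames.append(frames[-1] - {9})                      # Fn+7: f9 lost
frames += [frames[-1].copy(), frames[-1].copy()]     # Fn+8, Fn+9
frames.append(frames[-1] | {14})                     # Fn+10: new feature f14
frames.append(frames[-1].copy())                     # Fn+11
frames.append(frames[-1] - {2})                      # Fn+12: f2 lost
frames += [frames[-1].copy(), frames[-1].copy()]     # Fn+13, Fn+14
frames.append(frames[-1] | {15})                     # Fn+15
```

Running `declare_keyframes(frames, threshold=9)` reproduces the walkthrough: only frame index 12 (Fn+12), where the overlap with Fn drops to eight features, triggers a new keyframe after the initial one.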
In this approach, the common feature threshold is the number of common features that must be maintained since the most recent keyframe. When the number of common features since the most recent keyframe is less than the common feature threshold, a reset occurs and a new keyframe is declared. The common feature threshold depends on multiple factors, and is often sensitive to a specific application or environment. In general, the system and method disclosed herein may perform best when the common feature threshold is at least 3 to 5 features because of the ability to determine relative changes in direction and to overcome error from other features. In general, performance improves as the common feature threshold is increased, but required computational resources may also increase. On the other hand, using a common feature threshold that is too low may result in a reset frequency that overtaxes available resources.
Determining a good common feature threshold may require trial and error, tuning, and/or problem domain knowledge (e.g., distinctness and identifiability of features, ground topography, etc.). In one embodiment for flying a small UAS over relatively flat farmland, it was determined that a common feature threshold of nine produced acceptable results without overtaxing available resources.
In one embodiment, the common feature threshold may be a percentage of the nominal number of good features in the image, e.g., 20 percent. With this approach, a new keyframe would be declared when the common feature overlap between the keyframe image and a subsequent image dropped below ⅕th of the features in the keyframe image.
Many different approaches, heuristics, and/or criteria may be used to determine when to perform a reset. The overarching principle is to reset with sufficient frequency so that the change in pose is fairly accurate and the covariance is representative. Designing a reset function/criterion for a specific application may require some domain-specific information, experimentation, and tuning.
The back end is responsible for the global pose estimate. The back end calculates global pose estimates by using the keyframe-to-keyframe transformations as edges in a pose graph or analogous data structure or model. The global pose, which is necessary for accomplishing a mission with a global goal, can be produced by combining, or composing, the transforms. This process may be referred to as pose graph optimization.
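Combining, or composing, the keyframe-to-keyframe transforms can be sketched as follows. This is a minimal planar (SE(2)) example; a real back end would jointly optimize the whole graph rather than simply chain edges:

```python
import math

def compose(pose, delta):
    """Compose a global planar pose (x, y, yaw) with a body-frame delta
    pose (dx, dy, dyaw), as when chaining keyframe-to-keyframe edges.
    Illustrative sketch of transform composition."""
    x, y, yaw = pose
    dx, dy, dyaw = delta
    # Rotate the body-frame translation into the global frame, then add.
    gx = x + dx * math.cos(yaw) - dy * math.sin(yaw)
    gy = y + dx * math.sin(yaw) + dy * math.cos(yaw)
    return (gx, gy, yaw + dyaw)

# Chain three delta-pose edges: forward 10 while turning 90 degrees,
# then forward 10 in the new heading, then a zero edge.
pose = (0.0, 0.0, 0.0)
for edge in [(10.0, 0.0, math.pi / 2), (10.0, 0.0, 0.0), (0.0, 0.0, 0.0)]:
    pose = compose(pose, edge)
```

The chained result places the vehicle at (10, 10) with a 90-degree heading, illustrating how the nonlinear effect of heading on position is captured by composing edges rather than summing them.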
The back-end may be able to opportunistically improve optimization of the pose graph by incorporating constraints such as opportunistic GPS measurements and/or place-recognition loop closures. Using these techniques, relative navigation deliberately avoids global updates to the front-end filter and thereby increases EKF robustness.
The division of the front end and back end also provides additional benefits for scalable UAS operations. First, because the front-end EKF may implicitly draw on the Markov assumption (i.e., the current state and covariance completely represent the previous sequence of events and measurements), it essentially compresses the high-rate sensor information into edges that are published at a low frequency. This compression, effectively pre-marginalization of the graph factors, helps to make the back end scale for long-duration flights. Also, as the back-end graph grows and the computation of optimization increases, the decoupling of the front end allows the graph optimization to be completed slower than real-time if needed, while the front end is still providing full-rate state estimates necessary for vehicle control.
Publishing Relative Motion Estimates to Back End
Relative VIO module 122 may periodically send estimates of relative pose change (and associated covariance) to back-end module 110. Such estimates of relative pose change may be referred to as “delta pose.” Back-end module 110 may use this received delta pose (with associated covariance) to model and estimate UAS 100's global path and pose. As used herein, “global” may refer to a measurement relative to a fixed point on the ground, e.g., a starting point. In one embodiment, and as is well-known in the art, back-end module 110 may use a pose graph to model and estimate global path and pose. Back-end module 110 may use delta pose information (and associated covariance) from front-end 120 to generate and/or update edges in a pose graph.
In the exemplary embodiment described herein below, nodes in a pose graph are global pose estimates. Each global-pose-estimate node is associated with a pose graph edge, where the edge comprises a delta pose and associated covariance published from the front end to the back end in conjunction with a front-end reset and designation of a keyframe.
Publishing odometry estimates, which may be referred to as a change in pose or “delta pose,” along with a corresponding covariance matrix (or the diagonal from a covariance matrix) allows for back-end pose graph optimization (for estimating global pose) using an amount of data that is orders of magnitude smaller than the data being collected in the VIO module.
In the example shown in
At t=0.0 s in row 230, front end 120 has identified four visual features: 220a, 220b, 220c, and 220d. Because the flight/path has just begun, the EKF values are zeros and the only node in the pose graph is the origin node Nt.
At t=1.0 s (row 231), front end 120 has identified the same four features (220a, 220b, 220c, and 220d) identified in the previous capture. Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=2.0 s (row 232), front end 120 has identified the same four features (220a, 220b, 220c, and 220d) identified in the previous capture. Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=3.0 s (row 233), front end 120 has identified three of the four features (220a, 220c, and 220d) identified in the most recent keyframe (t=0.0 s). Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=4.0 s (row 234), front end 120 has identified three of the four features (220a, 220c, and 220d) identified in the most recent keyframe (t=0.0 s). Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
Continuing to
At t=6.0 s (row 236), front end 120 has identified only one of the four features (220d) in the most recent keyframe (t=0.0 s). Because the number of common features (with the most recent keyframe) is now less than the threshold (2), front end 120 designates a keyframe, publishes edge E1 to the pose graph, and resets the EKF.
At t=7.0 s (row 237), front end 120 has identified three of the four features (220e, 220f, 220g) identified in the most recent keyframe (t=6.0 s). Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=8.0 s (row 238), front end 120 has identified two of the four features (220f, 220g) identified in the most recent keyframe (t=6.0 s). Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=9.0 s (row 239), front end 120 has identified only one of the four features (220g) in the most recent keyframe (t=6.0 s). Because the number of common features (with the most recent keyframe) is now less than the threshold (2), front end 120 designates a keyframe, publishes edge E2 to the pose graph, and resets the EKF.
At t=10.0 s (row 240), front end 120 has identified both features (220g, 220h) identified in the most recent keyframe (t=9.0 s). Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=11.0 s (row 241), front end 120 has identified only one of the features (220h) identified in the most recent keyframe (t=9.0 s). Because the number of common features (with the most recent keyframe) is now less than the threshold (2), front end 120 designates a keyframe, publishes edge E3 to the pose graph, and resets the EKF.
At t=12.0 s (row 242), front end 120 has identified three of the features (220i, 220j, 220k) identified in the most recent keyframe (t=11.0 s). Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=13.0 s (row 243), front end 120 has identified two of the features (220j, 220k) identified in the most recent keyframe (t=11.0 s). Because the number of common features (with the most recent keyframe) is not less than the threshold (2), front end 120 does not do a reset. The EKF values are non-zero because image processing of the features in the frame captures shows movement.
At t=14.0 s (row 244), front end 120 has identified none of the features identified in the most recent keyframe (t=11.0 s). Because the number of common features (with the most recent keyframe) is now less than the threshold (2), front end 120 declares a keyframe, publishes edge E4 to the pose graph, and resets the EKF.
Opportunistic GPS Measurements
In one embodiment, an opportunistic GPS measurement, or other opportunistic constraints, may be incorporated into a UAS's back-end pose graph as a factor by performing an EKF reset and inserting the GPS or other (global frame) measurement factor into the pose graph connected to the associated global-pose-estimate node or delta-pose edge created in conjunction with the EKF reset, thereby adding an additional constraint to improve pose-graph optimization.
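The added constraint can be illustrated as a simple unary factor on the node created at the reset. This is a hedged sketch; the function name, the whitening by a scalar standard deviation, and the noise value are assumptions for illustration:

```python
import numpy as np

def gps_factor_residual(node_pose, gps_xy, sigma=3.0):
    """Residual for an opportunistic GPS factor attached to a
    global-pose-estimate node created at an EKF reset. In a graph
    optimizer this whitened residual would be one more term in the
    least-squares objective pulling the node toward the measurement.
    Names and the noise level are illustrative."""
    r = np.asarray(node_pose[:2], dtype=float) - np.asarray(gps_xy, dtype=float)
    return r / sigma  # whiten by the measurement standard deviation

# A node 3 ft east and 4 ft north of an opportunistic GPS fix produces a
# nonzero residual that the optimizer would drive down.
residual = gps_factor_residual((103.0, 4.0, 0.5), (100.0, 0.0), sigma=1.0)
```

Because the factor touches only the node created in conjunction with the reset, the front-end filter itself never receives a global update, consistent with the relative navigation approach described above.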
Scale Bias
In one embodiment, for back-end module 110 to more accurately utilize relative movement information from front-end 120, back-end module 110 may use a scale-bias model to correct scale bias errors between edges in a pose graph or other representation of global location. Scale errors may arise from potentially unobservable velocity associated with straight-and-level flight and from the correlation from one graph edge to the next. The bias error may be removed from an edge by scaling the edge's Δx (change in x position) and Δy (change in y position) components. Modeling the correlation of the velocity error between back-end pose graph edges improves the ability of the back-end optimization scheme/algorithm to remove bias error when intermittent global measurements or other loop-closure-like constraints are available. The bias error for edges in the back-end pose graph may be modeled as a slowly varying random process where the bias in successive edges is not independent. This may be done by adding a cost proportional to the change in scale between successive edges. The scale bias error is removed in the back-end optimization when global measurements are available to remove the bias errors.
In essence, this approach spreads out the global pose correction resulting from an intermittent GPS or other global measurement over multiple delta-pose edges, instead of concentrating the correction at or near the delta-pose edges adjacent to or near the graph edge or node to which the GPS or other global measurement is connected in the pose graph. This scale bias correction approach may be implemented in numerous ways. In one embodiment, this scale bias correction approach may be implemented by adding trinary bias correction factors for front-end delta-pose edges to the pose graph.
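The trinary bias factor and the cost tying successive bias variables together may be sketched as residual functions. This is illustrative: the function names, the weight value, and the exact residual forms are assumptions consistent with the description above, not a definitive implementation:

```python
import numpy as np

def scale_bias_residual(n_k, n_k1, edge_dxdy, bias):
    """Trinary factor connecting two global-pose-estimate nodes and a bias
    variable: the edge's (dx, dy) is scaled by the bias before being
    compared to the difference between the nodes. Illustrative sketch."""
    return (np.asarray(n_k1, dtype=float) - np.asarray(n_k, dtype=float)
            - np.asarray(bias, dtype=float) * np.asarray(edge_dxdy, dtype=float))

def bias_smoothness_residual(bias_a, bias_b, weight=100.0):
    """Binary factor tying successive bias variables together, modeling
    scale bias as a slowly varying random process (a cost proportional to
    the change in scale between successive edges). Weight is assumed."""
    return weight * (np.asarray(bias_b, dtype=float) - np.asarray(bias_a, dtype=float))

# An edge of (10, 0) with a 0.9 x-scale bias exactly explains nodes at
# (0, 0) and (9, 0), so the trinary residual vanishes.
r_tri = scale_bias_residual((0.0, 0.0), (9.0, 0.0), (10.0, 0.0), (0.9, 1.0))
r_smooth = bias_smoothness_residual((1.0, 1.0), (1.01, 1.0))
```

In an optimizer, the smoothness residuals spread a global correction across many edges' bias variables, rather than concentrating it at the edges nearest the global measurement.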
In one embodiment, the loss function may be defined as:
loss=(xk+1−xk−bxmx)2+(yk+1−yk−bymy)2
where xk and yk are the x and y coordinates, respectively, of the global pose estimate at Nk; xk+1 and yk+1 are the x and y coordinates, respectively, of the global pose estimate at Nk+1; bx and by are the x and y scale biases (i.e., x and y scaling coefficients, calculated as bx=(Gnx−Gbx)/mx and by=(Gny−Gby)/my), where Gnx is the x coordinate of a GPS measurement associated with global pose estimate node n, Gny is the y coordinate of a GPS measurement associated with global pose estimate node n, Gbx is the x coordinate of a GPS measurement associated with global pose estimate node b where b&lt;n, Gby is the y coordinate of a GPS measurement associated with global pose estimate node b where b&lt;n, and mx and my are the Δx and Δy components of Ek. The output of the loss function is an updated global pose estimate Nk+1 that has been corrected to account for scale bias.
Binary factor L (870a-d) is a covariance between respective bias variables (e.g., factor L 870a is a covariance between bias variables Bn+3 860a and Bn+2 860b). In one embodiment, L may be 0.0001I2×2, where I2×2 is the 2×2 identity matrix. In practice L may be hand-tuned.
Using this approach, scale bias may be distributed across multiple (possibly all) affected delta-pose edges (and associated global pose estimates), thereby improving the accuracy of the pose graph's estimated global pose values between intermittent GPS measurements or other global measurements/constraints.
Cooperative Navigation
In one embodiment, a UAS may benefit from sharing (sending and receiving) constraint information with another UAS. Constraint information may refer to any information that may constrain, guide, or direct optimization of the back-end model of the UAS's global pose. As described herein, a pose graph is one model that may be used to store and optimize information about global pose.
Information that may be shared between two UASs includes but is not limited to odometry measurements (e.g., x,y position changes, yaw angle changes), EKF information (e.g., uncertainties, uncertainty matrices, covariance matrices), intervehicle range measurements/information, and any other constraint information.
In one embodiment, two UASs may share a range measurement (distance between the two UASs) and, in conjunction (i.e., substantially simultaneously) with sharing a range measurement, the UASs may perform a coordinated/simultaneous reset step. Performing a coordinated/simultaneous reset at a first UAS provides a node in the first UAS's back-end pose graph—the new global-pose-estimate node at the end of the newly added delta-pose edge—to which a new binary range edge may be added and connected as a constraint. The binary range edge is the range distance (and variance) to a second UAS at (or substantially near) the time of performing the reset. The new range edge connects the first UAS's new global-pose-estimate node with the second UAS's global-pose-estimate node created by the second UAS in conjunction with measuring the distance between the first and second UASs and simultaneously performing a reset (at both the first UAS and the second UAS).
When each UAS involved in an information-sharing communication/conversation performs a reset by publishing delta pose (and associated uncertainty information, e.g., covariance matrix information) to its respective back end, each respective back end creates a new delta-pose edge (relative odometry information) and covariance matrix and a new global pose estimate node at the end of the new delta-pose edge, the new node representing a global pose variable.
In one embodiment, a set or swarm of two or more UASs may be involved in a coordinated/simultaneous reset as long as all of the UASs in the set become aware of the range measurements between all pairs of UASs in the set simultaneously (or sufficiently simultaneously so that any error from non-simultaneity is inconsequential relative to the flight time). In this situation, if, e.g., three UASs perform a coordinated/simultaneous reset, all three UASs would simultaneously perform a reset by zeroing out their respective front-end EKFs, sending delta-pose information (including covariance information) to their respective back-ends to generate a new delta-pose edge in the pose graph, and adding a new range edge and pose graph node for the range information for each of the other two UASs in the set.
In general, when multiple UASs are using the potential reset criteria described herein above, e.g., change in position, change in yaw angle, time since previous reset, accumulated uncertainty, time since previous global constraint input, and/or number of persistent visual feature tracks, the UASs will likely perform at least some asynchronous resets. UASs may perform synchronous resets when sharing information with each other, e.g., distance between UASs or other information. Such a synchronous reset, e.g., when multiple UASs are sharing information with each other, may be referred to as a “coordinated reset.”
Several technologies may enable UASs to generate or obtain inter-UAS range measurements, i.e., the distance from a first UAS to a second UAS. In general, the distance from a first UAS to a second UAS may be defined as the distance between a fixed point, e.g., a defined center, on each UAS. These technologies include but are not limited to ranging radios, specialized radios that produce distance measurements using time-of-flight, received-signal-strength indicator (RSSI), radio-frequency identification (RFID), carrier frequency signals carrying additional information, range measurement to a fixed ground-based range station using a distance-measuring-equipment (DME) transponder, and/or real-time location services (RTLS).
In one embodiment, a first UAS may share some or all of the following information with a second UAS in the same swarm: edge information and range measurement information. As described herein below, edge information may comprise pose change and pose change uncertainty. In general, UASs may share with each other any information a UAS may have about any other UAS's pose graph, thereby helping each other to build and maintain, or at least attempt to build and maintain, a complete copy of all pose graphs for an entire swarm.
In one embodiment, to decrease the amount of data communicated between UASs, a first UAS's edge uncertainty information that is shared with a second UAS may be represented using only the diagonal elements of the first UAS's covariance matrix from its front-end EKF. Although this simplification is a slight mismodeling of odometry uncertainty, the decreased communication data may be justified because, in general, the off-diagonal cross-correlation terms remain small due to relatively frequent resets in the front-end estimator. A UAS may determine to share non-diagonal elements based on, e.g., time between resets or some other metric or heuristic that may justify or necessitate such additional covariance information. Additionally, various constraints or problem-domain characteristics may make it beneficial to include some or all non-diagonal elements.
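The bandwidth saving described above can be sketched as follows. This is a minimal illustration (the function name and matrix layout are assumptions, not from the source): sharing only the diagonal reduces a 3×3 covariance payload from nine values to three.

```python
# Sketch: extract only the diagonal (variance) terms of a 3x3 covariance
# matrix for transmission, dropping the small off-diagonal cross-correlation
# terms. Row/column order is assumed to be (x, y, yaw).

def diagonal_only(cov3x3):
    """Return (var_x, var_y, var_yaw) from a full 3x3 covariance matrix."""
    return (cov3x3[0][0], cov3x3[1][1], cov3x3[2][2])

cov = [
    [0.04, 0.001, 0.0],
    [0.001, 0.05, 0.0],
    [0.0,   0.0,  0.01],
]
print(diagonal_only(cov))  # (0.04, 0.05, 0.01)
```

A receiver would then reconstruct a diagonal covariance matrix from the three values, accepting the slight mismodeling noted above.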
In one embodiment, to implement sharing, each UAS in a UAS swarm or group may be uniquely identified by a UAS-id, which may be an 8-bit character or other identifier as may be known in the art. Because a UAS may add an edge to its pose graph for both delta pose and range measurements, a UAS's pose graph may have at least two different types of edges: delta-pose edges and range measurement edges. To uniquely identify a delta-pose edge or node in a UAS's back-end pose graph, a unique identifier may be assigned to each delta-pose edge, and that unique edge identifier may additionally uniquely identify the node to which the delta-pose edge points, i.e., the node at the end of (i.e., created in conjunction with) the delta-pose edge. In one embodiment, a UAS may use incremental integers to identify delta-pose edges in its back-end pose graph, assigning incremental integers to delta-pose edges and delta-pose nodes as delta-pose edges are added to the pose graph. For example, a first UAS, identified as U1, may have back-end pose graph delta-pose edges E1-1, E1-2, E1-3, E1-4, E1-5, E1-6, E1-7, and E1-8 after U1 has added eight edges to its back-end pose graph, and the next edge added may be identified as E1-9.
A delta-pose edge may comprise values for the change in x,y position, change in yaw angle, and a variance/uncertainty element. Change in z position, i.e., elevation/altitude, may also be used, but may be unnecessary because a UAS may be able to determine an absolute z position through use of an altimeter device, many of which are known in the art.
A delta-pose edge may therefore be identified and described by Ei-e:=(Δ, σ2), where the subscript i is the UAS's unique identifier; the subscript e is the unique identifier for a delta-pose edge in the back-end pose graph of UAS Ui; delta pose Δ:=(Δx, Δy, Δψ) comprises the change of x position Δx, change of y position Δy, and change of yaw angle Δψ; and covariance matrix σ2 comprises a 3×3 covariance matrix. The dimensions of the covariance matrix are 3×3 because there are three variables: Δx, Δy, Δψ. In an embodiment in which only the diagonals of the covariance matrix are used, covariance σ2 may be described as σ2:=(σ2x, σ2y, σ2ψ), where σ2x is a variance for Δx, σ2y is a variance for Δy, and σ2ψ is a variance for Δψ.
In some embodiments, a delta-pose edge may additionally include a time stamp t.
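The delta-pose edge definition above can be illustrated with a simple record type. This is a sketch only; the field names and class name are assumptions for illustration, not from the source.

```python
# Illustrative encoding of a delta-pose edge E_{i-e} := (Δ, σ²) using the
# diagonal-only covariance variant, plus the optional time stamp t.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeltaPoseEdge:
    uas_id: int               # i: unique UAS identifier
    edge_id: int              # e: incremental integer; also identifies the
                              #    node at the end of this edge
    dx: float                 # Δx: change in x position
    dy: float                 # Δy: change in y position
    dyaw: float               # Δψ: change in yaw angle
    var_x: float              # σ²x: variance for Δx
    var_y: float              # σ²y: variance for Δy
    var_yaw: float            # σ²ψ: variance for Δψ
    t: Optional[float] = None # optional time stamp

# E1-9: the ninth delta-pose edge added by UAS U1 (values are illustrative).
e = DeltaPoseEdge(uas_id=1, edge_id=9, dx=12.3, dy=-4.1, dyaw=0.08,
                  var_x=0.04, var_y=0.05, var_yaw=0.01)
```

A full-covariance embodiment would replace the three variance fields with a 3×3 matrix.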
To uniquely identify a range edge or node in a UAS's back-end pose graph, a unique identifier may be assigned to each range edge, and that unique range edge identifier may additionally uniquely identify the global-pose-estimate node and delta-pose edge associated with the range edge. In one embodiment, a UAS may use incremental integers to identify range edges in its back-end pose graph, assigning incremental integers to range edges and range nodes as range edges are added to the pose graph. For example, a first UAS, identified as U1, may have back-end pose graph range edges R1-1, R1-2, R1-3, R1-4, R1-5, R1-6, R1-7, and R1-8 after U1 has added eight range edges to its back-end pose graph, and the next range edge added may be identified as R1-9.
A range edge may comprise a distance to another UAS, an identifier for the other UAS, and a variance for the distance. In one embodiment, a range edge may be identified and described by Ri-e:=(d,v,j,σ2), where the subscript i is the UAS's unique identifier; the subscript e references Ei-e, which is the delta-pose edge with which Ri-e is associated; distance d is the distance from Ui to Uv at the time of the reset resulting in Uv's creation of delta-pose edge Ev-j; and σ2 is the variance for d.
A range measurement may be identified and described as Φp,e,q,f:=d, where d is the distance between Up and Uq at the time of the coordinated reset giving rise to the creation of Ep-e and Eq-f at UASs Up and Uq, respectively. In some embodiments a range measurement may additionally comprise a time stamp.
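The range edge and range measurement definitions above can likewise be sketched as record types. Field and class names are illustrative assumptions, not the source's implementation.

```python
# Illustrative encodings of a range edge R_{i-e} := (d, v, j, σ²) and a
# range measurement Φ_{p,e,q,f} := d, as defined above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RangeEdge:
    uas_id: int    # i: owner of the pose graph containing this edge
    edge_id: int   # e: references the associated delta-pose edge E_{i-e}
    d: float       # measured distance to the other UAS
    v: int         # identifier of the other UAS, U_v
    j: int         # index of U_v's delta-pose edge E_{v-j}
    var_d: float   # σ²: variance of the distance measurement

@dataclass
class RangeMeasurement:
    p: int                     # first UAS's identifier
    e: int                     # first UAS's delta-pose edge index
    q: int                     # second UAS's identifier
    f: int                     # second UAS's delta-pose edge index
    d: float                   # distance between U_p and U_q
    t: Optional[float] = None  # optional time stamp

# R1-9 on U1, recording distance 41.7 to U2 at U2's edge E2-8,
# together with the measurement Φ_{1,9,2,8} (values are illustrative).
r = RangeEdge(uas_id=1, edge_id=9, d=41.7, v=2, j=8, var_d=0.25)
m = RangeMeasurement(p=1, e=9, q=2, f=8, d=41.7)
```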
Depending on the range technology, determining a range measurement may occur in various ways. Some technologies use a short message exchange/conversation between two UASs, e.g., a first UAS sends a message to a second UAS, which responds to the first UAS and, based on the response from the second UAS, the first UAS is able to determine the distance to the second UAS. The first UAS may then transmit the range to the second UAS. In general, although taking a range measurement and transmitting a range measurement to another UAS do take some time, the time for such actions (milliseconds or less) is orders of magnitude smaller than the time over which meaningful pose change may occur. Because of this order-of-magnitude disparity, the time necessary to take a range measurement and/or transmit a message may be inconsequential and may be ignored, thereby allowing such actions to be treated as instantaneous relative to back-end pose graph optimization.
A UAS Up may obtain the distance for a range measurement d to UAS Uq using any one of the methods disclosed herein. In conjunction with obtaining the distance d to Uq, Up may determine the current edge index (i.e., the highest edge index) for Uq, e.g., by sending a message to Uq to request Uq's current edge index. When Up has obtained the distance d to Uq and has received Uq's current edge index f, then Up may send a range message Φp,e+1,q,f+1 to Uq. In conjunction with sending Φp,e+1,q,f+1 to Uq, Up may perform a reset operation by adding a delta-pose edge Ep-e+1 (and associated node) to its back-end pose graph, where Δ comprises Up's change in x position, change in y position, and change in yaw angle ψ since Ep-e; and σ2 comprises the respective variances for the three components of Δ. Additionally, in conjunction with adding edge Ep-e+1, Up may use Φp,e+1,q,f+1 to add a range edge Rp-e+1 to its back-end pose graph. This added range edge provides another constraint to improve pose graph optimization. In general, variance σ2 in range edge Rp-e+1 is dependent on the range hardware (and/or other aspects of the range technology) and will be small relative to the range distance d.
When Uq receives Φp,e+1,q,f+1 from Up, Uq will perform a reset operation similar to Up's reset operation. Uq may add edge Eq-f+1 (and associated node) to its back-end pose graph, where Δ comprises Uq's change in x position, change in y position, and change in yaw angle ψ since Eq-f; and σ2 comprises the respective variances for the three components of Δ. Additionally, in conjunction with adding edge Eq-f+1, Uq may use Φp,e+1,q,f+1 to add a range edge Rq-f+1 to its back-end for use in optimizing Uq's back-end pose graph.
This description is one potential embodiment for two UASs Up and Uq to share a range measurement and perform a coordinated reset. This approach assumes that the time necessary to send and receive messages is sufficiently small (possibly negligible) so that the message lag time does not substantially detract from the value of the sharing of edges and ranges.
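The two-sided coordinated reset described above can be sketched end to end. This is a minimal, self-contained simulation under stated assumptions: the message transport is elided (the range message Φ is passed as an in-memory tuple), the front-end EKF is a toy stand-in, and all class and function names are hypothetical.

```python
# Sketch of the coordinated reset between Up and Uq: Up measures range d,
# obtains Uq's current edge index f, forms Φ_{p,e+1,q,f+1}, and both UASs
# add a delta-pose edge and a range edge while zeroing their front ends.

class FrontEnd:
    """Toy stand-in for the front-end EKF: accumulates delta pose since the
    previous reset and zeroes out when published to the back end."""
    def __init__(self):
        self.delta = [0.0, 0.0, 0.0]   # Δx, Δy, Δψ since the last reset
        self.cov = [0.0, 0.0, 0.0]     # diagonal-only covariance

    def publish_and_zero(self):
        delta, cov = self.delta, self.cov
        self.delta, self.cov = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
        return delta, cov

class UAS:
    def __init__(self, uas_id):
        self.uas_id = uas_id
        self.front_end = FrontEnd()
        self.delta_edges = []          # back-end delta-pose edges
        self.range_edges = []          # back-end range edges

    @property
    def current_edge_index(self):      # highest delta-pose edge index so far
        return len(self.delta_edges)

    def reset(self):
        """Publish the front end's delta pose to the back end as a new edge."""
        delta, cov = self.front_end.publish_and_zero()
        self.delta_edges.append((self.current_edge_index + 1, delta, cov))

def coordinated_reset(up, uq, d, var_d):
    """Up has measured distance d to Uq; both perform a simultaneous reset."""
    e, f = up.current_edge_index, uq.current_edge_index
    msg = (up.uas_id, e + 1, uq.uas_id, f + 1, d)   # Φ_{p,e+1,q,f+1}
    up.reset()                                       # adds E_{p-(e+1)}
    up.range_edges.append((e + 1, d, uq.uas_id, f + 1, var_d))
    # Uq receives msg and mirrors the operation:
    uq.reset()                                       # adds E_{q-(f+1)}
    uq.range_edges.append((f + 1, d, up.uas_id, e + 1, var_d))
    return msg

u1, u2 = UAS(1), UAS(2)
u1.front_end.delta = [12.0, 3.0, 0.1]
u2.front_end.delta = [-5.0, 8.0, -0.2]
coordinated_reset(u1, u2, d=41.7, var_d=0.25)
```

After the call, each UAS holds one new delta-pose edge, one new range edge referencing the other UAS's matching edge index, and a zeroed front end, mirroring the reset semantics described above.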
Many implementations may be used for UASs to share edge and/or range information. In one embodiment, each UAS may iteratively (or according to some other pattern) exchange range and edge information with each other UAS in the swarm. In another embodiment, a UAS Up may send a request message to Uq, requesting that Uq send to Up any edge and/or range information that Uq has but Up does not have. This may be accomplished, for example, by Up sending to Uq, in Up's request, the highest edge index of which Up is aware for each UAS in the swarm (assuming that Up has lower-indexed edges). Upon receipt of Up's request, Uq may check its records and, upon a determination that Uq has edge information that Up does not have, Uq may respond by sending to Up the edge information that Up does not have. Similarly, Up may send to Uq a list or summary of all the range measurements of which Up is aware for any pair of UASs in the swarm, and Uq may respond by sending to Up any range measurements of which Uq is aware but Up is not.
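The "send me what I'm missing" exchange above can be sketched as a simple function run by Uq when it receives Up's request. The data layout and function name are illustrative assumptions.

```python
# Sketch: Uq computes which delta-pose edges to send back, given the
# highest edge index Up reports knowing for each UAS in the swarm.

def edges_to_send(local_edges, peer_highest_index):
    """local_edges: {uas_id: {edge_id: edge_payload}} held by Uq.
    peer_highest_index: {uas_id: highest edge_id known to Up}.
    Returns only the edges Up is missing."""
    reply = {}
    for uas_id, edges in local_edges.items():
        known = peer_highest_index.get(uas_id, 0)
        missing = {eid: e for eid, e in edges.items() if eid > known}
        if missing:
            reply[uas_id] = missing
    return reply

# Uq knows edges E1-1..E1-3 and E2-1..E2-2; Up reports knowing up to
# E1-2 and E2-2, so only E1-3 needs to be sent.
uq_edges = {1: {1: "E1-1", 2: "E1-2", 3: "E1-3"},
            2: {1: "E2-1", 2: "E2-2"}}
up_known = {1: 2, 2: 2}
print(edges_to_send(uq_edges, up_known))  # {1: {3: 'E1-3'}}
```

The same pattern extends to range measurements by keying on (p, e, q, f) tuples instead of per-UAS edge indices.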
In one embodiment, UASs may attempt communications with each other on a set schedule, e.g., with a frequency of at least one range measurement between each unique pair of UASs per time period. For example, the UASs in a swarm may attempt, for each unique pair of UASs, to share a range measurement and perform a coordinated reset at least once every 10.0 seconds. In other embodiments, UASs may be configured to share information and perform coordinated resets as frequently as possible, although this may be limited by the available communications bandwidth. In another embodiment, the frequency with which UASs share range measurements and perform coordinated resets may depend on perceived pose graph accuracy, known environmental conditions, or on any other factor that may affect the need or ability to share edge and/or range measurement information. In some embodiments, information about a swarm's edges and range measurements may be shared in conjunction with sharing of a new range measurement (and associated coordinated reset) between two UASs. In other embodiments, information about a swarm's edges and range measurements may be shared independent of sharing of a new range measurement (and associated coordinated reset) between two UASs. In other embodiments, communications may be broadcast instead of limited to pairs of UASs in a swarm.
In some embodiments, each UAS may maintain or attempt to maintain, for each other UAS in a swarm, a complete knowledge of every other UAS's back-end pose graph, as well as a complete knowledge of all range measurements between swarm members. Although a subset of swarm edges and range measurements may provide some benefit, the greatest benefit will be realized when a UAS is able to use all swarm edges and range measurements to optimize that UAS's back-end pose graph. To maximize each UAS's knowledge of a swarm's edges and range measurements, UAS communications may comprise sharing edge and range measurement information, and updating when new (or previously not received) edge and/or range measurement information is received.
Although in some embodiments UASs in a swarm may each attempt to obtain a complete knowledge of all edges and range measurements for the entire swarm, this system may be de-centralized because all UASs perform similar behaviors and have similar roles. As is known in the art, centralized or partially centralized implementations may also be designed and implemented. In general, in a decentralized implementation, each UAS may perform its own pose graph (global pose) optimization. In some centralized or partially centralized implementations, one or more designated UASs may perform some pose graph optimization on behalf of one or more other UASs in the swarm.
The time points t1-t12 (321-332) in the respective flight paths 306 and 307 of UAS U1 305 and UAS U2 310 show the times at which the respective UASs performed reset steps, i.e., passing delta pose and associated covariance to the back end and zeroing out the front end's delta pose and covariance. At times t4 (324), t7 (327), and t12 (332), U1 and U2 share measured range distances Φ1,2,2,3,d (341), Φ1,5,2,4,d (342), and Φ1,8,2,7,d (343), respectively.
Global Pose Optimization
It should be noted that, in general, for a first UAS U1 to use a range measurement to a second UAS U2 for optimization of U1's back-end pose graph, it is necessary for U1 to have knowledge (or at least an estimate) of the relative spatial relationship between at least one node in U1's pose graph and one node in U2's pose graph. This may be referred to as an "anchor." U2 may provide this information to U1 through the inter-UAS message-sharing communications described herein.
In one embodiment, in which back-end information (e.g., global pose) is represented as a pose graph, a UAS may be configured to optimize its pose graph to improve its accuracy and the accuracy of the UAS's estimated global pose. In theory, as additional information or constraints are added to a pose graph, or to a pose graph optimization model or algorithm, the accuracy of the pose graph's estimated global pose improves. In general, optimizing a pose graph is a nonlinear unconstrained optimization problem, with features specific to pose graphs. Many approaches and tools are known and available for nonlinear unconstrained optimization. One such tool is GTSAM, a software library that includes tools and solutions for pose graph optimization, and GTSAM is one tool that may be used to optimize the pose graph as described herein. As of Apr. 7, 2021, GTSAM is available at least at https://gtsam.org. See also https://roboticsconference.org/docs/keynote-TestOfTime-DellaertKaess.pdf.
In some embodiments, global pose may be modeled, predicted, and optimized using a model or approach other than a pose graph. In general, this type of problem is known as a nonlinear unconstrained optimization problem. This genre of problems and models is well known and well researched. General nonlinear unconstrained optimization solvers include, but are not limited to, Levenberg-Marquardt, Google's Ceres Solver, and many others.
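To make the nonlinear least-squares framing concrete, consider a toy one-dimensional version of the problem (this example is illustrative, not the source's algorithm, and the numbers are invented): one node fixed at x=0, one free node b, an odometry constraint (Δ=10, σ²=1.0), and a range constraint to a beacon at x=25 (d=14, σ²=0.25). The optimum balances the two residuals weighted by their inverse variances; a real system would use Gauss-Newton or Levenberg-Marquardt rather than the plain gradient descent sketched here.

```python
# Minimize  (b - 10)²/1.0  +  ((25 - b) - 14)²/0.25  by gradient descent.
# The closed-form optimum is b = 10.8: the tighter range constraint pulls
# the estimate away from the pure-odometry answer b = 10.

def cost_grad(b):
    r_odo = b - 10.0            # odometry residual, variance 1.0
    r_rng = (25.0 - b) - 14.0   # range residual, variance 0.25
    cost = r_odo**2 / 1.0 + r_rng**2 / 0.25
    grad = 2 * r_odo / 1.0 + 2 * r_rng * (-1.0) / 0.25
    return cost, grad

b = 10.0                        # initialize at the odometry estimate
for _ in range(200):
    _, g = cost_grad(b)
    b -= 0.05 * g               # fixed step size, adequate for this sketch
print(round(b, 3))              # converges near 10.8
```

Back-end pose graph optimization is the multi-dimensional analogue: many pose variables, with delta-pose and range edges contributing residual terms weighted by their covariances.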
Although the disclosure herein focuses on unmanned aircraft or other vehicles, the systems and methods disclosed herein could be applied for manned or partially manned aircraft or vehicles.
Number | Date | Country
---|---|---
63008462 | Apr 2020 | US