Embodiments herein generally relate to the field of autonomous aerial vehicles, and, more particularly, to localization for autonomous micro-aerial vehicles.
There is fast-growing industrial demand for small unmanned aerial vehicles (UAVs) for purposes of autonomous exploration and inspection. The compact form factor, ease of control and high mobility of these vehicles make them well suited for many tasks that are difficult for humans due to limited space or potential danger. However, this also requires that the UAV system be robust enough to handle multiple tasks in challenging situations, such as low-light, GPS-denied, cluttered or geometrically under-constrained environments.
To achieve robustness, state estimation algorithms must produce high-quality results in challenging situations. A common solution to this problem is to increase the redundancy of the sensing system. A diverse set of sensors tends to capture more useful information, especially if the sensors are of different modalities. However, sensor redundancy creates new problems of its own, such as synchronization issues and payload constraints. Sensing system design is therefore a trade-off between redundancy and payload cost.
Popular solutions for robot localization can be divided into two categories: filtering-based approaches and optimization-based approaches. Filtering-based approaches (e.g. Bayesian filtering) infer the most likely state from available measurements and uncertainties, while optimization-based approaches attempt to minimize reprojection error to find the optimal states. As an objective of this invention is to specify an online algorithm capable of accurately localizing a UAV with limited onboard computing power, the filtering approach is preferred, as it is usually faster than iterative optimization procedures.
Current localization methods, while quite robust, are not always able to perform accurately throughout the entire configuration space in all environments. The main constraint on localization is often the geometry of the map to which the robot is being localized. Some parts of a given environment may not offer enough planar surfaces to fully constrain the state estimation. When localization fails, the robot is not only incapable of performing its mission, but is also exposed to significant flight risks, such as crashes or flyaways. To increase the robustness of the system as a whole, the present invention describes a reactive planning layer that prevents the robot from entering regions of space where localization failure is likely.
Localizability of space can be determined from the geometric constraints of the map that the robot uses to localize. Specifically, localizability can be estimated from the presence of planes in a given environment: each plane constrains the pose estimate along its normal. Localizability can therefore be estimated for a given pose in a map by computing a surface normal for every point in the map, and then ray tracing from the given pose to determine the number of planes visible from that pose. From the normals visible from that position, the direction that is least constrained for localization can be calculated using a singular value decomposition. Additional normalization can be applied to the localizability estimate so that it gives consistent and bounded predictions of localization performance.
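By way of illustration, the per-point normal computation can be performed with a local plane fit. The following Python/numpy sketch is not specified by the disclosure itself; the k-nearest-neighbor PCA plane fit and the function name are illustrative assumptions.

```python
import numpy as np

def estimate_normals(points, k=20):
    """Estimate a unit surface normal for every point in a point-cloud map.

    Illustrative approach (not fixed by the disclosure): for each point, fit
    a plane to its k nearest neighbors via PCA; the eigenvector with the
    smallest eigenvalue of the local covariance approximates the surface
    normal. `points` is an (N, 3) array.
    """
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        # Brute-force k-nearest neighbors (a KD-tree would be used at scale).
        d2 = np.sum((points - p) ** 2, axis=1)
        nbrs = points[np.argsort(d2)[:k]]
        cov = np.cov(nbrs.T)                    # 3x3 local covariance
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
        normals[i] = eigvecs[:, 0]              # smallest-eigenvalue direction
    return normals
```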
The present invention uses an implementation of an ESKF-based localization algorithm that is suitable for different sensing systems; for example, cameras or 3D LiDAR can be easily integrated into this framework. The present invention further describes a novel method for real-time localizability estimation in 3D. This model is demonstrated by observing a correlation between predicted localizability and real pose estimation errors.
The ESKF filtering procedures and the localizability estimation algorithm are discussed in detail below.
The robot localization system of the present invention comprises a quadcopter having an Inertial Measurement Unit (IMU) to capture fast motion, and a rotating laser scanner to scan the 3D environment and provide a global reference for localization. While the invention is described in terms of a quadcopter, it should be realized that the system may be applicable to any type of robot, for example, any UAV. This system is light and thus does not overburden the robot. Additionally, it can be packaged and used as a complete module, allowing easy transfer between platforms. The system is capable of capturing fast motion as well as maintaining a global reference to avoid drift. The laser scanner (i.e., a LIDAR unit) provides accurate range data even in poorly illuminated surroundings, where visual approaches would otherwise fail. The system includes an algorithm to predict localization performance for any given location in a map, to avoid positioning the robot in areas where localization is likely to fail.
Given a pre-built map and the initial pose of the robot, its current pose, velocity and IMU biases are estimated as soon as a new set of range data is available. Localization is achieved by combining an Error State Kalman Filter (ESKF) with a Gaussian Particle Filter (GPF), as shown in the accompanying figure.
Assuming a 3D point cloud map has been built, the objective is to estimate the robot's position, orientation and velocity from IMU and laser measurements. First, the IMU acceleration and angular velocity are integrated numerically to update the previous state of the robot, creating a prior belief of the robot state. However, this prior belief is subject to drift, since IMU measurements are always corrupted by noise and biases; during the prediction step, the uncertainty increases. Second, when a laser range measurement, which usually arrives at a lower frequency than the IMU data, is available, the range information is used to correct the prior belief, resulting in a more accurate posterior estimate. The GPF is used to extract pose information from the raw laser range data (point cloud data in this case). In the correction step, a set of particles is drawn according to the prior pose belief, and a likelihood is computed for each particle and treated as a weight. A weighted mean and covariance are then computed to form the pose posterior. Note that laser measurements contain only pose information, while pose occupies only part of the state vector. This partial posterior is therefore used to recover a pseudo-pose measurement, which is then used to update the full state vector through a normal Kalman filter (KF) update step.
An error state representation, compared to a nominal state representation, has several benefits. First, error states are always close to zero, making it valid to approximate $R(\delta\theta)$ as $I + [\delta\theta]_\times$, where $[\delta\theta]_\times$ denotes the skew-symmetric matrix of $\delta\theta$. This approximation makes the derivatives of the exponential map easy to compute. Second, in an error state, the rotation error is represented as a 3D vector, which is more intuitive than other rotation representations such as a matrix or quaternion. Additionally, a 3D vector fits naturally in a state vector, while a rotation matrix does not fit and a quaternion requires additional effort to propagate its uncertainty. Finally, as the rotation error is always close to zero, it stays far from any singular configuration.
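As an illustration of this first-order approximation (not part of the disclosure itself), the following Python/numpy sketch compares $I + [\delta\theta]_\times$ with the exact exponential map for a small rotation error:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot_exact(dtheta):
    """Exact exponential map (Rodrigues' formula) for a rotation vector."""
    angle = np.linalg.norm(dtheta)
    if angle < 1e-12:
        return np.eye(3)
    K = skew(dtheta / angle)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def rot_small_angle(dtheta):
    """First-order approximation R(dtheta) ~= I + [dtheta]_x for error states."""
    return np.eye(3) + skew(dtheta)

# The error angle stays near zero, so the approximation error is tiny:
dtheta = np.array([0.01, -0.02, 0.005])
print(np.abs(rot_exact(dtheta) - rot_small_angle(dtheta)).max())  # on the order of 1e-4
```

The gap grows quadratically with the error angle, which is why the approximation is reserved for the always-small error state.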
The state vector of the system contains the robot's position, velocity and orientation, together with the accelerometer and gyroscope biases, yielding a 15-dimensional error state.
The true state, predicted state and error state are represented as $x$, $\hat{x}$ and $\delta x$ respectively, and satisfy equation (1):

$$x = \hat{x} \oplus \delta x \qquad (1)$$

where $\oplus$ indicates a generic composition. Note that in the error state, the angle error $\delta\theta$ is represented by a $3 \times 1$ vector as a minimal representation of the rotation error.
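For concreteness, the composition of equation (1) can be sketched as follows; the error-state ordering $[\delta p, \delta v, \delta\theta, \delta a_b, \delta\omega_b]$ is an assumption made for illustration (the disclosure fixes only a 15-dimensional error state with a $3 \times 1$ rotation error):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x with skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def compose(x_hat, dx):
    """Generic composition x = x_hat (+) dx of equation (1).

    Assumed layout: position, velocity and biases compose additively; the
    rotation error composes multiplicatively via the small-angle rotation
    I + [dtheta]_x.
    """
    dp, dv, dtheta = dx[0:3], dx[3:6], dx[6:9]
    d_ab, d_wb = dx[9:12], dx[12:15]
    return {
        "p": x_hat["p"] + dp,
        "v": x_hat["v"] + dv,
        "R": x_hat["R"] @ (np.eye(3) + skew(dtheta)),  # rotation composed on the right
        "ab": x_hat["ab"] + d_ab,                      # accelerometer bias
        "wb": x_hat["wb"] + d_wb,                      # gyroscope bias
    }
```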
The error state dynamics are derived from the nominal state dynamics. The true state $x$ evolves according to the standard IMU kinematics of equation (2):

$$\dot{p} = v, \qquad \dot{v} = R\,(a_m - a_b - a_n) + g, \qquad \dot{R} = R\,[\omega_m - \omega_b - \omega_n]_\times, \qquad \dot{a}_b = a_w, \qquad \dot{\omega}_b = \omega_w \qquad (2)$$

and the error state dynamics of equation (3) follow by linearizing (2) about the estimated state $\hat{x}$. Here $a_m, \omega_m$ are the acceleration and angular velocity measurements, $a_n, \omega_n$ denote the accelerometer and gyroscope noise, and $a_w, \omega_w$ are the Gaussian random walk noises of the biases.
The propagation step contains the estimated state propagation and the error covariance propagation. The estimated state is propagated through direct Euler integration of equation (2), and the error covariance is propagated by linearizing the error state dynamics. The discrete propagation rules are shown in equations (4) and (5), respectively.
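Since equations (4) and (5) are not reproduced in this excerpt, the sketch below uses the standard discrete ESKF propagation forms as an assumption: direct Euler integration of the nominal state, and $\Sigma \leftarrow F \Sigma F^T + Q$ for the covariance, with conventional Jacobian blocks.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x with skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def propagate(x, P, a_m, w_m, Q, dt):
    """One Euler step of nominal-state and covariance propagation (assumed forms).

    `x` holds position p, velocity v, rotation matrix R, accelerometer bias
    ab and gyroscope bias wb; P and Q are 15x15 covariances, with error-state
    layout [dp, dv, dtheta, d_ab, d_wb].
    """
    g = np.array([0.0, 0.0, -9.81])   # assumed gravity convention
    R = x["R"]
    a = a_m - x["ab"]                 # bias-corrected specific force
    w = w_m - x["wb"]                 # bias-corrected angular rate

    # Covariance Jacobian F = I + A*dt from the linearized error dynamics,
    # built from the pre-update rotation.
    F = np.eye(15)
    F[0:3, 3:6] = np.eye(3) * dt             # position error <- velocity error
    F[3:6, 6:9] = -R @ skew(a) * dt          # velocity error <- attitude error
    F[3:6, 9:12] = -R * dt                   # velocity error <- accel bias error
    F[6:9, 6:9] = np.eye(3) - skew(w * dt)   # attitude error transport
    F[6:9, 12:15] = -np.eye(3) * dt          # attitude error <- gyro bias error

    # Nominal state: direct Euler integration of the kinematics (2).
    x["p"] = x["p"] + x["v"] * dt
    x["v"] = x["v"] + (R @ a + g) * dt
    x["R"] = R @ (np.eye(3) + skew(w * dt))  # small-angle increment
    # Bias means stay constant under the random-walk model.

    P = F @ P @ F.T + Q                      # uncertainty grows in prediction
    return x, P
```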
In this step, a pseudo-pose error measurement $\delta y \in \mathbb{R}^6$ is used to update the full error state vector $\delta x \in \mathbb{R}^{15}$ in a normal KF fashion. $\delta y$ is called a pseudo-measurement because it is not acquired from a sensor directly, but recovered using a GPF.
Observation Model: with the error state representation, the observation model is simply linear, as in equation (6): $\delta y = H\,\delta x + n$, where $H$ selects the pose components of the error state and $n$ is the measurement noise.
Recover Pseudo-Measurement: The pseudo-measurement $\delta y$ can be thought of as being measured by an imaginary sensor. In reality, it is computed by the following steps: first, based on the pose prior $\delta\bar{x}_{t+1}$ with covariance $\bar{\Sigma}_{t+1}$, the GPF produces a pose posterior $\delta x_{t+1}^m$ with covariance $\Sigma_{t+1}^m$. The pseudo-measurement covariance and the pseudo-measurement itself are then recovered as

$$C_{t+1}^m = \left((\Sigma_{t+1}^m)^{-1} - (\bar{\Sigma}_{t+1})^{-1}\right)^{-1} \qquad (7)$$

$$\delta y_{t+1}^m = (K^m)^{-1}\left(\delta x_{t+1}^m - \delta\bar{x}_{t+1}\right) + \delta\bar{x}_{t+1} \qquad (8)$$

where $K^m = \bar{\Sigma}_{t+1}\left(\bar{\Sigma}_{t+1} + C_{t+1}^m\right)^{-1}$ is the Kalman gain of the pose-only update.
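Under the reconstruction above, the recovery step reduces to a few matrix operations. The following numpy sketch assumes the prior and the GPF posterior are given as mean/covariance pairs over the 6-D pose error:

```python
import numpy as np

def recover_pseudo_measurement(dx_prior, P_prior, dx_post, P_post):
    """Recover the pseudo pose measurement and its covariance (eqs. (7)-(8)).

    dx_prior/P_prior: 6-D pose error prior (mean, covariance) before the GPF.
    dx_post/P_post:   pose error posterior (mean, covariance) from the GPF.
    Returns (dy, C): a measurement and noise covariance such that a standard
    Kalman update of the prior with (dy, C) reproduces the GPF posterior.
    """
    # Measurement covariance: C = (P_post^-1 - P_prior^-1)^-1, equation (7).
    C = np.linalg.inv(np.linalg.inv(P_post) - np.linalg.inv(P_prior))
    # Pose-only Kalman gain: K_m = P_prior (P_prior + C)^-1.
    K_m = P_prior @ np.linalg.inv(P_prior + C)
    # Invert the update to get the measurement that would have produced it,
    # equation (8).
    dy = np.linalg.inv(K_m) @ (dx_post - dx_prior) + dx_prior
    return dy, C
```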
Correction: Once the pseudo-measurement is computed, it is used to update the full error state by a normal KF update. The Kalman gain is given by equation (9):

$$K = \bar{\Sigma}_{t+1} H^T \left(H \bar{\Sigma}_{t+1} H^T + C_{t+1}^m\right)^{-1} \qquad (9)$$

Note that $K \in \mathbb{R}^{15 \times 6}$ is different from $K^m \in \mathbb{R}^{6 \times 6}$. The full error state posterior and covariance are updated as shown in equations (10) and (11):

$$\delta x_{t+1} = K\left(\delta y_{t+1} - H\,\delta\bar{x}_{t+1}\right) \qquad (10)$$

$$\Sigma_{t+1} = \left(I_{15} - K H\right) \bar{\Sigma}_{t+1} \qquad (11)$$
The updated errors are integrated into the nominal state by composing the error state with the estimated state, following equation (1), as shown in equation (12):

$$\hat{x}_{t+1} \leftarrow \hat{x}_{t+1} \oplus \delta x_{t+1} \qquad (12)$$
Before the next iteration, the error states are set to zero.
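The correction, injection and reset steps of equations (9) through (12) can then be sketched as follows; the observation matrix $H$ assumes the illustrative state ordering used earlier, with the pose occupying the position and rotation-error blocks:

```python
import numpy as np

def eskf_correct(dx_bar, P_bar, dy, C, x_hat, compose):
    """Full-state correction with the recovered pseudo-measurement.

    Assumed error-state layout [dp, dv, dtheta, d_ab, d_wb]: H is then a
    linear selector of the 6-D pose block (position and rotation error).
    `compose` is the (+) operation of equation (1).
    """
    H = np.zeros((6, 15))
    H[0:3, 0:3] = np.eye(3)   # position error
    H[3:6, 6:9] = np.eye(3)   # rotation error
    # Kalman gain, equation (9).
    K = P_bar @ H.T @ np.linalg.inv(H @ P_bar @ H.T + C)
    # Error state posterior and covariance, equations (10) and (11).
    dx = K @ (dy - H @ dx_bar)
    P = (np.eye(15) - K @ H) @ P_bar
    # Inject the error into the nominal state, equation (12), then reset it
    # to zero before the next iteration.
    x_hat = compose(x_hat, dx)
    dx = np.zeros(15)
    return x_hat, dx, P
```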
The GPF portion of the localization process takes as input the LIDAR data and the prior belief from the ESKF motion model. A set of candidate poses is sampled from the prior belief (which is a Gaussian probability distribution) and re-weighted, resulting in a Gaussian distribution representing the robot's position. Because the robot's position is represented as a Gaussian distribution, the robot has a higher likelihood of being at a position close to the mean of the distribution. For each sampled position, a weight is computed by matching the LIDAR measurement associated with that position to the map: the better the match, the higher the weight assigned to the position. A pseudo-measurement is derived from the resulting weighted distribution and is used to correct the state in the ESKF portion of the localization process.
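A minimal sketch of this weighted-particle step follows; `log_likelihood` stands in for the scan-to-map matching score, which this excerpt does not specify:

```python
import numpy as np

def gpf_pose_update(mean, cov, log_likelihood, n_particles=500, rng=None):
    """Gaussian Particle Filter update over the 6-D pose (illustrative sketch).

    `log_likelihood(pose)` scores how well the LIDAR scan, as seen from
    `pose`, matches the map; it is a stand-in, not a function defined by
    the disclosure.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw particles from the prior pose belief.
    particles = rng.multivariate_normal(mean, cov, size=n_particles)
    # Weight each particle by the scan-to-map match quality.
    logw = np.array([log_likelihood(p) for p in particles])
    w = np.exp(logw - logw.max())          # stabilize before normalizing
    w /= w.sum()
    # Fit a Gaussian to the weighted particles: the pose posterior.
    post_mean = w @ particles
    centered = particles - post_mean
    post_cov = (centered * w[:, None]).T @ centered
    return post_mean, post_cov
```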
Given a map of the environment in the form of a point cloud, it is desirable to determine whether localization will consistently produce accurate results when the robot is in a given configuration. To accomplish this, the localizability of a given pose in the map is estimated in order to predict localization performance. Localizability is a measure of the geometric constraints a map makes available to a range sensor from a given pose, and indicates whether or not the robot will be able to localize from that position.
Regions of high localizability should correspond to low state estimation errors, and regions of low localizability should correspond to higher state estimation errors. For each point in the map of the environment, the value of the localizability metric is checked. The localizability metric is based on the distribution of surface normals at the points where the laser rays hit the environment: for each laser ray that hits a surface, the normal direction with respect to the sensor at that point is determined, as shown in the accompanying figure.
Each valid measurement from the sensor provides a constraint on the robot's pose. Specifically, by locally approximating a surface as a plane, a measurement point $p_i$ lying on the plane is constrained by equation (13).
$$n_i^T \left(p_i - p_{i,0}\right) = 0 \qquad (13)$$

where $n_i$ is the surface normal and $p_{i,0}$ is a point on the plane. Additionally, the sensor measurement provides the offset between the robot's position $x$ and $p_i$ as $x + r_i = p_i$, where $r_i$ is the ray vector of this measurement. By substitution, equation (13) becomes equation (14):

$$n_i^T \left(x + r_i - p_{i,0}\right) = 0 \quad\Longrightarrow\quad n_i^T x = n_i^T \left(p_{i,0} - r_i\right) = d_i \qquad (14)$$
where $d_i$ is a constant scalar. Combining all the constraints imposed by a set of measurement points results in equation (15):

$$N x = d \qquad (15)$$

where the rows of $N$ are the surface normals $n_i^T$ and $d$ stacks the corresponding constants $d_i$.
To accurately localize the robot, the sensor needs to be able to adequately constrain its pose in the three translational dimensions. The matrix $N$ describes the set of constraints observable from the given pose. Performing a principal component analysis (PCA) on the row vectors of $N$ provides an orthonormal basis spanning the space described by the constraints from the surface normals. Furthermore, the singular values of $N$ can be examined through its singular value decomposition $N = U \Sigma V^T$, where $\Sigma$ describes the cumulative strength of the constraints along each corresponding basis vector. Theoretically, localization should be possible as long as all three singular values are non-zero. However, this proves unreliable in practice, so localizability is calculated as the minimum singular value of $N$; more specifically, $L = \min(\mathrm{diag}(\Sigma))$. This sets localizability equal to the strength of the constraints in the minimally constrained direction, based on the visibility of surface normals. This analysis also yields the minimally constrained direction itself, as the singular vector corresponding to the minimal singular value.
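This computation is compact in code. In the following numpy sketch, the constraint matrix is assembled from the visible surface normals, and the corridor-like test data (many walls along x and y, little along z) is hypothetical:

```python
import numpy as np

def localizability(normals):
    """Localizability of a pose from the surface normals visible from it.

    `normals` is a (k, 3) array of unit normals, one per simulated ray hit;
    stacking them row-wise forms the constraint matrix N of equation (15).
    Returns L = min singular value of N (the constraint strength in the
    weakest direction) and the corresponding least-constrained direction.
    """
    N = np.asarray(normals)                    # rows are n_i^T
    U, S, Vt = np.linalg.svd(N, full_matrices=False)
    L = S.min()                                # L = min(diag(Sigma))
    weakest_dir = Vt[np.argmin(S)]             # singular vector of min sigma
    return L, weakest_dir

# Hypothetical corridor-like case: walls constrain x and y well, but almost
# nothing constrains z, so z is the least constrained direction.
normals = np.array([[1, 0, 0]] * 30 + [[0, 1, 0]] * 30 + [[0, 0, 1]] * 2,
                   dtype=float)
L, d = localizability(normals)
print(L, d)   # small L; d close to +/- [0, 0, 1]
```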
The described method and system represent a robust localization approach that fuses IMU and laser range data in an ESKF framework. The algorithm is robust in various environments with different characteristics and scales. Additionally, a new approach to modeling localizability and predicting localization performance based on environmental geometry has been described.
This application claims the benefit of U.S. Provisional Patent Application No. 62/495,863, filed Sep. 27, 2016.
This invention was made with government support under IIS1328930 awarded by the National Science Foundation. The government has certain rights in the invention.