This disclosure relates to navigation and, more particularly, to vision-aided inertial navigation.
In general, a Vision-aided Inertial Navigation System (VINS) fuses data from a camera and an Inertial Measurement Unit (IMU) to track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform. In this way, the VINS combines complementary sensing capabilities. For example, an IMU can accurately track dynamic motions over short time durations, while visual data can be used to estimate the pose displacement (up to scale) between consecutive views. For several reasons, VINS has gained popularity to address GPS-denied navigation.
In general, this disclosure describes techniques for reducing or eliminating estimator inconsistency in vision-aided inertial navigation systems (VINS). It is recognized herein that a significant cause of inconsistency can be the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and divergence. An Observability-Constrained VINS (OC-VINS) is described herein, which may enforce the unobservable directions of the system, thereby preventing one or more unobservable directions from erroneously being treated as observable after estimation, and thus avoiding spurious information gain and reducing inconsistency.
As used herein, an unobservable direction refers to a direction along which perturbations of the state cannot be detected from the input data provided by the sensors of the VINS. That is, an unobservable direction refers to a direction along which changes to the state of the VINS relative to one or more feature may be undetectable from the input data received from at least some of the sensors of the sensing platform. As one example, a rotation of the sensing system around a gravity vector may be undetectable from the input of a camera of the sensing system when feature rotation is coincident with the rotation of the sensing system. Similarly, translation of the sensing system may be undetectable when observed features are identically translated.
In addition, this disclosure presents a linear-complexity 3D inertial navigation algorithm that computes state estimates based on a variety of captured features, such as points, lines, planes or geometric shapes based on combinations thereof, such as crosses (i.e., perpendicular, intersecting line segments), sets of parallel line segments, and the like.
As one example, the algorithm is applied using both point and plane features observed from an input source, such as an RGBD camera. The navigational system's observability properties are described and it is proved that: (i) when observing a single plane feature of known direction, the IMU gyroscope bias is observable, and (ii) by observing at least a single point feature, as well as a single plane of known direction but not perpendicular to gravity, all degrees of freedom of the IMU-RGBD navigation system become observable, up to global translations. Next, based on the results of the observability analysis, a consistency-improved, observability-constrained (OC) extended Kalman filter (EKF)-based estimator for the IMU-RGBD camera navigation system is described. Finally, the superiority of the described algorithm is experimentally validated in comparison to alternative methods using urban scenes.
The techniques described herein are applicable to several variants of VINS, such as Visual Simultaneous Localization and Mapping (V-SLAM) as well as visual-inertial odometry using the Multi-state Constraint Kalman Filter (MSC-KF), or an inverse filter operating on a subset of or all image and IMU data. The proposed techniques for reducing inconsistency are extensively validated with simulation trials and real-world experimentation.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
FIGS. 2(a)-2(f) are plots associated with a first example simulation. In this example, the RMSE and NEES errors for orientation (a)-(b) and position (d)-(e) are plotted for all three filters, averaged per time step over 20 Monte Carlo trials.
FIGS. 3(a)-3(d) illustrate a set of plots associated with Simulation 2 including the average RMSE and NEES over 30 Monte-Carlo simulation trials for orientation (above) and position (below). Note that the OC-MSC-KF attained performance almost indistinguishable from that of the Ideal-MSC-KF.
FIG. 4(a) is a perspective diagram showing an experimental test bed that comprises a light-weight InterSense NavChip IMU and a Point Grey Chameleon camera.
FIG. 4(b) is a perspective diagram illustrating an AscTec Pelican on which the camera-IMU package was mounted during the indoor experiments.
FIGS. 5(a)-5(c) are a set of plots associated with Experiment 1 including the estimated 3D trajectory over the three traversals of the two floors of the building, along with the estimated positions of the persistent features.
FIGS. 6(a)-6(b) are a set of plots associated with Experiment 1 including a comparison of the estimated 3σ error bounds for attitude and position between Std-V-SLAM and OC-V-SLAM.
FIGS. 7(a) and 7(b) are a set of plots associated with Experiment 2 including the position (a) and orientation (b) uncertainties (3σ bounds) for the yaw angle and the y-axis, which demonstrate that the Std-MSC-KF gains spurious information about its orientation.
FIGS. 8(a) and 8(b) are another set of plots for Experiment 2: the 3D trajectory (a) and corresponding overhead (x-y) view (b).
FIGS. 9(a)-9(c) are perspective diagrams associated with Experiment 3.
FIGS. 10(a) and 10(b) are a set of plots for Experiment 3.
Estimator inconsistency can greatly affect vision-aided inertial navigation systems (VINS). As generally defined, a state estimator is "consistent" if the estimation errors are zero-mean and have covariance smaller than or equal to the one calculated by the filter. Estimator inconsistency can have a devastating effect, particularly in navigation applications, since both the current pose estimate and its uncertainty must be accurate in order to address tasks that depend on the localization solution, such as path planning. For nonlinear systems, several potential sources of inconsistency exist (e.g., motion-model mismatch in target tracking), and great care must be taken when designing an estimator to improve consistency.
Techniques for estimation are described that reduce or prohibit estimator inconsistency. For example, the estimation techniques may eliminate inconsistency due to spurious information gain, which arises from approximations incurred when applying linear estimation tools to nonlinear problems (i.e., when using linearized estimators such as the extended Kalman Filter (EKF)).
For example, the structure of the "true" and estimated systems is described below, and it is shown that for the true system four unobservable directions exist (i.e., 3-d.o.f. global translation and 1-d.o.f. rotation about the gravity vector), while the system employed for estimation purposes has only three unobservable directions (3-d.o.f. global translation). Further, it is recognized herein that a significant source of inconsistency in VINS is spurious information gained when orientation information from the image data and the IMU data is incorrectly projected along the direction corresponding to rotations about the gravity vector. An elegant and powerful estimator modification is described that reduces or explicitly prohibits this incorrect information gain. An estimator may, in accordance with the techniques described herein, apply a constrained estimation algorithm that computes the state estimates based on the IMU data and the image data while preventing projection of information from the image data and IMU data along at least one of the unobservable degrees of freedom. The techniques described herein may be applied in a variety of VINS domains (e.g., V-SLAM and the MSC-KF) when linearized estimators, such as the EKF, are used.
In one example, the observability properties of a linearized VINS model (i.e., the one whose Jacobians are evaluated at the true states) are described, and it is shown that such a model has four unobservable d.o.f., corresponding to 3-d.o.f. global translation and 1-d.o.f. global rotation about the gravity vector. Moreover, it is shown that when the estimated states are used for evaluating the Jacobians, as is the case for the EKF, the number of unobservable directions is reduced by one. In particular, the global rotation about the gravity vector becomes (erroneously) observable, allowing the estimator to gain spurious information and leading to inconsistency. These results confirm findings previously obtained using a different approach (i.e., the observability matrix), while additionally specifying the exact mathematical structure of the unobservable directions necessary for assessing the EKF's inconsistency.
To address these problems, modifications of the VINS EKF are described herein where, in one example, estimated Jacobians are updated so as to ensure that the number of unobservable directions is the same as when using the true Jacobians. In this manner, the global rotation about the gravity vector remains unobservable and the consistency of the VINS EKF is significantly improved.
Simulations and experimental results are described that demonstrate inconsistency in standard VINS approaches as well as validate the techniques described herein to show that the techniques improve consistency and reduce estimation errors as compared to conventional VINS. In addition, performance of the described techniques is illustrated experimentally using a miniature IMU and a small-size camera.
This disclosure describes example systems and measurement models, followed by an analysis of VINS inconsistency. The proposed estimator modifications are presented and subsequently validated both in simulation and experimentally.
VINS Estimator Description
An overview of the propagation and measurement models which govern the VINS is described. In one example, an EKF is employed for fusing the camera and IMU measurements to estimate the state of the system including the pose, velocity, and IMU biases, as well as the 3D positions of visual landmarks observed by the camera. One example utilizes two types of visual features in a VINS framework. The first are opportunistic features (OFs) that can be accurately and efficiently tracked across short image sequences (e.g., using KLT), but are not visually distinctive enough to be efficiently recognized when revisiting an area. OFs can be efficiently used to estimate the motion of the camera over short time horizons (i.e., using the MSC-KF), but they are not included in the state vector. The second are Persistent Features (PFs), which are typically much fewer in number, and can be reliably redetected when revisiting an area (e.g., SIFT keys). 3D coordinates of the PFs (e.g., identified points, lines, planes, or geometric shapes based on combinations thereof) are estimated and may be recorded, e.g., into a database, to construct a map of the area or environment in which the VINS is operating.
System State and Propagation Model
IMU 16 produces IMU data 18 indicative of a dynamic motion of VINS 10. IMU 16 may, for example, detect a current rate of acceleration using one or more accelerometers as VINS 10 is translated, and detect changes in rotational attributes like pitch, roll and yaw using one or more gyroscopes. IMU 16 produces IMU data 18 to specify the detected motion. Estimator 22 of processing unit 20 processes image data 14 and IMU data 18 to compute state estimates for the degrees of freedom of VINS 10 and, from the state estimates, computes position, orientation, speed, locations of observable features, a localized map, odometry or other higher order derivative information represented by VINS data 24.
In this example, {I} denotes the IMU frame of reference and {G} denotes the global frame of reference.
In one example, estimator 22 comprises an EKF that estimates the 3D IMU pose and linear velocity together with the time-varying IMU biases and a map of visual features 15. In one example, the filter state is the (16+3N)×1 vector:
where xs(t) is the 16×1 state of VINS 10, and xƒ(t) is the 3N×1 state of the feature map. The first component of the state of VINS 10 is IqG(t), which is the unit quaternion representing the orientation of the global frame {G} in the IMU frame, {I}, at time t. The frame {I} is attached to the IMU, while {G} is a local-vertical reference frame whose origin coincides with the initial IMU position. The state of VINS 10 also includes the position and velocity of {I} in {G}, denoted by the 3×1 vectors GpI(t) and GvI(t), respectively. The remaining components are the biases, bg(t) and ba(t), affecting the gyroscope and accelerometer measurements, which are modeled as random-walk processes driven by the zero-mean, white Gaussian noise nwg(t) and nwa(t), respectively.
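For illustration only, the following sketch shows one possible in-memory layout of the (16+3N)×1 filter state of (1), using the component ordering just described (quaternion, position, velocity, gyroscope bias, accelerometer bias, then feature positions). The helper name state_indices and the exact ordering are assumptions made for this example and are not mandated by the disclosure.

```python
import numpy as np

# Hypothetical index map for the (16 + 3N) x 1 filter state of (1).
def state_indices(num_features):
    idx = {
        "q_IG": slice(0, 4),    # unit quaternion: orientation of {G} in {I}
        "p_GI": slice(4, 7),    # IMU position in {G}
        "v_GI": slice(7, 10),   # IMU velocity in {G}
        "b_g":  slice(10, 13),  # gyroscope bias
        "b_a":  slice(13, 16),  # accelerometer bias
    }
    for i in range(num_features):
        # i-th persistent feature (PF) position in {G}
        idx[f"f_{i}"] = slice(16 + 3 * i, 19 + 3 * i)
    return idx

# Example: a state with N = 2 persistent features, identity attitude.
x = np.zeros(16 + 3 * 2)
x[state_indices(2)["q_IG"]] = [0.0, 0.0, 0.0, 1.0]
```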
In one example, the map state, xƒ, comprises the 3D coordinates of N PFs, Gfi, i=1, . . . , N, and grows as new PFs are observed. In one implementation, the VINS does not store OFs in the map. Instead, processing unit 20 of VINS 10 processes and marginalizes all OFs in real-time using the MSC-KF approach. An example continuous-time model which governs the state of VINS 10 is described below.
An example system model describing the time evolution of the state and applied by estimator 22 is represented as:
In these expressions, ω(t)=[ω1(t) ω2(t) ω3(t)]T is the rotational velocity of the IMU, expressed in {I}, Ga is the IMU acceleration expressed in {G}, and
The gyroscope and accelerometer measurements, ωm and am, are modeled as
ωm(t)=ω(t)+bg(t)+ng(t)
am(t)=C(IqG(t))(Ga(t)−Gg)+ba(t)+na(t),
where ng and na are zero-mean, white Gaussian noise processes, and Gg is the gravitational acceleration. The matrix C(q) is the rotation matrix corresponding to the quaternion q.
Linearizing at the current estimates and applying the expectation operator on both sides of (2)-(7), the state estimate propagation model is obtained as:
where ^a(t)=am(t)−^ba(t), and ^ω(t)=ωm(t)−^bg(t).
The (15+3N)×1 error-state vector is defined as
where {tilde over (x)}s(t) is the 15×1 error state corresponding to the sensing platform, and {tilde over (x)}ƒ(t) is the 3N×1 error state of the map. For the IMU position, velocity, biases, and the map, an additive error model is utilized (i.e., {tilde over (x)}=x−^x is the error in the estimate ^x of a quantity x). However, for the quaternion a multiplicative error model is employed. Specifically, the error between the quaternion q and its estimate ^q is the 3×1 angle-error vector, δθ, implicitly defined by the error quaternion
where δq describes the small rotation that causes the true and estimated attitude to coincide. This allows the attitude uncertainty to be represented by the 3×3 covariance matrix E[δΘδΘT], which is a minimal representation.
The linearized continuous-time error-state equation is
where 03N denotes the 3N×3N matrix of zeros. Here, n is the vector comprising the IMU measurement noise terms as well as the process noise driving the IMU biases, i.e.,
n=[ngT nwgT naT nwaT]T, (19)
while Fs is the continuous-time error-state transition matrix corresponding to the state of VINS 10, and Gs is the continuous-time input noise matrix, i.e.,
where I3 is the 3×3 identity matrix. The system noise is modeled as a zero-mean white Gaussian process with autocorrelation E[n(t)nT(τ)]=Qcδ(t−τ), where Qc depends on the IMU noise characteristics and is computed off-line.
Discrete-Time Implementation
The IMU signals ωm and am are sampled by processing unit 20 at a constant rate 1/δt, where δt=tk+1−tk. Upon receiving a new IMU measurement 18, the state estimate is propagated by estimator 22 using 4th-order Runge-Kutta numerical integration of (10)-(15). In order to derive the covariance propagation equation, the discrete-time state transition matrix, Φk, and the discrete-time system noise covariance matrix, Qk, are computed as
The covariance is then propagated as:
Pk+1|k=ΦkPk|kΦkT+Qk. (23)
In the above expression, and throughout this disclosure, Pi|j and ^xi|j are used to denote the estimates of the error-state covariance and state, respectively, at time-step i computed using measurements up to time-step j.
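As a minimal illustration of the covariance propagation in (23), the sketch below assumes the discrete-time matrices Φk and Qk have already been computed; the symmetrization step is an implementation detail added for numerical robustness and is not part of (23) itself.

```python
import numpy as np

def propagate_covariance(P_kk, Phi_k, Q_k):
    """Implements (23): P_{k+1|k} = Phi_k P_{k|k} Phi_k^T + Q_k.

    P_kk  : (15+3N) x (15+3N) error-state covariance at time-step k
    Phi_k : discrete-time state transition matrix over [t_k, t_{k+1}]
    Q_k   : discrete-time system noise covariance over the same interval
    """
    P_next = Phi_k @ P_kk @ Phi_k.T + Q_k
    # Symmetrize to suppress numerical round-off drift.
    return 0.5 * (P_next + P_next.T)
```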
Measurement Update Model
As VINS 10 moves, the image source observes both opportunistic and persistent visual features. These measurements are utilized to concurrently estimate the motion of the sensing platform (VINS 10) and the map of PFs. In one implementation, three types of filter updates are distinguished: (i) PF updates of features already in the map, (ii) initialization of PFs not yet in the map, and (iii) OF updates. The feature measurement model is described below, along with how the model can be employed in each case.
To simplify the discussion, the observation of a single PF point fi is considered. The image source measures zi, which is the perspective projection of the 3D point Ifi, expressed in the current IMU frame {I}, onto the image plane, i.e.,
Without loss of generality, the image measurement is expressed in normalized pixel coordinates, and the camera frame is considered to be coincident with the IMU. Both intrinsic and extrinsic IMU-camera calibration can be performed off-line.
The measurement noise, ηi, is modeled as zero mean, white Gaussian with covariance Ri. The linearized error model is:
{tilde over (z)}i=zi−{circumflex over (z)}i≃Hi{tilde over (x)}+ηi (26)
where ^z is the expected measurement computed by evaluating (25) at the current state estimate, and the measurement Jacobian, Hi, is
Hi=Hc[HΘ 03 03 03 Hp|03 . . . Hfi . . . 03] (27)
with
evaluated at the current state estimate. Here, Hc is the Jacobian of the camera's perspective projection with respect to Ifi, while HΘ, Hp, and Hfi are the Jacobians of Ifi with respect to the IMU orientation, the IMU position, and the feature position, respectively.
This measurement model is utilized in each of the three update methods. For PFs that are already in the map, the measurement model (25)-(27) is directly applied to update the filter. In particular, the measurement residual ri is computed along with its covariance Si, and the Kalman gain Ki, i.e.,
ri=zi−^zi (32)
Si=HiPk+1|kHiT+Ri (33)
Ki=Pk+1|kHiTSi−1. (34)
and the EKF state and covariance are updated as
^xk+1|k+1=^xk+1|k+Kiri (35)
Pk+1|k+1=Pk+1|k−Pk+1|kHiTSi−1HiPk+1|k (36)
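A compact sketch of the PF update of (32)-(36) is given below. It assumes the predicted measurement, Jacobian, and noise covariance are supplied by the caller, and it returns the error-state correction rather than applying it, since the quaternion portion of the state requires the multiplicative update described earlier.

```python
import numpy as np

def pf_update(P, z, z_hat, H, R):
    """EKF update for a mapped PF, following (32)-(36).

    P     : prior covariance P_{k+1|k}
    z     : measured normalized pixel coordinates of the feature
    z_hat : predicted measurement from (25) at the current estimate
    H     : measurement Jacobian H_i from (27)
    R     : measurement noise covariance R_i
    """
    r = z - z_hat                        # residual (32)
    S = H @ P @ H.T + R                  # residual covariance (33)
    S_inv = np.linalg.inv(S)
    K = P @ H.T @ S_inv                  # Kalman gain (34)
    dx = K @ r                           # error-state correction used in (35)
    P_new = P - P @ H.T @ S_inv @ H @ P  # covariance update (36)
    return dx, 0.5 * (P_new + P_new.T)   # symmetrized for numerical robustness
```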
For previously unseen (new) PFs, an initial estimate is computed, along with its covariance and cross-correlations, by solving a bundle-adjustment problem over a short time window. Finally, for OFs, the MSC-KF approach is employed to impose an efficient (linear complexity) pose update constraining all the views from which a set of features was seen.
VINS Observability Analysis
In this section, the observability properties of the linearized VINS model are examined. Specifically, the four unobservable directions of the ideal linearized VINS are analytically determined (i.e., the system whose Jacobians are evaluated at the true states). Subsequently, the linearized VINS used by the EKF, whose Jacobians are evaluated using the current state estimates, are shown to have only three unobservable directions (i.e., the ones corresponding to global translation), while the one corresponding to global rotation about the gravity vector becomes (erroneously) observable. The findings of this analysis are then employed to improve the consistency of the EKF-based VINS.
Observability Analysis of the Ideal Linearized VINS Model
An observability matrix is defined as a function of the linearized measurement model, H, and the discrete-time state transition matrix, Φ, which are in turn functions of the linearization point, x*, i.e.,
where Φk,1=Φk−1 . . . Φ1 is the state transition matrix from time step 1 to k. First, consider the case where the true state values are used as the linearization point x* for evaluating the system and measurement Jacobians. The case where only a single feature point is visible is discussed; the case of multiple features can be easily captured by appropriately augmenting the corresponding matrices. Also, the derived nullspace directions remain the same in number, with an identity matrix (−└Gfi×┘Gg) appended to the ones corresponding to global translation (rotation) for each new feature. The first block-row of M is written as (for k=1):
Hk=Ψ1[Ψ2 03 03 03 −I3 I3] (38)
where
Ψ1=Hc,kC(IqG,k) (39)
Ψ2=└Gf−GpI,k×┘CT(IqG,k) (40)
and IqG,k and GpI,k denote the IMU orientation and position at time-step k.
To compute the remaining block rows of the observability matrix, Φk,1 is determined analytically by solving the matrix differential equation:
{dot over (Φ)}k,1=FΦk,1, with initial condition Φ1,1=I18. (41)
with F detailed in (18). The solution has the following structure
where among the different block elements Φij, the ones necessary in the analysis are listed below:
By multiplying (38) at time-step k and (42), the k-th block row of M is obtained, for k>1:
One primary result of the analysis is: the right nullspace N1 of the observability matrix M(x) of the linearized VINS
M(x)N1=0 (51)
spans the following four directions:
For example, the fact that N1 is indeed the right nullspace of M(x) can be verified by multiplying each block row of M [see (46)] with Nt,1 and Nr,1 in (52). Since MkNt,1=0 and MkNr,1=0, it follows that MN1=0. The 18×3 block column Nt,1 corresponds to global translations, i.e., translating both the sensing platform and the landmark by the same amount. The 18×1 column Nr,1 corresponds to global rotations of the sensing platform and the landmark about the gravity vector.
Observability Analysis of the EKF Linearized VINS Model
Ideally, any VINS estimator should employ a linearized system with an unobservable subspace that matches the true unobservable directions (52), both in number and structure. However, when linearizing about the estimated state {circumflex over (x)}, the observability matrix {circumflex over (M)}=M({circumflex over (x)}) gains rank due to errors in the state estimates across time. In particular, the last two block columns of Mk in (46) remain the same when computing {circumflex over (M)}k=Ĥk{circumflex over (Φ)}k,1 from the Jacobians Ĥk and {circumflex over (Φ)}k,1 evaluated at the current state estimates, and thus the global translation remains unobservable. In contrast, the rest of the block elements of (46), and specifically Γ2, do not adhere to the structure shown in (48), and as a result the rank of the observability matrix {circumflex over (M)} corresponding to the EKF linearized VINS model increases by one. In particular, it can be easily verified that the right nullspace {circumflex over (N)}1 of {circumflex over (M)} does not contain the direction corresponding to the global rotation about the g vector, which becomes (erroneously) observable. This, in turn, causes the EKF estimator to become inconsistent. The following describes techniques for addressing this issue.
OC-VINS: Algorithm Description
Estimator 22 receives IMU data 18 and, based on the IMU data 18, performs propagation by computing updated state estimates and propagating the covariance. At this time, estimator 22 utilizes a modified state transition matrix to prevent correction of the state estimates along at least one of the unobservable degrees of freedom (STEP 54). In addition, estimator 22 receives image data 14 and updates the state estimates and covariance based on the image data. At this time, estimator 22 uses a modified observability matrix to similarly prevent correction of the state estimates along at least one of the unobservable degrees of freedom (STEP 58). In this example implementation, estimator 22 enforces the unobservable directions of the system, thereby preventing one or more unobservable directions from erroneously being treated as observable after estimation, and thus preventing spurious information gain and reducing inconsistency. In this way, processing unit 20 may more accurately compute state information for VINS 10, such as a pose of the vision-aided inertial navigation system, a velocity of the vision-aided inertial navigation system, or a displacement of the vision-aided inertial navigation system, based at least in part on the state estimates for the subset of the unobservable degrees of freedom without utilizing state estimates for the at least one of the unobservable degrees of freedom.
An example algorithm is set forth below for implementing the techniques in reference to the equations described in further detail herein:
In order to address the EKF VINS inconsistency problem, it is ensured that (51) is satisfied for every block row of {circumflex over (M)} when the state estimates are used for computing Ĥk, and Φk,1, ∀k>0, i.e., it is ensured that
Ĥk{circumflex over (Φ)}k,1{circumflex over (N)}1=0, ∀k>0. (53)
One way to enforce this is by requiring that at each time step, {circumflex over (Φ)}k and Ĥk satisfy the following constraints:
{circumflex over (N)}k+1={circumflex over (Φ)}k{circumflex over (N)}k (54a)
Ĥk{circumflex over (N)}k=0, ∀k>0 (54b)
where {circumflex over (N)}k, k>0 is computed analytically (see (56)). This can be accomplished by appropriately modifying {circumflex over (Φ)}k and Ĥk.
In particular, rather than changing the linearization points explicitly, the nullspace, {circumflex over (N)}k, is maintained at each time step, and used to enforce the unobservable directions. This has the benefit of allowing us to linearize with the most accurate state estimates, hence reducing the linearization error, while still explicitly adhering to the system observability properties.
Nullspace Initialization
The initial nullspace is analytically defined:
At subsequent time steps, the nullspace is augmented to include sub-blocks corresponding to each new PF in the filter state, i.e.,
where the sub-blocks {circumflex over (N)}fi,k, i=1, . . . , N, correspond to the PFs included in the filter state.
Modification of the State Transition Matrix Φ
During the covariance propagation step, it is ensured that {circumflex over (N)}k+1={circumflex over (Φ)}k{circumflex over (N)}k. Note that the constraint on {circumflex over (N)}t,k is automatically satisfied due to the structure of {circumflex over (Φ)}k, so the focus is on {circumflex over (N)}r,k. Moreover, due to the structure of the matrices {circumflex over (Φ)}k and {circumflex over (N)}r,k, only the first five block elements of {circumflex over (N)}r,k need be considered, while the equality for the remaining ones, i.e., the elements corresponding to the features, is automatically satisfied. Specifically, (54a) is rewritten element-wise as:
and the constraints resulting from each block row of the above vector are collected. Specifically, from the first block row we have
C(I,k+1|k{circumflex over (q)}G)Gg={circumflex over (Φ)}11C(I,k|k−1{circumflex over (q)}G)Gg (59)
which is satisfied by selecting
{circumflex over (Φ)}11*=C(I,k+1|k{circumflex over (q)}G)CT(I,k|k−1{circumflex over (q)}G) (60)
The requirements for the third and fifth block rows are
{circumflex over (Φ)}31C(I,k|k−1{circumflex over (q)}G)Gg=└G{circumflex over (v)}I,k|k−1┘Gg−└G{circumflex over (v)}I,k+1|k┘Gg (61)
{circumflex over (Φ)}51C(I,k|k−1{circumflex over (q)}G)Gg=δt└G{circumflex over (v)}I,k|k−1┘Gg+└G{circumflex over (p)}I,k|k−1┘Gg−└G{circumflex over (p)}I,k+1|k┘Gg (62)
both of which are in the form Au=w, where u and w are nullspace vector elements that are fixed. In order to ensure that (61) and (62) are satisfied, a perturbed A* is found for A=Φ31 and A=Φ51 that fulfills the constraint. To compute the minimum perturbation, A*, of A, the following minimization problem is formulated:
where ∥⋅∥F denotes the Frobenius matrix norm. After employing the method of Lagrange multipliers, and solving the corresponding KKT optimality conditions, the optimal A* that fulfills (63) is
A*=A−(Au−w)(uTu)−1uT. (64)
Once the modified {circumflex over (Φ)}11* is computed from (60), and {circumflex over (Φ)}31* and {circumflex over (Φ)}51* from (63) and (64), the corresponding elements of {circumflex over (Φ)}k are updated and the covariance propagation (23) is carried out.
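The closed-form solution (64) can be implemented directly. In the sketch below, closest_block_satisfying is a hypothetical helper name; it would be applied to the blocks {circumflex over (Φ)}31 and {circumflex over (Φ)}51 with u and w taken from the corresponding nullspace elements in (61) and (62).

```python
import numpy as np

def closest_block_satisfying(A, u, w):
    """Minimum-Frobenius-norm modification of A so that A* u = w, per (64):
    A* = A - (A u - w) (u^T u)^{-1} u^T."""
    u = u.reshape(-1, 1)
    w = w.reshape(-1, 1)
    return A - (A @ u - w) @ u.T / float(u.T @ u)

# Sketch of the propagation-step modification (block names are illustrative):
# Phi_31_star = closest_block_satisfying(Phi_31, u=C_k_g, w=v_constraint)
# Phi_51_star = closest_block_satisfying(Phi_51, u=C_k_g, w=p_constraint)
```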
Modification of H
During each update step, we seek to satisfy Ĥk{circumflex over (N)}k=0. In turn, this means that
Ĥk{circumflex over (N)}t,k=0 (65)
Ĥk{circumflex over (N)}r,k=0 (66)
must both hold. Expressing (65) for a single point we have [see (27) and (52)]
which is satisfied automatically, since ĤP=−Ĥƒ[see (30) and (31)]. Hence, the nullspace direction corresponding to translation is not violated.
Expanding the second constraint (66), we have
Since ĤP=−Ĥƒ, (68) is equivalent to satisfying the following relationship
where we have implicitly defined ĤcΘ and Ĥcp. This is a constraint of the form Au=0, where u comprises elements of the nullspace and A comprises the Jacobian blocks to be modified. By solving a minimization problem analogous to (63), the optimal
A* is computed as:
A*=A−Au(uTu)−1uT (71)
After computing the optimal A*, the Jacobian elements are recovered as
ĤcΘ*=A1:2,1:3* (72)
Ĥcp*=A1:2,4:6* (73)
Ĥcf*=−Ĥcp* (74)
where the subscripts (i:j, m:n) denote the submatrix spanning rows i to j, and columns m to n. Hence, the modified measurement Jacobian is
Ĥk*=[ĤcΘ* 02×3 02×3 02×3 Ĥcp*|02×3 . . . Ĥcf* . . . 02×3] (75)
Having computed the modified measurement Jacobian, the filter update is performed. By following this process, it can be ensured that the EKF estimator 22 does not gain information along the unobservable directions of the system. An overview of one example of the OC-VINS modified EKF estimator is presented in Algorithm 1.
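Analogously, the measurement-Jacobian modification of (71)-(75) amounts to projecting A onto the set of matrices whose product with u is zero and then slicing out the modified blocks. The sketch below assumes A is the 2×6 stack [ĤcΘ Ĥcp] and that u is the corresponding 6×1 portion of the nullspace; both names are assumptions consistent with, but not verbatim from, the equations above.

```python
import numpy as np

def project_out_direction(A, u):
    """Minimum-Frobenius-norm modification of A so that A* u = 0, per (71):
    A* = A - A u (u^T u)^{-1} u^T."""
    u = u.reshape(-1, 1)
    return A - (A @ u) @ u.T / float(u.T @ u)

# Sketch following (72)-(74); Hc_theta, Hc_p, and u are assumed to be available.
# A_star        = project_out_direction(np.hstack([Hc_theta, Hc_p]), u)
# Hc_theta_star = A_star[:, 0:3]    # (72)
# Hc_p_star     = A_star[:, 3:6]    # (73)
# Hc_f_star     = -Hc_p_star        # (74)
```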
As the camera-IMU platform (VINS 10) moves into new environments, new features are added to the map constructed within VINS data 24. This entails intersecting the bearing measurements from multiple camera observations to obtain an initial estimate of each new feature's 3D location, as well as computing the initial covariance and cross-correlation between the new landmark estimate and the state. This can be solved as a minimization problem over a parameter vector x=[xs,1T . . . xs,mT|ƒT]T, where xs,i, i=1 . . . m, are the m camera poses from which the new landmark, f, was observed. Specifically, the following is minimized:
where Pss−1 is the information matrix (prior) of the state estimates across all poses obtained from the filter, and there is no initial information about the feature location (denoted by the block (2,2) element of the prior information being equal to zero). The m measurements zi, i=1 . . . m, are the perspective projection observations of the point. Stochastic cloning over m time steps is employed to ensure that the cross-correlations between the camera poses are properly accounted for.
An initial guess for the landmark location is obtained using any intersection method, and then (76) is iteratively minimized. At each iteration, the following linear system of equations is solved:
Applying the Sherman-Morrison-Woodbury matrix identity, we solve the system by inverting the matrix on the left-hand side as
Here, M=HsPssHsT+R. During each iteration, the parameter vector is updated as
After the minimization process converges, the posterior covariance of the new state (including the initialized feature) is computed as:
where each element is defined from (79)-(80).
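Since equations (76)-(80) are referenced above without being reproduced here, the following is only a hedged Gauss-Newton sketch of the described initialization: a prior (Pss) is kept on the camera poses, no prior is placed on the feature, and the helpers project and jacobians are hypothetical stand-ins for evaluating (25) and its Jacobians.

```python
import numpy as np

def init_new_feature(f0, poses, P_ss, zs, R, project, jacobians, iters=5):
    """Hedged Gauss-Newton sketch of new-PF initialization (cf. (76)-(80)).

    f0        : initial feature guess from any intersection method
    poses     : the m camera poses the feature was observed from
    P_ss      : prior covariance of the stacked pose error-states (from the filter)
    zs        : the m feature measurements z_i
    R         : per-observation measurement noise covariance
    project   : hypothetical helper evaluating (25) for a pose and feature
    jacobians : hypothetical helper returning (H_s_i, H_f_i) for one observation
    """
    f = np.array(f0, dtype=float)
    dx_s = np.zeros(P_ss.shape[0])        # pose error-state correction
    Lam_ss = np.linalg.inv(P_ss)          # prior information on the poses
    R_inv = np.linalg.inv(R)
    for _ in range(iters):
        A11 = Lam_ss.copy(); A12 = np.zeros((P_ss.shape[0], 3)); A22 = np.zeros((3, 3))
        b1 = -Lam_ss @ dx_s; b2 = np.zeros(3)
        for pose, z in zip(poses, zs):
            r = z - project(pose, dx_s, f)
            Hs, Hf = jacobians(pose, dx_s, f)   # w.r.t. stacked pose error, feature
            A11 += Hs.T @ R_inv @ Hs; A12 += Hs.T @ R_inv @ Hf; A22 += Hf.T @ R_inv @ Hf
            b1 += Hs.T @ R_inv @ r;   b2 += Hf.T @ R_inv @ r
        # Normal equations: prior on poses, no prior on the feature block.
        delta = np.linalg.solve(np.block([[A11, A12], [A12.T, A22]]),
                                np.concatenate([b1, b2]))
        dx_s += delta[:P_ss.shape[0]]
        f += delta[P_ss.shape[0]:]
    return f, dx_s
```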
Simulations
Monte-Carlo simulations were conducted to evaluate the impact of the proposed Observability-Constrained VINS (OC-VINS) method on estimator consistency. The proposed methodology was applied to two VINS systems: (i) Visual Simultaneous Localization and Mapping (V-SLAM), and (ii) the Multi-state Constraint Kalman Filter (MSC-KF), which performs visual-inertial localization without constructing a map.
Simulation 1: Application of the Proposed Framework to V-SLAM
In this section, the results of applying the proposed OC-VINS techniques to V-SLAM (referred to as OC-V-SLAM) are described. The Visual Simultaneous Localization and Mapping (V-SLAM) paradigm refers to a family of algorithms for fusing inertial measurements with visual feature observations. In V-SLAM, the current IMU pose, as well as the 3D positions of all visual landmarks, are jointly estimated. The performance of the OC-V-SLAM described herein is compared to the standard V-SLAM (Std-V-SLAM), as well as to the ideal V-SLAM that linearizes about the true state. Specifically, the Root Mean Squared Error (RMSE) and Normalized Estimation Error Squared (NEES) were computed over 20 trials in which the camera-IMU platform traversed a circular trajectory of radius 5 m at an average velocity of 60 cm/s. The camera had a 45 deg field of view, with σpx=1px, while the IMU was modeled after MEMS quality sensors. The camera observed visual features distributed on the interior wall of a circumscribing cylinder with radius 6 m and height 2 m.
Simulation 2: Application of the Proposed Framework to MSC-KF
The OC-VINS methodology described herein was applied to the MSC-KF (referred to herein as "OC-MSC-KF"). In the MSC-KF framework, all the measurements to a given OF are incorporated during a single update step of the filter, after which each OF is marginalized. Hence, in the OC-MSC-KF, the sub-blocks of the nullspace corresponding to the features [i.e., {circumflex over (N)}fi,k in (56)] need not be maintained, since the features are not included in the state vector.
Monte-Carlo simulations were conducted to evaluate the consistency of the proposed method applied to the MSC-KF. Specifically, the standard MSC-KF (Std-MSC-KF) was compared with the Observability-Constrained MSC-KF (OC-MSC-KF), which is obtained by applying the methodology described herein, as well as with the Ideal-MSC-KF, whose Jacobians are linearized at the true states and which was used as a benchmark. The RMSE and NEES were evaluated over 30 trials (see FIGS. 3(a)-3(d)).
The proposed OC-VINS framework was also validated experimentally and compared with standard VINS approaches. Specifically, the performance of OC-V-SLAM and OC-MSC-KF was evaluated on both indoor and outdoor datasets. In the experimental setup, a light-weight sensing platform comprising an InterSense NavChip IMU and a PointGrey Chameleon camera (see FIG. 4(a)) was utilized.
During the indoor experimental tests, the sensing platform was mounted on an Ascending Technologies Pelican quadrotor equipped with a VersaLogic Core 2 Duo single board computer. For the outdoor dataset, the sensing platform was head-mounted on a bicycle helmet, and interfaced to a handheld Sony Vaio. An overview of the system implementation is described, along with a discussion of the experimental setup and results.
The image processing is separated into two components: one for extracting and tracking short-term opportunistic features (OFs), and one for extracting persistent features (PFs) to use in V-SLAM.
OFs are extracted from images using the Shi-Tomasi corner detector. After acquiring image k, it is inserted into a sliding window buffer of m images, {k−m+1, k−m+2, . . . , k}. We then extract features from the first image in the window and track them pairwise through the window using the KLT tracking algorithm. To remove outliers from the resulting tracks, we use a two-point algorithm to find the essential matrix between successive frames. Specifically, given the filter's estimated rotation between images i and j (obtained from the gyroscopes' measurements), we estimate the essential matrix from only two feature correspondences. This approach is more robust than the five-point algorithm because it provides two solutions for the essential matrix rather than up to ten. Moreover, it requires only two data points, and thus it reaches a consensus with fewer hypotheses when used in a RANSAC framework.
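The two-point idea can be sketched as follows: given the gyro-derived rotation R between images i and j, each correspondence forces the translation to be orthogonal to one vector, so two correspondences determine the translation direction (up to scale) by a cross product. The function below is an illustrative implementation of that idea, not the exact algorithm used in the experiments, and it does not handle degenerate geometry.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def two_point_essential(R_ij, x1a, x2a, x1b, x2b):
    """Given the rotation R_ij between images i and j and two normalized
    homogeneous correspondences (x1a<->x2a, x1b<->x2b), recover the translation
    direction t from x2^T [t]x R x1 = 0 and form E = [t]x R."""
    n_a = np.cross(R_ij @ x1a, x2a)   # t must be orthogonal to this vector
    n_b = np.cross(R_ij @ x1b, x2b)   # ... and to this one
    t = np.cross(n_a, n_b)
    t = t / np.linalg.norm(t)         # translation recovered only up to scale
    return skew(t) @ R_ij
```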
The PFs are extracted using SIFT descriptors. To identify global features observed from several different images, we first utilize a vocabulary tree (VT) structure for image matching. Specifically, for an image taken at time k, the VT is used to select which image(s) taken at times 1, 2, . . . , k−1 correspond to the same physical scene. Among those images that the VT reports as potential matches, the SIFT descriptors from each of them are compared to those from image k to create tentative feature correspondences. The epipolar constraint is then enforced using RANSAC and Nister's five-point algorithm to eliminate outliers. It is important to note that the images used to construct the VT (offline) are not taken along our experimental trajectory, but rather are randomly selected from a set of representative images.
In the first experimental trial, we compared the performance of OC-V-SLAM to that of Std-V-SLAM on an indoor trajectory. The sensing platform traveled a total distance of 172.5 m, covering three loops over two floors in Walter Library at the University of Minnesota. The quadrotor was returned to its starting location at the end of the trajectory, to provide a quantitative characterization of the achieved accuracy.
Opportunistic features were tracked using a window of m=10 images. Every m camera frames, up to 30 features from all available PFs are initialized and the state vector is augmented with their 3D coordinates. The process of initializing PFs is continued until the occurrence of the first loop closure; from that point on, no new PFs are considered and the filter relies upon the re-observation of previously initialized PFs and the processing of OFs.
For both the Std-V-SLAM and the OC-V-SLAM, the final position error was approximately 34 cm, which is less than 0.2% of the total distance traveled (see FIGS. 5(a)-5(c)).
FIG. 6(a) highlights the difference in estimated yaw uncertainty between the OC-V-SLAM and the Std-V-SLAM. In contrast to the OC-V-SLAM, the Std-V-SLAM covariance rapidly decreases, violating the observability properties of the system. Similarly, large differences can be seen in the covariance estimates for the x-axis position estimates (see FIG. 6(b)).
The proposed OC-MSC-KF was validated on real-world data. The first test comprised a trajectory 50 m in length that covered three loops in an indoor area, after which the testbed was returned to its initial position. At the end of the trajectory, the Std-MSC-KF had a position error of 18.73 cm, while the final error for the OC-MSC-KF was 16.39 cm (approx. 0.38% and 0.33% of the distance traveled, respectively). In order to assess the impact of inconsistency on the orientation estimates of both methods, the rotation between the first and last images computed independently using Batch Least-Squares (BLS) and feature point matches was used as ground truth. The Std-MSC-KF had final orientation error [0.15 −0.23 −5.13] degrees for roll, pitch, and yaw (rpy), while the rpy errors for the OC-MSC-KF were [0.19 −0.20 −1.32] degrees respectively.
In addition to achieving higher accuracy, for yaw in particular, the OC-MSC-KF is more conservative since it strictly adheres to the unobservable directions of the system. This is evident in both the position and orientation uncertainties. The y-axis position and yaw angle uncertainties are plotted in FIGS. 7(a) and 7(b), respectively.
In the final experimental trial, the OC-MSC-KF was tested on a large outdoor dataset (approx. 1.5 km in length).
FIG. 10(b) depicts a zoomed-in plot of the starting location (center) for both filters, along with the final position estimates. In order to evaluate the accuracy of the proposed method, the sensing platform was returned to its starting location at the end of the trajectory. The OC-MSC-KF obtains a final position error of 4.38 m (approx. 0.3% of the distance traveled), while the Std-MSC-KF obtains a final position error of 10.97 m. This represents an improvement in performance of approximately 60%.
The filters' performance is also illustrated visually in FIGS. 9(a)-9(c).
As explained above, over a short period of time, all six degrees of freedom of a robot's position and orientation (pose) can be obtained directly by integrating the rotational velocity and linear acceleration measurements from an Inertial Measurement Unit (IMU). However, due to the biases and noise in the IMU signals, errors in the robot pose estimates accumulate quickly over time rendering them unreliable. To deal with this problem, most inertial navigation systems (INS) rely on GPS for bounding the estimation error. Unfortunately, for robots operating in urban or indoor environments, the GPS signals are usually either unreliable or unavailable.
Compared to regular cameras, RGBD cameras provide both color images and the corresponding 3D point cloud, which simplifies the tasks of triangulating point-feature positions and extracting higher level features, such as planes, from the scene. To date, very few works exist that combine inertial and RGBD measurements for navigation. When using IMU and only point feature measurements, one degree of rotational freedom (yaw) of the IMU-RGBD camera is unobservable. As a result, the uncertainty and error in the yaw estimates will keep increasing, hence, adversely affecting the positioning accuracy. In this disclosure, it is demonstrated that by observing plane features of known directions, the yaw becomes observable and, thus, its uncertainty remains bounded.
In this section, a linear-complexity inertial navigation algorithm is presented that uses both point and plane features. In particular, system observability properties, including its observable modes and unobservable directions, are described. In example implementations, point feature measurements are processed using a tightly-coupled visual-inertial odometry, multi-state constraint Kalman filter (MSC-KF), with complexity linear in the number of observed point features. Additionally, the directions of the plane features are used as measurements in the extended Kalman filter update without including the plane feature poses in the state vector, hence ensuring linear complexity in the number of the observed plane features.
The observability of the IMU-RGBD camera navigation system when using both point and plane feature measurements is described, and it is proved that with a single plane feature of known direction, the IMU gyroscope bias is observable. If additionally a single point feature is detected, and the plane's normal vector is not aligned with gravity, all degrees of freedom of the IMU-RGBD camera navigation system, except the global position, become observable. Based on the observability analysis, the accuracy and consistency of the IMU-RGBD camera navigation system is improved by employing the observability-constrained extended Kalman filter that enforces the observability requirement. A linear-complexity algorithm for fusing inertial measurements with both point and plane features is presented and experimentally validated.
The rest of this section is structured as follows. First, the inertial navigation system model using both point and plane feature measurements is presented. A methodology for studying the observability properties of unobservable nonlinear systems is described. The method is applied to the specific IMU-RGBD camera navigation system, and its unobservable directions are described. The OC-EKF algorithm developed for improving the accuracy and consistency of the inertial navigation system based on its observability properties is presented. Experimental results for the performance of the proposed algorithm are described and assessed.
VINS Estimator Description
In this section, the system state and covariance propagation equations using inertial measurements are described. The measurement model for processing plane and point feature observations is presented.
System State and Propagation Model
In the IMU-RGBD camera navigation system, the state vector to be estimated can be represented as:
x=[IqGT GvIT GpIT GpƒT baT bgT]T
where IqG is the unit quaternion representing the orientation of the global frame {G} in the IMU's frame of reference {I}, GvI and GpI represent the velocity and position of {I} in {G}, Gpƒ denotes the position of the point feature in {G}, and bg and ba represent the gyroscope and accelerometer biases.
The system model describing the time evolution of the states can be represented as:
where Iω(t)=[ω1 ω2 ω3]T and Ga(t)=[a1 a2 a3]T are the system rotational velocity and linear acceleration expressed in {I} and {G}, respectively, wwg and wwa are zero-mean white Gaussian noise processes driving the gyroscope and accelerometer biases bg and ba, Gg is the gravitational acceleration in {G}, C(IqG(t)) denotes the rotation matrix corresponding to IqG(t), and
The gyroscope and accelerometer measurements, ωm and am, are modeled as:
ωm(t)=Iω(t)+bg+wg(t) (A.2)
am(t)=C(IqG(t))(Ga(t)−Gg)+ba+wa(t) (A.3)
where wg and wa are zero-mean, white Gaussian noise processes. In order to determine the covariance propagation equation, we define the error-state vector as:
{tilde over (x)}=[IδθGT G{tilde over (V)}IT G{tilde over (p)}IT G{tilde over (p)}ƒT {tilde over (b)}aT {tilde over (b)}gT]T (A.4)
Then, the linearized continuous-time error-state equation can be written as:
{tilde over ({dot over (x)})}=Fc{tilde over (x)}+Gcw (A.5)
where w=[wgT wwgT waT wwaT]T denotes the system noise, Fc is the continuous-time error-state transition matrix corresponding to the system state, and Gc is the continuous-time input noise matrix. The system noise is modelled as a zero-mean white Gaussian process with autocorrelation E[w(t)wT(τ)]=Qcδ(t−τ). To compute the propagated covariance, the discrete-time state transition matrix from time tk to tk+1, Φk, is found, along with the system noise covariance matrix, Qk, which can be computed as:
Φk=Φ(tk+1,tk)=exp(∫tktk+1Fc(τ)dτ) (A.6)
Qk=∫tktk+1Φ(tk+1,τ)GcQcGcTΦT(tk+1,τ)dτ (A.7)
The propagated covariance can be determined as:
Pk+1|k=ΦkPk|kΦkT+Qk (A.8)
Measurement Model for Plane Features
For purposes of example, the IMU frame {I} and the RGBD-camera frame {C} are assumed to coincide. Let Gn denote the normal vector to a plane, whose direction is assumed known in the global frame of reference; thus it need not be included in the state vector. Planes are fitted to the 3D point cloud provided by the RGBD camera, and the plane's normal vector, zplane, is used as the plane feature measurement:
zplane=C(ηθ)In=C(ηθ)C(IqG)Gn (A.9)
where ηθ=αk is the measurement noise representing a rotation by an angle α around the unit vector k. Since Gn is a unit-norm vector, the measurement noise is modelled as an extra rotation of the plane's normal vector. Moreover, in order to avoid a singular representation of the noise covariance when processing this observation in the EKF, the following modified measurement model is introduced:
and the linearized error model is computed as:
{tilde over (z)}′plane=z′plane−{circumflex over (z)}′plane≃Hplane{tilde over (x)}+ηplane (A.11)
where {circumflex over (z)}′plane is the expected measurement computed by evaluating (A.10) at the current state estimate and ηθ=0, ηplane is the measurement noise, and the measurement Jacobian, Hplane, is computed using the chain rule as:
Hplane=Hc[Hθ 03 03 03 03 03] (A.12)
where
Measurement Model for Point Features
The RGBD camera can directly measure the 3D position of a point feature Ipƒ in the IMU frame {I} as:
zpoint=Ipƒ+ηpoint=C(IqG)(Gpƒ−GpI)+ηpoint (A.14)
The linearized error model is computed as:
{tilde over (z)}point=zpoint−{circumflex over (z)}point≃Hpoint{tilde over (X)}+ηpoint (A.15)
where {circumflex over (z)}point is the expected measurement computed by evaluating (A.14) at the current state estimate and ηpoint is the measurement noise, while the measurement Jacobian, Hpoint, is
Hpoint=[Hθ 03 Hp Hpƒ 03 03] (A.16)
where
Observability Analysis
In this section, a brief overview of the method for analyzing the observability of nonlinear systems is provided, and an extension for determining the unobservable directions of nonlinear systems is presented.
Observability Analysis with Lie Derivatives
Consider a nonlinear, continuous-time system:
where u=[u1 . . . ul]T is its control input, x=[x1 . . . xm]T is the system's state vector, y is the system output, and ƒi, i=0, . . . , l are the process functions. The zeroth-order Lie derivative of a measurement function h is defined as the function itself:
ℒ0h=h(x) (A.18)
and the span of the ith order Lie derivative is defined as:
For any ith-order Lie derivative, ℒih, the (i+1)th-order Lie derivative ℒƒji+1h with respect to a process function ƒj is computed as:
ℒƒji+1h=∇ℒih·ƒj (A.20)
Finally, the observability matrix 𝒪 of system (A.17) is defined as a matrix whose block rows are the spans of the Lie derivatives of (A.17), i.e.,
where i, j, k=0, . . . , l. To prove that a system is observable, it suffices to show that any submatrix of 𝒪 comprising a subset of its rows is of full column rank. In contrast, to prove that a system is unobservable and find its unobservable directions, we need to: (i) show that the infinitely many block rows of 𝒪 can be written as a linear combination of a subset of its block rows, which form a submatrix 𝒪′; and (ii) find the nullspace of 𝒪′ in order to determine the system's unobservable directions. Although accomplishing (ii) is fairly straightforward, achieving (i) is extremely challenging, especially for high-dimensional systems such as the IMU-RGBD camera navigation system.
Observability Analysis with Basis Functions
To address this issue, a technique is leveraged in the observability analysis that relies on a change of variables for proving that a system is unobservable and for finding its unobservable directions.
Theorem 1: Assume that there exists a nonlinear transformation β(x)=[β1(x)T . . . βt(x)T]T (i.e., a set of basis functions) of the variable x in (A.17), such that:
(A1) the measurement function h can be expressed as a function of β;
(A2) the projections of the span of β onto the process functions, i.e., ∂β/∂x·ƒi, i=0, . . . , l, are functions of β;
(A3) β can be expressed as a function of the Lie derivatives of system (A.17).
Then:
(i) system (A.17) can be written equivalently in terms of the basis functions β;
(ii) the observability matrix of (A.17) can be factorized as 𝒪=ΞB, where B=∂β/∂x is the span of the basis functions;
(iii) the unobservable directions of (A.17) are spanned by the nullspace of B;
and Ξ is the observability matrix of the following system:
Proof: The proof is given in C. X. Guo and S. I. Roumeliotis, “IMU-RGBD camera 3d pose estimation and extrinsic calibration: Observability analysis and consistency improvement,” in Proc. of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, May 6-10 2013, pp. 2920-2927, incorporated herein by reference.
Based on Theorem 1, the unobservable directions can be determined with significantly less effort. To find a system's unobservable directions, we first need to define the basis functions that satisfy conditions (A1) and (A3), and verify that condition (A2) is satisfied, or equivalently that the basis function set is complete. Once all the conditions are fulfilled, the unobservable directions of (A.17) correspond to the nullspace of matrix B, which has finite dimensions and thus is easy to analyze.
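Once the span matrix B of a complete basis-function set has been stacked numerically, its right nullspace (and hence the unobservable directions, per result (iii) of Theorem 1) can be obtained with a standard SVD, as in the following sketch.

```python
import numpy as np

def right_nullspace(B, tol=1e-10):
    """Right nullspace of the basis-function span matrix B via the SVD;
    the returned columns span the system's unobservable directions."""
    U, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > tol * s[0]))
    return Vt[rank:].T   # columns form a basis of null(B)
```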
Observability Analysis of the IMU-RGBD Camera Navigation System
In this section, Theorem 1 is leveraged to study the observability of the IMU-RGBD camera navigation system when using plane and point feature observations. To do this, the system's basis functions, which are also the observable modes, are found using only a single plane feature. Then, the basis function set for the IMU-RGBD camera navigation system is completed using both plane and point features. Finally, the unobservable directions of the IMU-RGBD camera navigation system are found when using only plane observations, and when using both plane and point feature measurements.
Basis Functions when Using Plane Features
For purposes of example, the orientation between the IMU frame {I} and the global frame {G} is expressed using the Cayley-Gibbs-Rodriguez parameters, IsG. Furthermore, we retain only a few of the subscripts and superscripts in the state vector, which is expressed as:
x=[sT vT pT pƒT baT bgT]T
Employing the propagation model, the IMU-RGBD camera navigation system using only plane features can be written as:
where C=C(s) represents the rotation matrix corresponding to s, and
Note that ƒ0 is an 18×1 vector, while ƒ1 and ƒ2 are both 18×3 matrices, which is a compact way of representing three process functions each:
ƒ1ω=ƒ11ω1+ƒ12ω2+ƒ13ω3
ƒ2a=ƒ21a1+ƒ22a2+ƒ23a3 (A.24)
To define the basis functions for this system, the conditions of Theorem 1 are followed: (i) Select basis functions so that the measurement function zplane can be expressed as a function of β; (ii) Select the remaining basis functions as functions of the system's Lie derivatives, until condition (A2) (i.e., that the projection of the span of β onto each process function is a function of β for any i) is satisfied by all the basis functions.
For this particular problem, the first set of basis functions are defined directly as the measurement function:
β1≜zplane=CGn (A.25)
where β1 is a 3×1 vector representing in compact form 3 basis functions. To check if condition (A2) of Theorem 1 is fulfilled, the span of β1 with respect to x is computed
and project it onto all the process functions:
As such, the resulting projection contains bg, and thus is not a function of the previously defined basis function β1. To proceed, condition (A3) of Theorem 1 is used to define additional basis functions as nonlinear combinations of the system's Lie derivatives.
Since the basis function β1 is the zeroth-order Lie derivative of the measurement h=zplane, then by definition, (A.26) is one of the first-order Lie derivatives:
Hereafter, this fact is used to define more basis functions. By definition, the second-order Lie derivative of h with respect to ƒ1i, i=1, 2, 3, is computed as:
If equation (A.29), for i=1, 2, 3, is stacked into a matrix form
since Y is a 9×3 matrix of full column rank, bg can be determined in terms of the Lie derivatives of the system, and is therefore selected as the next basis function:
β2≜bg (A.31)
Then, if the span of β2 is computed and projected onto the process functions, the result is
which are all zeros, and thus do not contain any term not belonging to the previously defined basis functions. Therefore, a complete basis function set of the IMU-RGBD camera navigation system has been found using a single plane feature.
Basis Functions when Using Both Plane and Point Features
The basis functions of the IMU-RGBD camera navigation system using only a single point feature are:
[β3 β4 β5 β6 β7]=[C(pƒ−p) bg Cv Cg ba] (A.33)
Since the basis functions are also the system's observable modes, the complete basis function set of the IMU-RGBD camera navigation system when using both plane and point features is the union of the basis function sets {β1,β2} (resulting from measurements of the plane feature) and {β3,β4,β5,β6,β7} (computed for observations of the point feature). Hereafter, the union of these two basis function sets is determined after removing redundant elements.
First, since β2=β4=bg, we have β2∩β4=bg. Then, under the assumption that the normal vector of the observed plane is not parallel to gravity, C=C(s) can be expressed in terms of Gn, g, β1=CGn, and β6=Cg. Since both Gn and g are known quantities, we have β1∩β6=s. Therefore, the basis functions of the IMU-RGBD camera navigation system using both plane and point features are:
with which, we leverage result (i) of Theorem 1 to construct the observable system in terms of the basis functions as:
where D(β′2)≜I+└β′2┘+β′2β′2T. System (A.35) is actually a minimal representation of the IMU-RGBD camera navigation system using both plane and point features. Hereafter, it is shown how to find the unobservable directions of the IMU-RGBD camera navigation system by leveraging result (iii) of Theorem 1.
Determining the System's Unobservable Directions
In this section, the unobservable directions of the IMU-RGBD camera navigation system are first determined when observing only a single plane feature, by computing the nullspace of the basis functions' span, B1.
Then, the unobservable directions of the IMU-RGBD camera navigation system when observing both a single plane feature and a single point feature are found by computing the nullspace of B2.
Theorem 2: The IMU-RGBD camera navigation system observing a single plane feature is unobservable, and its unobservable directions are spanned by the IMU-RGBD camera orientation around the plane's normal vector and the accelerometer bias in the IMU frame {I}, as well as the IMU-RGBD camera position, velocity, and the point feature position in the global frame {G}.
Proof: In the previous section, it was shown that the basis function set {β1, β2} satisfies all three conditions of Theorem 1. Therefore, the system's unobservable directions span the nullspace of matrix B1, which is formed by stacking the spans of the basis functions β1 and β2 as:
It can be seen that the nullspace of B1 is spanned by:
where Nplaneg (the first column of Nplane) corresponds to the IMU-RGBD camera's rotation around the plane feature's normal vector, and Nplanep (the remaining columns of Nplane) denotes the unobservable directions in the IMU-RGBD camera velocity, position, the point feature position, and the accelerometer bias.
In contrast, when both point and plane feature measurements are available, we have:
Theorem 3: The IMU-RGBD camera navigation system using a single point feature and a single plane feature (of known direction which is not parallel to gravity) is unobservable, and its unobservable subspace is spanned by 3 directions corresponding to the IMU-RGBD camera position in the global frame {G}.
Proof: Employing result (iii) of Theorem 1, the system's unobservable directions can be determined by computing the nullspace of the span B2 of the corresponding basis functions β′, where
Let N=[N1T N2T N3T N4T N5T N6T]T be the right nullspace of matrix B2. Hereafter, the relation B2N=0 is used to determine the elements of N. Specifically, from the second, fourth, and fifth block rows of the product B2N, we have:
N1=N5=N6=03 (A.39)
Then, from the first and third block rows of B2N, we have N3=N4=I3, and N2=03. Using Gaussian elimination, it is easy to show that the rank of matrix B2 is 15. Thus, the dimension of its right nullspace is exactly three, and the system's unobservable directions are spanned by:
N=[03 03 I3 I3 03 03]T (A.40)
which corresponds to the global position of the IMU-RGBD camera and the point feature. Intuitively, this means that translating the IMU-RGBD camera and the point feature positions concurrently has no impact on the system's measurements.
The unobservable directions of the IMU-RGBD camera navigation system using only a single point feature are spanned by:
Note that N=Nplane∩Npoint, which makes sense because any quantity that is unobservable for the IMU-RGBD camera navigation system using both point and plane feature observations must also be unobservable when the system uses either plane or point feature measurements alone.
Algorithm Description
This section presents an example IMU-RGBD camera navigation algorithm employing the observability constrained (OC)-EKF, which seeks to maintain the original system's observability properties in the linearized implementation (EKF). In particular, the implementation of the OC-EKF for processing point feature measurements is described. Then, it is proved that once the OC-EKF is employed for point feature measurements, the observability constraint is automatically satisfied for the plane feature measurements.
A system's observability Gramian, M, is defined as:
where Φk,1=Φk−1 . . . Φ1 is the state transition matrix from time step 1 to k, and Hk is the measurement Jacobian at time step k. A system's unobservable directions, N, should span the observability Gramian's nullspace:
MN=0 (A.43)
However, (A.43) does not hold when a nonlinear system is linearized using the current state estimate. As a consequence, the EKF gains spurious information along unobservable directions, which results in smaller uncertainty (rendering the filter inconsistent) and larger estimation errors. To address this issue, the OC-EKF modifies the state transition and measurement Jacobian matrices such that the resulting linearized system adheres to the observability properties of the original nonlinear system. In particular, (A.43) can be satisfied by enforcing the following two constraints:
Nk+1=ΦkNk (A.44)
HkNk=0,∀k>0 (A.45)
where Nk and Nk+1 are the unobservable directions evaluated at time-steps k and k+1. Hereafter, an example implementation of the algorithm is described. Further example details can be found in C. X. Guo and S. I. Roumeliotis, “Observability-constrained EKF implementation of the IMU-RGBD camera navigation using point and plane features,” University of Minnesota, Tech. Rep., March 2013, incorporated herein by reference.
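As an illustration of how these two constraints relate to (A.43), the following minimal sketch (in NumPy; the names Phi_list, H_list, N_list and their shapes are assumptions and not part of the disclosure) assembles the observability Gramian from per-step Jacobians and checks (A.44) and (A.45) numerically. If both constraints hold, every block row HkΦk,1 applied to the unobservable directions at the first time step telescopes into HkNk, and MN=0 follows.

```python
import numpy as np

def observability_gramian(Phi_list, H_list):
    """Stack H_k @ Phi_{k,1} for k = 1..K, with Phi_{k,1} = Phi_{k-1} ... Phi_1.

    Phi_list: [Phi_1, ..., Phi_{K-1}], each (n, n)
    H_list:   [H_1, ..., H_K], each (m, n)
    """
    n = H_list[0].shape[1]
    Phi_k1 = np.eye(n)                 # Phi_{1,1} = I
    blocks = [H_list[0] @ Phi_k1]
    for Phi, H in zip(Phi_list, H_list[1:]):
        Phi_k1 = Phi @ Phi_k1          # accumulate Phi_{k,1}
        blocks.append(H @ Phi_k1)
    return np.vstack(blocks)

def constraints_hold(Phi_list, H_list, N_list, tol=1e-9):
    """Check (A.44): N_{k+1} = Phi_k @ N_k and (A.45): H_k @ N_k = 0."""
    propagation_ok = all(np.allclose(N_next, Phi @ N, atol=tol)
                         for Phi, N, N_next in zip(Phi_list, N_list[:-1], N_list[1:]))
    measurement_ok = all(np.allclose(H @ N, 0.0, atol=tol)
                         for H, N in zip(H_list, N_list))
    return propagation_ok and measurement_ok
```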
Observability Constraint for Point Feature Measurements
In this section, the implementation of the OC-EKF for the IMU-RGBD camera navigation system using point feature measurements is presented.
Modification of the State Transition Matrix Φk:
To start, the state transition matrix, Φk, is modified according to the observability constraint (A.44)
Npoint,k+1=ΦkNpoint,k (A.46)
where Npoint,k and Npoint,k+1, defined in (41), are the unobservable directions when using only point features at time-steps k and k+1, respectively, and Φk has the following structure:
The observability constraint (A.46) is equivalent to the following three constraints:
Φ11Ckg=Ck+1g (A.48)
which is satisfied by modifying Φ11 as Φ11*=Ck+1CkT
Φ21Ckg=└vk┘g−└vk+1┘g (A.49)
Φ31Ckg=δt└vk┘g+└pk┘g−└pk+1┘g (A.50)
which can be formulated and solved analytically as a constrained optimization problem in which we seek the modified blocks Φ21* and Φ31* that are closest, in the Frobenius norm, to Φ21 and Φ31 while satisfying constraints (A.49) and (A.50), respectively.
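One standard closed form for this type of problem (shown here only as a hedged sketch; the function and variable names A, u, w are placeholders rather than the notation of the disclosure) is the minimum-Frobenius-norm correction of a matrix subject to a single linear constraint A*u=w:

```python
import numpy as np

def closest_satisfying(A, u, w):
    """Closest matrix to A (Frobenius norm) whose product with u equals w.

    A: (m, n) block to be modified (e.g., a candidate Phi_21 or Phi_31)
    u: (n,)   constraint direction (e.g., C_k @ g)
    w: (m,)   required image (the right-hand side of (A.49) or (A.50))
    """
    u = u.reshape(-1, 1)
    w = w.reshape(-1, 1)
    # Rank-one correction along u; the result is unchanged on the complement of u.
    return A - (A @ u - w) @ u.T / float(u.T @ u)
```

For example (illustrative names only), Φ21* would be obtained as closest_satisfying(Phi21, Ck @ g, skew(vk) @ g - skew(vk1) @ g), where skew(·) denotes the skew-symmetric matrix operator └·┘.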
Modification of the Measurement Jacobian Hpoint:
During the update, we seek to modify the Jacobian matrix Hpointk so as to fulfill constraint (A.45), i.e.,
Hpoint,kNpoint,k=0 (A.51)
Substituting Hpoint,k and Npoint,k, as defined in (A.16) and (A.40) respectively, into (A.51), it can be shown that (A.51) is equivalent to the following two constraints
As before, we can analytically determine the modified Jacobian block Hθ* (and the remaining modified blocks of Hpoint,k) as the closest matrices, in the Frobenius norm, that satisfy these constraints.
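The measurement-Jacobian case admits the same style of closed form. A hedged sketch (H and N are generic placeholders, with N a full-column-rank basis of the unobservable directions) of the minimum-Frobenius-norm modification satisfying H*N=0 simply projects the rows of H onto the orthogonal complement of the unobservable directions:

```python
import numpy as np

def enforce_nullspace(H, N):
    """Closest matrix to H (Frobenius norm) whose product with N is zero."""
    # Orthogonal projector onto the complement of span(N).
    P = np.eye(H.shape[1]) - N @ np.linalg.solve(N.T @ N, N.T)
    return H @ P
```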
Observability Constraint for Plane Feature Measurements
In this section, it is proved that once the OC-EKF is applied to the IMU-RGBD camera navigation system using point feature measurements, the observability constraint (A.43) is automatically satisfied for the plane feature measurements.
Substituting Φk and Hplane,k into the observability Gramian Mplane for plane feature measurements, the first block row of Mplane is just the measurement Jacobian matrix
Mplane(1)=Hplane,1=HC,1[└C(IqG1)Gn┘ 03×15] (A.54)
while the kth block row is computed as
where Πk=Φ11,k−1 . . . Φ11,1, and ψk is a time-varying matrix that does not affect the current analysis. When applying the OC-EKF to point features, Φ11,k*=Ck+1CkT.
Thus, Mplane Nplane=0 is automatically satisfied. In summary, after enforcing the observability constraint for point feature measurements on the state transition matrix, Φk, the observability constraint (A.43) is automatically satisfied for the plane feature measurements.
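To make the key step explicit (a sketch only; it assumes, consistent with (A.54) and Theorem 2, that the orientation-related block of Nplane is C1Gn, i.e., the plane's normal expressed in the IMU frame at the first time step): the product Πk=Φ11,k−1* . . . Φ11,1*=(CkCk−1T)(Ck−1Ck−2T) . . . (C2C1T) telescopes to CkC1T, so the orientation-related block column of the kth block row of Mplane satisfies └CkGn┘ΠkC1Gn=└CkGn┘CkGn=0.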
An experimental setup was constructed comprising an InterSense™ NavChip IMU and a Kinect™ sensor, which contained an RGB camera and an infrared (IR) depth-finding camera. The intrinsic parameters of the Kinect RGB camera and IR camera, as well as the transformation between them, were determined offline using the algorithm described in D. Herrera, J. Kannala, and J. Heikkila, “Joint depth and color camera calibration with distortion correction,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2058-2064, October 2012, incorporated herein by reference. The IMU signals were sampled at 100 Hz, and the Kinect provided RGBD images at a frequency of 6.3 Hz. The plane features were extracted from the RGBD images using the method proposed in C. Erdogan, M. Paluri, and F. Dellaert, “Planar segmentation of RGBD images using fast linear fitting and Markov Chain Monte Carlo,” in Proc. of the IEEE International Conference on Computer and Robot Vision, Toronto, Canada, May 27-30, 2012, pp. 32-39, incorporated herein by reference.
In the experiment, which took place in an office environment, a person holding the IMU-Kinect pair traversed about 185 meters across two floors of a building and returned to the initial position. Using the collected data, the final position error of the following five algorithms was examined:
As expected, OC-MSC-KF SLAM has the lowest final error, and among the remaining algorithms our proposed OC-MSC-KF w/ Planes performs best. Additionally, the algorithms using both point and plane feature measurements (MSC-KF w/ Planes and OC-MSC-KF w/ Planes) have much smaller final errors and perform closer to OC-MSC-KF SLAM. This is because the plane features provide periodic corrections to the IMU-RGBD camera pair's orientation, thus also improving its position estimation accuracy. Finally, we note that enforcing the observability constraints (OC-MSC-KF and OC-MSC-KF w/ Planes) results in better accuracy since the filters do not gain spurious information along unobservable directions.
The previous sections presented an algorithm for fusing inertial measurements with point and plane feature observations captured from one or more image sources, such as an IMU-RGBD camera navigation system. Specifically, it was shown that by observing only a single plane feature of known direction, only the plane's direction in the IMU frame and the gyroscope bias are observable. Then, it was shown that by observing a single point feature and a single plane feature of known direction not parallel to gravity, all the estimated quantities in the IMU-RGBD camera navigation system become observable, except the IMU-RGBD camera position in the global frame. Based on the observability analysis, an OC-EKF was described that significantly improves estimation accuracy and consistency by removing spurious information along unobservable directions from the estimator.
In this example, a computer 500 includes a processor 510 that is operable to execute program instructions or software, causing the computer to perform various methods or tasks, such as performing the enhanced estimation techniques described herein. Processor 510 is coupled via bus 520 to a memory 530, which is used to store information such as program instructions and other data while the computer is in operation. A storage device 540, such as a hard disk drive, nonvolatile memory, or other non-transient storage device, stores information such as program instructions, data files of the multidimensional data and the reduced data set, and other information. The computer also includes various input-output elements 550, including parallel or serial ports, USB, Firewire or IEEE 1394, Ethernet, and other such ports to connect the computer to external devices such as a printer, video camera, surveillance equipment, or the like. Other input-output elements include wireless communication interfaces such as Bluetooth, Wi-Fi, and cellular data networks.
The computer itself may be a traditional personal computer, a rack-mount or business computer or server, or any other type of computerized system. The computer in a further example may include fewer than all elements listed above, such as a thin client or mobile device having only some of the shown elements. In another example, the computer is distributed among multiple computer systems, such as a distributed server that has many computers working together to provide various functions.
The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. Various features described as modules, units or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices or other hardware devices. In some cases, various features of electronic circuitry may be implemented as one or more integrated circuit devices, such as an integrated circuit chip or chipset.
If implemented in hardware, this disclosure may be directed to an apparatus such as a processor or an integrated circuit device, such as an integrated circuit chip or chipset. Alternatively or additionally, if implemented in software or firmware, the techniques may be realized at least in part by a computer-readable data storage medium comprising instructions that, when executed, cause one or more processors to perform one or more of the methods described above. For example, the computer-readable data storage medium or device may store such instructions for execution by a processor. Any combination of one or more computer-readable medium(s) may be utilized.
A computer-readable storage medium (device) may form part of a computer program product, which may include packaging materials. A computer-readable storage medium (device) may comprise a computer data storage medium such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic or optical data storage media, and the like. In general, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Additional examples of computer-readable media include computer-readable storage devices, computer-readable memory, and tangible computer-readable media. In some examples, an article of manufacture may comprise one or more computer-readable storage media.
In some examples, the computer-readable storage media may comprise non-transitory media. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).
The code or instructions may be software and/or firmware executed by processing circuitry including one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other processing circuitry suitable for implementation of the techniques described herein. In addition, in some aspects, functionality described in this disclosure may be provided within software modules or hardware modules.
This disclosure analyzed the inconsistency of VINS from the standpoint of observability. For example, it was shown that standard EKF-based filtering approaches lead to spurious information gain since they do not adhere to the unobservable directions of the true system. Furthermore, an observability-constrained VINS approach was applied to mitigate estimator inconsistency by enforcing the nullspace explicitly. Extensive simulation and experimental results were presented to support and validate the described estimator, by applying it to both V-SLAM and the MSC-KF.
Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims.
This application claims the benefit of U.S. Provisional Patent Application No. 61/767,691, filed Feb. 21, 2013 and U.S. Provisional Patent Application No. 61/767,701, filed Feb. 21, 2013, the entire content of each being incorporated herein by reference.
Other Publications
Ayache et al., “Maintaining Representations of the Environment of a Mobile Robot,” IEEE Trans. Robot. Autom., vol. 5(6), Dec. 1989, pp. 804-819. |
Bartoli et al., “Structure from Motion Using Lines: Representation, Triangulation and Bundle Adjustment,” Computer Vision and Image Understanding, vol. 100(3), Dec. 2005, pp. 416-441. |
Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Patt. Analy. Machine Intell., vol. 8(6), Nov. 1986, pp. 679-698. |
Chen, “Pose Determination from Line-to-Plane Correspondences: Existence Condition and Closed-Form Solutions,” Proc. 3rd. Int. Conf. Comp. Vision, Dec. 4-7, 1990, pp. 374-378. |
Erdogan et al., “Planar Segmentation of RGBD Images Using Fast Linear Fitting and Markov Chain Monte Carlo,” Proceedings of the IEEE International Conference on Computer and Robot Vision, May 27-30, 2012, pp. 32-39. |
Guo et al., “IMU-RGBD Camera 3d Pose Estimation and Extrinsic Calibration: Observability Analysis and Consistency Improvement,” Proceedings of the IEEE International Conference on Robotics and Automation, May 6-10, 2013, pp. 2935-2942. |
Guo et al., “Observability-constrained EKF Implementation of the IMU-RGBD Camera Navigation Using Point and Plane Features,” Technical Report. University of Minnesota, Mar. 2013, 6 pp. |
Hermann et al., “Nonlinear Controllability and Observability,” IEEE Trans. On Automatic Control, vol. 22(5), Oct. 1977, pp. 728-740. |
Herrera et al., “Joint Depth and Color Camera Calibration with Distortion Correction,” IEEE Trans. On Pattern Analysis and Machine Intelligence, vol. 34(10), Oct. 2012, pp. 2058-2064. |
Hesch et al., “Observability-constrained Vision-aided Inertial Navigation,” University of Minnesota, Department of Computer Science and Engineering, MARS Lab, Feb. 2012, 24 pp. |
Hesch et al., “Towards Consistent Vision-aided Inertial Navigation,” Proceedings of the 10th International Workshop on the Algorithmic Foundations of Robotics, Jun. 13-15, 2012, 16 pp. |
Huang et al., “Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera,” Proceedings of the International Symposium on Robotics Research, Aug. 28-Sep. 1, 2011, 16 pp. |
Jones et al., “Visual-inertial Navigation, Mapping and Localization: A Scalable Real-time Causal Approach,” Int. J. Robot. Res., vol. 30(4), Apr. 2011, pp. 407-430. |
Kottas et al., “On the Consistency of Vision-aided Inertial Navigation,” Proceedings of the Int. Symp. Exper. Robot., Jun. 17-21, 2012, 15 pp. |
Li et al., “Improving the Accuracy of EKF-based Visual-inertial Odometry,” Proceedings of the IEEE International Conference on Robotics and Automation, May 14-18, 2012, pp. 828-835. |
Liu et al., “Estimation of Rigid Body Motion Using Straight Line Correspondences,” Computer Vision, Graphics, and Image Processing, vol. 43(1), Jul. 1988, pp. 37-52. |
Lupton et al., “Visual-inertial-aided Navigation for High-dynamic Motion in Built Environments Without Initial Conditions,” IEEE Trans. Robot., vol. 28(1), Feb. 2012, pp. 61-76. |
Martinelli, “Vision and Imu Data Fusion: Closed-form Solutions for Attitude, Speed, Absolute Scale, and Bias Determination,” IEEE Trans. Robot, vol. 28(1), Feb. 2012, pp. 44-60. |
Matas et al., “Robust Detection of Lines Using the Progressive Probabilistic Hough Transformation,” Computer Vision and Image Understanding, vol. 78(1), Apr. 2000, pp. 119-137. |
Meltzer et al., “Edge Descriptors for Robust Wide-baseline Correspondence,” Proc. IEEE Conf. Comp. Vision Patt. Recog., Jun. 23-28, 2008, pp. 1-8. |
Mirzaei et al., “A Kalman Filter-based Algorithm for IMU-Camera Calibration: Observability Analysis and Performance Evaluation,” IEEE Trans. Robot., vol. 24(5), Oct. 2008, pp. 1143-1156. |
Mirzaei et al., “Optimal Estimation of Vanishing Points in a Manhattan World,” IEEE Int. Conf. Comp. Vision, Nov. 6-13, 2011, pp. 2454-2461. |
Mourikis et al., “A Multi-state Constraint Kalman Filter for Vision-aided Inertial Navigation,” Proceedings of the IEEE International Conference on Robotics and Automation, Apr. 10-14, 2007, pp. 3482-3489. |
Mourikis et al., “Vision-aided Inertial Navigation for Spacecraft Entry, Descent, and Landing,” IEEE Trans. Robot., vol. 25(2), Apr. 2009, pp. 264-280. |
Roumeliotis et al., “Stochastic Cloning: A Generalized Framework for Processing Relative State Measurements,” Proc. IEEE Int. Conf. Robot. Autom., May 11-15, 2002, pp. 1788-1795. |
Schmid et al., “Automatic Line Matching Across Views,” Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Jun. 1997, pp. 666-671. |
Servant et al., “Improving Monocular Plane-based SLAM with Inertial Measurements,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 18-22, 2010, pp. 3810-3815. |
Smith et al., “Real-time Monocular SLAM with Straight Lines,” British Machine Vision Conference, vol. 1, Sep. 2006, pp. 17-26. |
Spetsakis et al., “Structure from Motion Using Line Correspondences,” Int. Journal Computer Vision, vol. 4(3), Jun. 1990, pp. 171-183. |
Taylor et al., “Structure and Motion from Line Segments in Multiple Images,” IEEE Trans. Patt. Analy. Machine Intell., vol. 17(11), Nov. 1995, pp. 1021-1032. |
Trawny et al., “Indirect Kalman Filter for 3D Attitude Estimation,” University of Minnesota, Dept. of Comp. Sci. & Eng., MARS Lab, Mar. 2005, 25 pp. |
Weiss et al., “Real-time Metric State Estimation for Modular Vision-inertial Systems,” Proceedings of the IEEE International Conference on Robotics and Automation, May 9-13, 2011, pp. 4531-4537. |
Weiss et al., “Real-time Onboard Visual-inertial State Estimation and Self-calibration of MAVs in Unknown Environment,” Proceedings of the IEEE International Conference on Robotics and Automation, May 14-18, 2012, pp. |
Weiss et al., “Versatile Distributed Pose Estimation and sensor Self-Calibration for an Autonomous MAV,” Proceedings of IEEE International Conference on Robotics and Automations, May 14-18, 2012, pp. 31-38. |
Weng et al., “Motion and Structure from Line Correspondences: Closed-form Solution, Uniqueness, and Optimization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14(3), Mar. 1992, pp. 318-336. |
Williams et al., “Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles,” Proceedings of the International Conference on Robotics and Automation, May 9-13, 2011, pp. 431-438. |
Zhou et al., “Determining 3D Relative Transformations for any Combination of Range and Bearing Measurements,” IEEE Trans. On Robotics, vol. 29(2), Apr. 2013, pp. 458-474. |
Mirzaei et al., “Globally Optimal Pose Estimation from Line Correspondences,” Proc. IEEE Int. Conf. Robot., May 9-13, 2011, pp. 5581-5588. |
U.S. Provisional Appl. No. 61/767,701 by Stergios I. Roumeliotis, filed Feb. 21, 2013. |
U.S. Provisional Appl. No. 61/023,569 by Stergios I. Roumeliotis, filed Jul. 11, 2014. |
U.S. Provisional Appl. No. 61/014,532 by Stergios I. Roumeliotis, filed Jun. 19, 2014. |
Dellaert et al., “Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing,” International Journal of Robotics and Research, vol. 25(12), Dec. 2006, pp. 1181-1203. |
Eustice et al., “Exactly Sparse Delayed-state Filters for View-based SLAM,” IEEE Transactions on Robotics, vol. 22 (6), Dec. 2006, pp. 1100-1114. |
Johannsson et al., “Temporally Scalable Visual Slam Using a Reduced Pose Graph,” in Proceedings of the IEEE International Conference on Robotics and Automation, May 6-10, 2013, 8 pp. |
Kaess et al., “iSAM: Incremental Smoothing and Mapping,” IEEE Transactions on Robotics, Manuscript, Sep. 2008, 14 pp. |
Kaess et al., “iSAM2: Incremental Smoothing and Mapping Using the Bayes Tree,” International Journal of Robotics Research, vol. 21, Feb. 2012, pp. 217-236. |
Klein et al., “Parallel Tracking and Mapping for Small AR Workspaces,” in Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, Nov. 13-16, 2007, pp. 225-234. |
Konolige et al., “Efficient Sparse Pose Adjustment for 2D Mapping,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 18-22, 2010, pp. 22-29. |
Konolige et al., “FrameSLAM: From Bundle Adjustment to Real-time Visual Mapping,” IEEE Transactions on Robotics, vol. 24(5), Oct. 2008, pp. 1066-1077. |
Konolige et al., “View-based Maps,” International Journal of Robotics Research, vol. 29(29), Jul. 2010, 14 pp. |
Kummerle et al., “g2o: A General Framework for Graph Optimization,” in Proceedings of the IEEE International Conference on Robotics and Automation, May 9-13, 2011, pp. 3607-3613. |
Sibley et al., “Sliding Window Filter with Application to Planetary Landing,” Journal of Field Robotics, vol. 27(5), Sep./Oct. 2010, pp. 587-608. |
Smith et al., “On the Representation and Estimation of Spatial Uncertainty,” International Journal of Robotics Research, vol. 5(4), 1986, pp. 56-68 (Note: Applicant points out in accordance with MPEP 609.04(a) that the 1986 year of publication is sufficiently earlier than the effective U.S. filing date and any foreign priority date of Feb. 21, 2014 so that the particular month of publication is not in issue.). |
Prior Publication Data

Number | Date | Country
---|---|---
20140316698 A1 | Oct 2014 | US
Related U.S. Application Data

Number | Date | Country
---|---|---
61767691 | Feb 2013 | US
61767701 | Feb 2013 | US