The present invention relates, in general, to inertial sensor calibration and, more particularly, to systems and methods for compensating for sensor biases using pre-integrated sensor values.
Inertial sensors are a cornerstone of modern (high bandwidth) navigation systems. Inertial sensors are commonly packaged as inertial measurement units (IMUs). A typical IMU includes one or more gyroscopic sensors to detect the magnitude and direction of angular change and a plurality of accelerometers to determine accelerations in three dimensions. IMU sensor data can be integrated to determine the velocity and position of an object or vehicle containing the IMU sensor. IMUs offer a means of “black box” dead reckoning, but suffer from significant sensor errors. A gyroscope bias offset, for example, results in accrued orientation error, which in turn results in incorrect gravity compensation of acceleration measurements. When these errors are integrated, significant positioning error is quickly accrued.
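The drift mechanism described above can be illustrated with a minimal planar dead-reckoning sketch (all values hypothetical): a small constant gyroscope bias corrupts the integrated heading, which misdirects the integration of body-frame acceleration and accrues position error rapidly.

```python
import math

def dead_reckon_2d(accel_body, gyro, dt, n_steps, gyro_bias=0.0):
    """Integrate body-frame acceleration with a (possibly biased) gyro rate."""
    heading, vx, vy, px, py = 0.0, 0.0, 0.0, 0.0, 0.0
    for _ in range(n_steps):
        heading += (gyro + gyro_bias) * dt     # biased angular-rate integral
        ax = accel_body * math.cos(heading)    # rotate acceleration to world
        ay = accel_body * math.sin(heading)
        vx += ax * dt                          # first integral: velocity
        vy += ay * dt
        px += vx * dt                          # second integral: position
        py += vy * dt
    return px, py

# Straight-line motion (true gyro rate 0); a 0.01 rad/s bias bends the path.
true_p = dead_reckon_2d(accel_body=1.0, gyro=0.0, dt=0.01, n_steps=1000)
biased_p = dead_reckon_2d(accel_body=1.0, gyro=0.0, dt=0.01, n_steps=1000,
                          gyro_bias=0.01)
drift = math.hypot(biased_p[0] - true_p[0], biased_p[1] - true_p[1])
```

After only 10 seconds, the heading error of 0.1 rad has already produced position drift on the order of meters, illustrating why even small gyroscope biases dominate integrated error.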
Dynamic calibration of inertial sensors can be a vital aspect of inertial navigation. The current state-of-the-art relies upon various Kalman filtering architectures to produce an in-situ estimate of sensor errors. Sensor error estimates are often fed back, much like a control loop, to compensate for sensor errors in real-time before integration is performed.
Simultaneous localization and mapping (SLAM) has been a major area of navigation-related research over the past two decades. The development of sparse methods has allowed a new perspective on navigation and localization, bringing into question whether Kalman filtering is the best way of inferring inertial sensor calibration parameters.
Localization and mapping strategies developed in recent years have primarily focused on cameras or laser scanners as primary navigation sensors. SLAM has been extensively studied by the robotics community, and some efforts have converged toward a factor graph representation of robot localization and mapping measurements. Much of the existing prior art focuses on visual (monocular and/or stereo) aiding only.
Leutenegger et al. (S. Leutenegger et al. “Keyframe-based visual-inertial odometry using nonlinear optimization,” The International Journal of Robotics Research, December 2014) published work on a visual inertial SLAM solution, which estimates bias terms. Their work presents an overview of visual inertial systems, but does not present complete analytical models for compensated interpose inertial constraints.
Indelman et al. (V. Indelman, et al. “Factor graph based incremental smoothing in inertial navigation systems,” in Intl. Conf. on Information Fusion (FUSION), Singapore, July 2012, pp. 2154-2161) initially proposed to use individual inertial sensor measurements for odometry constraints, creating a new pose for each sensor measurement. However, the rapid increase in the number of poses makes it difficult to compute real-time solutions for large problems. Later work has adopted pre-integrals, but fails to present an analytical version of an inertial sensor model.
Martinelli (A. Martinelli, “Vision and IMU data fusion: Closed-form solutions for attitude, speed, absolute scale, and bias determination,” Robotics, IEEE Transactions on, vol. 28, no. 1, pp. 44-60, February 2012) proposed a closed-form solution for visual-inertial odometry constraints, but only considered accelerometer biases during his derivation. While accelerometer bias is an important error source, it is not the most significant. Platform misalignment, which is predominantly driven by gyroscope bias, results in erroneous coupling of gravity compensation terms. This gravity misalignment, when integrated, is a dominant error source in all inertial navigation systems.
Monocular SLAM, bundle adjustment, and structure from motion (SfM) show that images from a single moving camera can provide sufficient information for localization and mapping. The solution, however, has unconstrained scale. Some SLAM solutions, such as LSD-SLAM, explicitly parameterize map scale, which must then be inferred by other means.
In 2001, Qian et al. (G. Qian, R. Chellappa, and Q. Zheng, “Robust structure from motion estimation using inertial data,” JOSA A, vol. 18, no. 12, pp. 2982-2997, 2001) suggested incorporating inertial data with a structure from motion procedure to aid in scale observability. This data fusion further enables high-resolution trajectory estimation between camera frames and coasting through featureless image sequences. However, these methods do not model raw sensor error terms, and in effect either require high-performance IMUs or ultimately place little reliance on poor inertial measurements.
There have been some attempts in the prior art (such as T. Lupton et al., “Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions,” IEEE Trans. Robotics, vol. 28, no. 1, pp. 61-76, February 2012) in which raw inertial measurements are integrated first and offset compensation is only done later. However, there exists a need for continuous propagation and a closed-form solution. There remains a need to perform more optimal in-situ inference over inertial measurements for localization, mapping, and control, in a manner similar to how one would use “non-differentiated” measurements such as landmark sightings or range measurements.
A strong synergy between visual and inertial sensors has resulted in various visual-inertial navigation systems (VINS) being developed. Major contributions to visual-inertial odometry came in 2007 with the introduction of the Multi-State Constrained Kalman Filter (MSC-KF) by Mourikis et al. (A. I. Mourikis et al., “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Robotics and Automation, 2007 IEEE International Conference on. IEEE, 2007, pp. 3565-3572) and inertial structure from motion by Jones et al. (E. Jones et al., “Visual-inertial navigation, mapping and localization: A scalable real-time causal approach,” The International Journal of Robotics Research, vol. 30, no. 4, pp. 407-430, 2011.) Both employ tight coupling between inertial measurements and visual feature tracking. The visually aided inertial navigation by Madison et al. (R. W. Madison, et al., “Vision-aided navigation for small UAVs in gps-challenged environments,” The Draper Technology Digest, p. 4, 2008) uses a loosely coupled Kalman filter to fuse visual and inertial information.
Refinements to the MSC-KF have improved filter consistency by restricting the state updates to respect the unobservable null space of the system, which enters the state through linearization assumptions and numerical computational issues. Filtering-style VINS generally estimate a single snapshot estimate of inertial sensor biases. One notable issue with visual inertial filtering solutions is that both the prediction and measurement models are non-linear. It has been noted that performance may be improved by improving linearization points through iteration of the updates to the state vector. Since filter update steps are in essence Gauss-Newton update steps, the work of Bell (B. M. Bell, “The iterated Kalman smoother as a gauss-newton method,” SIAM Journal on Optimization, vol. 4, no. 3, pp. 626-636, 1994) stands as a good middle ground between filtering and smoothing solutions.
The aerospace community has contributed significantly to in-situ navigation and converged on an extended Kalman filter (EKF) data fusion architecture for inertial navigation systems (INS). Groves (P. D. Groves, Principles of GNSS, inertial, and multisensor integrated navigation systems. Artech House, GNSS Technology and Application Series, 2008) argued that any aided inertial navigation system should always estimate gyroscope and accelerometer biases. Industrial applications commonly use Kalman filters to dynamically estimate a single time-slice of 30 or more IMU calibration parameters.
A common approach is to use inertial sensors for capturing high bandwidth dynamics and gravity measurements while radio-based ranging measurements, such as a global navigation satellite system (GNSS), preserve the long term accuracy of the navigation solution. GNSS aiding is either done by cascading the INS/GNSS filter behind the GNSS solution filter (loosely coupled), or directly using pseudo-range satellite measurements in the INS/GNSS measurement model (tightly coupled). In the extreme, inertial information is used to also aid satellite signal tracking (deeply coupled). Today, variations of these filtering techniques are used for both manned and unmanned aerial vehicles. Maritime and underwater navigation use a similar processing architecture, although there is more emphasis on acoustic navigational aids.
While much of the work in this field has focused on visual filtering methods, there remains a need for improved bias compensation for IMUs.
Embodiments may address one or more of the shortcomings of the prior art by using any of the following concepts. The following concepts are executed on one or more processors that are coupled to hardware inertial sensors in an inertial measurement unit (IMU). In one embodiment, a method for determining pose parameters of an inertial measurement unit (IMU) sensor includes collecting, in a memory unit of the sensor, measurement data generated by at least one accelerometer of the IMU sensor corresponding to at least one acceleration component of the IMU sensor and by at least one gyroscope of the IMU sensor corresponding to at least one rotational attribute of the IMU sensor, and using a processor of the IMU sensor to temporally integrate the measurement data, including any errors associated therewith, between a time (ti) corresponding to a pose (i) of the IMU and a time (tj) corresponding to a pose (j) of the IMU to generate measured changes in one or more pose parameters of the IMU sensor. The method further includes generating a temporally continuous error propagation model of the dependence of each of the one or more pose parameters on the measurement errors and temporally integrating the temporally continuous error propagation model to generate one or more compensation gradients for the pose parameters.
In one aspect, the method can include generating a predicted change in each of the pose parameters between pose (i) and pose (j) using the compensation gradients, and using the measured and the predicted changes in the pose parameters to infer an estimate of a change in each of the pose parameters. In another aspect, the step of generating the predicted change further can include using an observed change in the pose parameters between the two poses that is observed using a sensor other than the IMU sensor. In another aspect, the method can include using the measured and predicted changes to infer estimates for one or more parameters of the error propagation model so as to retroactively calibrate the IMU sensor.
In one aspect, the step of using the measured and predicted changes can include generating a residual based on the measured and predicted changes and optimizing the residual to infer the estimate of the change in each of the pose parameters. In another aspect, the method can include using the residual to infer estimates for one or more parameters of the error propagation model so as to retroactively calibrate the IMU sensor. In another aspect, the method can include the step of optimizing the residual by minimizing the magnitude of the residual. The method can also include generating an error covariance of the measured pose parameters and using the residual and the error covariance to infer the estimate of the change in each of the pose parameters. Similarly, the method can include using the residual and the error covariance to infer estimates for one or more parameters of the error propagation model so as to retroactively calibrate the IMU sensor. In one aspect, the step of generating the temporally continuous error propagation model can include using a Gauss-Markov process as an additive bias.
The method can also include using one or more environmental cues in addition to the measured and predicted changes to infer an estimate of a change in each of the pose parameters.
In some embodiments, relative pose parameters corresponding to the pose (i) and the pose (j) are defined according to the following relation:
wherein,
In some embodiments, predicted change in the pose parameters is defined according to the following relation:
wherein
In some embodiments, the step of using the measured and predicted changes can include calculating a residual according to the following relation:
wherein,
In some embodiments, the error propagation model provides time variation of a plurality of compensation gradients (ξ) employed in the C1 matrix in accordance with the following relation
wherein the compensation matrix (ξ) is defined as:
wherein F and G matrices are defined as:
wherein
ξ is a matrix of compensation gradients,
F and G are matrix terms defined by M,
$[{}^b\tilde f\,\times]$ is the skew form of the force measured by the IMU sensor,
$J_R$ is a Jacobian expressing rotational terms,
$[{}^b\omega\,\times]$ is the skew form of the rotation rate measured by the IMU sensor, and
${}^{b_i}_{b}R$ is an accumulated rotational value.
According to another embodiment, an inertial measurement unit (IMU) sensor includes at least one accelerometer for generating acceleration data corresponding to at least one acceleration component of the IMU sensor, at least one gyroscope for generating data corresponding to at least one rotational attribute of the IMU sensor, and a memory unit for storing the acceleration and rotational data. The IMU further includes data, stored in the memory unit, making up a temporally continuous error propagation model of the dependence of one or more pose parameters of the IMU on errors associated with the measured data, and a processor for temporally integrating the acceleration and rotational data, including any errors associated therewith, between a time (ti) corresponding to a pose (i) of the IMU and a time (tj) corresponding to a pose (j) of the IMU to generate one or more measured changes in one or more pose parameters of the IMU sensor, the processor further temporally integrating the temporally continuous error propagation model to generate one or more compensation gradients for the pose parameters. The processor can be configured to carry out any of the methods disclosed herein.
In one embodiment, the IMU is part of a navigational system including a navigation processor external to the at least one inertial measurement unit (IMU) sensor configured to compensate for sensor biases in the at least one inertial measurement unit (IMU) sensor in response to the measured one or more changes and the one or more compensation gradients and an electrical interface for communicating the measured one or more changes and the one or more compensation gradients from the IMU processor to the navigation processor.
In one embodiment, the IMU and navigational processor are part of a mobile device having a user interface and a network interface.
Notation conventions: The following are variables used throughout the description of embodiments.
x is a pose of an object, where the object is either the IMU itself or an object containing the IMU. Pose is a vector quantity having rotational, position, and velocity components. As used with respect to the present invention, the pose may also include terms estimating the biases of the rotational and linear inertial measurements.
p is a position of an object.
v is the linear velocity of an object.
R is the rotational orientation of an object, and can be stated as a function of an orientation quaternion q.
φ is the angular orientation, which is an integral of the angular velocity ω.
b is a bias term that estimates the IMU bias for a certain measured property.
f is the force exerted on an object, which can also be described as an acceleration, a, because the object's mass is constant.
ζ is a vector aggregate that includes pose and bias parameters, as explained with respect to Equation 23.
E is an encapsulation of errors.
ν (nu) is a slack variable that can be considered a placeholder for errors.
k is a discrete moment in time, generally at the IMU sensor rate.
w refers to the world frame.
δ is a residual value as explained below.
The following modifiers can be applied over variables: ‘̂’ indicates an estimated value; ‘˜’ indicates a noisy measured value; ‘¯’ indicates that a value is the true value; ‘º’ indicates a quaternion; ‘˙’ is the common notation for a derivative. A superscript to the left of a variable identifies the frame of reference for that variable. (For rotational variables, a subscript and superscript indicate the source and destination of that rotation, respectively.) For example, ${}^b f$ denotes a force expressed in the body frame b, and ${}^w_b R$ denotes the rotation from the body frame to the world frame.
Embodiments utilize inertial pre-integral bias compensation models. In forming these models, embodiments are explained with analytical derivation for first order retroactive inertial compensation. These models can be used by one or more processors within an IMU or in communication with an IMU to compensate for IMU sensor biases, preferably in a continuous time domain model. In some embodiments second order compensation can be used.
Embodiments allocate error to inertial sensor bias terms. This approach is generally different from allocating error directly to a position or orientation odometry constraint, sometimes also called “bias” in the prior art. An example of this type of prior art bias based on constraints is illustrated by
Inertial measurements capture vehicle dynamics and other effects, such as gravity, at high rate, but include sensor measurement errors. This high rate information can be summarized by directly integrating all inertial information between distinct platform pose locations. These integrated values are called inertial pre-integrals. When one tries to use these inertial measurements as odometry constraints in a factor graph formulation, compensation of sensor errors is not trivial. Embodiments provide a computationally tractable and capable inertial odometry measurement model.
The use of factor graphs in some embodiments gives unifying perspective on non-linear interactions between key variables of interest in a system. Information from various sensors can all interact naturally by adding the associated constraint functions (sensor measurements) and variable nodes to a graph. Furthermore, factor graphs can be used to infer state estimates in history, while simultaneously predicting states in the future via model predictions. In some embodiments, the factor graph formulation together with efficient incremental smoothing algorithms allow a tractable language for describing a large inference problem akin to simultaneous localization and mapping (SLAM).
To perform state estimation and planning, information from each sensor should only contribute to one constraint in a factor graph. Use of pre-marginalized odometry constraints in a SLAM solution may incorrectly reuse measurements conceptually shown with the visual feature and pre-marginalized visual-odometry constraints in
Typically, filtering solutions in inertial navigation only have one opportunity to compensate estimated sensor biases when producing a naive odometry constraint. Inertial odometry and inference on a factor graph, used in embodiments disclosed herein allows using aiding information from any other aiding measurement without suffering permanent sensor bias errors in the odometry constraints. For example, camera measurements or slower rate absolute position measurements can be used to aid in correcting for inertial sensor bias, without introducing new errors into these estimated bias terms.
Embodiments of inertial odometry compensate for gyroscope and accelerometer bias terms; this compensation is explained below along with an analytical derivation.
Prior art filtering approaches usually compensate inertial measurements with estimated error terms before a navigation solution is computed by integration. Equation 1 shows a basic computation of the position p in the world frame w at a given time t. The first form shows the simplified common integral of acceleration to find position, where only acceleration is considered. Eq. 1 accounts for rotation R, gravity g, a noisy IMU measurement of force/acceleration f̃ (recall, ˜ indicates a noisy measurement, while ̂ indicates an estimate) and attempts to compensate for this noisy measurement with an estimate b̂ of the bias terms of the IMU. The position at time t can be expressed as Eq. 1 (showing the actual/ideal position and a model for estimating the position given a noisy acceleration measurement and an estimate of IMU bias terms).
${}^w p_t = \iint_{t_0}^{t} \left( {}^w_b R\; {}^b f + {}^w g \right) dt^2$

${}^w \hat p_t = \iint_{t_0}^{t} \left( {}^w_b \hat R \left( {}^b \tilde f - \hat b_a \right) + {}^w g \right) dt^2$ (Equation 1)

where ${}^b\tilde f$ is the noisy measured acceleration, $\hat b_a$ is the estimated accelerometer bias, and ${}^w g$ is the world frame gravity. This equation represents a position estimate at time t, relative to the starting condition at time $t_0$, together with bias estimate terms.
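As a minimal one-dimensional numerical sketch of the filtering-style compensation in Eq. 1 (hypothetical values; gravity and rotation omitted), the bias estimate is subtracted from each noisy specific-force sample before the double integration:

```python
# Sketch of compensate-then-integrate (Eq. 1 style), 1-D, hypothetical values.
def integrate_position(f_meas, bias_hat, gravity, dt):
    """Double-integrate compensated acceleration to a position estimate."""
    v, p = 0.0, 0.0
    for f in f_meas:
        a = (f - bias_hat) + gravity   # compensate measurement, add gravity
        v += a * dt                    # first integral: velocity
        p += v * dt                    # second integral: position
    return p

dt = 0.01
true_accel = 2.0                        # m/s^2, constant (hypothetical)
accel_bias = 0.05                       # m/s^2 accelerometer bias (hypothetical)
meas = [true_accel + accel_bias] * 500  # 5 s of biased measurements
p_uncomp = integrate_position(meas, bias_hat=0.0, gravity=0.0, dt=dt)
p_comp = integrate_position(meas, bias_hat=accel_bias, gravity=0.0, dt=dt)
```

An uncompensated bias inflates the doubly integrated position, while a good bias estimate recovers the expected roughly 0.5·a·t² trajectory; the pre-integral approach described next defers this compensation until after integration.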
Embodiments utilize a pre-integral method whereby the integral is distributed and sensor bias terms are only compensated after inertial measurements have been integrated. In concept, an estimate of the change in position between poses xi and xj can be expressed as in Eq. 2. (A vectorized model of all pre-integrals used in some embodiments can be seen in Eq. 6.)
$\Delta \hat p_{i\to j} = \iint_{t_i}^{t_j} {}^{b_i}_{b}\hat R\; {}^b \tilde f \; dt^2$ (Equation 2)
The approach can be effective because, generally, outside contributions, such as raw sensor bias or gravity, vary much slower than the dynamics of the system and can be efficiently subtracted later using low order integration assumptions.
Pre-integrals define the change in position, velocity and orientation from the ith pose to the jth pose, as shown in
In some embodiments, exemplary pre-integral interpose changes in position, velocity and orientation, respectively, can be computed at sensor rate tk−1→tk, as shown in Eq. 3.

${}^{b_i}\Delta p_k \mathrel{+}= {}^{b_i}\Delta v_{k-1}\,\delta t$

${}^{b_i}\Delta v_k \mathrel{+}= {}^{b_i}_{b_{k-1}}R\; {}^b\tilde f_k\,\delta t$

${}^{b_i}_{b_k}R = {}^{b_i}_{b_{k-1}}R\,\exp([\Delta\phi\,\times])$ (Equation 3)
Numerical integration can be used to estimate the values in Eq. 3 (e.g., zeroth or first order). In some embodiments, coning, sculling and scrolling compensated multi-rate algorithms, such as those developed by Savage (P. G. Savage, “Strapdown inertial navigation integration algorithm design part 1: Attitude algorithms,” Journal of guidance, control, and dynamics, vol. 21, no. 1, pp. 19-28, 1998) can be used when computation is limited.
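The sensor-rate recursion of Eq. 3 can be sketched as follows (a zeroth-order hold assumption with hypothetical constant inputs); `exp_so3` here is the standard Rodrigues closed form for the SO(3) exponential, not a function defined in the present disclosure:

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues formula: so(3) rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def preintegrate(gyro, accel, dt):
    """Summarize raw, uncompensated samples into interpose deltas (Eq. 3)."""
    dp = np.zeros(3); dv = np.zeros(3); dR = np.eye(3)
    for w, f in zip(gyro, accel):
        dp = dp + dv * dt               # position delta accumulates velocity
        dv = dv + dR @ f * dt           # rotate specific force into frame i
        dR = dR @ exp_so3(w * dt)       # right-multiply incremental rotation
    return dp, dv, dR

# 1 s of constant forward acceleration, no rotation (hypothetical inputs).
n, dt = 100, 0.01
gyro = [np.zeros(3)] * n
accel = [np.array([1.0, 0.0, 0.0])] * n
dp, dv, dR = preintegrate(gyro, accel, dt)
```

Note that biases are deliberately not subtracted inside the loop; the summarized deltas are compensated retroactively, as described below.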
For completeness, some embodiments represent the orientation estimate with quaternions. The body orientation of the ith pose with reference to the world frame w can be written as ${}^w_{b_i}\mathring q$, and interpose orientation changes compose through the quaternion product:

${}^w_{b_j}\mathring q = {}^w_{b_i}\mathring q \otimes {}^{b_i}_{b_j}\mathring q, \qquad \left\|{}^w_{b_j}\mathring q\right\| = 1$ (Equation 4)

where ⊗ is the Hamiltonian quaternion product and ${}^{b_i}_{b_j}\mathring q$ is the interpose rotation from pose i to pose j.
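A minimal sketch of the Hamiltonian product ⊗ and the unit-norm property used in Eq. 4 (a scalar-first component convention is assumed here; the disclosure does not fix one):

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qnorm(q):
    return math.sqrt(sum(c * c for c in q))

# Two successive 45-degree yaw rotations compose into a 90-degree yaw.
half = math.pi / 8                      # half-angle of 45 degrees
q45 = (math.cos(half), 0.0, 0.0, math.sin(half))
q90 = qmul(q45, q45)
```

The product of unit quaternions remains a unit quaternion, which is why the norm constraint in Eq. 4 is preserved under composition.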
Model derivation: dynamic gyro and accelerometer bias compensation utilizes a measurement model to account for estimated biases in the computed interpose pre-integrals. Using this model, a processor accumulates the direction in which to compensate future bias estimates at each pose. The latter can occur along with the pre-integration process at the sensor measurement rate.
Pose definition: Embodiments utilize a pose model that contains not only position and orientation, which is standard in SLAM, but also velocity and inertial sensor bias estimates. The elements for a pose relative to the world frame, ${}^w x_i$, can therefore, in some embodiments, be modeled as:

${}^w x_i = \left[ {}^w p_i;\; {}^w_{b_i}\mathring q;\; {}^w v_i;\; \hat b_{\omega,i};\; \hat b_{a,i} \right]$ (Equation 5)

where ${}^w p_i$, ${}^w_{b_i}\mathring q$, and ${}^w v_i$ are the position, orientation, and velocity of the ith pose, and $\hat b_{\omega,i}$ and $\hat b_{a,i}$ are the gyroscope and accelerometer bias estimates at that pose.
Inertial measurements: A vectorized sensor measurement can be defined that includes pre-integral values, such as those defined in Eq. 3. Thus, a vectorized group of pre-integral values can be defined as:

${}^{b_i}\Delta\tilde x_{i\to j} = \left[ {}^{b_i}\Delta\tilde p_{i\to j};\; {}^{b_i}\Delta\tilde\phi_{i\to j};\; {}^{b_i}\Delta\tilde v_{i\to j};\; \Delta b_\omega;\; \Delta b_a \right]$ (Equation 6)
Because the pre-integral values include measurement noise, a pre-integral measurement $z_{ij}$ can be modeled as a pre-integral value with an error slack variable ν:

$z_{ij} = {}^{b_i}\Delta\tilde x_{i\to j} + \nu$ (Equation 7)

where ${}^{b_i}\Delta\tilde x_{i\to j}$ is the vectorized pre-integral of Eq. 6. The measurement can then be treated probabilistically, conditioned on the two poses it connects:

$\mathbb{P}\!\left(Z_{ij}\,\middle|\,{}^w x_i,\, {}^w x_j\right)$ (Equation 8)
This probabilistic model is related to a common model used in SLAM applications to represent odometry constraints that probabilistically estimate a future pose:

$\mathbb{P}\!\left({}^w x_j\,\middle|\,{}^w x_i,\, z_{ij}\right)$ (Equation 9)
The expression in Eq. 9 eases the marginalization operation in a recursive filtering context. In reality though, an IMU has certainly moved between two distinct points while an uncertain odometry measurement is collected between them.
Gaussian Uncertainty and Markov Random Fields: In some embodiments, a model addresses the probabilistic constraints by assigning sufficient uncertainty (covariance) to encapsulate all errors in measurement and computation according to the model:

$\Sigma = E\!\left[(z_{ij}-\mu)(z_{ij}-\mu)^T\right]$ (Equation 10)
Inertial sensor noise is generally well modeled as white noise plus flicker noise. Thus, models in some embodiments use a standard Gauss-Markov process to model sensor biases according to the relationships:

$\dot b_\omega = \eta_\omega, \qquad \dot b_a = \eta_a$ (Equation 11)
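The bias model of Eq. 11 discretizes to a random walk, b[k] = b[k−1] + η·√δt. The following sketch (hypothetical noise density) illustrates how slowly such a bias wanders relative to vehicle dynamics, which is the property the retroactive compensation relies upon:

```python
import random

def simulate_bias(n_steps, dt, sigma, seed=0):
    """Discrete random-walk bias: white noise driving the bias derivative."""
    rng = random.Random(seed)
    b, path = 0.0, []
    for _ in range(n_steps):
        b += rng.gauss(0.0, sigma) * dt ** 0.5  # Euler step of b_dot = eta
        path.append(b)
    return path

# 100 s of bias history at 100 Hz with a hypothetical noise density.
path = simulate_bias(n_steps=10000, dt=0.01, sigma=0.001)
```

Over 100 seconds the simulated bias moves only on the order of its (small) accumulated standard deviation, far slower than typical vehicle dynamics.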
The unnormalized likelihood density of the entire system (encoded by the factor graph) can be constructed as the product of m constraint potentials, ψνz,i and n prior belief potentials, ψν,j. Each potential depends on variables νk,j and measurements zk,i, as depicted by the graph connectivity. (It should be noted that some literature in the art may refer to νk,j as theta and that this variable should not be confused with the slack variable nu.) The SLAM problem is known to generally be sparse. A probability model can be defined as the following relationship, in some embodiments, where l is a time index. It should be noted that the following equations can use n unary and m pairwise constraints, meaning that multiple variables can influence neighbors. Thus, it should be understood that wherever inertial variables i and j are used, these variables could also include other sensor or constraint information, in addition to inertial measurement data.
$\mathbb{P}(\nu_l\,|\,z_l) \propto \prod_{j=1}^{n} \psi_{\nu,j}(\nu_{l,j}) \prod_{i=1}^{m} \psi_{\nu z,i}(\nu_{l,i},\, z_{l,i})$ (Equation 12)
Completely unknown variables are assigned a uniform prior, and can be dropped in the un-normalized likelihood. Considering unimodal normal constraint potentials:
$\psi_{\nu z,i}(\nu_{l,i},\, z_{l,i}) = \exp\!\left(-\tfrac{1}{2}\,\delta_{l,i}^T\, \Lambda_i\, \delta_{l,i}\right)$ (Equation 13)

where the residual $\delta_{l,i} = z_{l,i} - h_i(\nu_{l,i})$. In SLAM, variable assignments are generally desired that give the maximum of the full posterior density:

$\nu_l^* = \operatorname*{argmax}_{\nu_l}\; \mathbb{P}(\nu_l\,|\,z_l)$ (Equation 14)

Using $\Sigma_i$ as the measurement/process noise covariance matrix ($\Lambda_i = \Sigma_i^{-1}$) provides a convenient parameterization that can be used to find the optimal assignment of variables via standard nonlinear least squares:

$\nu_l^* = \operatorname*{argmin}_{\nu_l}\; \sum_i \tfrac{1}{2}\,\delta_{l,i}^T\, \Sigma_i^{-1}\, \delta_{l,i}$ (Equation 15)

Future model predictions and goal states may be included via equality constraints of predictive models, where $m(\nu_l) = 0$:

$\nu_l^* = \operatorname*{argmin}_{\nu_l}\; \sum_i \tfrac{1}{2}\,\delta_{l,i}^T\, \Sigma_i^{-1}\, \delta_{l,i} \quad \text{s.t.}\;\; m(\nu_l) = 0$ (Equation 16)
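The unconstrained nonlinear least squares formulation of Eqs. 14-16 is typically solved with Gauss-Newton iterations; the following is a minimal scalar sketch with a hypothetical measurement function h(x) = exp(x), not a model from the present disclosure:

```python
import math

def gauss_newton(z, h, dh, x0, iters=20):
    """Scalar Gauss-Newton: minimize sum of squared residuals z_i - h(x)."""
    x = x0
    for _ in range(iters):
        r = [zi - h(x) for zi in z]         # stacked residuals
        J = [-dh(x)] * len(z)               # Jacobian of each residual
        JTJ = sum(j * j for j in J)
        JTr = sum(j * ri for j, ri in zip(J, r))
        x = x - JTr / JTJ                   # normal-equations update step
    return x

# Noise-free observations of h(x) = exp(x) at the true value x = 0.5.
z = [math.exp(0.5)] * 3
x_star = gauss_newton(z, h=math.exp, dh=math.exp, x0=0.0)
```

Each filter update step in the discussion above is, in essence, one such Gauss-Newton step; smoothing repeats the step over the whole factor graph until convergence.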
Interpose Definition: F. Lu and E. Milios, “Globally consistent range scan alignment for environmental mapping,” Autonomous Robots, vol. 4, pp. 333-349, April 1997, illustrates a standard interpose odometry representation in which the body poses are spatially separated at times i and j. The symmetric difference between poses can define an interpose distance $f_\ominus$:

${}^w x_j \ominus {}^w x_i = f_\ominus({}^w x_i,\, {}^w x_j)$ (Equation 17)
Embodiments attempt to incorporate the influence of sensor biases and gravity, which have been integrated into the pre-integral measurements. To accomplish this, embodiments assume that some compensation function $f_B(\cdot)$ must exist, such that $f_B$ captures the measured effects of sensor bias, gravity, etc. For convenience, illustrative embodiments collect the inertial compensation terms (accelerometer and gyroscope biases as well as accelerometer-measured gravity) as $B = \{b_a,\, b_\omega,\, g\}$. In some embodiments, more error sources can be included in the model, as desired.
Using this compensation function, as explained further below, consider the composition (∘) of the standard interpose distance $f_\ominus$ with $f_B$. Each measurement, relating multiple variables as modeled in the factor graph, represents a residual; in the total multi-sensor inference problem, the inertial odometry constraint between two poses, $\delta_{ij}$, can be used for multi-sensor fusion (inference). Using the residual function $\delta_{l,i}$ discussed in Eqs. 13-16, the measurement model for inference can then be written as:

$\delta_{ij} = {}^{b_i}\Delta\tilde x_{i\to j} - (f_\ominus \circ f_B)({}^w x_i,\, {}^w x_j)$ (Equation 18)

That is, a residual value can be defined to be the difference between the pre-integral values, ${}^{b_i}\Delta\tilde x_{i\to j}$, and the bias-compensated interpose distance predicted from the two poses.
To use existing non-linear least squares optimization libraries, such as software available in the prior art, one can make the assumption that each pre-integral measurement random variable $Z_{ij}$ is sampled from a multivariate normal distribution:

$Z_{ij} \sim \mathcal{N}(\mu_{ij},\, \Sigma), \qquad \mu_{ij} = (f_\ominus \circ f_B)({}^w x_i,\, {}^w x_j)$ (Equation 19)
Embodiments attempt to recover the actual odometry dynamics from the pre-integral measurements. During pre-integration, gravity and accelerometer errors are accumulated at various orientations with respect to the previous pose frame, $b_i$, and the orientation estimate itself is in error due to gyroscope biases. This task is complicated by the noncommutative nature of rotations ($R_1 R_2 \neq R_2 R_1$). Furthermore, the pre-integration of noisy gyroscope measurements is a product of exponentials. At each IMU measurement, embodiments can compute an update in orientation which includes some propagated orientation error due to gyro bias:

${}^{b_i}_{b_j}R \approx e^{[\tilde\omega_1 \delta t\,\times]}\, e^{[-b_\omega \delta t\,\times]}\, e^{[\tilde\omega_2 \delta t\,\times]}\, e^{[-b_\omega \delta t\,\times]} \cdots\, e^{[\tilde\omega_n \delta t\,\times]}\, e^{[-b_\omega \delta t\,\times]}$ (Equation 20)
Individual bias errors are part of the larger product and cannot be separated out easily. A further complication arises because the erroneous orientation estimate is used at each IMU measurement epoch to compute velocity and position deltas, as shown in Eq. 3.
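The noncommutativity that prevents factoring the bias terms out of the product in Eq. 20 can be checked numerically. The sketch below (hypothetical rotation rates) also shows the commutation gap vanishing at infinitesimal angles, which is the property the first-order compensation exploits:

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues closed form for the SO(3) exponential map."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

# Finite rotations about different axes do not commute.
Rx = exp_so3(np.array([0.3, 0.0, 0.0]))   # rotation about x
Rz = exp_so3(np.array([0.0, 0.0, 0.4]))   # rotation about z
commutator_gap = np.linalg.norm(Rx @ Rz - Rz @ Rx)

# At infinitesimal angles the same gap nearly vanishes.
rx = exp_so3(np.array([3e-4, 0.0, 0.0]))
rz = exp_so3(np.array([0.0, 0.0, 4e-4]))
small_gap = np.linalg.norm(rx @ rz - rz @ rx)
```

Because each δt-sized factor in Eq. 20 is nearly infinitesimal, the interleaved bias factors commute to first order, which is what makes a linearized, retroactive bias compensation reasonable.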
A closed-form solution for $(f_\ominus \circ f_B)$ can be difficult to obtain. Accordingly, some embodiments utilize a first order approximation to reduce processor overhead. For this approximation, these embodiments assume the sensor error dynamics are much slower than the actual vehicle dynamics, which is almost always the case. This assumption makes post-integration compensation computationally tractable. The slow sensor error dynamics allow the first order approximation of $(f_\ominus \circ f_B)$ to be reasonably accurate. Some embodiments improve upon this accuracy by using second, third, or greater orders in the calculation.
Consider a first-order Taylor series approximation of Eq. 18 with respect to the terms in B:

$(f_\ominus \circ f_B) \approx (f_\ominus \circ f_B)\big|_{\hat B} + \frac{\partial (f_\ominus \circ f_B)}{\partial B}\bigg|_{\hat B}\, \delta B$ (Equation 21)
For practical computational appeal, a vector $\zeta_{ij}$ can be constructed as a vectorized function $c({}^w x_i,\, {}^w x_j,\, {}^w g)$ that maps states from poses ${}^w x_i, {}^w x_j \in \mathbb{R}^{16}$ and outside influences, such as ${}^w g \in \mathbb{R}^3$, into a column vector $\zeta_{ij} \in \mathbb{R}^{30}$:

The vector $\zeta_{ij}$ contains sufficient information to characterize the relative interpose inertial measurement from i→j, and can be compared to the variables making up a single pose in Eq. 5. Other inputs may include effects such as earth rotation, Coriolis, or transport rate.
The interpose rotation ${}^{b_i}_{b_j}\phi$ can be recovered from the pose orientations as:

${}^{b_i}_{b_j}\phi = \left(\log_{SO3}\!\left({}^w_{b_i}R^T\; {}^w_{b_j}R\right)\right)^{\vee}$ (Equation 24)

Note that the $\log_{SO3}$ function and Vee operator $(\cdot)^\vee$ are defined in Eqs. 55-56. Given $\zeta_{ij}$, a linear transformation L can be defined that maps $(f_\ominus \circ f_B)\big|_{\hat B}$ onto $\zeta_{ij}$:

$(f_\ominus \circ f_B)\big|_{\hat B} = L\, \zeta_{ij}$ (Equation 25)
Expanding on Eq. 22, the first order bias and gravity compensation transform operator $C_1$ can be defined. (Note that $C_1$ can depend on a linearization point.)

$C_1 : \mathbb{R}^{30} \to \mathbb{R}^{15}$ (Equation 26)
The matrix-vector product $C_1\, \zeta_{ij}$ computes the first order dynamic inertial bias compensation term.
Thus embodiments have at least one model for inertial pre-integrals with bias compensation. The two linear transformations, L and $C_1$, together produce the first order Taylor approximation in terms of B (the accelerometer and gyroscope bias terms for the IMU) for Eq. 21:

${}^{b_i}\Delta\hat x_{i\to j} \approx \left[L - C_1\right] \zeta_{ij}$ (Equation 28)
During optimization, embodiments attempt to compute the residual $\delta_{ij}$ between the noisy interpose pre-integral measurement, ${}^{b_i}\Delta\tilde x_{i\to j}$, and the compensated prediction:

$\delta_{ij} = {}^{b_i}\Delta\tilde x_{i\to j} - \left[L - C_1\right] \zeta_{ij}$ (Equation 29)

The uncertainty ν accounts for measurement and computational approximation errors.
Discussion now turns towards high speed accumulation of the inertial compensation gradients in C1. Unlike filtering, in some embodiments, it is not assumed that bias estimates at integration time are the most accurate; in such embodiments erroneous IMU measurements are knowingly integrated into their summarized pre-integral form, and compensated later. Therefore, embodiments compute the summarized compensation gradients for later use.
For example, orientation error can be predominantly caused by gyroscope bias bω, and a processor can use the first order estimated bias gradient ∂bΔφ/∂bω (Eq. 30) to account for accumulated bias error in the noisy rotation pre-integral measurement bΔ{tilde over (φ)}.
Attitude Error Model: Embodiments take gyroscope measurements as true rotation plus a Gauss-Markov style process acting as additive bias:
bω=b{tilde over (ω)}−bω (Equation 31), i.e., the true rate equals the measured rate b{tilde over (ω)} minus the additive bias bω.
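As an illustrative sketch only (not part of the specification), such a first-order Gauss-Markov bias process can be simulated in scalar form; the time constant, noise level, and sample rate below are hypothetical values:

```python
import math
import random

def simulate_gauss_markov_bias(tau, sigma, dt, steps, seed=0):
    """Simulate a scalar first-order Gauss-Markov bias process:
    b_{k+1} = exp(-dt/tau) * b_k + w_k, with w_k ~ N(0, q^2),
    where q is chosen so the steady-state standard deviation is sigma."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    # Driving-noise std that keeps the steady-state std equal to sigma.
    q = sigma * math.sqrt(1.0 - phi * phi)
    b = 0.0
    history = [b]
    for _ in range(steps):
        b = phi * b + rng.gauss(0.0, q)
        history.append(b)
    return history

# Hypothetical gyro bias: 1 hour correlation time, 0.001 rad/s level, 100 Hz.
bias = simulate_gauss_markov_bias(tau=3600.0, sigma=0.001, dt=0.01, steps=1000)
```

The driving-noise variance is tied to the steady-state variance here, which is one common way to parameterize Gauss-Markov bias models.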
Embodiments can convert rotation into its matrix exponential parameterization, which is essentially a manifold expression at each pose and allows locally linear manipulation of rotations. Embodiments can use the closed form Lie algebra to Lie group exponential mapping expSO3(φ)=e[φx] as explained in Eq. 55.
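A minimal sketch of this closed form exponential map, with a truncated-series fallback for small ∥φ∥ (the 1e−8 threshold is an assumed value, per the small-angle caution noted with Eq. 52):

```python
import math

def skew(phi):
    """[phi x]: map a 3-vector to its skew-symmetric matrix."""
    x, y, z = phi
    return [[0.0, -z, y],
            [z, 0.0, -x],
            [-y, x, 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exp_so3(phi):
    """Closed form expSO3(phi) = I + sin(t)/t [phi x] + (1-cos(t))/t^2 [phi x]^2,
    falling back to truncated Taylor coefficients as ||phi|| -> 0."""
    t = math.sqrt(sum(c * c for c in phi))
    P = skew(phi)
    P2 = mat_mul(P, P)
    if t < 1e-8:  # small-angle fallback (assumed threshold)
        a, b = 1.0 - t * t / 6.0, 0.5 - t * t / 24.0
    else:
        a, b = math.sin(t) / t, (1.0 - math.cos(t)) / (t * t)
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    return [[I[i][j] + a * P[i][j] + b * P2[i][j] for j in range(3)]
            for i in range(3)]
```

For example, exp_so3([0, 0, π/2]) produces the expected 90° rotation about the z-axis.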
Taking the partial derivative with respect to gyro bias, with temporary shorthand [φx]=[φ],
Taking the partial scalar derivative with respect to time:
Pre-multiplying by the inverse rotation:
The vector derivative can be evaluated by considering each of the three dimensions in bω independently. The above expression uses group operations in SO(3) and can be embedded into a new linear system with only algebraic operations in the associated Lie algebra so(3). This enables linear bias compensation as explained in Eq. 30. The decomposition of skew-symmetric matrices associated with a smooth curve on SO(3) is
where {Êx, Êy, Êz} are the usual Lie algebra basis elements. The Vee operation on basis elements is (Êi)v=êi, where Êi and êi are the natural bases.
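The skew, Vee, and basis-element relationships can be sketched directly; this is an illustrative implementation, not text from the specification:

```python
def skew(phi):
    """[phi x]: 3-vector -> skew-symmetric matrix in the Lie algebra so(3)."""
    x, y, z = phi
    return [[0, -z, y],
            [z, 0, -x],
            [-y, x, 0]]

def vee(M):
    """Vee operator: [phi x]^v = phi, reading coefficients off the natural basis."""
    return [M[2][1], M[0][2], M[1][0]]

# Natural basis elements E_x, E_y, E_z of so(3); note (E_i)^v = e_i.
E = [skew(e) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]

def decompose(phi):
    """Decomposition of a skew matrix on the basis:
    [phi x] = phi_x*E_x + phi_y*E_y + phi_z*E_z."""
    return [[sum(phi[k] * E[k][i][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

The round trip vee(skew(φ)) = φ and the basis decomposition both hold exactly.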
The vectorized gyroscope bias contribution along Êx of Eq. 34 can be stated as:
Keeping in mind that ∂ω=−∂bω, accumulation on the local manifold is given by:

Δφ=∫titj bω dt
At infinitesimal angles, operations on the vector field can commute, leaving:
Repeating the vectorization and augmenting each column to produce a new set of linear equations, one for each of the three rotational degrees of freedom, leaves
This is the right differential, or Jacobian, and acts as a mapping from the smooth rotation rate manifold into the Lie group. Gyroscope bias is coincident with the body frame rotation at each point in time.
The formula given in Eq. 55 can produce a closed form expression for the right differential, as shown in Park (F. C. Park, “The optimal kinematic design of mechanisms,” Ph.D. dissertation, Harvard University, 1991, in Lemma 2.1):
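A sketch of the closed form right differential, using the standard formula Jr(φ)=I−((1−cos t)/t2)[φx]+((t−sin t)/t3)[φx]2 attributed to Park; the check below relies on its defining property exp((φ+δ)x)≈exp(φx)exp((Jr(φ)δ)x) for small δ:

```python
import math

def skew(p):
    x, y, z = p
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exp_so3(p):
    t = math.sqrt(sum(c * c for c in p))
    P, I = skew(p), [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    P2 = mmul(P, P)
    a = math.sin(t) / t if t > 1e-8 else 1.0 - t * t / 6.0
    b = (1.0 - math.cos(t)) / (t * t) if t > 1e-8 else 0.5
    return [[I[i][j] + a * P[i][j] + b * P2[i][j] for j in range(3)]
            for i in range(3)]

def right_jacobian(p):
    """Closed form right differential of expSO3 (per Park, Lemma 2.1):
    J_r = I - (1-cos t)/t^2 [p x] + (t - sin t)/t^3 [p x]^2."""
    t = math.sqrt(sum(c * c for c in p))
    P, I = skew(p), [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    P2 = mmul(P, P)
    if t < 1e-8:  # small-angle series limits
        c1, c2 = 0.5, 1.0 / 6.0
    else:
        c1 = (1.0 - math.cos(t)) / (t * t)
        c2 = (t - math.sin(t)) / (t ** 3)
    return [[I[i][j] - c1 * P[i][j] + c2 * P2[i][j] for j in range(3)]
            for i in range(3)]
```

At φ=0 the right differential reduces to the identity, and for small perturbations the defining property holds to first order.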
Eq. 34 can be rewritten as a differential equation of bias gradients on a linear manifold at each point in time, which provides a first order attitude error model.
Velocity Error Model: The measured acceleration in the previous pose's ith frame is given by
bi{dot over (v)}=bibR(b{tilde over (f)}−ba)+wbiRT wg
True force is equal to measured force minus accelerometer bias (bf=b{tilde over (f)}−ba). The true body force can be defined as body acceleration minus gravity, which is a non-inertial force (bf=ba−bg). Velocity dependence on gyroscope bias can be found by taking the partial derivative with respect to gyro bias and using the relation given by Eq. 55:
Because
this can be reduced to:
Because the IMU does not have access to an exact accelerometer bias estimate during the pre-integration, and initial estimates ignore the term [bax], the following estimate results during pre-integration.
However, this assumption can, in part, be avoided by using an embodiment where best bias estimates are fed from the navigation processor to the pre-processor.
It should be noted that if bias estimates are accumulated and compensated at the sensor, the error introduced by this assumption decreases as the estimates improve. Until these estimates are resolved, sensor noise dominates errors in the computation.
Velocity dependence on accelerometer bias can be found directly through the second partial derivative:
In addition, we know relations for positions:
This gives an estimate of position and velocity error models for use with some embodiments. Embodiments combine the models expressed in Eqs. 39, 40, 45, and 46 into a computational framework for computing the matrix C1. Such computation can be performed via a processor and associated memory and computational hardware in electrical communication with data from an IMU.
First, the processor gathers relevant gradients into a matrix ξ∈ℝ15×3 such that:
A linear time-varying state space system can be expressed as:
G contains affine terms associated with the time evolution of ξ:
Embodiments utilize this relationship to determine a numerical discretization of the continuous time evolution of ξ, denoted by ξk. Embodiments may use zeroth order integration (as done in Eq. 50). More accurate embodiments will solve the continuous time propagation of ξ with more accurate differential matrix equation solutions, such as the trapezoidal rule or improved numerical techniques (see Van Loan, Charles. Computing integrals involving the matrix exponential. Cornell University, 1977). A zeroth order integration over each sample interval yields Eq. 50.
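The difference between zeroth order and trapezoidal integration can be illustrated on a scalar stand-in for the matrix system {dot over (ξ)}=Fξ+G; all constants below are hypothetical:

```python
import math

def propagate(a, g, x0, dt, steps, method="euler"):
    """Integrate xdot = a*x + g, a scalar stand-in for Xi_dot = F*Xi + G,
    using either zeroth order (Euler) or trapezoidal (Heun) steps."""
    x = x0
    for _ in range(steps):
        f0 = a * x + g
        if method == "euler":
            x = x + f0 * dt
        else:
            # Trapezoidal: average the slope at both ends via an Euler predictor.
            xp = x + f0 * dt
            x = x + 0.5 * (f0 + a * xp + g) * dt
    return x

a, g, x0, T, n = -1.0, 0.5, 1.0, 1.0, 100
exact = (x0 + g / a) * math.exp(a * T) - g / a  # closed form solution
err_euler = abs(propagate(a, g, x0, T / n, n, "euler") - exact)
err_trap = abs(propagate(a, g, x0, T / n, n, "trap") - exact)
```

With the same step count, the trapezoidal result is markedly closer to the closed form solution, mirroring the accuracy argument made above for propagating ξ.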
When a new pose is generated, embodiments can take ξij=ξk at time tk=tj and reset ξk.
Notation Conventions: Equations used herein use the following convention with regular conversion between quaternions and rotation matrices.
Explanations of embodiments make extensive use of the matrix exponential, which is defined by the Taylor series.
The matrix exponent provides an elegant formulation, but it will be appreciated that care should be taken with practical floating point computation of eM. In some embodiments, a (numerically) accurate matrix exponential, such as the Pade approximation, can be used. Implementations of matrix exponentiation by Pade approximation are commonly known in the art.
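A sketch of why naive Taylor summation is unsafe for eM at large norm and how scaling-and-squaring restores accuracy; for brevity the inner approximant here is a short Taylor series rather than the Pade approximant a production implementation would use:

```python
def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def taylor_expm(M, terms=20):
    """Naive truncated Taylor series e^M = sum_k M^k / k!.
    Inaccurate when ||M|| is large, since huge terms must cancel."""
    n = len(M)
    S, T = identity(n), identity(n)
    for k in range(1, terms + 1):
        TM = mmul(T, M)
        T = [[TM[i][j] / k for j in range(n)] for i in range(n)]
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

def expm_scaled(M, squarings=8, terms=12):
    """Scaling and squaring: e^M = (e^{M/2^s})^{2^s}.
    The scaled series converges fast; squaring recovers the full result."""
    n = len(M)
    s = 2.0 ** squarings
    Ms = [[M[i][j] / s for j in range(n)] for i in range(n)]
    R = taylor_expm(Ms, terms)
    for _ in range(squarings):
        R = mmul(R, R)
    return R
```

For a 10 rad rotation about the z-axis, the naive series with 20 terms is wildly wrong while the scaled version matches cos(10) and sin(10) closely.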
Derivation of the equations discussed herein exploits the manifold geometry of SO(3) around the current best estimate of a vehicle's or object's trajectory in three dimensions. Perturbations on this manifold can be parameterized with the Lie algebra [φx]∈so(3). The members of so(3) consist of minimal parameters, φ∈ℝ3, in skew-symmetric matrices:
with usual Lie algebra basis elements
The Vee operator, [φx]v=φ, vectorizes the coefficients on the associated orthogonal basis.
General rotations are represented by rotation matrices in the Lie group abR∈SO(3), where the matrix exponent maps members of the algebra onto the group, abR=e[φx].
Noting difficulties when ∥φ∥→0, some embodiments use Eq. 52 directly when ∥φ∥ drops below some reasonable threshold.
A logarithm map can be used to convert elements from the Lie Group to Lie algebra, such as that used in F. C. Park, “The optimal kinematic design of mechanisms,” Ph.D. dissertation, Harvard University, 1991, subject to ∥φ∥<π and Tr(abR)≠−1:
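A sketch of the logarithm map under the stated constraints (∥φ∥&lt;π and Tr(abR)≠−1), paired with the exponential map for a round-trip check:

```python
import math

def skew(p):
    x, y, z = p
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exp_so3(p):
    t = math.sqrt(sum(c * c for c in p))
    P = skew(p)
    P2 = mmul(P, P)
    a = math.sin(t) / t if t > 1e-9 else 1.0
    b = (1.0 - math.cos(t)) / (t * t) if t > 1e-9 else 0.5
    R = [[a * P[i][j] + b * P2[i][j] for j in range(3)] for i in range(3)]
    for i in range(3):
        R[i][i] += 1.0
    return R

def log_so3(R):
    """Logarithm map SO(3) -> so(3), valid for ||phi|| < pi and Tr(R) != -1:
    theta = acos((Tr(R)-1)/2), [phi x] = theta/(2 sin theta) * (R - R^T)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    theta = math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))
    # Small-angle limit of theta/(2 sin theta) is 1/2.
    s = 0.5 if theta < 1e-9 else theta / (2.0 * math.sin(theta))
    # Vee of the skew-symmetric part yields the rotation vector phi.
    return [s * (R[2][1] - R[1][2]),
            s * (R[0][2] - R[2][0]),
            s * (R[1][0] - R[0][1])]
```

The round trip log_so3(exp_so3(φ)) recovers φ to machine precision for rotations inside the stated domain.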
Singularity-free positive scalar quaternions can equivalently represent these rotations, with regular conversion between the quaternion and rotation matrix forms.
Covariance propagation:
It should be noted that matrices Ψk and γk here are different from F and G in Eq. 48. Ψk and γk form the standard INS covariance propagation model, which can be found in Titterton, David, and John L. Weston. Strapdown inertial navigation technology. Vol. 17. IET, 2004.
Embodiments can compute the discrete noise matrix (such as presented in J. Farrell, Aided navigation: GPS with high rate sensors, ser. Electronic Engineering. McGraw-Hill New York, 2008) from elements in Ψk according to Qk=Φk(Φk−1Qk):
Continuous process noise matrix, Q, is defined by sensor noise characteristics. A scaling factor, κ, can be used as a safety factor to ensure that sufficient uncertainty is allocated to accommodate measurement noise and computational errors.
This approach may not accurately model colored noise, but is common practice in inertial navigation. Increased process noise Q only reduces the inertial odometry weight in the SLAM solution. The nominal range for κ is around two to ten.
Allan variance testing may be used to estimate each of these noise parameters. Spectral densities, Θ*, are generally supplied with the inertial sensor package and are commonly related to 1σ deviations according to:
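Independent of any particular sensor datasheet, the basic non-overlapping Allan deviation estimator used in such testing can be sketched as follows; the white-noise level is a hypothetical value:

```python
import math
import random

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at cluster size m:
    ADEV^2(tau) = 0.5 * mean((ybar_{k+1} - ybar_k)^2),
    where ybar_k are consecutive m-sample cluster averages."""
    means = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(len(means) - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# Hypothetical white sensor noise (e.g., angle random walk driving a gyro).
rng = random.Random(1)
y = [rng.gauss(0.0, 0.002) for _ in range(20000)]
adev_1 = allan_deviation(y, 1)      # shortest cluster time
adev_100 = allan_deviation(y, 100)  # 100x longer cluster time
```

For white noise the Allan deviation falls off as 1/√τ, which is how the flat spectral density Θ* is read off an Allan plot in practice.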
Having described the underlying mathematical concepts used by embodiments, the methods and systems can be understood in view of
Gyroscope sensors 104 and accelerometers 102, which may be independent or part of an integrated sensor package, can use any conventional means to create data pertaining to rotation and linear motion. Sensors 102 and 104 inherently have bias that should be compensated to improve inertial odometry and more accurately estimate the pose of the system 100. Sensor signals are presented through conventional signal means, such as a data bus, to digital sampling, driving, and transducer circuit 106. Circuit 106 is responsible for controlling sensors 102 and 104 and for preparing a clean digital signal that can be used by other circuits and processors within system 100.
A processor 108 receives the sensor data from circuit 106 and prepares that data for inertial odometry in accordance with the embodiments described herein. For example, processor 108 can perform filtering and decimation of sensor data. Furthermore, processor 108 can pre-integrate the sensor data and maintain pose data in accordance with Eqs. 1-6. In some embodiments, processor 108 also performs bias compensation in accordance with the embodiments described throughout, maintaining pose vectors and compensation gradients, as well as attitude and velocity error models for bias terms for sensors 102 and 104, in accordance with Eqs. 23-48. Similarly processor 108 can utilize Gauss-Markov processes to update the models to minimize residuals in the compensation model in accordance with Eqs. 12-16.
In some embodiments, at least a portion of these tasks is accomplished by interaction of processor 108 with processor 110. For example, in some embodiments, processor 108 is responsible for pre-integration of sensor data to allow slower-rate compensation and maintenance of bias models by processor 110. Processor 110 can utilize external sensor measurements (e.g., optical sightings) and a factor graph constraint model to assist in determining an estimate of compensation terms. Processor 110 can also provide a user interface and oversee the inertial odometry calculations using bias-compensated sensor data.
A user application 112, such as a navigational routine or SLAM application can communicate with processor 110 via a communication interface 114. This allows an application to utilize the compensated sensor values and trajectory/pose information derived by processor 110 using the compensation models discussed herein.
The device 118 can also include a processor and user interface 132 for managing the device, as well as a communications interface 134, such as a wireless antenna.
At step 144, that sensor data is pre-integrated in a temporal fashion by a processor, and a noisy model of the current pose is created (Eqs. 1-6, 40, 44-55, and 58-59). This step includes using a processor (e.g., of the IMU sensor) to temporally integrate the measurement data, including any errors associated therewith, between a time (ti) corresponding to a pose (i) of the IMU and a time (tj) corresponding to a pose (j) of the IMU to generate measured changes in one or more pose parameters of the IMU sensor.
Using this information, the processor (or another processor) creates an error model (Eqs. 23, 40, 45, 46) at step 146. This step includes generating a temporally continuous error propagation model of dependence of each of the one or more pose parameters on the measurement errors. At each time/pose epoch, the IMU transmits pre-integrated measurements, gradients, and covariances. Step 146 relates to copying values from pre-integral processes in the form of Eqs. 6, 47 at time tk=tj and Pk in Eq. 58. At step 148, values are integrated by the processor to create a compensation gradient (Eqs. 25, 27, 47, 48). Step 148 includes temporally integrating the temporally continuous error propagation model to generate one or more compensation gradients for the pose parameters. Steps 144 and 148 occur in parallel. Step 148 describes the assembly of pre-integral values in preparation for future residual evaluations.
Using that gradient, the processor predicts a pose change, compensating for the bias terms (Eqs. 23, 28), at step 150. Step 150 can include generating a predicted change in each of the pose parameters between pose (i) and pose (j) using the compensation gradients, and using the measured and the predicted changes in the pose parameters to infer an estimate of a change in each of the pose parameters. In some embodiments, the step of generating the predicted change can further comprise using an observed change in the pose parameters between the two poses that is observed using a sensor other than the IMU sensor. This step can also include using measured and predicted changes to infer estimates for one or more parameters of the error propagation model so as to retroactively calibrate the IMU sensor. Step 151 fetches a stored previous pose estimate from memory to assist in pose prediction at step 150. Step 150 relates to high speed real-time pose (state) estimates using the previous best known pose, as received in step 151 through electrical or wireless communication, (stored from step 158). Real-time state prediction process 150 also uses Eqs. 28, 29 based on current pre-integrals and a best known recent pose.
Steps 152-164 show some of the detail that may be applied in some embodiments. At step 152, outside sensor information, such as visual indications or other information relating to a factor graph, such as
At step 154, a processor collects current pose and IMU calibration information for each inertial odometry constraint, using, e.g., Eqs. 23 and 56. Step 156 then illustrates evaluation of inertial odometry residual functions, whereby a processor performs Eqs. 21-29, as required by the inference process 160. The calibration step can include determining a residual value to track the efficacy of the bias model and minimizing this residual in accordance with Eqs. 15, 16, 18, and 29. Step 160 is the culmination of an inference process, such as shown in Eqs. 14-16. Inference processes vary, and numerical parametric methods can iterate and converge pose and IMU calibration parameters through successive approximation. Error covariance propagation should also be performed at this time, relating to Eqs. 58-60.
At step 158, the processor extracts the best IMU calibration and pose parameter statistics and stores this model for use in step 151. This step can include using the residual to infer estimates for one or more parameters of the error propagation model so as to retroactively calibrate the IMU sensor. This can also include using the residual and the error covariance to infer estimates for one or more parameters of the error propagation model so as to retroactively calibrate the IMU sensor. This can include generating the temporally continuous error propagation model using, for example, a Gauss-Markov process as an additive bias.
The following examples are provided for illustrative purposes only and not to necessarily indicate optimal ways of practicing the invention or for optimal results that may be obtained.
An inertial measurement unit (IMU) includes a collection of gyroscopes and accelerometers. The best navigation systems are able to dynamically calibrate the IMU from aiding information. The examples that follow are intended to illustrate retroactive sensor calibration, which is distinct from Kalman-filter style IMU calibration, and the premise that this new inertial odometry technique is more powerful and capable than existing Kalman filter techniques.
Basic Averaging, Not Moving: Consider an IMU standing stationary for a duration of time. Call the IMU start and end poses i and j, respectively. A pose is taken to represent the position and orientation relative to some world frame. Dynamic sensor calibration can be obtained from integrated gyroscope and accelerometer measurements. A processor can integrate IMU measurements according to the pre-integral strategy of Eq. 3 between time ti and tj as follows:
bΔ{tilde over (ν)}i→j=bΔνi→j+∫titj bg dτ+Δ
The integration should show natural effects, such as gravity bg and earth rotation (omitted), as well as integrated errors Δ. By knowing bg, the remaining integrated signal can be attributed to sensor errors.
Note that aiding information may include any information pertaining to any of the pose states depicted in Eq. 5. The factor methodology allows embodiments to reason over all available data in a centralized framework. In this basic example, compensation gradients did not seem necessary, because nothing happens. The examples that follow build in complexity until the need for compensation gradients is clearer.
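The stationary-averaging idea can be sketched on a single axis; the noise levels and biases below are hypothetical, and gravity uses the simplified ge=1 m/s2 of these examples:

```python
import random

# Hypothetical stationary single-axis IMU: true rates are zero and the
# accelerometer z-axis sees gravity g_e = 1 m/s^2 (the simplified value
# used in these examples). Bias and noise values are made up.
rng = random.Random(42)
true_gyro_bias = 0.001      # rad/s
true_accel_bias = -0.0015   # m/s^2
g_e = 1.0
n, dt = 1000, 0.001         # 1 s of data at 1 kHz

gyro = [true_gyro_bias + rng.gauss(0.0, 0.01) for _ in range(n)]
accel = [g_e + true_accel_bias + rng.gauss(0.0, 0.05) for _ in range(n)]

# Pre-integrals over the interval (single-axis analogue of Eq. 3):
delta_phi = sum(w * dt for w in gyro)
delta_v = sum(f * dt for f in accel)

# Knowing the IMU never moved, the integrated signal minus the known
# gravity contribution is attributed to sensor bias:
est_gyro_bias = delta_phi / (n * dt)
est_accel_bias = delta_v / (n * dt) - g_e
```

Because nothing moves, plain averaging suffices here; the later moving examples are precisely where this shortcut breaks down and compensation gradients become necessary.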
It should also be noted that inertial measurements are relative to inertial space, I. For these examples we simplify notation using a body frame b. Once the initial concepts have been demonstrated, we will represent measurements relative to a meaningful navigation frame.
Perfect Sensor Not Moving: Consider a flawless IMU in a resting position with XYZ axes pointing North-West-Up (NWU) in the presence of gravity, temporarily omitting earth rotation rate. We expect to see rotation rate measurements bω=[0,0,0]T and accelerations bƒ=[0,0,ge]T. The transpose is indicated by T, and earth's gravity is illustrated as ge=1 m/s2 in these examples. Let the timer run for 1 second and find measured pre-integral values (according to Eq. 3)
The continuous analytic model explained in Eqs. 48 and 49 allows accumulation of compensation gradients. This example uses the complete trapezoidal solution for the matrix differential equation. Other contemplated methods introduce linearizing zeroth order integration assumptions much earlier in their analytic development. It should be noted that Eq. 50 shows a zeroth order solution to the matrix differential equation, but some embodiments in these examples use a first order integration solution to Eq. 49. This improved integration accuracy is possible because the expressions are in continuous form, as shown by Eq. 48. The transpose of the compensation gradients vector, ξi→j=∫titj {dot over (ξ)} dt, is accumulated alongside the pre-integrals.
The compensation gradients are then used to compute matrix C1, as shown by Eq. 27. The weighted residual difference between measured pre-integrals and predicted poses is computed via compensation gradients, Eq. 29:
δi→j=bΔ{tilde over (x)}i→j−[L−C1]ζij=0
Because the residual is zero in this example, any change in pose xi's bias estimates will result in an increased residual. Embodiments can use this residual to construct inference techniques for recovery of bias estimates.
The inference process can be done at any later time, and is not part of the pre-integral or compensation gradients computation process. In some embodiments, uncertainty estimates or pre-integral covariance shown by Eq. 58, can be accumulated at sensor rate.
Furthermore, in various embodiments, an unlimited number of interpose measurements can be made in this manner, recorded, and then used in combination with other outside information to recover the best estimate sensor bias terms retroactively.
Inference by Nonlinear-Least-Squares: A variety of inference techniques exist, and the chosen method can depend on the specific application. Some embodiments use a range of open-source inference tools. A non-linear least-squares example is used to demonstrate how pre-integrals and compensation gradients can be used for retroactive bias recovery to retroactively calibrate the IMU sensor bias terms, and can be understood in light of Eqs. 15 and 16.
The pre-integral covariance Σi→j can be propagated according to the uncertainty model presented in Eq. 58. In some embodiments, a more comprehensive matrix integration strategy, such as the trapezoidal method, can be used. Solving the minimization problem in the above equation for the basic perfect and stationary sensor example results in:
b=[bω ba]=[03 03] (Equation 65)
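A toy weighted linear least-squares recovery in the spirit of Eqs. 15 and 16; the gradient matrix H below is a hypothetical stand-in for the stacked compensation gradients, and the solve uses plain normal equations rather than a production nonlinear solver:

```python
# Hypothetical linearized residual: delta(b) = z - H b, where z stacks the
# discrepancy between measured and predicted pre-integrals and H stacks the
# (here made-up) compensation gradients for a 2-element bias vector b.
H = [[1.0, 0.0],
     [0.5, 0.0],
     [0.0, 1.0],
     [0.0, 0.5]]
true_b = [0.001, -0.0015]
z = [sum(H[i][j] * true_b[j] for j in range(2)) for i in range(4)]
w = [1.0, 1.0, 4.0, 4.0]  # inverse-variance weights, playing the role of Sigma_{i->j}

# Weighted normal equations: (H^T W H) b = H^T W z
A = [[sum(w[i] * H[i][r] * H[i][c] for i in range(4)) for c in range(2)]
     for r in range(2)]
rhs = [sum(w[i] * H[i][r] * z[i] for i in range(4)) for r in range(2)]

# Closed form 2x2 solve via Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
b_hat = [(A[1][1] * rhs[0] - A[0][1] * rhs[1]) / det,
         (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det]
```

With noiseless synthetic measurements the true bias is recovered exactly; in the full method, this role is played by an iterative nonlinear least-squares solver over the residual of Eq. 29.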
Basic Gyroscope Bias, Not Moving: Now consider the same example with a stationary NWU facing IMU, but with a slight bias of 0.001 rad/s on the x-axis gyroscope. During pre-integration of sensor measurements (Eq. 3), the orientation estimate accrues error, producing the noisy pre-integral measurement

bΔ{tilde over (x)}i→j≈[0.001 05 0 −0.0004995 1 0 −0.000166417 0.5 03] (Equation 66)
The associated compensation gradient vector, ξi→jT is depicted in the figure below. This vector has numerical differences from the perfect IMU example above.
At some later time, the pre-integrals, compensation gradients and error covariances are used by a processor, along with the constraint information that pose i and j are coincident, to retroactively infer the sensor bias estimates. Using a numerical optimization technique for a weighted non-linear least-squares cost, one obtains:
The above text shows computer output for true b and estimated {circumflex over (b)} gyro and accelerometer biases. The error output is used to show the percentage accuracy, computed according to difference between true and estimated bias values, stabilized by ϵ=1e−5 to prevent division by zero:
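The stabilized percentage formula is not printed explicitly here, so the sketch below reconstructs one consistent with the reported outputs (it reproduces the 894.15% and 0.87% figures); treat the exact form as an assumption:

```python
def error_percent(truth, estimate, eps=1e-5):
    """Percentage error between true and estimated biases, stabilized by eps
    to prevent division by zero. The exact formula is an assumption
    reconstructed to match the reported example outputs; note that a
    near-zero truth value inflates the percentage even for tiny absolute
    errors."""
    return [100.0 * (t - e) / (abs(t) + eps) for t, e in zip(truth, estimate)]

# Values from the accelerometer-bias example: a zero truth with a tiny
# estimate yields a huge percentage, while the real 0.05 bias shows <1% error.
errs = error_percent([0.0, 0.05], [-8.9415e-5, 0.049567])
```

This makes concrete why the large gyroscope percentages in the examples are artifacts of dividing by (near) zero rather than genuinely poor estimates.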
Basic Accelerometer Bias, Not Moving: We can repeat the experiment for a y-axis accelerometer bias error of 0.05 m/s2, with accompanying pre-integrals.
bΔxi→j=[07 0.05 1 0 0.025 0.5 03]T (Equation 68)
Computing the minimization:
residual=0.24 after 274 iterations (returned SUCCESS)
Truth: [0.0, 0.0, 0.0, 0.0, 0.05, 0.0]
Estim: [−8.9e−5, −0.0, −0.0, −0.0, 0.049567, 2.0e−6]
error: [894.15, 0.05, 0.0, 0.24, 0.87, −21.47] %
First, note the y-axis accelerometer bias is estimated to within less than 1% error. The large error percentage for the x-axis gyroscope is a consequence of the noise models and expected performance of the IMU. The estimated value of 8.9e−5 rad/s is small relative to the sensor characteristics used for this example; the 894.15% figure is an artifact of dividing by a very small number in the way error percentage is assigned in this example. The bias estimate in that case is still very small, less than 1e−4 rad/s.
Embodiments take advantage of the fact that the error characteristics of the IMU are an important part of recovering sensor bias estimates. The error statistics (Eq. 58) should also be propagated with the pre-integrals and compensation gradients. The sensor error covariance from a pre-integration period, Σi→j, represents our certainty in each of the pre-integral numbers. The seemingly large percentage error for the x-axis gyroscope is in this example a consequence of the sensor performance characteristics specified in Eq. 59. The bias estimate of 8.9e−5 rad/s is well below the specified noise performance used to calculate Σi→j in this example. Modifying the noise parameters will accordingly modify the scale of bias estimates. Some embodiments use a high performance inertial sensor pack, resulting in lower sensor uncertainty and correspondingly lower estimated biases.
In general, factor graph use cases utilize a notion of smoothness for consecutive interpose inertial odometry constraints. The error in x-gyroscope bias estimate seen here will result in increased residual errors in neighboring interpose constraints and will be constrained through a combination of multiple observations over time. This argument is similar to observability arguments made for Kalman filter style inertial navigation solutions.
Concurrent Gyroscope and Accelerometer Bias Errors, Not Moving: In the more general case, some variation in all sensor biases is expected. This example again places the IMU standing still facing NWU over a 1 second period in an environment with gravity wge=[0,0,1]T m/s2, but allows a random bias on all sensor axes. Let
b=[0.0013,0.0015,−0.0005,−0.0004,−0.0015,0.0003]T (Equation 69)
We find the pre-integrals:
bΔ{tilde over (x)}i→j≈[0.0013,0.0015,−0.0004,03,0.00035,−0.00215,1.0003,5.0e−5,−0.00097,0.50015,03]T (Equation 70)
At this point, changes in bias terms, Δbω or Δba, are not yet used in these examples, and are seen as the two 03 elements in the pre-integral measurement vector given above.
Using the compensation gradients ξi→j to again construct the residual function, Eqs. 18 and 29, embodiments can retroactively search for sensor bias estimates as before:
residual=0.001 after 234 iterations
Truth: [0.0013, 0.0015, −0.0005, −0.0004, −0.0015, 0.0003]
Estim: [0.00128, 0.001459, −0.000504, −0.000419, −0.001531, 0.000309]
error: [−0.14, 0.06, −0.12, 0.22, −1.1, −1.9] %
All the bias errors are recovered within 1 or 2% of the true values. Imagine that the errors baked into the pre-integral measurement vector come from a randomly drifting and gravity coupled integral estimate. The compensation gradients capture the directions in which errors would propagate, and project them back into the pose i reference frame. The inference process can then use this summary of directions, compensation gradients, to compensate for the errors baked into the pre-integral measurements.
Bias Estimates While Moving, The Power of Compensation Gradients: All the above examples can easily be reproduced by simple averaging of the measurements, because we know the IMU did not move during the integration period. Now, consider the example where, during the integration interval, the IMU is flipped onto its side for half of the 1 second integration period, after which the IMU returns to a NWU orientation before the integration period expires. Averaging alone cannot determine how much time was spent in which direction and ultimately provides the wrong bias estimates. The pre-integral measurement is again made in the presence of random IMU biases, and the IMU is turned 90° about its y-axis in the presence of gravity wge=[0,0,1]T m/s2.
bΔ{tilde over (x)}i→j≈[0.00063,0.0009,0.00059,03,0.0007,−0.00027,1.00095,0.00027,−8.0e−5,0.50048,03]T (Equation 71)
As before, the compensation gradients vector ξi→jT, which was accumulated at the same time as the pre-integral measurements and depicted in the figure below, is stored for later retroactive inference.
Inference with the measurement model, as used in previous examples, results in the following bias estimate accuracy:
residual=0.132 after 249 iterations
Truth: [0.001, 0.0001, 0.0003, 0.0003, 0.0009, 0.0007]
Estim: [0.00089, 0.000443, −0.000297, 5.9e−5, 0.000895, 7.2e−5]
error: [8.23, −408.57, 203.01, 79.19, 0.21, 87.82] %
The bias estimation error of −408.57% is immediately obvious, and is a result of accumulated integration errors during the pre-integration process. This illustrates the importance of improved integration of compensation gradients and pre-integrals. Repeating the above experiment at a simulated sensor rate of 10,000 Hz, as opposed to the 1,000 Hz rate used above, we find greatly improved accuracy:
residual=0.001 after 256 iterations
Truth: [0.0015, 0.0002, 0.0, 0.002, 0.0011, 0.001]
Estim: [0.00152, 0.000235, 3.7e−5, 0.001977, 0.001052, 0.000995]
error: [0.18, −0.57, 15.3, 0.05, 0.01, 0.04] %
A marked improvement is seen, indicating the value of sensor side precomputation of the pre-integral, compensation gradient and noise covariance computations. This improvement over known art is due in part to higher rate accumulation, as is summarized in the section hereafter.
Bias Estimates from Two Different Pose Locations: As a last basic example, consider how the compensation gradient terms operate when the IMU is moved between different pose locations during the integration interval. Consider an IMU, starting at rest, moved from the zero position in some world coordinates at pose i, along some complicated and varying dynamic trajectory, to a pose j at position [0.7, 0, 0] and tilted 90°.
The pre-integral measurement vector is

bΔ{tilde over (x)}i→j≈[1.5723,−0.0011,0.00103,0.0013,−0.0005,1.0004,0.7008,0.0001,0.5006,03]T (Equation 72)
As before, the compensation gradients vector ξi→jT, which is accumulated at the same time as the pre-integral measurements, is stored for later retroactive inference. Notice how these values differ greatly from the examples before. This implies the direction information throughout the interpose period is captured in this ξi→jT vector.
To complete the example, the inertial sensor biases are again inferred retroactively:
residual=0.016 after 293 iterations
Truth: [0.0015, −0.0011, 0.001, 0.002, 0.0011, 0.001]
Estim: [0.001608, −0.001575, 9.3e−5, 0.001988, 0.000699, 0.001115]
error: [−7.14, 42.81, 89.82, 0.6, 36.1, −11.4] %
Feed-Forward Updating—In some embodiments, various frequencies of accumulation can be used to accumulate pre-integral values, depending on desired or available computational load or power.
Accuracy of inferred inertial sensor biases using compensation gradients can be improved by computing at higher speeds. The three lines of
Second Order Compensation—In some embodiments, to improve the accuracy of the bias compensation model, second order terms are added to the Taylor expansions discussed herein. One advantage of interpreting bias offsets as a Taylor expansion with a continuous time IMU error model is that the second order Taylor terms follow directly from the derivation for first order terms. The second degree terms with respect to the gyroscope and accelerometer biases can be expressed, respectively.
Producing a new matrix
The matrix linear system can be extended:
Note that all second order gradients which are zero can be ignored, simplifying the overall propagation computational load.
Second Order Attitude Model—As before, there is no direct forward coupling from the accelerometers to attitude; however, reverse coupling does occur through the velocity error model.
The second order gyro bias term is:
Second Order Velocity Model: The second order terms for velocity are:
and to accelerometer bias:
(Equation 79)
Second Order Position Models: From gyroscope bias:
and from accelerometer bias:
Compensating Earth Rotation Rate, And Gyrocompassing: The attitude and velocity error models can be extended to compensate for earth rotation. This can enable factor graph style gyrocompassing. The pre-integral accumulation stays unchanged, but in an inference engine, some embodiments use updated compensation terms. The development starts with the updated attitude propagation formulation:
This addition does not necessarily change the accumulation of compensation gradients. However, the uncertainty covariance propagation will be affected.
Although the present invention has been described with reference to exemplary embodiments, it is not limited thereto. Those skilled in the art will appreciate that numerous changes and modifications may be made to the preferred embodiments of the invention and that such changes and modifications may be made without departing from the true spirit of the invention. It is therefore intended that the appended claims be construed to cover all such equivalent variations as fall within the true spirit and scope of the invention. All publications that are referenced in this disclosure are herein incorporated by reference in their entirety.
This invention was made with Government Support under Grant No. IIS-1318392. The Government has certain rights in this invention.