VISION-AIDED INERTIAL NAVIGATION

Information

  • Patent Application
  • Publication Number
    20240011776
  • Date Filed
    October 31, 2022
  • Date Published
    January 11, 2024
Abstract
Localization and navigation systems and techniques are described. An electronic device comprises a processor configured to maintain a state vector storing estimates for a position of the electronic device at poses along a trajectory within an environment along with estimates for positions for one or more features within the environment. The processor computes, from the image data, one or more constraints based on features observed from multiple poses of the electronic device along the trajectory, and computes updated state estimates for the position of the electronic device in accordance with the motion data and the one or more computed constraints without computing updated state estimates for the features for which the one or more constraints were computed.
Description
TECHNICAL FIELD

This document pertains generally to navigation, and more particularly, but not by way of limitation, to vision-aided inertial navigation.


BACKGROUND

Existing technologies for navigation are not without shortcomings. For example, global positioning system (GPS) based navigation systems require good signal reception from satellites in orbit. With a GPS-based system, navigation in urban areas and indoor navigation is sometimes compromised because of poor signal reception. In addition, GPS-based systems are unable to provide close-quarter navigation for vehicular accident avoidance. Other navigation systems suffer from sensor errors (arising from slippage, for example) and an inability to detect humans or other obstacles.


OVERVIEW

This document discloses a system for processing visual information from a camera, for example, and inertial sensor data to provide an estimate of pose or other localization information. The camera provides data including images having a number of features visible in the environment and the inertial sensor provides data with respect to detected motion. A processor executes an algorithm to combine the camera data and the inertial sensor data to determine position, orientation, speed, acceleration or other higher order derivative information.


A number of features within the environment are tracked. The tracked features provide information about the relative movement of the frame of reference with respect to the environment. As the number of features increases, the complexity of the data processing also rises. One example of the present subject matter uses an Extended Kalman filter (EKF) having a computational complexity that varies linearly with the number of tracked features.


In one example, a first sensor provides data for tracking the environmental features and a second sensor provides data corresponding to movement of the frame of reference with respect to the environment. A first sensor can include a camera and a second sensor can include an inertial sensor.


Other types of sensors can also be used. Each sensor can be classified as a motion sensor or as a motion inferring sensor. One example of a sensor that directly detects motion is a Doppler radar system and an example of a sensor that detects a parameter from which motion can be inferred is a camera.


In one example, a single sensor provides data corresponding to the tracked features as well as data corresponding to relative movement of the frame of reference within the environment. Data from the single sensor can be multiplexed in time, in space, or in another parameter.


The sensors can be located on (or coupled to), either or both of the frame of reference and the environment. Relative movement as to the frame of reference and the environment can occur by virtue of movement of the frame of reference within a stationary environment or it can occur by virtue of a stationary frame of reference and a traveling environment.


Data provided by the at least one sensor is processed using a processor that implements a filter algorithm. The filter algorithm, in one example, includes an EKF, however, other filters are also contemplated.


In one example, a system includes a feature tracker, a motion sensor and a processor. The feature tracker is configured to provide feature data for a plurality of features relative to a frame of reference in an environment for a period of time. The motion sensor is configured to provide motion data for navigation of the frame of reference in the environment for the period of time. The processor is communicatively coupled to the feature tracker and communicatively coupled to the motion sensor. The processor is configured to generate navigation information for the frame of reference and to carry out the estimation at a computational complexity that is linear in the number of tracked features. Linear computational complexity is attained by simultaneously using each feature's measurements to impose constraints between the poses from which the feature was observed. This is implemented by manipulating the residual of the feature measurements to remove the effects of the feature estimate error (either exactly or to a good approximation).


This overview is intended to provide an overview of subject matter of the present patent application. It is not intended to provide an exclusive or exhaustive explanation of the invention. The detailed description is included to provide further information about the present patent application.





BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.



FIG. 1 illustrates a composite view of time sampled travel of a frame of reference relative to an environment.



FIG. 2 illustrates a block diagram of a system.



FIGS. 3A-3D illustrate selected images from a dataset.



FIG. 4 illustrates an estimated trajectory overlaid on a map.



FIGS. 5A-5C illustrate position, attitude, and velocity for the x-axis, y-axis, and z-axis.





DETAILED DESCRIPTION


FIG. 1 illustrates a composite view of five time-sampled positions during travel of frame of reference 15 within environment 20. Environment 20 includes two features illustrated here as a tree and a U.S. postal mailbox, denoted as feature 5 and feature 10, respectively. At each of the time-sampled points, each of the two features is observed as denoted in the figure. For example, at time t1, frame of reference 15 observes feature 5 and feature 10 along the line segments illustrated. At a later time t2, frame of reference 15 again observes feature 5 and feature 10, but from a different perspective. In the figure, the frame of reference travels a curving path in view of the features.


Navigation information as to the relative movement between the frame of reference 15 and the environment 20 can be characterized using sensor data.


Various types of sensors can be identified. A first type of sensor provides direct measurement of quantities related to motion. A sensor of a second type provides data from which motion can be inferred. Data can also be derived from a statistical model of motion.


A sensor of the first type provides an output that is derived from direct measurement of the motion. For example, a wheel encoder provides data based on rotation of the wheel. Other examples include a speedometer, a Doppler radar, a gyroscope, an accelerometer, an airspeed sensor (such as a pitot tube), and a global positioning system (GPS).


A motion-inferring sensor provides an output that, after processing, allows an inference of motion. For example, a sequence of camera images can be analyzed to infer relative motion. In addition to a camera (single or multiple), other examples of motion-inferring sensors include a laser scanner (either 2D or 3D), sonar, and radar.


In addition to receiving data from a motion sensor and a motion-inferring sensor, data can also be derived from a statistical probabilistic model of motion. For example, a pattern of vehicle motion through a roadway intersection can provide data for the present subject matter. Additionally, various types of kinematic or dynamic models can be used to describe the motion.


With reference again to FIG. 1, an estimate of the motion of frame of reference 15 at each of the time samples can be generated using an inertial measurement unit (IMU).


In addition, feature 5 and feature 10 can be tracked using a video camera.



FIG. 2 illustrates system 100 according to one example of the present subject matter. System 100 includes first sensor 110 and second sensor 120. In one example, sensor 110 includes a feature tracker and sensor 120 includes a motion sensor, a motion-inferring sensor, or a motion tracking model. The figure illustrates two sensors; however, these can be combined and implemented as a single sensor, such as, for example, a camera or a radar system, in which the functions of sensing motion and tracking features are divided in time, space, or another parameter. In addition, more than two sensors can be provided.


System 100 uses data derived from two sensing modalities which are represented in the figure as first sensor 110 and second sensor 120. The first sensing modality entails detecting motion of the frame of reference with respect to the environment; it expresses a constraint between consecutive poses based on a motion measurement. This motion can be sensed using a direct motion sensor, a motion tracking sensor, a motion-inferring sensor, or based on feature observations. The second sensing modality includes feature observations and is a function of a particular feature and a particular pose.


System 100 includes processor 130 configured to receive data from the one or more sensors. Processor 130, in one example, includes instructions for implementing an algorithm to process the data and derive navigation information. Processor 130, in one example, implements a filter algorithm, such as a Kalman filter or an extended Kalman filter (EKF).


Data acquired using the feature tracking sensor and a motion sensor (or motion-inferring sensor or a motion tracking model) is processed by an algorithm. The algorithm has complexity that is linear in the number of tracked features. Linear complexity means that the computational cost doubles when the number of tracked features doubles. In order to obtain linear complexity, the algorithm uses the feature measurements to impose constraints between the poses. This is achieved by projecting the residual equation of the filter to remove the dependency of the residual on the feature error or higher order terms.


Processor 130 provides an output to output device 140. Output device 140, in various examples, includes a memory or other storage device, a visible display, a printer, an actuator (configured to manipulate a hardware device), and a controller (configured to control another system).


A number of output results are contemplated. For example, the algorithm can be configured to determine a position of a particular feature. The feature is among those tracked by one of the sensors and its position is described as a point in three-dimensional space. The results can include navigation information for the frame of reference. For example, a position, attitude, orientation, velocity, acceleration or other higher order derivative with respect to time can be calculated. In one example, the results include the pose for the frame of reference. The pose includes a description of position and attitude. Orientation refers to a single degree of freedom and is commonly referred to as heading. Attitude, on the other hand, includes the three dimensions of roll, pitch and yaw.


The output can include a position, an orientation, a velocity (linear or rotational), acceleration (linear or rotational) or a higher order derivative of position with respect to time.


The output can be of any dimension, including 1-dimensional, 2-dimensional or 3-dimensional.


The frame of reference can include, for example, an automobile, a vehicle, or a pedestrian. The sensors are in communication with a processor, as shown in FIG. 2, and provide data relative to the frame of reference. For example, a particular sensor can have multiple components with one portion affixed to the frame of reference and another portion affixed to the environment.


In other examples, a portion of a sensor is decoupled from the frame of reference and provides data to a remote processor.


In one example, a feature in space describes a particular point, and thus, its position within an environment can be identified using three degrees of freedom. In contrast to a feature, the frame of reference can be viewed as a rigid body having six degrees of freedom. In particular, the degrees of freedom for a frame of reference can be described as moving up and down, moving left and right, moving forward and backward, tilting up and down (pitch), turning left and right (yaw), and tilting side to side (roll).


Consider next an example of an Extended Kalman Filter (EKF)-based algorithm for real-time vision-aided inertial navigation. This example includes derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses. This measurement model does not require including the 3D feature position in the state vector of the EKF and is optimal, up to linearization errors. The vision-aided inertial navigation algorithm has computational complexity that is linear in the number of features, and is capable of high-precision pose estimation in large-scale real-world environments. The performance of the algorithm can be demonstrated with experimental results involving a camera/IMU system localizing within an urban area.


Introduction

Vision-aided inertial navigation has benefited from recent advances in the manufacturing of MEMS-based inertial sensors. Such sensors have enabled small, inexpensive, and very accurate Inertial Measurement Units (IMUs), suitable for pose estimation in small-scale systems such as mobile robots and unmanned aerial vehicles. These systems often operate in urban environments where GPS signals are unreliable (the “urban canyon”), as well as indoors, in space, and in several other environments where global position measurements are unavailable.


Visual sensing provides images that are high-dimensional measurements with rich information content. A feature extraction method can be used to detect and track hundreds of features in images. However, the high volume of data also poses a significant challenge for estimation algorithm design. When real-time localization performance is required, there is a fundamental trade-off between the computational complexity of an algorithm and the resulting estimation accuracy.


The present algorithm can be configured to optimally utilize the localization information provided by multiple measurements of visual features. When a static feature is viewed from several camera poses, it is possible to define geometric constraints involving all these poses. This document describes a model for expressing these constraints without including the 3D feature position in the filter state vector, resulting in computational complexity only linear in the number of features.


The Simultaneous Localization and Mapping (SLAM) paradigm refers to a family of algorithms for fusing inertial measurements with visual feature observations. In these methods, the current IMU pose, as well as the 3D positions of all visual landmarks, are jointly estimated. These approaches share the same basic principles with SLAM-based methods for camera-only localization, with the difference that IMU measurements, instead of a statistical motion model, are used for state propagation. SLAM-based algorithms account for the correlations that exist between the pose of the camera and the 3D positions of the observed features. However, SLAM-based algorithms suffer from high computational complexity: properly treating these correlations is computationally costly, and thus performing vision-based SLAM in environments with thousands of features remains a challenging problem.


The present subject matter includes an algorithm that expresses constraints between multiple camera poses, and thus attains higher estimation accuracy, in cases where the same feature is visible in more than two images.


The multi-state constraint filter of the present subject matter exploits the benefits of delayed linearization while having complexity only linear in the number of features. By directly expressing the geometric constraints between multiple camera poses it avoids the computational burden and loss of information associated with pairwise displacement estimation. Moreover, in contrast to SLAM-type approaches, it does not require the inclusion of the 3D feature positions in the filter state vector, but still attains optimal pose estimation.


Estimator Description

A goal of the proposed EKF-based estimator is to track the 3D pose of the IMU-affixed frame {I} with respect to a global frame of reference {G}. In order to simplify the treatment of the effects of the earth's rotation on the IMU measurements (cf. Equations 7 and 8), the global frame is chosen as an Earth-Centered, Earth-Fixed (ECEF) frame. An overview of the algorithm is given in Table 1.









TABLE 1

Multi-State Constraint Filter

Propagation: For each IMU measurement received, propagate the filter state and covariance.

Image registration: Every time a new image is recorded, augment the state and covariance matrix with a copy of the current camera pose estimate, and start the image processing module.

Update: When the feature measurements of a given image become available, perform an EKF update.









The IMU measurements are processed immediately as they become available, for propagating the EKF state and covariance. On the other hand, each time an image is recorded, the current camera pose estimate is appended to the state vector. State augmentation is used for processing the feature measurements, since during EKF updates the measurements of each tracked feature are employed for imposing constraints between all camera poses from which the feature was seen. Therefore, at any time instant the EKF state vector comprises (i) the evolving IMU state, XIMU, and (ii) a history of up to Nmax past poses of the camera. The various components of the algorithm are described in detail below.
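
As a concrete illustration of this bookkeeping, the following Python sketch builds an index map for an error-state vector consisting of the 15-element IMU error-state followed by 6 elements (attitude and position error) per stored camera pose, anticipating the layout of Equations 2 and 5. The slice names and helper function are illustrative assumptions, not part of the disclosed implementation.

```python
# Illustrative bookkeeping for the error-state layout of Equations 2 and 5.
# The slice names below are assumptions made for this example, not the patent's API.
IMU_ERROR_DIM = 15      # delta-theta, gyro bias, velocity, accel bias, position (3 each)
POSE_ERROR_DIM = 6      # delta-theta and position error per stored camera pose


def error_state_slices(num_camera_poses):
    """Return index slices into the (15 + 6N)-dimensional error-state vector."""
    slices = {
        "imu_attitude": slice(0, 3),
        "gyro_bias": slice(3, 6),
        "imu_velocity": slice(6, 9),
        "accel_bias": slice(9, 12),
        "imu_position": slice(12, 15),
    }
    for i in range(num_camera_poses):
        base = IMU_ERROR_DIM + POSE_ERROR_DIM * i
        slices[f"cam{i}_attitude"] = slice(base, base + 3)
        slices[f"cam{i}_position"] = slice(base + 3, base + 6)
    return slices


if __name__ == "__main__":
    layout = error_state_slices(num_camera_poses=2)
    print(layout["imu_velocity"])    # slice(6, 9)
    print(layout["cam1_position"])   # slice(24, 27)
```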


A. Structure of the EKF State Vector

The evolving IMU state is described by the vector:










$$X_{IMU} = \begin{bmatrix} {}^{I}_{G}\bar{q}^{\,T} & b_{g}^{T} & {}^{G}v_{I}^{T} & b_{a}^{T} & {}^{G}p_{I}^{T} \end{bmatrix}^{T} \qquad \text{(Equation 1)}$$







where ${}^{I}_{G}\bar{q}$ is the unit quaternion describing the rotation from frame {G} to frame {I}, ${}^{G}p_{I}$ and ${}^{G}v_{I}$ are the IMU position and velocity with respect to {G}, and $b_g$ and $b_a$ are 3×1 vectors that describe the biases affecting the gyroscope and accelerometer measurements, respectively. The IMU biases are modeled as random walk processes, driven by the white Gaussian noise vectors $n_{wg}$ and $n_{wa}$, respectively. Following Equation 1, the IMU error-state is defined as:










$$\tilde{X}_{IMU} = \begin{bmatrix} \delta\theta_{I}^{T} & \tilde{b}_{g}^{T} & {}^{G}\tilde{v}_{I}^{T} & \tilde{b}_{a}^{T} & {}^{G}\tilde{p}_{I}^{T} \end{bmatrix}^{T} \qquad \text{(Equation 2)}$$







For the position, velocity, and biases, the standard additive error definition is used (i.e., the error in the estimate $\hat{x}$ of a quantity $x$ is defined as $\tilde{x} = x - \hat{x}$). However, for the quaternion a different error definition is employed. In particular, if $\hat{\bar{q}}$ is the estimated value of the quaternion $\bar{q}$, then the orientation error is described by the error quaternion $\delta\bar{q}$, which is defined by the relation $\bar{q} = \delta\bar{q} \otimes \hat{\bar{q}}$. In this expression, the symbol ⊗ denotes quaternion multiplication. The error quaternion is










$$\delta\bar{q} \simeq \begin{bmatrix} \tfrac{1}{2}\,\delta\theta^{T} & 1 \end{bmatrix}^{T} \qquad \text{(Equation 3)}$$







Intuitively, the quaternion δq describes the (small) rotation that causes the true and estimated attitude to coincide. Since attitude corresponds to 3 degrees of freedom, using δθ to describe the attitude errors is a minimal representation.
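
For illustration, the short sketch below computes the small-angle attitude error δθ between a true and an estimated quaternion using the relation $\bar{q} = \delta\bar{q} \otimes \hat{\bar{q}}$ together with Equation 3. The scalar-last [x, y, z, w] quaternion convention and the helper names are assumptions made for this example.

```python
import numpy as np

# Minimal sketch of the multiplicative attitude error of Equations 2-3.
# Convention assumed here: quaternions are [x, y, z, w] (scalar last), unit norm.

def quat_multiply(p, q):
    """Quaternion product p ⊗ q for scalar-last quaternions."""
    px, py, pz, pw = p
    qx, qy, qz, qw = q
    return np.array([
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
        pw * qw - px * qx - py * qy - pz * qz,
    ])

def quat_conjugate(q):
    return np.array([-q[0], -q[1], -q[2], q[3]])

def attitude_error(q_true, q_est):
    """Return the 3x1 small-angle error delta-theta, with q_true = dq ⊗ q_est."""
    dq = quat_multiply(q_true, quat_conjugate(q_est))
    # From Equation 3, dq ≈ [0.5 * delta_theta, 1]^T for small errors.
    return 2.0 * dq[:3] / dq[3]

if __name__ == "__main__":
    q_est = np.array([0.0, 0.0, 0.0, 1.0])
    small = np.array([0.001, -0.002, 0.0005, 1.0])
    q_true = quat_multiply(small / np.linalg.norm(small), q_est)
    print(attitude_error(q_true, q_est))  # ~ [0.002, -0.004, 0.001]
```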


Assuming that N camera poses are included in the EKF state vector at time-step k, this vector has the following form:











$$\hat{X}_{k} = \begin{bmatrix} \hat{X}_{IMU_{k}}^{T} & {}^{C_1}_{G}\hat{\bar{q}}^{\,T} & {}^{G}\hat{p}_{C_1}^{T} & \cdots & {}^{C_N}_{G}\hat{\bar{q}}^{\,T} & {}^{G}\hat{p}_{C_N}^{T} \end{bmatrix}^{T} \qquad \text{(Equation 4)}$$







where ${}^{C_i}_{G}\hat{\bar{q}}$ and ${}^{G}\hat{p}_{C_i}$, $i = 1 \ldots N$, are the estimates of the camera attitude and position, respectively. The EKF error-state vector is defined accordingly:











$$\tilde{X}_{k} = \begin{bmatrix} \tilde{X}_{IMU_{k}}^{T} & \delta\theta_{C_1}^{T} & {}^{G}\tilde{p}_{C_1}^{T} & \cdots & \delta\theta_{C_N}^{T} & {}^{G}\tilde{p}_{C_N}^{T} \end{bmatrix}^{T} \qquad \text{(Equation 5)}$$







B. Propagation

The filter propagation equations are derived by discretization of the continuous-time IMU system model, as described in the following:


1) Continuous-Time System Modeling

The time evolution of the IMU state is described by:














$${}^{I}_{G}\dot{\bar{q}}(t) = \tfrac{1}{2}\,\Omega(\omega(t))\,{}^{I}_{G}\bar{q}(t), \qquad \dot{b}_{g}(t) = n_{wg}(t), \qquad \text{(Equation 6)}$$
$${}^{G}\dot{v}_{I}(t) = {}^{G}a(t), \qquad \dot{b}_{a}(t) = n_{wa}(t), \qquad {}^{G}\dot{p}_{I}(t) = {}^{G}v_{I}(t)$$







In these expressions, ${}^{G}a$ is the body acceleration in the global frame, $\omega = [\omega_x\ \omega_y\ \omega_z]^{T}$ is the rotational velocity expressed in the IMU frame, and








$$\Omega(\omega) = \begin{bmatrix} -\lfloor\omega\times\rfloor & \omega \\ -\omega^{T} & 0 \end{bmatrix}, \qquad \lfloor\omega\times\rfloor = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$






The gyroscope and accelerometer measurements, ωm and am respectively, are given by:










$$\omega_{m} = \omega + C({}^{I}_{G}\bar{q})\,\omega_{G} + b_{g} + n_{g} \qquad \text{(Equation 7)}$$

$$a_{m} = C({}^{I}_{G}\bar{q})\left({}^{G}a - {}^{G}g + 2\lfloor\omega_{G}\times\rfloor\,{}^{G}v_{I} + \lfloor\omega_{G}\times\rfloor^{2}\,{}^{G}p_{I}\right) + b_{a} + n_{a} \qquad \text{(Equation 8)}$$







where C(⋅) denotes a rotation matrix, and $n_g$ and $n_a$ are zero-mean, white Gaussian noise processes modeling the measurement noise. Note that the IMU measurements incorporate the effects of the planet's rotation, $\omega_G$. Moreover, the accelerometer measurements include the gravitational acceleration, ${}^{G}g$, expressed in the local frame.


Applying the expectation operator in the state propagation equations (Equation 6) yields the equations for propagating the estimates of the evolving IMU state:













$${}^{I}_{G}\dot{\hat{\bar{q}}} = \tfrac{1}{2}\,\Omega(\hat{\omega})\,{}^{I}_{G}\hat{\bar{q}}, \qquad \dot{\hat{b}}_{g} = 0_{3\times 1}, \qquad \text{(Equation 9)}$$
$${}^{G}\dot{\hat{v}}_{I} = C_{\hat{q}}^{T}\,\hat{a} - 2\lfloor\omega_{G}\times\rfloor\,{}^{G}\hat{v}_{I} - \lfloor\omega_{G}\times\rfloor^{2}\,{}^{G}\hat{p}_{I} + {}^{G}g,$$
$$\dot{\hat{b}}_{a} = 0_{3\times 1}, \qquad {}^{G}\dot{\hat{p}}_{I} = {}^{G}\hat{v}_{I}$$







where, for brevity, we denote $C_{\hat{q}} = C({}^{I}_{G}\hat{\bar{q}})$, $\hat{a} = a_{m} - \hat{b}_{a}$, and $\hat{\omega} = \omega_{m} - \hat{b}_{g} - C_{\hat{q}}\,\omega_{G}$. The linearized continuous-time model for the IMU error-state is:












$$\dot{\tilde{X}}_{IMU} = F\,\tilde{X}_{IMU} + G\,n_{IMU} \qquad \text{(Equation 10)}$$







where $n_{IMU} = [n_{g}^{T}\ n_{wg}^{T}\ n_{a}^{T}\ n_{wa}^{T}]^{T}$ is the system noise. The covariance matrix of $n_{IMU}$, $Q_{IMU}$, depends on the IMU noise characteristics and is computed off-line during sensor calibration. The matrices F and G that appear in Equation 10 are given by:






$$F = \begin{bmatrix}
-\lfloor\hat{\omega}\times\rfloor & -I_{3} & 0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} \\
0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} \\
-C_{\hat{q}}^{T}\lfloor\hat{a}\times\rfloor & 0_{3\times 3} & -2\lfloor\omega_{G}\times\rfloor & -C_{\hat{q}}^{T} & -\lfloor\omega_{G}\times\rfloor^{2} \\
0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} \\
0_{3\times 3} & 0_{3\times 3} & I_{3} & 0_{3\times 3} & 0_{3\times 3}
\end{bmatrix}$$





where I3 is the 3×3 identity matrix, and






$$G = \begin{bmatrix}
-I_{3} & 0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} \\
0_{3\times 3} & I_{3} & 0_{3\times 3} & 0_{3\times 3} \\
0_{3\times 3} & 0_{3\times 3} & -C_{\hat{q}}^{T} & 0_{3\times 3} \\
0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} & I_{3} \\
0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3} & 0_{3\times 3}
\end{bmatrix}$$





2) Discrete-Time Implementation

The IMU samples the signals ωm and am with a period T, and these measurements are used for state propagation in the EKF. Every time a new IMU measurement is received, the IMU state estimate is propagated using 5th order Runge-Kutta numerical integration of Equation 9. The EKF covariance matrix is also propagated. For this purpose, consider the following partitioning for the covariance:










$$P_{k|k} = \begin{bmatrix} P_{II_{k|k}} & P_{IC_{k|k}} \\ P_{IC_{k|k}}^{T} & P_{CC_{k|k}} \end{bmatrix} \qquad \text{(Equation 11)}$$







where $P_{II_{k|k}}$ is the 15×15 covariance matrix of the evolving IMU state, $P_{CC_{k|k}}$ is the 6N×6N covariance matrix of the camera pose estimates, and $P_{IC_{k|k}}$ is the correlation between the errors in the IMU state and the camera pose estimates. With this notation, the covariance matrix of the propagated state is given by:







$$P_{k+1|k} = \begin{bmatrix} P_{II_{k+1|k}} & \Phi(t_{k}+T,\,t_{k})\,P_{IC_{k|k}} \\ P_{IC_{k|k}}^{T}\,\Phi(t_{k}+T,\,t_{k})^{T} & P_{CC_{k|k}} \end{bmatrix}$$





where PIIk+1|k is computed by numerical integration of the Lyapunov equation:











$$\dot{P}_{II} = F\,P_{II} + P_{II}\,F^{T} + G\,Q_{IMU}\,G^{T} \qquad \text{(Equation 12)}$$







Numerical integration is carried out for the time interval (tk, tk+T), with initial condition PIIk|k. The state transition matrix Φ(tk+T,tk) is similarly computed by numerical integration of the differential equation












$$\dot{\Phi}(t_{k}+\tau,\,t_{k}) = F\,\Phi(t_{k}+\tau,\,t_{k}), \qquad \tau \in [0, T] \qquad \text{(Equation 13)}$$







with initial condition Φ(tk, tk)=I15.
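
A minimal numerical sketch of this propagation step is shown below. It propagates the IMU block of the partitioned covariance with the Lyapunov equation (Equation 12), integrates the state-transition matrix per Equation 13, and assembles the propagated covariance of Equation 11. Simple Euler sub-steps are used here for brevity where the text above uses Runge-Kutta integration, and the matrices F, G, and Q_IMU are assumed to be supplied.

```python
import numpy as np

# Sketch of the covariance propagation of Equations 11-13 over one IMU period T.
# F, G, Q_imu are assumed given (e.g., built from Equation 10); Euler sub-stepping
# stands in for the Runge-Kutta integration described in the text.

def propagate_covariance(P, F, G, Q_imu, T, substeps=10):
    """Propagate the partitioned covariance of Equation 11 over an interval T."""
    n_imu = 15
    P_II = P[:n_imu, :n_imu].copy()
    P_IC = P[:n_imu, n_imu:].copy()
    P_CC = P[n_imu:, n_imu:]

    Phi = np.eye(n_imu)              # Phi(t_k, t_k) = I_15
    dt = T / substeps
    for _ in range(substeps):
        # Lyapunov equation (Equation 12) for the IMU block.
        P_II = P_II + dt * (F @ P_II + P_II @ F.T + G @ Q_imu @ G.T)
        # State-transition matrix (Equation 13).
        Phi = Phi + dt * (F @ Phi)

    P_new = np.block([[P_II, Phi @ P_IC],
                      [(Phi @ P_IC).T, P_CC]])
    return P_new, Phi

if __name__ == "__main__":
    N = 2                                        # two stored camera poses
    dim = 15 + 6 * N
    P = np.eye(dim) * 0.01
    F = np.zeros((15, 15)); G = np.zeros((15, 12)); Q = np.eye(12) * 1e-4
    P_new, Phi = propagate_covariance(P, F, G, Q, T=0.01)
    print(P_new.shape, np.allclose(Phi, np.eye(15)))
```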


C. State Augmentation

Upon recording a new image, the camera pose estimate is computed from the IMU pose estimate as:













$${}^{C}_{G}\hat{\bar{q}} = {}^{C}_{I}\bar{q} \otimes {}^{I}_{G}\hat{\bar{q}}, \qquad \text{and} \qquad {}^{G}\hat{p}_{C} = {}^{G}\hat{p}_{I} + C_{\hat{q}}^{T}\,{}^{I}p_{C} \qquad \text{(Equation 14)}$$







where ${}^{C}_{I}\bar{q}$ is the quaternion expressing the rotation between the IMU and camera frames, and ${}^{I}p_{C}$ is the position of the origin of the camera frame with respect to {I}, both of which are known. This camera pose estimate is appended to the state vector, and the covariance matrix of the EKF is augmented accordingly:










$$P_{k|k} \leftarrow \begin{bmatrix} I_{6N+15} \\ J \end{bmatrix} P_{k|k} \begin{bmatrix} I_{6N+15} \\ J \end{bmatrix}^{T} \qquad \text{(Equation 15)}$$







where the Jacobian J is derived from Equation 14 as:









$$J = \begin{bmatrix} C({}^{C}_{I}\bar{q}) & 0_{3\times 9} & 0_{3\times 3} & 0_{3\times 6N} \\ \lfloor C_{\hat{q}}^{T}\,{}^{I}p_{C}\times\rfloor & 0_{3\times 9} & I_{3} & 0_{3\times 6N} \end{bmatrix} \qquad \text{(Equation 16)}$$
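
As a numerical illustration of this augmentation step, the sketch below forms the two block rows of the Jacobian J of Equation 16 (using a hypothetical skew-symmetric helper) and applies Equation 15 by stacking [I; J]. The error-state index layout and the treatment of the camera-IMU extrinsics as fixed, known quantities are assumptions made for the example.

```python
import numpy as np

# Sketch of state augmentation (Equations 14-16): append a camera pose and
# augment the covariance with P <- [I; J] P [I; J]^T. Extrinsics are assumed known.

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def augment_covariance(P, C_I_to_C, C_q_hat, p_C_in_I):
    """Augment the covariance for one new camera pose (Equation 15).

    P          : current (15 + 6N) x (15 + 6N) covariance
    C_I_to_C   : 3x3 rotation from IMU frame to camera frame (extrinsic, known)
    C_q_hat    : 3x3 rotation C(q_hat) from global to IMU frame (current estimate)
    p_C_in_I   : camera position expressed in the IMU frame (extrinsic, known)
    """
    n = P.shape[0]                                  # 15 + 6N
    J = np.zeros((6, n))
    J[0:3, 0:3] = C_I_to_C                          # d(cam attitude)/d(IMU attitude)
    J[3:6, 0:3] = skew(C_q_hat.T @ p_C_in_I)        # d(cam position)/d(IMU attitude)
    J[3:6, 12:15] = np.eye(3)                       # d(cam position)/d(IMU position)

    stacked = np.vstack([np.eye(n), J])
    return stacked @ P @ stacked.T                  # (n + 6) x (n + 6)

if __name__ == "__main__":
    P = np.eye(15) * 0.05
    P_aug = augment_covariance(P, np.eye(3), np.eye(3), np.array([0.1, 0.0, 0.0]))
    print(P_aug.shape)  # (21, 21)
```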







D. Measurement Model

Consider next the measurement model employed for updating the state estimates. Since the EKF is used for state estimation, for constructing a measurement model it suffices to define a residual, r, that depends linearly on the state errors, {tilde over (X)}, according to the general form:









$$r = H\,\tilde{X} + \text{noise} \qquad \text{(Equation 17)}$$







In this expression H is the measurement Jacobian matrix, and the noise term must be zero-mean, white, and uncorrelated with the state error for the EKF framework to be applied.


Viewing a static feature from multiple camera poses results in constraints involving all these poses. Here, the camera observations are grouped per tracked feature, rather than per camera pose where the measurements were recorded. All the measurements of the same 3D point are used to define a constraint equation (cf. Equation 24), relating all the camera poses at which the measurements occurred. This is achieved without including the feature position in the filter state vector.
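
Grouping measurements per tracked feature can be realized with a simple mapping from a feature identifier to the list of (pose index, image measurement) pairs in which the feature was seen, as in the sketch below; the class and method names are illustrative assumptions rather than part of the disclosure.

```python
from collections import defaultdict

# Illustrative bookkeeping: group image measurements per tracked feature, so that
# all observations of one feature can later form a single multi-pose constraint.

class FeatureTracks:
    def __init__(self):
        self._tracks = defaultdict(list)   # feature id -> [(pose index, (u, v)), ...]

    def add_observation(self, feature_id, pose_index, uv):
        self._tracks[feature_id].append((pose_index, uv))

    def finished_tracks(self, active_ids):
        """Pop features no longer detected; these trigger an EKF update."""
        done = {fid: obs for fid, obs in self._tracks.items() if fid not in active_ids}
        for fid in done:
            del self._tracks[fid]
        return done

if __name__ == "__main__":
    tracks = FeatureTracks()
    tracks.add_observation(7, pose_index=0, uv=(0.12, -0.03))
    tracks.add_observation(7, pose_index=1, uv=(0.10, -0.01))
    print(tracks.finished_tracks(active_ids=set()))  # feature 7's two observations
```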


Consider the case of a single feature, $f_j$, that has been observed from a set of $M_j$ camera poses $({}^{C_i}_{G}\bar{q},\ {}^{G}p_{C_i})$, $i \in S_j$. Each of the $M_j$ observations of the feature is described by the model:











$$z_{i}^{(j)} = \frac{1}{{}^{C_i}Z_{j}}\begin{bmatrix} {}^{C_i}X_{j} \\ {}^{C_i}Y_{j} \end{bmatrix} + n_{i}^{(j)}, \qquad i \in S_{j} \qquad \text{(Equation 18)}$$







where $n_{i}^{(j)}$ is the 2×1 image noise vector, with covariance matrix $R_{i}^{(j)} = \sigma_{im}^{2} I_{2}$. The feature position expressed in the camera frame, ${}^{C_i}p_{f_j}$, is given by:













$${}^{C_i}p_{f_j} = \begin{bmatrix} {}^{C_i}X_{j} \\ {}^{C_i}Y_{j} \\ {}^{C_i}Z_{j} \end{bmatrix} = C({}^{C_i}_{G}\bar{q})\left({}^{G}p_{f_j} - {}^{G}p_{C_i}\right) \qquad \text{(Equation 19)}$$







where ${}^{G}p_{f_j}$ is the 3D feature position in the global frame. Since this is unknown, in the first step of the algorithm a least-squares minimization is employed to obtain an estimate, ${}^{G}\hat{p}_{f_j}$, of the feature position. This is achieved using the measurements $z_{i}^{(j)}$, $i \in S_{j}$, and the filter estimates of the camera poses at the corresponding time instants.


Following the estimation of the feature position, compute the measurement residual:










$$r_{i}^{(j)} = z_{i}^{(j)} - \hat{z}_{i}^{(j)} \qquad \text{(Equation 20)}$$







where









$$\hat{z}_{i}^{(j)} = \frac{1}{{}^{C_i}\hat{Z}_{j}}\begin{bmatrix} {}^{C_i}\hat{X}_{j} \\ {}^{C_i}\hat{Y}_{j} \end{bmatrix}, \qquad \begin{bmatrix} {}^{C_i}\hat{X}_{j} \\ {}^{C_i}\hat{Y}_{j} \\ {}^{C_i}\hat{Z}_{j} \end{bmatrix} = C({}^{C_i}_{G}\hat{\bar{q}})\left({}^{G}\hat{p}_{f_j} - {}^{G}\hat{p}_{C_i}\right)$$







Linearizing about the estimates for the camera pose and for the feature position, the residual of Equation 20 can be approximated as:












$$r_{i}^{(j)} \simeq H_{X_i}^{(j)}\,\tilde{X} + H_{f_i}^{(j)}\,{}^{G}\tilde{p}_{f_j} + n_{i}^{(j)} \qquad \text{(Equation 21)}$$








In the preceding expression, $H_{X_i}^{(j)}$ and $H_{f_i}^{(j)}$ are the Jacobians of the measurement $z_{i}^{(j)}$ with respect to the state and the feature position, respectively, and ${}^{G}\tilde{p}_{f_j}$ is the error in the position estimate of $f_j$. The exact values of the Jacobians in this expression are generally available. Stacking the residuals of all $M_j$ measurements of this feature yields:






$$r^{(j)} \simeq H_{X}^{(j)}\,\tilde{X} + H_{f}^{(j)}\,{}^{G}\tilde{p}_{f_j} + n^{(j)} \qquad \text{(Equation 22)}$$


where $r^{(j)}$, $H_{X}^{(j)}$, $H_{f}^{(j)}$, and $n^{(j)}$ are block vectors or matrices with elements $r_{i}^{(j)}$, $H_{X_i}^{(j)}$, $H_{f_i}^{(j)}$, and $n_{i}^{(j)}$, for $i \in S_{j}$. Since the feature observations in different images are independent, the covariance matrix of $n^{(j)}$ is $R^{(j)} = \sigma_{im}^{2} I_{2M_j}$.


Note that since the state estimate, X, is used to compute the feature position estimate, the error ${}^{G}\tilde{p}_{f_j}$ in Equation 22 is correlated with the errors $\tilde{X}$. Thus, the residual $r^{(j)}$ is not in the form of Equation 17, and cannot be directly applied for measurement updates in the EKF. To overcome this, define a residual $r_{o}^{(j)}$ by projecting $r^{(j)}$ on the left nullspace of the matrix $H_{f}^{(j)}$. Specifically, let A denote the unitary matrix whose columns form the basis of the left nullspace of $H_{f}^{(j)}$ to obtain:












$$r_{o}^{(j)} = A^{T}\left(z^{(j)} - \hat{z}^{(j)}\right) \simeq A^{T}H_{X}^{(j)}\,\tilde{X} + A^{T}n^{(j)} \qquad \text{(Equation 23)}$$

$$= H_{o}^{(j)}\,\tilde{X} + n_{o}^{(j)} \qquad \text{(Equation 24)}$$








Since the 2Mj×3 matrix Hf(j) has full column rank, its left nullspace is of dimension 2Mj−3. Therefore, ro(j) is a (2Mj−3)×1 vector. This residual is independent of the errors in the feature coordinates, and thus EKF updates can be performed based on it. Equation 24 defines a linearized constraint between all the camera poses from which the feature fj was observed. This expresses all the available information that the measurements zi(j) provide for the Mj states, and thus the resulting EKF update is optimal, except for the inaccuracies caused by linearization.
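
A small numerical sketch of this projection is given below. It computes an orthonormal basis A of the left nullspace of $H_{f}^{(j)}$ by SVD (rather than the Givens-rotation implementation discussed next) and forms $r_o = A^{T}r$ and $H_o = A^{T}H_X$; the dimensions are chosen arbitrarily for the example.

```python
import numpy as np

# Sketch of the nullspace projection of Equations 23-24: remove the dependence of
# the stacked residual on the feature-position error by projecting onto the left
# nullspace of H_f. SVD is used here for clarity; Givens rotations are cheaper.

def project_out_feature(r, H_X, H_f):
    """Return (r_o, H_o) with r_o = A^T r, H_o = A^T H_X, A = left nullspace of H_f."""
    U, _, _ = np.linalg.svd(H_f, full_matrices=True)
    A = U[:, H_f.shape[1]:]          # columns spanning the left nullspace (2Mj - 3 dims)
    return A.T @ r, A.T @ H_X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Mj, state_dim = 5, 27            # feature seen from 5 poses; 15 + 6*2 state
    H_f = rng.standard_normal((2 * Mj, 3))
    H_X = rng.standard_normal((2 * Mj, state_dim))
    p_err = rng.standard_normal(3)
    r = H_X @ rng.standard_normal(state_dim) + H_f @ p_err   # residual per Equation 22
    r_o, H_o = project_out_feature(r, H_X, H_f)
    # The projected residual no longer depends on the feature error:
    print(r_o.shape, np.allclose(project_out_feature(H_f @ p_err, H_X, H_f)[0], 0))
```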


In order to compute the residual ro(j) and the measurement matrix Ho(j), the unitary matrix A does not need to be explicitly evaluated. Instead, the projection of the vector r and the matrix Hx(j) on the nullspace of Hf(j) can be computed very efficiently using Givens rotations, in O(Mj2) operations. Additionally, since the matrix A is unitary, the covariance matrix of the noise vector no(j) is given by:









$$E\{n_{o}^{(j)}\,n_{o}^{(j)T}\} = \sigma_{im}^{2}\,A^{T}A = \sigma_{im}^{2}\,I_{2M_j-3}$$









The residual defined in Equation 23 is not the only possible expression of the geometric constraints that are induced by observing a static feature in Mj images. An alternative approach is, for example, to employ the epipolar constraints that are defined for each of the Mj(Mj−1)/2 pairs of images. However, the resulting Mj(Mj−1)/2 equations would still correspond to only 2Mj−3 independent constraints, since each measurement is used multiple times, rendering the equations statistically correlated. Experimental data shows that employing linearization of the epipolar constraints results in a significantly more complex implementation, and yields inferior results compared to the approach described above.


E. EKF Updates

The preceding section presents a measurement model that expresses the geometric constraints imposed by observing a static feature from multiple camera poses. Next, consider the update phase of the EKF, in which the constraints from observing multiple features are used. EKF updates are triggered by one of the following two events:


When a feature that has been tracked in a number of images is no longer detected, then all the measurements of this feature are processed using the method presented above in the section concerning Measurement Model. This case occurs most often, as features move outside the camera's field of view.


Every time a new image is recorded, a copy of the current camera pose estimate is included in the state vector (see the section concerning State Augmentation). If the maximum allowable number of camera poses, Nmax, has been reached, at least one of the old ones must be removed. Prior to discarding states, all the feature observations that occurred at the corresponding time instants are used, in order to utilize their localization information. In one example, choose Nmax/3 poses that are evenly spaced in time, starting from the second-oldest pose. These are discarded after carrying out an EKF update using the constraints of features that are common to these poses. One example always retains the oldest pose in the state vector, because the geometric constraints that involve poses further back in time typically correspond to larger baseline, and hence carry more valuable positioning information.
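
One way to realize this pose-selection rule is sketched below: the oldest pose is always retained, and roughly Nmax/3 poses, evenly spaced in time and starting from the second oldest, are marked for removal. The exact spacing and rounding used here are an illustrative interpretation, not the precise rule of the disclosure.

```python
# Illustrative pose-selection rule for discarding old camera poses: retain the
# oldest pose and pick about N_max/3 evenly spaced poses starting from the
# second oldest. The exact spacing/rounding is an assumption for this sketch.

def poses_to_discard(n_poses, n_max):
    """Return indices (0 = oldest) of stored camera poses to marginalize."""
    if n_poses < n_max:
        return []
    n_discard = max(1, n_max // 3)
    stride = max(1, (n_poses - 1) // n_discard)
    # Start from the second-oldest pose (index 1); never discard the oldest (index 0).
    candidates = list(range(1, n_poses, stride))
    return candidates[:n_discard]

if __name__ == "__main__":
    print(poses_to_discard(n_poses=30, n_max=30))  # e.g. [1, 3, 5, ..., 19]
```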


Consider next the update process. At a given time step the constraints of L features, selected by the above two criteria, must be processed. Following the procedure described in the preceding section, compute a residual vector ro(j), j=1 . . . L, as well as a corresponding measurement matrix Ho(j), j=1 . . . L for each of these features (cf. Equation 23). Stacking all residuals in a single vector yields:












$$r_{o} = H_{x}\,\tilde{X} + n_{o} \qquad \text{(Equation 25)}$$








where ro and no are vectors with block elements ro(j) and no(j), j=1 . . . L respectively, and Hx is a matrix with block rows Hx(j), j=1 . . . L.


Since the feature measurements are statistically independent, the noise vectors $n_{o}^{(j)}$ are uncorrelated. Therefore, the covariance matrix of the noise vector $n_{o}$ is equal to $R_{o} = \sigma_{im}^{2} I_{d}$, where $d = \sum_{j=1}^{L}(2M_{j}-3)$ is the dimension of the residual $r_{o}$. In practice, d can be quite large. For example, if 10 features are seen in 10 camera poses each, the dimension of the residual is 170. In order to reduce the computational complexity of the EKF update, employ the QR decomposition of the matrix $H_{x}$. Specifically, denote this decomposition as









$$H_{x} = \begin{bmatrix} Q_{1} & Q_{2} \end{bmatrix}\begin{bmatrix} T_{H} \\ 0 \end{bmatrix}$$






where Q1 and Q2 are unitary matrices whose columns form bases for the range and nullspace of Hx, respectively, and TH is an upper triangular matrix. With this definition, Equation 25 yields:












$$r_{o} = \begin{bmatrix} Q_{1} & Q_{2} \end{bmatrix}\begin{bmatrix} T_{H} \\ 0 \end{bmatrix}\tilde{X} + n_{o} \qquad \text{(Equation 26)}$$

$$\Rightarrow \quad \begin{bmatrix} Q_{1}^{T}r_{o} \\ Q_{2}^{T}r_{o} \end{bmatrix} = \begin{bmatrix} T_{H} \\ 0 \end{bmatrix}\tilde{X} + \begin{bmatrix} Q_{1}^{T}n_{o} \\ Q_{2}^{T}n_{o} \end{bmatrix} \qquad \text{(Equation 27)}$$










From the last equation it becomes clear that by projecting the residual ro on the basis vectors of the range of Hx, all the useful information in the measurements is retained. The residual Q2Tro is only noise, and can be completely discarded. For this reason, instead of the residual shown in Equation 25, employ the following residual for the EKF update:












$$r_{n} = Q_{1}^{T}r_{o} = T_{H}\,\tilde{X} + n_{n} \qquad \text{(Equation 28)}$$








In this expression, $n_{n} = Q_{1}^{T}n_{o}$ is a noise vector whose covariance matrix is equal to $R_{n} = Q_{1}^{T}R_{o}Q_{1} = \sigma_{im}^{2} I_{r}$, with r being the number of columns in $Q_{1}$. The EKF update proceeds by computing the Kalman gain:











$$K = P\,T_{H}^{T}\left(T_{H}\,P\,T_{H}^{T} + R_{n}\right)^{-1} \qquad \text{(Equation 29)}$$








while the correction to the state is given by the vector












$$\Delta X = K\,r_{n} \qquad \text{(Equation 30)}$$








In addition, the state covariance matrix is updated according to:












$$P_{k+1|k+1} = \left(I_{\xi} - K\,T_{H}\right)P_{k+1|k}\left(I_{\xi} - K\,T_{H}\right)^{T} + K\,R_{n}\,K^{T} \qquad \text{(Equation 31)}$$








where ξ=6N+15 is the dimension of the covariance matrix.
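
The update step just described can be sketched numerically as follows: the stacked residual is compressed with a thin QR factorization of $H_x$ (Equations 26-28), the Kalman gain of Equation 29 is computed, and the covariance is updated per Equation 31. The dimensions and noise level are arbitrary, NumPy's QR routine stands in for the Givens-rotation implementation, and quaternion-specific corrections to the state are omitted; this is a sketch under those assumptions, not the disclosed implementation.

```python
import numpy as np

# Sketch of the EKF update of Equations 25-31 with QR compression of H_x.
# Dimensions and sigma are arbitrary; quaternion-specific corrections are omitted.

def msckf_update(x, P, r_o, H_x, sigma_im):
    """Return a corrected state vector and covariance."""
    # Thin QR: H_x = Q1 T_H (Equations 26-28); keep only the informative part.
    Q1, T_H = np.linalg.qr(H_x, mode="reduced")
    r_n = Q1.T @ r_o
    R_n = sigma_im**2 * np.eye(T_H.shape[0])

    # Kalman gain (Equation 29) and state correction (Equation 30).
    S = T_H @ P @ T_H.T + R_n
    K = P @ T_H.T @ np.linalg.inv(S)
    x_new = x + K @ r_n

    # Joseph-form covariance update (Equation 31).
    I_KT = np.eye(P.shape[0]) - K @ T_H
    P_new = I_KT @ P @ I_KT.T + K @ R_n @ K.T
    return x_new, P_new

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dim, d = 27, 40                          # error-state dim; stacked residual dim
    P = np.eye(dim) * 0.1
    H_x = rng.standard_normal((d, dim))
    true_err = rng.standard_normal(dim) * 0.05
    r_o = H_x @ true_err + rng.standard_normal(d) * 0.01
    x, P_new = msckf_update(np.zeros(dim), P, r_o, H_x, sigma_im=0.01)
    print(np.linalg.norm(x - true_err) < np.linalg.norm(true_err))  # error reduced
```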


Consider the computational complexity of the operations needed during the EKF update. The residual $r_{n}$, as well as the matrix $T_{H}$, can be computed using Givens rotations in $O(r^{2}d)$ operations, without the need to explicitly form $Q_{1}$. On the other hand, Equation 31 involves multiplication of square matrices of dimension ξ, an $O(\xi^{3})$ operation. Therefore, the cost of the EKF update is $\max(O(r^{2}d),\,O(\xi^{3}))$. If, on the other hand, the residual vector $r_{o}$ were employed without projecting it on the range of $H_{x}$, the computational cost of computing the Kalman gain would have been $O(d^{3})$. Since typically $d \gg \xi, r$, the use of the residual $r_{n}$ results in substantial savings in computation.


Discussion

Consider next some of the properties of the described algorithm. As shown elsewhere in this document, the filter's computational complexity is linear in the number of observed features, and at most cubic in the number of states that are included in the state vector. Thus, the number of poses that are included in the state is the most significant factor in determining the computational cost of the algorithm. Since this number is a selectable parameter, it can be tuned according to the available computing resources, and the accuracy requirements of a given application. In one example, the length of the filter state is adaptively controlled during filter operation, to adjust to the varying availability of resources.


One source of difficulty in recursive state estimation with camera observations is the nonlinear nature of the measurement model. Vision-based motion estimation is very sensitive to noise, and, especially when the observed features are at large distances, false local minima can cause convergence to inconsistent solutions. The problems introduced by nonlinearity can be addressed using techniques such as Sigma-point Kalman filtering, particle filtering, and the inverse depth representation for features. The algorithm is robust to linearization inaccuracies for various reasons, including (i) the inverse feature depth parametrization used in the measurement model and (ii) the delayed linearization of measurements. According to the present subject matter, multiple observations of each feature are collected prior to using them for EKF updates, resulting in more accurate evaluation of the measurement Jacobians.


In typical image sequences, most features can only be reliably tracked over a small number of frames (“opportunistic” features), and only a few can be tracked for long periods of time, or when revisiting places (persistent features). This is due to the limited field of view of cameras, as well as occlusions, image noise, and viewpoint changes, that result in failures of the feature tracking algorithms. If all the poses in which a feature has been seen are included in the state vector, then the proposed measurement model is optimal, except for linearization inaccuracies. Therefore, for realistic image sequences, the present algorithm is able to use the localization information of the opportunistic features. Also note that the state vector Xk is not required to contain only the IMU and camera poses. In one example, the persistent features can be included in the filter state, and used for SLAM. This can improve the attainable localization accuracy within areas with lengthy loops.


EXPERIMENTAL EXAMPLE

The algorithm described herein has been tested both in simulation and with real data. Simulation experiments have verified that the algorithm produces pose and velocity estimates that are consistent, and can operate reliably over long trajectories, with varying motion profiles and density of visual features.


The following includes the results of the algorithm in an outdoor experiment.


The experimental setup included a camera/IMU system, placed on a car that was moving on the streets of a typical residential area in Minneapolis, MN. The system included a Pointgrey FireFly camera, registering images of resolution 640×480 pixels at 3 Hz, and an Inertial Science ISIS IMU, providing inertial measurements at a rate of 100 Hz. During the experiment all data were stored on a computer and processing was done off-line. Some example images from the recorded sequence are shown in FIGS. 3A-3D.


The recorded sequence included a video of 1598 images representing about 9 minutes of driving.


For the results shown here, feature extraction and matching was performed using the SIFT algorithm. During this run, a maximum of 30 camera poses was maintained in the filter state vector. Since features were rarely tracked for more than 30 images, this number was sufficient for utilizing most of the available constraints between states, while attaining real-time performance. Even though images were only recorded at 3 Hz due to limited hard disk space on the test system, the estimation algorithm is able to process the dataset at 14 Hz, on a single core of an Intel T7200 processor (2 GHz clock rate). During the experiment, a total of 142903 features were successfully tracked and used for EKF updates, along a 3.2 km-long trajectory. The quality of the position estimates can be evaluated using a map of the area.


In FIG. 4, the estimated trajectory is plotted on a map of the neighborhood where the experiment took place. The initial position of the car is denoted by a red square on SE 19th Avenue, and the scale of the map is shown on the top left corner.



FIGS. 5A-5C are graphs illustrating the 3σ bounds for the errors in the position, attitude, and velocity, respectively. The plotted values are three times the square roots of the corresponding diagonal elements of the state covariance matrix. Note that the EKF state is expressed in the ECEF frame, but for plotting, all quantities have been transformed into the initial IMU frame, whose x axis points approximately south and whose y axis points approximately east.


The trajectory follows the street layout quite accurately and, additionally, the position errors that can be inferred from this plot agree with the 3σ bounds shown in FIG. 5A. The final position estimate, expressed with respect to the starting pose, is $\hat{X}_{final} = [-7.92\ \ 13.14\ \ -0.78]^{T}$ m. From the initial and final parking spot of the vehicle it is known that the true final position expressed with respect to the initial pose is approximately $X_{final} = [0\ \ 7\ \ 0]^{T}$ m. Thus, the final position error is approximately 10 m over a trajectory of 3.2 km, i.e., an error of 0.31% of the traveled distance. This is remarkable, given that the algorithm does not utilize loop closing, and uses no prior information (for example, nonholonomic constraints or a street map) about the car motion. Note also that the camera motion is almost parallel to the optical axis, a condition which is particularly adverse for image-based motion estimation algorithms. In FIG. 5B and FIG. 5C, the 3σ bounds for the errors in the IMU attitude and velocity along the three axes are shown. From these, observe that the algorithm attains accuracy (3σ) better than 1° for attitude and better than 0.35 m/sec for velocity in this particular experiment.


The results demonstrate that the algorithm is capable of operating in a real-world environment, and producing very accurate pose estimates in real-time. Note that in the dataset presented here, several moving objects appear, such as cars, pedestrians, and trees whose leaves move in the wind. The algorithm is able to discard the outliers which arise from visual features detected on these objects, using a simple Mahalanobis distance test. Robust outlier rejection is facilitated by the fact that multiple observations of each feature are available, and thus visual features that do not correspond to static objects become easier to detect. Note also that the method can be used either as a stand-alone pose estimation algorithm, or combined with additional sensing modalities to provide increased accuracy. For example, a GPS sensor (or other type of sensor) can be used to compensate for position drift.
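
Outlier rejection of this kind can be illustrated with a standard chi-square gating test on a feature's projected residual, as in the sketch below. The gating threshold, dimensions, and use of SciPy's chi-square quantile are assumptions made for the example, not values taken from the experiment.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative Mahalanobis gating test for a feature's projected residual r_o:
# accept the feature only if r_o^T S^{-1} r_o is below a chi-square threshold.

def passes_gating(r_o, H_o, P, sigma_im, confidence=0.95):
    """Return True if the residual is consistent with the filter's uncertainty."""
    S = H_o @ P @ H_o.T + sigma_im**2 * np.eye(len(r_o))
    mahalanobis_sq = float(r_o @ np.linalg.solve(S, r_o))
    return mahalanobis_sq < chi2.ppf(confidence, df=len(r_o))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    dim, k = 27, 7                       # error-state dim; residual dim (2*Mj - 3)
    P = np.eye(dim) * 0.01
    H_o = rng.standard_normal((k, dim))
    good = H_o @ (rng.standard_normal(dim) * 0.01) + rng.standard_normal(k) * 0.01
    bad = good + 5.0                     # e.g. a feature on a moving object
    print(passes_gating(good, H_o, P, sigma_im=0.01),
          passes_gating(bad, H_o, P, sigma_im=0.01))   # True False (typically)
```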


The present subject matter includes an EKF-based estimation algorithm for real-time vision-aided inertial navigation. One aspect of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses. This measurement model does not require including the 3D feature positions in the state vector of the EKF, and is optimal, up to the errors introduced by linearization. The resulting EKF-based pose estimation algorithm has computational complexity linear in the number of features, and is capable of very accurate pose estimation in large-scale real environments. One example includes fusing inertial measurements with visual measurements from a monocular camera. However, the approach is general and can be adapted to different sensing modalities both for the proprioceptive, as well as for the exteroceptive measurements (e.g., for fusing wheel odometry and laser scanner data).


Selected Calculations

Intersection can be used to compute an estimate of the position of a tracked feature $f_j$. To avoid local minima, and for better numerical stability, this process uses an inverse-depth parametrization of the feature position. In particular, if $\{C_n\}$ is the camera frame in which the feature was observed for the first time, then the feature coordinates with respect to the camera at the i-th time instant are:
















$${}^{C_i}p_{f_j} = C({}^{C_i}_{C_n}\bar{q})\,{}^{C_n}p_{f_j} + {}^{C_i}p_{C_n}, \qquad i \in S_{j} \qquad \text{(Equation 32)}$$








In this expression, $C({}^{C_i}_{C_n}\bar{q})$ and ${}^{C_i}p_{C_n}$ are the rotation and translation between the camera frames at time instants n and i, respectively. Equation 32 can be rewritten as:















$${}^{C_i}p_{f_j} = {}^{C_n}Z_{j}\left(C({}^{C_i}_{C_n}\bar{q})\begin{bmatrix} \dfrac{{}^{C_n}X_{j}}{{}^{C_n}Z_{j}} \\[4pt] \dfrac{{}^{C_n}Y_{j}}{{}^{C_n}Z_{j}} \\[4pt] 1 \end{bmatrix} + \frac{1}{{}^{C_n}Z_{j}}\,{}^{C_i}p_{C_n}\right) \qquad \text{(Equation 33)}$$

$$= {}^{C_n}Z_{j}\left(C({}^{C_i}_{C_n}\bar{q})\begin{bmatrix} \alpha_{j} \\ \beta_{j} \\ 1 \end{bmatrix} + \rho_{j}\,{}^{C_i}p_{C_n}\right) \qquad \text{(Equation 34)}$$

$$= {}^{C_n}Z_{j}\begin{bmatrix} h_{i1}(\alpha_{j},\beta_{j},\rho_{j}) \\ h_{i2}(\alpha_{j},\beta_{j},\rho_{j}) \\ h_{i3}(\alpha_{j},\beta_{j},\rho_{j}) \end{bmatrix} \qquad \text{(Equation 35)}$$








In the last expression hi1, hi2 and hi3 are scalar functions of the quantities αj, βj, ρj, which are defined as:













$$\alpha_{j} = \frac{{}^{C_n}X_{j}}{{}^{C_n}Z_{j}}, \qquad \beta_{j} = \frac{{}^{C_n}Y_{j}}{{}^{C_n}Z_{j}}, \qquad \rho_{j} = \frac{1}{{}^{C_n}Z_{j}} \qquad \text{(Equation 36)}$$








Substituting from Equation 35 into Equation 18, the measurement equations can be expressed as functions of $\alpha_{j}$, $\beta_{j}$, and $\rho_{j}$ only:












$$z_{i}^{(j)} = \frac{1}{h_{i3}(\alpha_{j},\beta_{j},\rho_{j})}\begin{bmatrix} h_{i1}(\alpha_{j},\beta_{j},\rho_{j}) \\ h_{i2}(\alpha_{j},\beta_{j},\rho_{j}) \end{bmatrix} + n_{i}^{(j)} \qquad \text{(Equation 37)}$$








Given the measurements $z_{i}^{(j)}$, $i \in S_{j}$, and the estimates for the camera poses in the state vector, estimates $\hat{\alpha}_{j}$, $\hat{\beta}_{j}$, and $\hat{\rho}_{j}$ are obtained using Gauss-Newton least-squares minimization. Then, the global feature position is computed by:














$${}^{G}\hat{p}_{f_j} = \frac{1}{\hat{\rho}_{j}}\,C^{T}({}^{C_n}_{G}\hat{\bar{q}})\begin{bmatrix} \hat{\alpha}_{j} \\ \hat{\beta}_{j} \\ 1 \end{bmatrix} + {}^{G}\hat{p}_{C_n} \qquad \text{(Equation 38)}$$








Note that during the least-squares minimization process the camera pose estimates are treated as known constants, and their covariance matrix is ignored. As a result, the minimization can be carried out very efficiently, at the expense of the optimality of the feature position estimates. Recall, however, that up to a first-order approximation, the errors in these estimates do not affect the measurement residual (cf. Equation 23). Thus, no significant degradation of performance results.
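
A compact numerical sketch of this intersection step is given below. It runs a few Gauss-Newton iterations on the inverse-depth parameters (α, β, ρ) of Equations 33-38 with the camera poses held fixed, as described above; the numerical Jacobians and the simple initialization are simplifications assumed for the example.

```python
import numpy as np

# Sketch of inverse-depth feature triangulation (Equations 33-38): Gauss-Newton on
# (alpha, beta, rho) with fixed camera poses. Numerical Jacobians are used for brevity.

def project(theta, C_n_to_i, p_n_in_i):
    """Projected measurement h(alpha, beta, rho) for one pose (Equations 34-35, 37)."""
    alpha, beta, rho = theta
    h = C_n_to_i @ np.array([alpha, beta, 1.0]) + rho * p_n_in_i
    return h[:2] / h[2]

def triangulate(observations, poses, iters=10):
    """observations: list of 2-vectors; poses: list of (C_n_to_i, p_n_in_i)."""
    theta = np.array([observations[0][0], observations[0][1], 0.1])  # initial guess
    for _ in range(iters):
        residuals, rows = [], []
        for z, (C, p) in zip(observations, poses):
            r = z - project(theta, C, p)
            J = np.zeros((2, 3))
            for k in range(3):                       # numerical Jacobian
                d = np.zeros(3); d[k] = 1e-6
                J[:, k] = (project(theta + d, C, p) - project(theta, C, p)) / 1e-6
            residuals.append(r); rows.append(J)
        r_all, J_all = np.concatenate(residuals), np.vstack(rows)
        theta += np.linalg.lstsq(J_all, r_all, rcond=None)[0]
    alpha, beta, rho = theta
    return np.array([alpha, beta, 1.0]) / rho        # feature in the first camera frame

if __name__ == "__main__":
    p_true = np.array([0.3, -0.2, 4.0])              # feature in first camera frame
    poses, obs = [], []
    for x in (0.0, 0.5, 1.0):                        # camera translating along x
        C = np.eye(3); t = np.array([-x, 0.0, 0.0])  # frame-n origin expressed in frame i
        poses.append((C, t))
        p_i = C @ p_true + t
        obs.append(p_i[:2] / p_i[2])
    print(triangulate(obs, poses))                   # ~ [0.3, -0.2, 4.0]
```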


Additional Notes

The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown and described. However, the present inventors also contemplate examples in which only those elements shown and described are provided.


All publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.


In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.


Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, the code may be tangibly stored on one or more volatile or non-volatile computer-readable media during execution or at other times. These computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (for example, compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.


The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims
  • 1. A vision-aided inertial navigation system comprising: at least one image source capable of providing image data comprising images captured at a plurality of poses along a trajectory within an environment over a period of time, wherein the image data includes features that were each observed within the environment at multiple poses along the trajectory;a motion sensor configured to provide motion data in the environment for the period of time; anda processor communicatively coupled to the at least one image source and communicatively coupled to the motion sensor, the processor configured to compute estimates for at least a position and orientation for at least some of the plurality of poses along the trajectory,wherein the processor is configured to: determine, from the image data, feature measurements corresponding to one or more features observed from any of the plurality of poses along the trajectory;for one or more of the features observed from multiple poses along the trajectory, compute one or more constraints based upon estimates for positions within the environment for the one or more of the features observed from multiple poses along the trajectory, where the estimated positions are obtained using the image data;determine the position and orientation for at least some of the plurality of poses along the trajectory by updating, in accordance with the motion data and the one or more computed constraints, state information within a state vector representing estimates for position and orientation along the trajectory; andoutput, for display, information responsive to the updated state information; wherein the estimates for positions within the environment for the one or more of the features observed from multiple poses along the trajectory are maintained separately from the state vector.
  • 2. The vision-aided inertial navigation system of claim 1, wherein the processor is configured to compute each of the one or more constraints by manipulating a residual of a measurement for the respective feature.
  • 3. The vision-aided inertial navigation system of claim 1, wherein the at least one image source includes a camera, andwherein the vision-aided inertial navigation system comprises one of a robot or a vehicle.
  • 4. The vision-aided inertial navigation system of claim 1 wherein the motion sensor includes an inertial measurement unit (IMU).
  • 5. The vision-aided inertial navigation system of claim 1 wherein the processor is configured to implement an extended Kalman filter to compute the one or more constraints.
  • 6. The vision-aided inertial navigation system of claim 1 further including an output device coupled to the processor, the output device including at least one of a memory, a transmitter, a display, a printer, an actuator, and a controller.
  • 7. A method comprising: receiving, with a processor and from at least one image source communicatively coupled to the processor, image data comprising images captured at a plurality of poses along a trajectory within an environment over a period of time, wherein the image data includes features observed within the environment at multiple poses along the trajectory; receiving, with the processor and from a motion sensor communicatively coupled to the processor, motion data in the environment for the period of time; and computing, with the processor, state estimates for at least a position and orientation for at least some of the plurality of poses along the trajectory, wherein computing the state estimates comprises: determining, from the image data, feature measurements corresponding to one or more features observed from any of the plurality of poses along the trajectory; for one or more of the features observed from the plurality of poses along the trajectory, computing one or more constraints based upon estimates for positions within the environment for the one or more of the features observed from multiple poses along the trajectory, where the estimated positions are obtained using the image data; determining the position and orientation for at least some of the plurality of poses along the trajectory by updating, in accordance with the motion data and the one or more computed constraints, state information within a state vector representing estimates for position and orientation along the trajectory; outputting, for display, information responsive to the updated state information; wherein the estimates for positions within the environment for the one or more of the features observed from multiple poses along the trajectory are maintained separately from the state vector; and controlling, responsive to the computed state estimates, navigation along the trajectory.
  • 8. The method of claim 7, wherein computing the one or more constraints comprises manipulating a residual of a measurement of the respective feature to reduce an effect of a feature estimate error.
  • 9. The method of claim 7, wherein computing the state estimates further includes computing state estimates for at least one of a velocity and an acceleration along the trajectory.
  • 10. The method of claim 7, wherein receiving the image data includes receiving data from at least one of a camera-based sensor, a laser-based sensor, a sonar-based sensor, and a radar-based sensor.
  • 11. The method of claim 7, wherein receiving the motion data from the motion sensor includes receiving data from at least one of a wheel encoder, a velocity sensor, a Doppler radar based sensor, a gyroscope, an accelerometer, an airspeed sensor, and a global positioning system (GPS) sensor.
  • 12. The method of claim 7 wherein computing the one or more constraints comprises executing an Extended Kalman Filter (EKF).
  • 13. A non-transitory computer-readable storage medium comprising instructions that configure a processor to: receive, with the processor and from at least one image source communicatively coupled to the processor, image data comprising images captured at a plurality of poses along a trajectory within an environment over a period of time, wherein the image data includes features observed within the environment at multiple poses along the trajectory; receive, with the processor and from a motion sensor communicatively coupled to the processor, motion data in the environment for the period of time; determine, from the image data, feature measurements corresponding to one or more features observed from any of the plurality of poses along the trajectory; for one or more of the features observed from the plurality of poses along the trajectory, compute one or more constraints based upon estimates for positions within the environment for the one or more features observed from the plurality of poses along the trajectory, where the estimated positions are obtained using the image data; determine state estimates for at least a position and an orientation for at least some of the plurality of poses along the trajectory by updating, in accordance with the motion data and the one or more computed constraints, state information within a state vector representing estimates for the position and orientation along the trajectory; and output, for display, information responsive to the updated state information, wherein the estimates for positions within the environment for the one or more of the features observed from multiple poses along the trajectory are maintained separately from the state vector.
  • 14. The non-transitory computer-readable storage medium of claim 13, wherein the instructions configure the processor to compute the one or more constraints by manipulating a residual of a measurement of the respective feature to reduce an effect of a feature estimate error.
  • 15. The non-transitory computer-readable storage medium of claim 13, wherein the instructions configure the processor to compute state estimates for at least one of a velocity or an acceleration along the trajectory.
  • 16. The non-transitory computer-readable storage medium of claim 13, wherein the image data includes data from at least one of a camera-based sensor, a laser-based sensor, a sonar-based sensor, and a radar-based sensor.
  • 17. The non-transitory computer-readable storage medium of claim 13, wherein the motion data includes data from at least one of a wheel encoder, a velocity sensor, a Doppler radar based sensor, a gyroscope, an accelerometer, an airspeed sensor, and a global positioning system (GPS) sensor.
  • 18. The non-transitory computer-readable storage medium of claim 13, wherein the instructions configure the processor to execute an Extended Kalman Filter (EKF).
  • 19. The non-transitory computer-readable storage medium of claim 13, wherein at least one constraint is computed based on the feature measurements for features observed at three or more of the poses along the trajectory.
  • 20. The vision-aided inertial navigation system of claim 1, wherein the processor computes at least one constraint based on the feature measurements for features observed at three or more of the plurality of poses along the trajectory.
  • 21. The method of claim 7, wherein at least one constraint is computed based on the feature measurements for features observed at three or more of the poses along the trajectory.
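For readers who want a concrete picture of the processing recited in the claims, the following is a minimal sketch, not drawn from the application itself: the function names, the measurement model, and the use of NumPy/SciPy are assumptions. It illustrates one way to form a multi-pose feature constraint by projecting a feature's stacked reprojection residual onto the left null space of its feature Jacobian, so that the resulting constraint depends only on the pose states, followed by a standard extended Kalman filter update of a state vector that holds poses but no feature positions (cf. claims 1, 2, 5, and 8).

```python
# Minimal sketch (hypothetical; NumPy/SciPy only): one multi-pose feature
# constraint and the corresponding EKF update over a pose-only state vector.
import numpy as np
from scipy.linalg import null_space


def feature_constraint(residual, H_pose, H_feat):
    """Remove the feature-position error from a stacked measurement residual.

    residual : (2M,)   stacked reprojection residuals of one feature seen at M poses
    H_pose   : (2M, d) Jacobian of the measurements w.r.t. the pose states
    H_feat   : (2M, 3) Jacobian of the measurements w.r.t. the feature position
    """
    A = null_space(H_feat.T)      # columns span the left null space of H_feat
    r_o = A.T @ residual          # projected residual: independent of feature error
    H_o = A.T @ H_pose            # constraint Jacobian w.r.t. the pose states only
    return r_o, H_o


def ekf_update(x, P, r_o, H_o, R_o):
    """Standard EKF update of the pose state vector x with covariance P.

    R_o is the noise covariance of the projected residual; for isotropic pixel
    noise sigma^2 * I it reduces to sigma^2 * I because A is orthonormal.
    """
    S = H_o @ P @ H_o.T + R_o                 # innovation covariance
    K = np.linalg.solve(S, H_o @ P).T         # Kalman gain, K = P H_o^T S^-1
    x_new = x + K @ r_o
    P_new = (np.eye(len(x)) - K @ H_o) @ P
    return x_new, P_new
```

Because the feature-position error is removed from the residual before the update, the feature estimates never need to be added to the state vector, consistent with the requirement in claims 1, 7, and 13 that those estimates be maintained separately from the state vector.
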
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 17/457,022, filed on Nov. 30, 2021 and entitled “VISION-AIDED INERTIAL NAVIGATION,” which is a continuation of U.S. patent application Ser. No. 16/871,560, filed on May 11, 2020 and entitled “VISION-AIDED INERTIAL NAVIGATION,” which is a continuation of U.S. patent application Ser. No. 15/706,149, filed on Sep. 15, 2017 and entitled “EXTENDED KALMAN FILTER FOR 3D LOCALIZATION AND VISION-AIDED INERTIAL NAVIGATION,” which is a continuation of U.S. patent application Ser. No. 12/383,371, filed on Mar. 23, 2009 and entitled “VISION-AIDED INERTIAL NAVIGATION,” which claims the benefit of Provisional U.S. Patent Application No. 61/040,473, filed Mar. 28, 2008 and entitled “VISION-AIDED INERTIAL NAVIGATION,” the disclosures of which are incorporated herein by reference in their entireties.

STATEMENT OF GOVERNMENT RIGHTS

This invention was made with Government support under Grant Number MTP-1263201 awarded by the NASA Mars Technology Program, and under Grant Numbers EIA-0324864 and IIS-0643680 awarded by the National Science Foundation. The Government has certain rights in this invention.

Related Publications (1)
Number Date Country
20230194266 A1 Jun 2023 US
Provisional Applications (1)
Number Date Country
61040473 Mar 2008 US
Continuations (4)
Number Date Country
Parent 17457022 Nov 2021 US
Child 18051469 US
Parent 16871560 May 2020 US
Child 17457022 US
Parent 15706149 Sep 2017 US
Child 16871560 US
Parent 12383371 Mar 2009 US
Child 15706149 US