For a head-worn display unit (HWD; also head-mounted or helmet-mounted display) to accurately display aircraft or georeferenced symbology and imagery, the HWD must have accurate head pose data: the position and orientation of the head (e.g., upon which the HWD is worn, and with which the HWD itself moves) relative to a platform reference frame. For example, an aircraft (or ground-based vehicle, or other like mobile platform) may move relative to a georeferenced frame (e.g., WGS 84, PZ-90, or other frames) while the head of the aircraft's pilot may move relative to the aircraft reference frame.
Head pose may be estimated by inertial measurement units (IMU) configured to detect and report accelerations, angular rates, or changes in velocity and orientation over time. For example, a first IMU may be rigidly mounted to (and may estimate and track the relative position and orientation of) the aircraft, and a second IMU (independent of the first IMU) may be rigidly mounted to (and may estimate and track the relative position and orientation of) the head/HWD. Arriving at an accurate head pose estimate by differencing these two IMUs (relative navigation) requires a means of correcting, or compensating for, uncertainties or errors associated with either IMU. Aiding devices may provide position and orientation (pose) measurements in six degrees of freedom (6DOF), which serve as a means of correcting, or compensating for, the inherent drift of both IMUs, solving the relative navigation problem of differencing the first and second IMUs and achieving accurate head pose estimates relative to the platform reference frame.
In one aspect, embodiments of the inventive concepts disclosed herein are directed to a hybrid (optical-aided, magnetic-aided, or both) headtracking system incorporating a numerically stable Kalman filter with UD factorization, which provides a head-worn display (HWD) with accurate pose data for the wearer's head relative to an aircraft or like mobile platform. The system may receive platform-referenced pose data (relative to the platform frame) from a high integrity (e.g., georeferenced) IMU rigidly mounted to the aircraft. The system may receive head-referenced head pose data (relative to the head frame) from a head-mounted IMU (e.g., helmet-mounted). The system may, via one or more control processors, generate an estimate of the head pose relative to the platform frame based on the head-referenced and platform-referenced pose data. The control processors may determine error models for the inherent drift error associated with both the head-mounted and platform-mounted IMUs. The system may receive a secondary (e.g., low-rate) pose estimate from aiding devices rigidly mounted to the head and mobile platform (e.g., a camera and a set of fiducial markers, or a magnetometer and a set of magnetic sources) and determine error models for the inherent drift error of the aiding devices. The system may implement the numerically stable Kalman filter by propagating forward in time the estimated head pose data along with the head-mounted and platform-mounted error models. The system may correct the estimated pose data and corresponding error models based on the secondary pose estimate.
In a further aspect, embodiments of the inventive concepts disclosed herein are directed to a head-worn display (HWD) (e.g., helmet-mounted display) incorporating a hybrid headtracking system with a numerically stable Kalman filter incorporating UD factorization. The HWD controller may receive platform-referenced pose data from a high integrity IMU rigidly mounted to an aircraft or like mobile platform. The control processors may receive head-referenced head pose data (relative to the head frame) from a head-mounted IMU (e.g., helmet-mounted). The control processors may generate an estimate of the head pose relative to the platform frame based on the head-referenced and platform-referenced pose data. The control processors may determine error models for the inherent drift error associated with both the head-mounted and platform-mounted IMUs. The control processors may receive a secondary (e.g., low-rate) pose estimate from aiding devices rigidly mounted to the head and mobile platform (e.g., a camera and a set of fiducial markers, or a magnetometer and a set of magnetic sources) and determine error models for the inherent drift error of the aiding devices. The control processors may implement the numerically stable Kalman filter by propagating forward in time the estimated head pose data along with the head-mounted and platform-mounted error models. The control processors may correct the time-propagated estimated pose data based on the secondary pose estimate and corresponding error model. The control processors may forward the corrected head pose data to the HWD display processors, which may display imagery and symbology to the user based on the corrected head pose data.
In a still further aspect, embodiments of the inventive concepts disclosed herein are directed to a method for hybrid headtracking via a numerically stable Kalman filter incorporating UD factorization. The method may include receiving, via a controller connected to a head-worn display (HWD), platform-referenced pose data from a high-integrity IMU rigidly mounted to an aircraft or like mobile platform and associated with a platform reference frame. The method may include receiving, via the controller, head-referenced pose data (associated with a head reference frame) from a head-mounted IMU rigidly mounted to a head of a user. The method may include determining, via the controller, error models based on the inherent drift of the platform-mounted and head-mounted IMUs. The method may include generating, via the controller, estimated head pose data based on the head-referenced and platform-referenced pose data, the estimated head pose data corresponding to the head and relative to the platform frame. The method may include generating, via the Kalman filter, time-propagated pose data based on the estimated head pose data and the respective error models corresponding to the head-referenced pose data and the platform-referenced pose data. The method may include receiving, via the controller, secondary estimated pose data from aiding devices (e.g., an optical or magnetic tracker) rigidly mounted to the mobile platform and the head. The method may include determining, via the controller, an error model corresponding to the secondary estimated pose data. The method may include generating, via the Kalman filter, corrected pose data or corrected associated error models by updating the time-propagated pose data based on the secondary estimated pose data (received from the aiding devices) and the corresponding error model.
Implementations of the inventive concepts disclosed herein may be better understood when consideration is given to the following detailed description thereof. Such description makes reference to the included drawings, which are not necessarily to scale, and in which some features may be exaggerated and some features may be omitted or may be represented schematically in the interest of clarity. Like reference numerals in the drawings may represent and refer to the same or similar element, feature, or function. In the drawings:
Before explaining at least one embodiment of the inventive concepts disclosed herein in detail, it is to be understood that the inventive concepts are not limited in their application to the details of construction and the arrangement of the components or steps or methodologies set forth in the following description or illustrated in the drawings. In the following detailed description of embodiments of the instant inventive concepts, numerous specific details are set forth in order to provide a more thorough understanding of the inventive concepts. However, it will be apparent to one of ordinary skill in the art having the benefit of the instant disclosure that the inventive concepts disclosed herein may be practiced without these specific details. In other instances, well-known features may not be described in detail to avoid unnecessarily complicating the instant disclosure. The inventive concepts disclosed herein are capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
As used herein a letter following a reference numeral is intended to reference an embodiment of the feature or element that may be similar, but not necessarily identical, to a previously described element or feature bearing the same reference numeral (e.g., 1, 1a, 1b). Such shorthand notations are used for purposes of convenience only, and should not be construed to limit the inventive concepts disclosed herein in any way unless expressly stated to the contrary.
Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, the terms “a” and “an” are employed to describe elements and components of embodiments of the instant inventive concepts. This is done merely for convenience and to give a general sense of the inventive concepts, and “a” and “an” are intended to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
Finally, as used herein any reference to “one embodiment” or “some embodiments” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the inventive concepts disclosed herein. The appearances of the phrase “in some embodiments” in various places in the specification are not necessarily all referring to the same embodiment, and embodiments of the inventive concepts disclosed may include one or more of the features expressly described or inherently present herein, or any combination or sub-combination of two or more such features, along with any other features which may not necessarily be expressly described or inherently present in the instant disclosure.
Broadly, embodiments of the inventive concepts disclosed herein are directed to a system and related methods for hybrid headtracking, whereby the relative navigation problem is solved, and the integrity of flight cues, imagery, and symbology displayed via head-worn display (HWD) is ensured, by implementing a numerically stable UD factorization of the Kalman filter to provide high accuracy platform-referenced or georeferenced head pose data, correcting for the inherent drift of inertial measurement units used to estimate the head pose.
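The UD factorization referenced throughout can be illustrated with a short sketch. The routine below is a minimal, illustrative Bierman-style factorization in Python (not the implementation of this disclosure): it factors a covariance matrix P as P = U D Uᵀ, with U unit upper-triangular and D diagonal. Carrying U and D through the filter, rather than P itself, is what preserves the symmetry and positive-definiteness of the covariance in finite precision.

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite matrix P as P = U @ D @ U.T,
    with U unit upper-triangular and D diagonal (Bierman-style UD factors)."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    P = P.astype(float).copy()
    # Sweep columns from last to first, peeling off one factor at a time.
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
            # Remove column j's contribution from the remaining submatrix.
            for k in range(i + 1):
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, np.diag(d)

# Illustrative 3x3 covariance (values are arbitrary, for demonstration only).
P0 = np.array([[4.0, 2.0, 0.6],
               [2.0, 5.0, 1.0],
               [0.6, 1.0, 3.0]])
U, D = ud_factorize(P0)
```

Reconstructing `U @ D @ U.T` recovers `P0` exactly (to machine precision), which is the invariant a UD-factorized filter maintains through its propagation and update steps.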
Referring to
For example, the helmet 102 may incorporate a head-worn display 112 (HWD) for displaying flight guidance cues (e.g., imagery and symbology) to the user 104, matching the displayed guidance cues to the current head pose of the user to the extent that objects positioned below the user 104 while in flight may be displayed even though said objects would not otherwise be physically visible to the user (e.g., obscured by the aircraft itself). For the HWD 112 to accurately display guidance cues (avoiding the display of hazardously misleading information (HMI)), high integrity of head pose estimates based on the head-mounted and platform-mounted IMUs must be assured. As with any IMU, both the head-mounted IMU and platform-mounted IMU present a degree of inherent drift error that must be corrected or compensated for. To achieve head pose estimates of sufficient accuracy to satisfy integrity requirements, the probability of displaying HMI must be kept sufficiently low (e.g., 1e−5/hr to 1e−7/hr). For example, HMI may be defined as a 6-degrees-of-freedom (6DOF; e.g., three position axes (x, y, z) and three rotational axes (roll, pitch, yaw)) head pose estimate whose error exceeds a value on the order of 3-20 milliradians (mrad) in orientation and 0.5-4 inches in position. The sensor fusion application of a conventional HWD (constrained by its head-mounted size, weight, and power (SWaP) platform), via which inertially derived pose estimates are aided by some type of computer vision (e.g., optical sensors), may fail to meet these integrity requirements.
Such high-integrity navigation state estimation depends on being able to accurately estimate any uncertainties in the system. These uncertainty estimates may take the form of statistically representative uncertainties, or accuracy/integrity bounds imposed on the navigation state estimate itself (e.g., the above-noted HMI probability bounds). Similarly, any uncertainty in a relevant physical quantity (e.g., measurement noise, calibration error, sensor locations) must be properly characterized and propagated through the estimation process. For example, errors can be calibrated for, and their magnitude reduced to negligible effect downstream. Alternatively, errors can be estimated and accounted for. As yet another alternative, the contributions of error sources may be included in the estimation process and their effects accounted for downstream. Any error sources affecting the navigation state estimate to a non-negligible degree must be modelled or otherwise addressed, or the calculated uncertainty of the navigation state cannot be trusted.
Referring now to
Referring now to
Referring to
Based on the estimated head pose 130 and estimated aircraft pose 132, the controller 122 may determine an initial estimated head pose relative to the aircraft frame as well as error models associated with each pose estimate 130, 132. For example, error modeling may attempt to account for offsets (e.g., uncertainties) between the head of the user (104,
For example, the position of the head of the user 104 may be defined relative to the platform frame 116 (or body frame) according to the equation
r_h/b = r̂_h/b + δr̂_h/b    1)

wherein the true value r_h/b of the body-to-head position is defined as a function of the estimated body-to-head position r̂_h/b and the error δr̂_h/b.
The time derivative vh/b of the body-to-head position value rh/b may be used (e.g., by the Kalman filter 124) to propagate the estimated head pose forward in time between updates from the aiding system (e.g., camera 108 and fiducial markers 110a-d). The time derivative may be defined according to the equation
v_h/b = v̂_h/b + δv̂_h/b    2)

wherein the true value v_h/b is defined as a function of the estimated value v̂_h/b and the error δv̂_h/b (as expressed in the body frame b).
The head acceleration ãh/i as measured by the hIMU 126 may be defined according to the equation
ã_h/i = a_h/i + a_b,h + η_a,h    3)

wherein the measured acceleration ã_h/i is a function of the true acceleration a_h/i, a bias term a_b,h, and a noise term η_a,h.
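The measurement model of equation 3 is easy to exercise numerically. In the sketch below, all numbers (bias, noise level, sample count) are illustrative assumptions, not parameters from this disclosure: accelerometer measurements are generated as true specific force plus a bias a_b,h and white noise η_a,h, and the low-frequency bias is recovered by averaging over a stationary interval.

```python
import numpy as np

rng = np.random.default_rng(7)

a_true = np.array([0.0, 0.0, -9.81])    # true specific force (m/s^2), held fixed
bias = np.array([0.02, -0.01, 0.005])   # accelerometer bias a_b,h (assumed)
sigma = 0.05                            # std dev of white noise eta_a,h (assumed)

# Equation 3: measured = true + bias + noise; 1000 samples (e.g., 10 s at 100 Hz).
meas = a_true + bias + rng.normal(0.0, sigma, size=(1000, 3))

# With the platform stationary and a_true known, the sample mean exposes the
# bias term, since the zero-mean noise averages out:
bias_hat = meas.mean(axis=0) - a_true
```

The residual error in `bias_hat` shrinks as σ/√N per axis, which is why the bias is modeled as a slowly varying state the filter can estimate, while the white noise is carried only in the covariance.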
The body (platform) acceleration ã_b/i as measured by the pIMU 128 may be defined according to the equation

ã_b/i = a_b/i + a_b,h + η_a,b    4)

wherein the measured acceleration ã_b/i is a function of the true acceleration a_b/i, a bias term a_b,h, and a noise term η_a,b.
The body-to-head rotation C_h^b between the body frame b and head frame 118 may be defined as a direction cosine matrix (DCM) characterized by three Euler angles. For example, a first-order error model may define the true value C_h^b according to the equation

C_h^b ≈ (I + [Φ×])Ĉ_h^b    5)

as a function of the estimated value Ĉ_h^b and an error term Φ, where Φ may be defined as a vector of three simultaneous rotations.
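The first-order model of equation 5 can be checked against an exact rotation. In the sketch below (illustrative angles only; Rodrigues' formula stands in for whatever attitude representation a particular filter carries), an error vector Φ of a few milliradians per axis, on the scale of the HMI orientation bounds noted above, yields a first-order DCM that matches the exact one to second order in |Φ|.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(phi):
    """Exact rotation matrix for rotation vector phi (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

# Estimated body-to-head DCM (any rotation serves for the sketch).
C_hat = rodrigues(np.array([0.3, -0.2, 0.5]))

# Small attitude error Phi, a few milliradians per axis (assumed).
phi = np.array([0.005, -0.003, 0.004])

C_true = rodrigues(phi) @ C_hat                    # exact composition
C_first_order = (np.eye(3) + skew(phi)) @ C_hat    # equation 5

err = np.max(np.abs(C_true - C_first_order))       # O(|phi|^2), ~2.5e-5 here
```

The approximation error scales as |Φ|²/2, which is why the first-order model is adequate once the filter keeps the attitude error small between measurement updates.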
The head angular rate ω_h/i as measured by the hIMU 126 may be defined according to the equation

ω̃_h/i = ω_h/i + ω_b,h + η_r,h    6)

wherein the measured angular rate ω̃_h/i is a function of the true angular rate ω_h/i, a bias term ω_b,h, and a noise term η_r,h.
The platform/body angular rate ω_b/i as measured by the pIMU 128 may be defined according to the equation

ω̃_b/i = ω_b/i + η_r,b    7)

wherein the measured angular rate ω̃_b/i is a function of the true angular rate ω_b/i and a noise term η_r,b (assuming negligible bias).
The head angular rate bias ω_b,h (e.g., the low-frequency component of the error in the measured angular rate) may be defined according to the equation

ω_b,h = ω̂_b,h + δω̂_b,h    8)

wherein the true value ω_b,h of the head angular rate bias is a function of the estimated value ω̂_b,h and an error δω̂_b,h.
Similarly to the body-to-head rotation Chb, the head-to-camera rotation Chc (between the head frame 118 and the camera frame (120,
C_h^c = (I + [δb×])Ĉ_h^c    9)

as a function of the estimated value Ĉ_h^c and an error term δb defined as a vector of three simultaneous rotations.
The camera 108 associated with the optical tracker may be associated with an intrinsic camera calibration matrix (134). The camera calibration matrix 134 may express a linear camera; if the camera 108 is a non-linear camera, the camera calibration matrix 134 may express the camera if additional calibration is performed, e.g., to correct for radial and tangential distortion or to compensate for non-linear effects. For example, the camera calibration matrix K,

K = [α_x  s  x_0; 0  α_y  y_0; 0  0  1]    10)

may be based on the camera focal length α_x, α_y (in horizontal and vertical pixel width respectively), a camera principal point (x_0, y_0) (in pixels), and a camera skew parameter s (in pixels). For example, aiding processors (136) associated with the optical tracker (or, alternatively, the controller 122) may determine an expected pixel location of the fiducial markers 110a-d (in images captured by the camera 108) based on the known locations of each fiducial marker and the intrinsic and extrinsic properties of the camera 108 (e.g., respectively the camera calibration matrix K (134) and the secondary head pose estimate 138 (r_c/h, Φ_c/h)). The secondary head pose estimate 138 may be determined, e.g., by the aiding processors 136, based on the observed residual between the expected marker location and the measured marker location (as determined from the captured images). This secondary pose estimate 138 (e.g., measurement update) may be generated through any of a variety of approaches, e.g., snapshot (loosely coupled optical-inertial) or tightly coupled optical-inertial.
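The role of K in predicting marker pixel locations can be sketched as follows. The focal lengths, principal point, and marker position below are hypothetical values, not calibration data from this disclosure: a marker position expressed in the camera frame is projected homogeneously through K to its expected pixel location.

```python
import numpy as np

# Intrinsic calibration matrix K for a linear (pinhole) camera, built from
# illustrative values: focal lengths ax, ay; principal point (x0, y0); skew s.
ax, ay = 800.0, 800.0      # pixels
x0, y0 = 640.0, 360.0      # pixels
s = 0.0
K = np.array([[ax, s, x0],
              [0.0, ay, y0],
              [0.0, 0.0, 1.0]])

def project(K, r_mc):
    """Expected pixel location of a marker at camera-frame position r_mc
    (z along the optical axis): homogeneous projection p ~ K @ r_mc."""
    ph = K @ r_mc
    return ph[:2] / ph[2]

# Hypothetical fiducial marker 2 m ahead, slightly off-axis (metres).
r_mc = np.array([0.1, -0.05, 2.0])
p = project(K, r_mc)
```

The residual between this expected pixel location and the pixel location actually measured in the captured image is what drives the optical measurement update described below.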
With respect to tightly coupled measurement updates generated by the optical tracker (e.g., based on three-dimensional coordinate sets corresponding to the location of each fiducial marker 110a-d, as determined by a constellation database 140 of marker identifiers and 3D coordinates), the controller 122 may determine error models corresponding to the secondary head pose estimate 138 determined by the optical tracker. For example, image measurements {tilde over (p)} may be defined as a two-dimensional vector corresponding to each marker appearing in an image captured by the camera 108 according to the equation
p̃ = p + δp̌    11)

wherein the image measurements p̃ are defined as a function of the true value p and an error δp̌.
Similarly, the platform-to-marker position rm/p (or the location of each fiducial marker 110a-d in the platform frame 116) may be defined according to the equation
r_m/p = r̂_m/p + δr_m/p    12)

wherein the true value r_m/p of each marker location is a function of the expected (predicted) location r̂_m/p and an error δr_m/p.
With respect to loosely coupled measurement updates generated by the optical tracker (e.g., based on pixel locations corresponding to the fiducial markers 110a-d in the images captured by the camera 108) the camera-to-marker rotation may be defined by the quaternion
q_c^m = δq_m̃^m ⊗ q_c^m̃    13)
and the measured camera-to-marker position {tilde over (r)}m/c (a function of {tilde over (p)} and any errors in the camera calibration matrix K) defined by the equation
r̃_m/c = r_m/c + δr_m/c    14)

wherein the measured position r̃_m/c is a function of the true value r_m/c and an error term δr_m/c.
The controller 122 and Kalman filter 124 may generate head pose estimates (relative to the platform or body frame) by propagating rates and accelerations of the estimated head pose data (130) received from the hIMU 126 and the estimated aircraft pose data (132) from the pIMU 128 into a strapdown equation, e.g.,
r_h/i = r_b/i + r_h/b    15)

to determine a predicted relative translation r̈_h/b corresponding to an estimated relative position of the head relative to the body frame projected forward in time. By way of a non-limiting example (and disregarding any offsets between the hIMU 126/pIMU 128 and the origin of the body frame b), differentiating twice with respect to time in the inertial space:
D_i²(r_h/i) = D_i²(r_b/i) + D_i²(r_h/b)    16)

D_i²(r_h/i) = D_i²(r_b/i) + r̈_h/b + ω̇_b/i × r_h/b + 2(ω_b/i × ṙ_h/b) + ω_b/i × (ω_b/i × r_h/b)    17)
which will hold for any arbitrary reference frame, so in the body frame b:
D_i²(r_h/i) = C_h^b f_h/i^h + g_m^b    18)

D_i²(r_b/i) = f_b/i^b + g_m^b    19)
and, therefore:
r̈_h/b = C_h^b f_h/i^h − ω̇_b/i × r_h/b − 2(ω_b/i × ṙ_h/b) − ω_b/i × (ω_b/i × r_h/b) − f_b/i    20)
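The kinematic chain from equation 15 to equation 20 rests on the transport theorem, which can be verified numerically by differentiating a concrete trajectory. In the sketch below, the trajectory and body rate are arbitrary illustrative choices (a constant body rate is used, so the ω̇ term vanishes): a finite-difference second derivative of the inertial-frame head position is compared against the transport-theorem expansion.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

w = 0.5                            # body rate about z, rad/s (illustrative)
omega = np.array([0.0, 0.0, w])    # omega_b/i in the body frame, constant

def C_bi(t):
    """Body-to-inertial DCM: plain rotation about z at rate w."""
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def r_hb(t):
    """Head position in the body frame: arbitrary smooth motion."""
    return np.array([np.sin(t), np.cos(2 * t), 0.1 * t])

def r_hb_dot(t):
    return np.array([np.cos(t), -2 * np.sin(2 * t), 0.1])

def r_hb_ddot(t):
    return np.array([-np.sin(t), -4 * np.cos(2 * t), 0.0])

def f(t):
    """Head position resolved in the inertial frame."""
    return C_bi(t) @ r_hb(t)

t, h = 1.3, 1e-3
num = (f(t + h) - 2 * f(t) + f(t - h)) / h**2    # numerical D_i^2(r_h/b-ish)

r, rd, rdd = r_hb(t), r_hb_dot(t), r_hb_ddot(t)
# Transport theorem with constant omega (the omega-dot term drops out):
analytic = C_bi(t) @ (rdd + 2 * np.cross(omega, rd)
                      + np.cross(omega, np.cross(omega, r)))
```

The two results agree to finite-difference precision, confirming the Coriolis (2ω×ṙ) and centripetal (ω×(ω×r)) structure that equation 20 rearranges to solve for r̈_h/b.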
Similarly, a projected relative rotation Ċ_h^b (quaternion q̇_h^b) may be determined via a strapdown equation, e.g.,

Ċ_h^b = C_h^b [ω_h/b^h ×]    21)

and, using the additive property of angular velocities (and relative to the head frame h, 118):
ω_h/i^h = ω_h/b^h + ω_b/i^h    22)

ω_h/i^h = ω_h/b^h + C_b^h ω_b/i^b    23)

ω_h/b^h = ω_h/i^h − C_b^h ω_b/i^b    24)
and therefore, substituting equation 24 into equation 21:
Ċ_h^b = C_h^b [(ω_h/i^h − C_b^h ω_b/i^b)×]    25)

Ċ_h^b = C_h^b [ω_h/i^h ×] − [ω_b/i^b ×] C_h^b    26)
or may be expressed in quaternion rotations (in the body frame b), wherein the projected relative rotation Ċ_h^b corresponds to an estimated relative rotation of the head relative to the body frame (or platform frame 116) projected forward one unit in time.
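The strapdown relation Ċ_h^b = C_h^b[ω_h/b^h ×] of equation 21 can be integrated directly. The sketch below uses a constant, made-up relative rate and a plain Euler integrator (far cruder than a production strapdown algorithm) to propagate the relative DCM for one second and compare it against the closed-form rotation.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(phi):
    """Exact rotation matrix for rotation vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

omega_hb = np.array([0.01, -0.02, 0.015])   # relative rate, rad/s (assumed)
dt, steps = 1e-3, 1000                      # one second at 1 kHz
C = np.eye(3)                               # start with frames aligned
for _ in range(steps):
    # Equation 21, first-order Euler step: C <- C (I + [omega x] dt)
    C = C @ (np.eye(3) + skew(omega_hb) * dt)

C_exact = rodrigues(omega_hb * dt * steps)  # closed form for constant rate
err = np.max(np.abs(C - C_exact))
```

The Euler-integrated DCM tracks the closed form closely at this rate and step size; a deployed filter would use a higher-order attitude update and periodic re-orthonormalization to hold the error down over long intervals.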
The Kalman filter 124 may propagate forward in time the estimated head pose 130 and estimated aircraft pose 132, e.g., the measured body angular rate ω̂_b/i and body acceleration f̂_b/i from the pIMU 128 (expressed in the body frame b) and the measured head acceleration f̂_h/i from the hIMU 126 (expressed in the head frame 118), along with their respective determined error models:
ω̂_b/i^b = ω_b/i^b + η_r^b    28)

f̂_b/i^b = f_b/i^b + η_a^b    29)

f̂_h/i^h = f_h/i^h + a_b,h + η_a^h    30)

δω̂_b/i = η_r^b    31)
to determine the estimated relative acceleration r̈̂_h/b of the head relative to the body frame (or platform frame 116) and the error δr̈̂_h/b in the estimated relative acceleration (as compared to the predicted relative translation r̈_h/b). For example:
r̈_h/b^b = C_h^b f_h/i^h − ω̇_b/i^b × r_h/b^b − 2(ω_b/i^b × ṙ_h/b^b) − ω_b/i^b × (ω_b/i^b × r_h/b^b) − f_b/i^b    32)

r̈_h/b = (I − [Φ×])C_h^b (f_h/i^h + δa_b,h + η_a^h) − (ω̇_b/i + η̇_r^b) × (r_h/b^b + δr_h/b) − 2(ω_b/i^b + η_r^b) × (ṙ_h/b^b + δṙ_h/b) − (ω_b/i^b + η_r^b) × ((ω_b/i^b + η_r^b) × (r_h/b^b + δr_h/b)) − (f_b/i^b + η_a^b)    33)
Based on the above, and assuming all errors are uncorrelated, the error δ{circumflex over ({umlaut over (r)})}h/b in the estimated relative acceleration may be defined as (referring to equation 20 above, and still in the body frame b):
δr̈̂_h/b = r̈̂_h/b^b − r̈_h/b^b    34)

δr̈̂_h/b = C_h^b δa_b,h + [(C_h^b f_h/i^h)×]Φ − [ω̇_b/i^b ×]δr_h/b − 2[ω_b/i^b ×]δv_h/b − [ω_b/i^b ×]²δr_h/b + C_h^b η_a^h + [δr_h/b ×]η̇_r^b + (2[δv_h/b ×] + [ω_b/i^b ×][δr_h/b ×])η_r^b    35)
Similarly, the Kalman filter 124 may propagate forward the measured head angular rate {circumflex over (ω)}h/i from the hIMU 126 (expressed in the head frame 118), to determine a relative head orientation {dot over (Φ)} with respect to the body frame (platform frame 116). By way of a non-limiting example, using the following error models (in the head frame h, 118):
ω̃_h/i^h = ω_h/i^h + ω_b,h + η_r^h    36)

ω̂_h/i^h = ω̃_h/i^h − ω̂_b,h = ω_h/i^h + δω̂_b,h + η_r^h    37)
the head orientation {dot over (Φ)} relative to the body frame may be defined in terms of the small quaternion δq and then differentiated with respect to time:
Equation 40 may then be expanded according to the following:
Substituting the above error models into equation 23 above results in an expression for the relative angular rate ω_h/b^h of the head relative to the body, as expressed in the head frame h:
ω_h/b^h = (ω̂_h/i^h − δω̂_b,h − η_r^h) − C_b^h (I₃ + [Φ×])(ω̂_b/i^b − η_r^b)    44)
Similarly, substituting into equation 43:
and using the small quaternion definitions
yields the following:
and, referring back to equation 23:
Φ̇ = −[ω̂_h/i^h ×]Φ − δω̂_b,h − η_r^h − Ĉ_b^h η_r^b    51)
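In the noise-free, zero-bias-error case, equation 51 reduces to Φ̇ = −[ω̂_h/i^h ×]Φ, a pure rotation of the attitude error vector that preserves its magnitude. The sketch below (illustrative rate and initial error, simple Euler integration) checks that invariant numerically.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

omega = np.array([0.2, -0.1, 0.15])     # measured head rate, rad/s (assumed)
phi = np.array([0.005, 0.002, -0.004])  # initial attitude error, rad (assumed)
norm0 = np.linalg.norm(phi)

dt = 1e-3
for _ in range(2000):                   # two seconds at 1 kHz
    # Equation 51 with zero bias error and zero noise: Phi_dot = -[omega x] Phi
    phi = phi - dt * (skew(omega) @ phi)

drift = abs(np.linalg.norm(phi) - norm0) / norm0
```

The relative drift in |Φ| stays at the integrator's truncation level, illustrating that the homogeneous part of equation 51 only rotates the error; growth of the error magnitude comes from the bias and noise forcing terms, which the measurement updates below must bound.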
When a secondary head pose estimate 138 (measurement update) arrives from the optical tracker (aiding processors 136), the Kalman filter 124 may correct the time-propagated head pose estimates and error models based on the measurement update and corresponding error model, such that the error vector x, comprising the error states of the Kalman filter, may be updated. By way of a non-limiting example, the optical tracker (camera 108, aiding processors 136, fiducial markers 110a-d) may provide loosely coupled relative updates (secondary head pose estimate 138) of the relative position and relative orientation of the head of the user 104 relative to the camera frame. With respect to the relative position update, for Kalman filter error vector x:
x = [δr̂_h/b^b  δv̂_h/b^b  δâ_b,h  Φ̂  δω̂_b,h  δb̂]^T    52)

and noise residual z (e.g., the difference between measured marker positions r̃ and estimated marker positions r̂ in the camera frame c):

z = [(r̃_m/c^c − r̂_m/c^c)  2δq_c^m(2:4)] = Hx + R    53)
the Kalman filter measurement model may be updated (based on the secondary head pose estimate 138) such that (for x, z, and Jacobian H):

z = Hx + R    54)

H = [Ĉ_c^b  0₃  0₃  0₃  0₃  [r̂_m/c^c ×]]    55)

R = E[δr̃_m/c^c (δr̃_m/c^c)^T]    56)
With respect to the relative orientation update, the error δq (noise residual z) between the measured orientation (quaternion q_c^m̃) and estimated orientation (quaternion q_c^m̂) may update the Kalman filter measurement model such that, for the above error vector x (where C(q) is the DCM equivalent of quaternion rotation q):

z = 2δq_c^m(2:4) = Hx + R    57)

H = [0₃  0₃  0₃  −C(q_h^m)  0₃  C(q_c^m)]    58)

R = E[φφ^T]    59)
The corrected head pose data, based on the updated Kalman filter measurements, may be forwarded (142) to the HWD 112 to ensure the display of accurate imagery and symbology.
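The position-residual update of equations 52-56 can be sketched with a conventional-covariance Kalman step. Note the hedges: the disclosure's filter carries the covariance in UD factors for numerical stability, whereas this sketch uses the textbook full-covariance form, and every number below (prior covariance, camera DCM, marker position, residual, noise level) is an illustrative assumption.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

n = 18                                # six 3-vector blocks per equation 52
P = np.eye(n) * 1e-2                  # prior error covariance (assumed)

C_cb = np.eye(3)                      # estimated DCM (assumed aligned frames)
r_mc = np.array([0.1, -0.05, 2.0])    # estimated marker position, camera frame

# Equation 55: H maps the 18 error states to the 3-vector position residual.
H = np.hstack([C_cb] + [np.zeros((3, 3))] * 4 + [skew(r_mc)])
R = np.eye(3) * 1e-4                  # measurement noise covariance (assumed)

z = np.array([0.004, -0.002, 0.003])  # observed camera-frame residual (m)

# Standard Kalman measurement update:
S = H @ P @ H.T + R                   # innovation covariance
Kg = P @ H.T @ np.linalg.inv(S)       # Kalman gain
x_hat = Kg @ z                        # corrected error-state estimate
P_new = (np.eye(n) - Kg @ H) @ P      # updated covariance
```

After the update, the predicted residual H x̂ absorbs nearly all of z (the leftover fraction is set by R relative to HPHᵀ), and the corrected error states would be folded back into the head pose estimate and the IMU bias estimates.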
With respect to tightly coupled measurement updates generated by the optical tracker (e.g., based on three-dimensional coordinate sets corresponding to the location of each fiducial marker 110a-d, as determined by the constellation database 140), the pixel location p of a marker m in an image captured by the camera 108 may be determined by the position of the fiducial markers 110a-d with respect to the camera 108 and the camera calibration matrix K (134). The marker position rm/c relative to the camera 108 may be expressed in terms of the camera frame c (120,
z = p̃ − p̂ = Hx + η_z    60)

H = [H_δr  0₃  0₃  H_Φ  0₃  H_δb]    61)

R = Σ_η = Σ_δp̃ + H_δr
Referring now to
Referring now to
Referring now to
Referring now to
Referring particularly to
At a step 204, the controller receives head-referenced pose data from the hIMU.
At steps 206 and 208, the controller determines error models corresponding respectively to the head-referenced pose data and the platform-referenced pose data. For example, determined error models may be based on uncertainties in the platform-referenced pose data and head-referenced pose data.
At a step 210, the controller may generate an initial estimate of the head pose relative to the platform frame, based on the head-referenced pose data, the platform-referenced pose data, and the respective error models.
Referring now to
At a step 214, the controller may receive secondary estimated pose data (measurement updates) from an aiding device rigidly mounted to the head of the user and the aircraft. For example, the aiding device may include an optical tracker comprising a camera rigidly mounted to the head and a constellation of fiducial markers rigidly mounted to the aircraft or mobile platform; in some embodiments the camera may be mounted to the mobile platform and the fiducial markers to the head or helmet. The aiding device may provide loosely coupled measurement updates, e.g., marker locations relative to a captured image and camera calibration data, or tightly coupled measurement updates, e.g., three-dimensional marker coordinates. The aiding device may include a magnetic tracker incorporating a magnetometer rigidly mounted to the head or helmet (or the aircraft) and a well-characterized magnetic source.
At a step 216, the controller determines an error model corresponding to the received secondary estimated pose data. For example, the error model may be based on uncertainties in the secondary estimated pose data.
At a step 218, the Kalman filter generates at least one of corrected head pose data and a corrected error model based on the time-propagated pose data, the received secondary estimated pose data received from the aiding device, and the corresponding error model. For example, based on the received secondary estimated pose data, the Kalman filter may correct either or both of the error model corresponding to the head-referenced pose data and the error model corresponding to the platform-referenced pose data.
Referring now to
At the step 222, the HWD displays flight cues via imagery and symbology to the user, based on the received corrected pose data.
As will be appreciated from the above, systems and methods according to embodiments of the inventive concepts disclosed herein may provide a novel solution to the relative navigation problem, and thereby ensure the required integrity of flight cues, imagery, and symbology displayed to pilots via head-worn display, by implementing a numerically stable UD factorization of the Kalman filter to provide high accuracy platform-referenced or georeferenced head pose data, correcting for the inherent drift of inertial measurement units used to estimate the head pose.
It is to be understood that embodiments of the methods according to the inventive concepts disclosed herein may include one or more of the steps described herein. Further, such steps may be carried out in any desired order and two or more of the steps may be carried out simultaneously with one another. Two or more of the steps disclosed herein may be combined in a single step, and in some embodiments, one or more of the steps may be carried out as two or more sub-steps. Further, other steps or sub-steps may be carried out in addition to, or as substitutes for, one or more of the steps disclosed herein.
From the above description, it is clear that the inventive concepts disclosed herein are well adapted to carry out the objects and to attain the advantages mentioned herein as well as those inherent in the inventive concepts disclosed herein. While presently preferred embodiments of the inventive concepts disclosed herein have been described for purposes of this disclosure, it will be understood that numerous changes may be made which will readily suggest themselves to those skilled in the art and which are accomplished within the broad scope and coverage of the inventive concepts disclosed and claimed herein.