This disclosure relates generally to navigation systems. More specifically, this disclosure relates to navigation based on an Earth Centered Earth Fixed (ECEF) frame of reference.
Highly-accurate practical navigation schemes are of great importance in various commercial and defense-related applications. Practical navigation schemes are navigation schemes that can be performed using available processing resources (such as processors and memories) while meeting specified operational requirements (such as computational speeds). Many navigation schemes use equations of motion to convert detected movements of a vehicle or other object into an estimated location of the object. To a large extent, the choice of a coordinate system used to integrate the equations of motion can drive the implementation of a particular navigation scheme.
This disclosure relates to navigation based on an Earth Centered Earth Fixed (ECEF) frame of reference.
In a first embodiment, a method includes obtaining information from an inertial measurement unit (IMU) identifying accelerations and changes in angular orientation associated with an object in motion. The method also includes identifying a navigation state of the object within an Earth-fixed frame of reference using the information. The method further includes performing one or more navigation functions based on the navigation state. Identifying the navigation state of the object includes using (i) an object body or IMU-to-inertial transformation that is updated based on the changes in angular orientation and (ii) an inertial-to-Earth-fixed transformation.
In a second embodiment, an apparatus includes an IMU configured to generate information identifying accelerations and changes in angular orientation associated with an object in motion. The apparatus also includes at least one processor configured to identify a navigation state of the object within an Earth-fixed frame of reference using the information and perform one or more navigation functions based on the navigation state. To identify the navigation state of the object, the at least one processor is configured to use (i) an object body or IMU-to-inertial transformation that is updated based on the changes in angular orientation and (ii) an inertial-to-Earth-fixed transformation.
In a third embodiment, a non-transitory computer readable medium contains instructions that when executed cause at least one processor to obtain information from an IMU identifying accelerations and changes in angular orientation associated with an object in motion. The medium also contains instructions that when executed cause the at least one processor to identify a navigation state of the object within an Earth-fixed frame of reference using the information. The medium further contains instructions that when executed cause the at least one processor to perform one or more navigation functions based on the navigation state. The instructions that when executed cause the at least one processor to identify the navigation state of the object include instructions that when executed cause the at least one processor to identify the navigation state of the object using (i) an object body or IMU-to-inertial transformation that is updated based on the changes in angular orientation and (ii) an inertial-to-Earth-fixed transformation.
Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
For a more complete understanding of this disclosure, reference is made to the following description, taken in conjunction with the accompanying drawings, in which:
As noted above, highly-accurate practical navigation schemes are of great importance in various commercial and defense-related applications. Practical navigation schemes are navigation schemes that can be performed using available processing resources (such as processors and memories) while meeting specified operational requirements (such as computational speeds). Many navigation schemes use equations of motion to convert detected movements of a vehicle or other object into an estimated location of the object. To a large extent, the choice of a coordinate system used to integrate the equations of motion can drive the implementation of a particular navigation scheme.
Many applications use an Earth Centered Inertial (ECI) frame of reference to integrate the equations of motion. One example of an ECI frame of reference is defined based on the Earth's mean equator and equinox at 12:00 PM terrestrial time on Jan. 1, 2000 (referred to as “EME J2000”). ECI frames of reference have their origins at the Earth's center of mass and multiple axes that are fixed, meaning the ECI frames of reference do not rotate as the Earth rotates. As a result, this often makes ECI frames of reference useful in describing the motion of celestial bodies and spacecraft.
This disclosure provides systems and methods that support navigation by maintaining state information (such as position and velocity information) of a vehicle or other object in motion relative to a geocenter in an Earth Centered Earth Fixed (ECEF) frame of reference. An ECEF frame of reference includes an origin at the Earth's center of mass and multiple axes that are fixed relative to the Earth, meaning the ECEF frame of reference rotates as the Earth rotates.
Among other things, this disclosure provides various equations that can be used by a navigation system or method to support ECEF-based navigation. This disclosure also provides unique approaches for maintaining a matrix that is used throughout travel to perform body frame-to-ECEF or inertial measurement unit (IMU)-to-ECEF transformations. This matrix is used to transform object-sensed accelerations (such as accelerations due to thrust and aerodynamics for a vehicle) into ECEF-based information that is used, along with a unique and natural set of state update equations, to integrate the equations of motion and identify updated positions and velocities. In addition, this disclosure provides unique approaches for integrating the equations of motion in order to maintain the position and velocity of a vehicle or other object over time. Prior approaches that maintain a body frame-to-Earth-fixed frame transformation rely on the transformation itself, which forces approximations to be made in order to maintain the transformation. The approaches disclosed here do not rely on such approximations when maintaining the body-to-ECEF transformation matrix. The state update equations used to identify position and velocity are second-order for accuracy, are unique and natural, and require less storage and processing.
ECEF-based navigation systems and methods may have various benefits or advantages depending on the implementation. For example, the described approaches can provide speed and efficiency improvements in navigation calculations compared to ECI-based navigation calculations. This allows navigation systems and methods to operate more quickly and/or more efficiently when performing navigation-based operations. In some cases, this allows the navigation systems and methods to use computing resources (such as processing and memory resources) more efficiently or to use fewer computing resources.
As particular examples, gravity equations can be expressed naturally in the ECEF frame of reference, requiring fewer computations to transform to other coordinate frames and thereby providing enhanced processing speeds. As another example, if a blended inertial navigation system (INS)/global positioning system (GPS) solution (such as a Kalman Filter) or other inertial/position solution is desired, GPS or similar types of measurements may be expressed naturally in ECEF coordinates, which again can be achieved with less processing and better computational speeds. For solutions that use geodetic coordinates, position and velocity can be maintained in ECEF coordinates, so conversion can be straightforward without any intermediate coordinate frame transformations. State update equations are expressed below as second-order equations for improved speed and efficiency to support the use of the described approaches in various types of applications, including in real-time systems.
The vehicle 102 here includes an IMU 104 and a processing system 106. In some embodiments, these components 104, 106 may form part of the avionics system of the vehicle 102, although this need not be the case. The IMU 104 generally operates to gather information about forces acting on the vehicle 102. The forces that cause the vehicle 102 to move typically include the Earth's gravity, thrust that the vehicle 102 is exerting (such as via one or more rocket engines, jet engines, or propellers), as well as aerodynamic forces and fluid dynamic forces (in the case of water-based vehicles). The IMU 104 can measure accelerations and other information and pass this information to the processing system 106 for analysis. Note that accelerations caused by gravity are not measured by the IMU 104 and are instead computed by the processing system 106. The IMU 104 also gathers information about how the vehicle 102 is rotating so that the processing system 106 can identify the orientation of the vehicle 102 in a given space. The IMU 104 includes any suitable structure configured to identify acceleration and orientation of the vehicle 102, such as one or more sensors. In some embodiments, the IMU 104 includes one or more accelerometers used to measure changes in velocity of the vehicle 102 and one or more gyroscopes used to measure changes in angular orientation of the vehicle 102.
The processing system 106 executes or otherwise implements logic that processes measurement data from the IMU 104 (and possibly one or more other sources) to perform at least one navigation-related function. For example, the processing system 106 generally supports a process known as “navigation” in which the vehicle's position and velocity are known at some specified time interval. As a particular example, the processing system 106 may determine the vehicle's position and velocity every ten milliseconds or at some other regular interval of time. In many cases, the vehicle 102 needs to know its position and velocity relative to some frame of reference and at some time interval in order to carry out one or more other functions of the vehicle 102. As a result, navigation is a critical function, and a failure of navigation can cause a failure of the vehicle 102 itself.
Among other things, the processing system 106 supports the approaches described below to enable the use of an ECEF frame of reference during one or more navigation-related operations. The processing system 106 also supports the approaches described below to provide increased accuracy for one of the matrices used in one or more navigation-related operations to pass accelerations from the IMU 104 or other source(s) to software or other logic of the processing system 106 for processing. In addition, the processing system 106 supports the approaches described below to provide improved techniques to compute position and velocity during one or more navigation-related operations. The associated equations used by the processing system 106 during navigation are formulated in such a way that less processing is needed while retaining higher-order accuracy. This can help to reduce the computing resources used by the processing system 106 or allow fewer computing resources to be used by the processing system 106 to engage in one or more navigation-related operations.
The processing system 106 includes any suitable structure configured to perform one or more navigation-related functions. In general, the navigation-related functionality of the processing system 106 may be implemented in any suitable manner, such as when implemented using dedicated hardware or a combination of hardware and software/firmware instructions. In some embodiments, the navigation-related functionality of the processing system 106 can be implemented in software/firmware instructions that are embedded in or used with an avionics system or other on-board processor or computer of a space or airborne vehicle (such as a satellite, rocket, rocket booster, drone or other unmanned aerial vehicle, tactical missile, or commercial or military aircraft) or a water vessel (such as an ocean freighter, cruise ship, or submarine).
Calculated positions and velocities of the vehicle 102 (or any other suitable navigation-related information) as determined by the processing system 106 may be used in any suitable manner by the vehicle 102 or a device or system external to the vehicle 102. For example, the calculated positions and velocities may be provided to a vehicle control system for use in making adjustments to the travel path of the vehicle 102. This may allow the vehicle 102 to automatically remain on a desired flight path or other travel path or to automatically make adjustments to its actual flight path or other travel path. As another example, the calculated positions and velocities may be provided to a display system for presentation of the vehicle's location or travel path on a map, such as when presented on a pilot's display screen or other personnel's display screen(s). The calculated positions and velocities of the vehicle 102 may be used in any other suitable manner.
Although
As shown in
The memory 210 and a persistent storage 212 are examples of storage devices 204, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory 210 may represent a random access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage 212 may contain one or more components or devices supporting longer-term storage of data, such as a read only memory, hard drive, Flash memory, or optical disc. Among other things, the memory 210 and/or the persistent storage 212 may store instructions executed by the processing device 202. The memory 210 and/or the persistent storage 212 may also or alternatively store data used, generated, or collected by the processing system 106, such as data from the IMU 104 or one or more calculated positions and velocities.
The communications unit 206 supports communications with other systems or devices. The communications unit 206 may support communications through any suitable physical or wireless communication link(s), such as a network or dedicated connection(s). As a particular example, the communications unit 206 may support communications with the IMU 104 and any device or system that uses positions and velocities or other navigation-related information associated with the vehicle 102. The communications unit 206 includes any suitable structure configured to enable communications with one or more external components, such as a network interface card or a wireless transceiver.
The I/O unit 208 allows for input and output of data. For example, the I/O unit 208 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit 208 may also send output to a display or other suitable output device. Note, however, that the I/O unit 208 may be omitted if the device or system 200 does not require local I/O, such as when the device or system 200 represents a component that can be accessed remotely over a network.
In some embodiments, the device or system 200 uses navigation equations and provides a real-time system for navigation. A real-time system is a time-bound system that has fixed and well-defined time constraints. In this type of system, processing is performed within these defined constraints in order to ensure proper operation of the overall real-time system. Note, however, that any other suitable devices or systems may be used to implement the disclosed approaches described in this patent document (including non-real-time devices or systems).
Although
As shown in
The IMU 300 also includes multiple sensors 304 and 306 used to collect information about movements of a vehicle 102 or other object. The sensors 304 and 306 may be used to collect any desired information about the movements of a vehicle 102 or other object. For example, the sensor(s) 304 may represent one or more accelerometers configured to measure accelerations (changes in velocity) of the vehicle 102 or other object. As a particular example, the sensors 304 may include three accelerometers configured to measure accelerations relative to three axes. The sensor(s) 306 may include one or more gyroscopes configured to measure changes in angular orientation of the vehicle 102. As a particular example, the sensors 306 may include three gyroscopes configured to measure changes in angular orientation relative to three axes. Each of the sensors 304, 306 includes any suitable structure configured to measure one or more characteristics of movement of an object. Note that while shown as being separate components here, one or more accelerometers and one or more gyroscopes may be implemented in a common package.
The IMU 300 typically expresses its measurements within a coordinate system 308 that is distinct from the ECEF coordinate system or other external coordinate system. For example, the IMU 300 may define the x-axis of the coordinate system 308 as extending along the length (front-to-back) of the vehicle 102, the y-axis of the coordinate system 308 as extending across the width (side-to-side) of the vehicle 102, and the z-axis of the coordinate system 308 as extending across the height (top-to-bottom) of the vehicle 102. This may generally align the coordinate system 308 with the vehicle body's coordinate frame, which is convenient in various ways (although this is not necessarily required). Typically, the various axes of the coordinate system 308 are orthogonal to one another (although again this is not necessarily required). The processing system 106 or other suitable system of the vehicle 102 or other object may therefore transform measurements from the IMU 300 from a vehicle body coordinate system or IMU coordinate system 308 into ECEF-based information that can be used to identify the position and velocity of the vehicle 102 or other object in an ECEF frame of reference.
Although
The ECI frame of reference is defined by a +X axis 408a, a +Y axis 408b, and a +Z axis 408c. Note that all three axes also extend in the opposite directions (meaning −X, −Y, and −Z). In the EME J2000 frame of reference, the +X axis 408a is defined by the intersection of the mean equator 404 and the mean ecliptic (the plane of the Earth's orbit around the sun) as of the J2000 epoch. Also, the +Y axis 408b is defined as being 90° from the +X axis 408a in the plane of the mean equator 404. In addition, the +Z axis 408c is defined as being perpendicular to the mean equator 404 as of the J2000 epoch. All of these axes 408a-408c pass through an origin at a geocenter 412 of the Earth 402, which represents a center of mass of the Earth 402 (including the oceans and atmosphere). The ECI frame of reference is therefore fixed and does not rotate as the Earth 402 rotates.
In contrast, the ECEF frame of reference is defined by a +X axis 410a, a +Y axis 410b, and a +Z axis 410c. Again, all of these axes 410a-410c pass through the geocenter 412 of the Earth 402, and all three axes also extend in the opposite directions. In the ITRS frame of reference, the +X axis 410a is defined by the intersection of the mean equator 404 and the mean prime meridian 406. Also, the +Y axis 410b is defined as being 90° from the +X axis 410a in the plane of the mean equator 404. In addition, the +Z axis 410c is defined as being based on true north (geodetic north), meaning the +Z axis 410c extends from the geocenter 412 through the true north pole of the Earth 402 as defined by the mean spin axis of the Earth 402. Unlike the ECI frame of reference, the ECEF frame of reference is not fixed and rotates as the Earth 402 rotates.
The following discussion provides an example basis for navigation operations that can be performed by the processing system 106 for a vehicle 102 or other object using measurements from the IMU 104. More specifically, the following discussion provides an example basis for computations that can be performed by the processing system 106 for navigation using an ECEF frame of reference. As described above, the use of the ECEF frame of reference can provide various benefits or advantages, such as more efficient use of computing resources or use of less computing resources by the processing system 106.
The following nomenclature is used in the discussions below. Note that while often described as involving navigation for a vehicle 102, the discussion below can easily apply to any other moving object.
A transformation between the ECI frame of reference and the ECEF frame of reference is highly complex and depends on four factors, namely precession (a change in the orientation of the Earth's rotational axis), nutation (rocking or swaying of the Earth's rotational axis), the amount of the Earth's rotation, and polar motion (motion of the Earth's rotational axis relative to its crust). This transformation can be expressed as follows:
U(t) = \Pi \Theta N P   (1)
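As an illustration of Equation (1), the following sketch builds only the Earth rotation term of U(t), treating it as a simple rotation about the +Z axis and ignoring precession, nutation, and polar motion. This is a simplification for illustration only; the function name, the initial angle parameter, and the omission of the other three factors are assumptions, not part of this disclosure.

```python
import numpy as np

OMEGA_E = 7.2921150e-5  # Earth rotation rate, rad/s (WGS-84 value)

def eci_to_ecef_rotation_only(t_since_epoch, theta0=0.0):
    """Simplified U(t): Earth rotation term only (precession, nutation,
    and polar motion ignored). A rotation about the +Z axis by the
    Earth rotation angle theta = theta0 + OMEGA_E * t."""
    theta = theta0 + OMEGA_E * t_since_epoch
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,   s,   0.0],
                     [-s,   c,   0.0],
                     [0.0, 0.0,  1.0]])
```

Because the rotation is about +Z, vectors along the Earth's spin axis are unchanged by this simplified transformation.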
The total accelerations acting on the vehicle 102 in the inertial frame of reference (meaning the ECI frame of reference), including gravity, thrust, and aerodynamics, can be expressed as follows:
\ddot{\vec{r}}^i = \vec{g}^i + \vec{a}^i   (2)
The first time derivative of the DCM transformation from the ECEF frame of reference to the inertial frame of reference (defined as C_e^i above) can be expressed as follows:
\dot{C}_e^i = C_e^i \Omega_{ie}^e   (3)
The angular rate vector of the ECEF frame of reference relative to the ECI frame of reference used to populate the elements of Ω_ie^e in Equation (3) can be expressed as:
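The elements of Ω_ie^e in Equation (3) form the skew-symmetric (cross-product) matrix of this angular rate vector. A minimal sketch follows; the assumption that the Earth rate vector in ECEF axes is [0, 0, ω_E] is mine, not stated in this disclosure.

```python
import numpy as np

def skew(w):
    """Skew-symmetric cross-product matrix [w x], so that
    skew(w) @ v == np.cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

# Assumption: the Earth's rate vector expressed in ECEF axes lies along +Z.
OMEGA_E = 7.2921150e-5  # rad/s
omega_ie_e = np.array([0.0, 0.0, OMEGA_E])
Omega_ie_e = skew(omega_ie_e)  # the matrix used in Equation (3)
```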
Using the multiplication rule, the second time derivative of the DCM transformation from the ECEF frame of reference to the inertial frame of reference (defined as C̈_e^i above) can be expressed as follows:
\ddot{C}_e^i = C_e^i \dot{\Omega}_{ie}^e + \dot{C}_e^i \Omega_{ie}^e
\ddot{C}_e^i = C_e^i \dot{\Omega}_{ie}^e + C_e^i \Omega_{ie}^e \Omega_{ie}^e
\ddot{C}_e^i = C_e^i (\dot{\Omega}_{ie}^e + \Omega_{ie}^e \Omega_{ie}^e)   (5)
Assuming the Earth's rotation rate is constant, the following can be obtained:
\ddot{C}_e^i = C_e^i \Omega_{ie}^e \Omega_{ie}^e   (6)
The transformation of a position vector from the ECEF frame of reference to the ECI frame of reference can be expressed as follows:
\vec{r}^i = C_e^i \vec{r}^e   (7)
Taking the derivative with respect to time yields velocity, which can be expressed as:
\dot{\vec{r}}^i = C_e^i \dot{\vec{r}}^e + \dot{C}_e^i \vec{r}^e   (8)
Taking the second derivative with respect to time yields acceleration, which can be expressed as:
\ddot{\vec{r}}^i = C_e^i \ddot{\vec{r}}^e + \dot{C}_e^i \dot{\vec{r}}^e + \dot{C}_e^i \dot{\vec{r}}^e + \ddot{C}_e^i \vec{r}^e
\ddot{\vec{r}}^i = C_e^i \ddot{\vec{r}}^e + 2 \dot{C}_e^i \dot{\vec{r}}^e + \ddot{C}_e^i \vec{r}^e
\ddot{\vec{r}}^i = C_e^i \ddot{\vec{r}}^e + 2 C_e^i \Omega_{ie}^e \dot{\vec{r}}^e + C_e^i \Omega_{ie}^e \Omega_{ie}^e \vec{r}^e   (9)
Using Equation (2) and solving for r̈^e yields:
\vec{g}^i + \vec{a}^i = C_e^i \ddot{\vec{r}}^e + 2 \dot{C}_e^i \dot{\vec{r}}^e + \ddot{C}_e^i \vec{r}^e
C_e^i \ddot{\vec{r}}^e = \vec{g}^i + \vec{a}^i - 2 \dot{C}_e^i \dot{\vec{r}}^e - \ddot{C}_e^i \vec{r}^e
C_e^i \ddot{\vec{r}}^e = \vec{g}^i + \vec{a}^i - 2 C_e^i \Omega_{ie}^e \dot{\vec{r}}^e - C_e^i \Omega_{ie}^e \Omega_{ie}^e \vec{r}^e   (10)
Multiplying through by (C_e^i)^T yields:
\ddot{\vec{r}}^e = \vec{g}^e + \vec{a}^e - 2 \Omega_{ie}^e \dot{\vec{r}}^e - \Omega_{ie}^e \Omega_{ie}^e \vec{r}^e   (11)
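Equation (11) maps directly to code. The sketch below (function and variable names are assumptions) evaluates the ECEF acceleration from gravity, sensed acceleration, and the Coriolis and centripetal terms, again assuming the Earth rate vector lies along the ECEF +Z axis:

```python
import numpy as np

OMEGA_E = 7.2921150e-5  # Earth rotation rate, rad/s
# Skew-symmetric matrix of the Earth rate vector, assumed along ECEF +Z.
Omega = np.array([[0.0,     -OMEGA_E, 0.0],
                  [OMEGA_E,  0.0,     0.0],
                  [0.0,      0.0,     0.0]])

def ecef_accel(g_e, a_e, v_e, r_e):
    """Equation (11): ECEF acceleration from gravity g^e, sensed
    acceleration a^e, the Coriolis term -2*Omega*v^e, and the
    centripetal term -Omega*Omega*r^e."""
    return g_e + a_e - 2.0 * Omega @ v_e - Omega @ Omega @ r_e
```

For a point at rest on the ECEF +X axis with no gravity or thrust input, only the outward centripetal term remains, as expected from the equation.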
Note that Equation (11) is an ordinary, second-order differential equation, which can be integrated once to obtain velocity in the ECEF frame of reference and then once more to obtain position in the ECEF frame of reference. Equation (11) can be transformed into a system of two first-order ordinary differential equations by introducing ECEF velocity as follows:
Airborne accelerometers and other accelerometers used in vehicles 102 or other objects (such as in the IMU 104) measure the difference between inertial kinetic acceleration (a) and gravitational acceleration (g). This difference is called "specific force" and is essentially the sum of all contact forces divided by mass, which can be expressed as:
\vec{f} = \vec{a} - \vec{g}   (13)
For an accelerometer in free fall in a vacuum, a = g, so f = 0. For an accelerometer at rest in a gravitational field, a = 0, so f = -g. Since the IMU 104 does not sense gravity, acceleration due to gravity (mass attraction) is computed by the processing system 106 during travel so that the vehicle can navigate properly. As a result, specific force is essentially the on-board measurement of the accelerations due to thrust and aerodynamics along three (ideally orthogonal) IMU axes (such as in the coordinate system 308). This is typically output from the IMU 104 in the form of "delta V" increments over an IMU measurement cycle. The IMU measurement cycle is defined as a period of time (Δt), which in some instances can be on the order of milliseconds. The "delta V" increments can be expressed as:
\Delta\vec{V}^b(t+\Delta t) = \int_t^{t+\Delta t} \vec{a}^b(t)\,dt   (14)
The superscript "b" in Equation (14) indicates that the integration is carried out with reference to the vehicle body or IMU frame of reference (which may or may not be identical to one another). If the vehicle body frame of reference and the IMU frame of reference are not identical, they merely differ by an orthogonal transformation for the purposes of transforming the ΔV vector into vehicle body coordinates.
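Although the IMU 104 typically performs the integration in Equation (14) internally and outputs the delta-V increments directly, the integral can be sketched as follows; the sampling layout and names here are assumptions for illustration:

```python
import numpy as np

def delta_v(accel_samples, dt):
    """Approximate Equation (14): integrate sampled body-frame specific
    force over one IMU measurement cycle using the trapezoidal rule.
    `accel_samples` is an (N, 3) array of samples spaced dt seconds apart."""
    a = np.asarray(accel_samples, dtype=float)
    # trapezoidal rule: half weight on the first and last samples
    return dt * (0.5 * a[0] + a[1:-1].sum(axis=0) + 0.5 * a[-1])
```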
In order to maintain knowledge of the velocity and position of the vehicle 102 relative to an Earth-fixed frame of reference (the ECEF frame of reference), the vector ΔV can be transformed from the vehicle body/IMU frame of reference into the Earth-fixed frame. In order to accomplish this, an accurate estimate of the orthogonal transformation from the body/IMU frame of reference to the ECEF frame of reference can be maintained by the navigation system (such as by the processing system 106). This can be expressed as:
C_b^e = C_i^e C_b^i   (15)
This defines the body/IMU-to-ECEF transformation (C_b^e) as the product of the body/IMU-to-ECI transformation (C_b^i) and the ECI-to-ECEF transformation (C_i^e). The body/IMU-to-ECI transformation can be maintained using angular rates as measured by gyroscopes of the IMU 104 (which may occur in the same manner as in inertial navigation). The ECI-to-ECEF transformation can be determined based on computations by the processing system 106. In some instances, the IMU 104 contains three gyroscopes mounted on three (ideally orthogonal) axes that measure angular rates of the body/IMU frame of reference relative to the inertial frame of reference. These are typically output from the IMU 104 in the form of three "delta angles" about the IMU axes over the IMU measurement cycle. This can be expressed as:
\Delta\vec{\theta}_{ib}^b(t+\Delta t) = \int_t^{t+\Delta t} \vec{\omega}_{ib}^b(t)\,dt   (16)
The angular rate of the vehicle body/IMU frame of reference relative to the Earth-fixed frame of reference can be determined using the following expression:
\vec{\omega}_{eb}^b = \vec{\omega}_{ib}^b - C_e^b \vec{\omega}_{ie}^e   (17)
Unfortunately, Equation (17) involves the orthogonal transformation that is to be maintained. In other words, the expression used to compute ω_eb^b, which is needed to maintain C_e^b, relies on C_e^b itself. As a result, moving from one IMU measurement cycle to another requires storage of the C_e^b matrix, as well as an assumption that the C_e^b matrix has not changed (which can introduce small errors if incorrect). In some embodiments, this difficulty can be overcome by recognizing that ω_ie^e is small compared to ω_ib^b. The average angular rate of the body/IMU relative to the inertial frame of reference (meaning ω_ib^b) across the IMU measurement interval can be treated as a constant angular rate across that interval. Also, the angular rate of the Earth-fixed frame of reference relative to the inertial frame of reference (meaning ω_ie^e) can be regarded as constant over the IMU measurement interval. Using the orthogonal transformation and the angular rate of the Earth-fixed frame of reference relative to the inertial frame of reference from the previous IMU measurement interval, the delta angles of the body/IMU relative to the Earth-fixed frame of reference can be computed as follows:
\Delta\vec{\theta}_{eb}^b(t+\Delta t) = \Delta\vec{\theta}_{ib}^b(t+\Delta t) - C_e^b(t) \vec{\omega}_{ie}^e(t) \Delta t   (18)
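Equation (18) can be sketched directly (function and variable names are assumptions); it removes the Earth-rate contribution from the measured delta angles using the transformation stored from the previous measurement cycle:

```python
import numpy as np

OMEGA_E = 7.2921150e-5  # Earth rotation rate, rad/s

def delta_theta_eb(delta_theta_ib_b, C_e_b_prev, dt,
                   omega_ie_e=np.array([0.0, 0.0, OMEGA_E])):
    """Equation (18): delta angles of the body/IMU relative to the
    Earth-fixed frame, computed from the IMU's measured delta angles,
    the previous cycle's Earth-fixed-to-body transformation, and the
    (assumed constant) Earth rate vector."""
    return delta_theta_ib_b - C_e_b_prev @ omega_ie_e * dt
```

With an identity transformation and a measured delta angle exactly equal to the Earth's rotation over the cycle, the result is zero, as the equation predicts.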
A second-order algorithm to maintain the quaternion that represents the orthogonal transformation between the vehicle body/IMU frame of reference and the Earth-fixed frame of reference can be expressed as follows:
where:
Another technique for maintaining the orthogonal transformation from the Earth-fixed frame of reference to the vehicle body/IMU frame of reference can be used for vehicles with relatively short and known mission durations. For example, expendable launch vehicles, short-range strategic missiles, unmanned aerial vehicles (UAVs), and other types of vehicles 102 or other objects (such as over short segments of their mission paths when navigating with sensors alone) can fall into this category. The inertial-to-body or inertial-to-IMU transformation can be maintained throughout flight/travel using a second-order update and IMU delta angle measurements. Note that this scheme may use an initial ECEF-to-ECI transformation and an inertial-to-body frame/IMU transformation at some epoch that have been loaded into the navigation system. The transformation from the inertial frame of reference to the Earth-fixed frame of reference can be computed as follows.
The inertial to Earth-fixed transformation can be expressed as follows:
Here, Ω_θ has been redefined as:
Similarly, the inertial-to-body/IMU transformation given Δθ_ib^b(t+Δt) from the IMU 104 can be expressed as:
Here, Ω_θ has been redefined as:
As a result, the following can be obtained:
C_b^e(t+\Delta t) = C_i^e(t+\Delta t) (C_i^b(t+\Delta t))^T   (28)
Note that errors in the transformation from ECI to ECEF can build up over time with this scheme because (i) precession, nutation, and polar motion are not accounted for and (ii) it is assumed that the change in the ECI-to-ECEF transformation is a simple rotation about the ECEF +Z axis 410c. In some cases, quaternions representing the ECI-to-ECEF transformation (meaning quaternions representing C_i^e) and spanning the mission duration can be stored at a regular time step and interpolated, such as by using a standard quaternion interpolation algorithm. In other words, a sequence of transformations C_i^e(t_1), C_i^e(t_2), . . . , C_i^e(t_n) may be stored on board a vehicle 102 or otherwise stored for different times t_1, t_2, . . . , t_n, and interpolation may be used to obtain the transformation at the desired times.
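One standard quaternion interpolation algorithm that could serve this purpose is spherical linear interpolation (slerp). This disclosure does not specify which algorithm is used, so the following is a minimal sketch of slerp between two stored attitude quaternions:

```python
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions q0 and q1
    at fraction u in [0, 1]. Could be used to interpolate stored
    ECI-to-ECEF attitude quaternions between tabulated times."""
    q0 = np.asarray(q0, dtype=float); q1 = np.asarray(q1, dtype=float)
    dot = np.dot(q0, q1)
    if dot < 0.0:          # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
        q = (1.0 - u) * q0 + u * q1
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)
```

Interpolating halfway between the identity and a 90° rotation about one axis yields the 45° rotation about that axis, as expected.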
In these embodiments, it can be seen that $-2\Omega_{ie}^{e}\dot{\vec{r}}^{\,e}(t) - \Omega_{ie}^{e}\Omega_{ie}^{e}\vec{r}^{\,e}(t)$ is small and can be integrated using a first-order algorithm (such as rectangular integration). It can also be seen that the acceleration due to mass attraction is nearly constant over the IMU measurement cycle $\Delta t$. As a result, this leads to the following second-order position and velocity update formulas:
Equations (29) and (30) represent equations of motion that can be numerically integrated by the processing system 106 in order to provide accurate position and velocity of the vehicle 102 or other object over time. Note here that Equations (29) and (30) only involve the current state estimate and other data from the current time step (where the current state estimate comes from the immediately-prior iteration or measurement cycle), along with IMU updates. As a result, Equations (29) and (30) do not require storing data from multiple time steps, which is commonly needed in other techniques (such as those that rely on Simpson's rule or Runge-Kutta techniques).
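Since Equations (29) and (30) themselves are not reproduced in this text, the following is only a plausible sketch of one such second-order ECEF update cycle under common strapdown conventions: the specific-force increment ("delta V") is rotated into ECEF, combined with mass-attraction gravity and the first-order Coriolis and centripetal terms, and integrated over one cycle. The point-mass gravity model and all names are illustrative assumptions (a fielded system would use a higher-fidelity gravity model):

```python
import numpy as np

OMEGA_E = 7.2921151467e-5            # Earth rotation rate (rad/s)
MU = 3.986004418e14                  # Earth gravitational parameter (m^3/s^2)
OMEGA = np.array([[0.0, -OMEGA_E, 0.0],
                  [OMEGA_E, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])  # skew matrix of Earth rate about ECEF +Z

def gravity_e(r_e):
    """Point-mass gravity in ECEF; real systems add J2 and higher terms."""
    return -MU * r_e / np.linalg.norm(r_e) ** 3

def ecef_update(r_e, v_e, C_b_e, dv_b, dt):
    """One second-order ECEF position/velocity update over an IMU cycle.
    dv_b is the integrated specific-force increment ("delta V") in the body frame."""
    f_e = C_b_e @ dv_b / dt                        # specific force in ECEF
    # Total ECEF acceleration: specific force + gravity - Coriolis - centripetal
    a_e = f_e + gravity_e(r_e) - 2.0 * OMEGA @ v_e - OMEGA @ OMEGA @ r_e
    r_new = r_e + v_e * dt + 0.5 * a_e * dt * dt   # second order in dt
    v_new = v_e + a_e * dt
    return r_new, v_new
```

Consistent with the text, each cycle uses only the current state estimate and the current IMU increments; no history of prior time steps is retained.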
If needed or desired, it is also possible to determine the heading and geodetic latitude, longitude, and altitude of the vehicle 102 or other object. A local North-East-Down (NED) frame of reference is defined at the point on the surface of the Earth reference ellipsoid directly below a vehicle 102 or other object. In the NED frame of reference, the +X axis (N) points due north (true north), the +Y axis (E) points due east, and the +Z axis (D) points down along the local vertical (a line perpendicular to the plane tangent to the reference ellipsoid directly below the vehicle 102 or other object). Heading can be readily computed in this frame of reference. Here, the conversion from ECEF to NED can be accomplished using the following equations:
To compute heading from navigational data, the following can be used:
$$\vec{V}_{NED} = C_{ECEF}^{NED}\,\dot{\vec{r}}^{\,e} \qquad (34)$$
$$\psi = \operatorname{atan2}\!\left(V_{NED}[1],\, V_{NED}[0]\right) \qquad (35)$$
It should be noted that $\vec{V}_{NED}$ has little meaning other than being the parameterization of the object's velocity in the NED frame of reference.
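The heading computation of Equations (34) and (35) can be sketched as follows, using the standard ECEF-to-NED rotation at a given geodetic latitude and longitude (the function names are illustrative):

```python
import numpy as np

def C_ecef_to_ned(lat, lon):
    """Rotation matrix from ECEF to the local North-East-Down frame at
    geodetic latitude `lat` and longitude `lon` (radians)."""
    sp, cp = np.sin(lat), np.cos(lat)
    sl, cl = np.sin(lon), np.cos(lon)
    return np.array([[-sp * cl, -sp * sl, cp],
                     [-sl, cl, 0.0],
                     [-cp * cl, -cp * sl, -sp]])

def heading(lat, lon, v_ecef):
    """Heading angle psi from ECEF velocity, per psi = atan2(V_E, V_N)."""
    v_ned = C_ecef_to_ned(lat, lon) @ v_ecef   # Equation (34)
    return np.arctan2(v_ned[1], v_ned[0])      # Equation (35)
```

As the text notes, the intermediate NED velocity serves only to parameterize the object's velocity for the atan2 heading extraction.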
Latitude, longitude, and altitude can be computed from navigational data to arbitrary accuracy, such as by using the following iterative procedure. Initial computations can be performed as follows:
One or more iterations may then be performed, looping through the following process until an arbitrarily small change in latitude φ is detected:
The above process converges in a few iterations and should not present any computational problems for a modern real-time system. Also note that GPS raw data is supplied in the ECEF frame, which makes implementing a blended INS/GPS navigation solution more straightforward in the ECEF frame.
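The iterative geodetic conversion can be sketched as follows. Since Equations (36)-(42) are not reproduced in this text, the loop below uses the common fixed-point iteration on latitude via the prime-vertical radius of curvature; the WGS-84 constants and function names are illustrative assumptions:

```python
import numpy as np

A = 6378137.0              # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3      # WGS-84 first eccentricity squared

def ecef_to_geodetic(r_e, tol=1e-12):
    """Iteratively convert an ECEF position to geodetic latitude, longitude,
    and altitude, looping until the change in latitude falls below `tol`."""
    x, y, z = r_e
    lon = np.arctan2(y, x)
    p = np.hypot(x, y)                     # distance from the spin axis
    lat = np.arctan2(z, p * (1.0 - E2))    # initial latitude guess
    while True:
        n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime-vertical radius
        alt = p / np.cos(lat) - n
        lat_new = np.arctan2(z, p * (1.0 - E2 * n / (n + alt)))
        if abs(lat_new - lat) < tol:
            return lat_new, lon, alt
        lat = lat_new
```

As the text observes, the loop converges in a few iterations for terrestrial positions, and its cost is negligible for a modern real-time system.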
Note that the above has provided an explicit and detailed mathematical derivation of the equations of motion that may be used by the processing system 106 based on input from the IMU 104. Overall, this approach has various advantages, such as expressing gravity model equations in the Earth-fixed frame of reference, enhanced efficiency in implementing a blended INS/GPS navigation solution, and improved speed and efficiency in a real-time system. Accurate second-order state update equations can also be used, and unique techniques for maintaining the vehicle body or IMU-to-ECEF transformation can be used. In addition, the ability to convert navigation position and velocity in ECEF into heading and geodetic coordinates can be useful in some applications.
Although
As shown in
Update information is received during a measurement cycle from an IMU at step 504. This may include, for example, the processing system 106 receiving the update information from various sensors 304, 306 of the IMU 104. As particular examples, this may include the processing system 106 receiving "delta V" measurements identifying measured accelerations due to thrust and aerodynamics from the IMU 104 and receiving "delta angle" measurements identifying measured changes in angular orientation of the vehicle body or IMU frame of reference relative to an inertial frame of reference. An updated navigation state in the ECEF frame of reference is determined at step 506. This may include, for example, the processing system 106 using the update information to integrate or otherwise use Equations (29) and (30) above to identify the updated position and velocity of the vehicle 102 or other object.
A determination is made whether to convert the updated navigation state from the ECEF frame of reference to another frame of reference at step 508. If so, the ECEF-based updated navigation state is converted to another frame of reference at step 510. This may include, for example, the processing system 106 using Equations (31)-(42) above and an iterative process to convert the ECEF-based updated navigation state into a heading and geodetic latitude, longitude, and altitude of the vehicle 102 or other object. Note that conversions into one or more other or additional frames of reference may also or alternatively occur here.
The updated navigation state (in the ECEF frame of reference and/or another frame of reference) can be displayed, output, or otherwise used in some manner at step 512. This may include, for example, the processing system 106 using the updated navigation state (or outputting the updated navigation state to another component) to monitor a travel path of the vehicle 102 or other object and, if necessary, make adjustments to operation of the vehicle 102 or other object in order to keep the vehicle 102 or other object on or near a desired travel path. This may also or alternatively include the processing system 106 using the updated navigation state (or outputting the updated navigation state to another component) to display current navigation-related information, possibly along with prior navigation-related information and/or predicted future navigation-related information, to an operator or other personnel associated with the vehicle 102 or other object. Note, however, that the updated navigation state may be used in any other or additional manner.
The process of obtaining updated IMU information and identifying/using an updated navigation state based on the updated IMU information can be repeated any number of times during any number of iterations and at any suitable regular or other interval of time. For instance, the measurement cycle of the IMU 104 and the associated processing by the processing system 106 may occur at a specified time interval, such as every ten milliseconds or at some other regular interval of time. This allows the processing system 106 to repeatedly identify updated navigation state information for the vehicle 102 or other object over time and to use the updated navigation state information as needed or desired.
Although
In some embodiments, various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive (HDD), a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable storage device.
It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer code (including source code, object code, or executable code). The term “communicate,” as well as derivatives thereof, encompasses both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
The description in the present disclosure should not be read as implying that any particular element, step, or function is an essential or critical element that must be included in the claim scope. The scope of patented subject matter is defined only by the allowed claims. Moreover, none of the claims invokes 35 U.S.C. § 112(f) with respect to any of the appended claims or claim elements unless the exact words “means for” or “step for” are explicitly used in the particular claim, followed by a participle phrase identifying a function. Use of terms such as (but not limited to) “mechanism,” “module,” “device,” “unit,” “component,” “element,” “member,” “apparatus,” “machine,” “system,” “processor,” or “controller” within a claim is understood and intended to refer to structures known to those skilled in the relevant art, as further modified or enhanced by the features of the claims themselves, and is not intended to invoke 35 U.S.C. § 112(f).
While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 63/025,046 filed on May 14, 2020, which is hereby incorporated by reference in its entirety.