The present invention relates generally to inertial navigation in GPS-denied environments and to localization methods that integrate diverse sensors distributed over different platforms, including methods for cooperation among such sensors and platforms and for opportunistic synchronization with GPS.
It is an object of this invention to improve upon an Inertial Navigation System (INS) by allowing for updates from the Global Positioning System (GPS) to correct for navigation errors.
It is an object of this invention to combine the Inertial Navigation System with other sensors, including hybrid sensors such as RF ranging and navigation technology, visual trackers, and other sensing systems, to create a navigation system that functions in a GPS-denied environment.
This invention is based on the realization that Inertial Navigation Systems can be improved by using a sensor-based navigation architecture that enables sensors, regardless of their type, nature and intrinsic capabilities, to be robustly and cost-effectively incorporated into the navigation system of each mobile user (or vehicle) while leveraging and making optimal use of communication between such users. To achieve such robustness and cost effectiveness, this navigation architecture incorporates a number of key enabling capabilities, which are briefly summarized below:
1. Architectural framework which drastically reduces the number of different interfaces and maximizes navigation performance through
2. Architecture which dynamically configures and reconfigures based on
3. Support for any type of sensor including sensor-vehicle interaction through
4. Support for intra- and inter-vehicle sensor configurations including flexible vehicles (e.g., human) and distributed mobile vehicles through flexible distributed vehicle maps.
a describes the most elementary object consisting of one sensor S1 and one feature F1.
b illustrates the case where a common feature F1 is observed by two sensors S1 and S2 on the same vehicle.
c illustrates the case where features F1 and F2 are observed by the same sensor S1, and features F1 and F2 are related to each other.
a depicts a sensor combination using two identical cameras mounted on the vehicle, one after the other along the main direction of motion, facing down and forward.
1. Vehicle-Referenced (VR) Sensors Group—VR sensors include all sensors attached to a vehicle which provide vehicle pose and pose-derivative information at the current time relative to the pose of the vehicle at previous times. Examples of VR sensors include IMU/INS, speedometers, accelerometers, gyroscopes, related encoders (e.g., for counting steps and measuring wheel rotation) and point-referenced range/Doppler sensors.
2. Global (coordinates) Sensors Group (GSG)—GSG sensors include any sensor that can be used to fix and/or reduce the uncertainty of a vehicle's pose relative to the World (i.e., the global coordinate system). The group includes GPS, features (e.g., RF transmitters) populated at known locations, road and building maps including the ability to determine whether the vehicle is located near a specific feature (e.g., a window or road intersection), smart maps including semantic references to features within a map, any type of "outside-in" sensor placed at a globally known location that senses the vehicle and transmits such information to the vehicle (e.g., a video camera in a mall), and any pose constraint (e.g., confinement to a room or area).
3. Environment Features (EF) Sensing Group—EF sensors include any type of sensor that can detect a specific feature placed (e.g., a paper fiducial or standard of reference), located (e.g., light fixture, wall opening, etc.) or detected (e.g., edge, color gradient, texture) in the environment. Typically, these sensors are accompanied by some form of feature labeling and mapping objects. Examples of EF sensors include photo and video cameras, transmitters with identifiable temporal and/or spectrum characteristics (e.g., RF emitters including communication devices), and transducers capable of detecting specific physical (e.g., magnetometers) or chemical properties.
In our architecture, in order to simplify the interfaces and facilitate the interaction between sensor groups, each sensor group contains:
Specifically, in this architecture, each of these groups composes a generalized sensor which we will refer to as a "sensor group object." The sensor group object includes and encapsulates the capabilities of discovering, identifying, initializing, calibrating, switching ON and OFF and, in general, communicating with the corresponding sensors through an Application Programming Interface (API) common to and/or specific for the group. In addition to interfacing with the corresponding sensors, the sensor group object is capable of (1) selecting and/or merging measurement data and/or information from multiple sensors; and (2) contributing to the update of corresponding "maps" so as to maximize the group's contribution to the overall navigation system performance. In our architecture we focus on the APIs that enable flexible selection of participating sensors within each group, and on the corresponding abstractions, including rules for merging data and information from otherwise different sensors.
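By way of illustration only, a minimal sketch of such a sensor group object is shown below. The class, method and bus/sensor-handle names are assumptions made for this sketch and are not limitations of the architecture described above.

```python
# Minimal sketch of a "sensor group object" (all names are illustrative assumptions).
from abc import ABC, abstractmethod

class SensorGroupObject(ABC):
    """Encapsulates discovery, identification, calibration, power control and
    measurement merging for all sensors belonging to one group (VR, GSG or EF)."""

    def __init__(self):
        self.sensors = {}   # sensor_id -> sensor handle (assumed provided by the platform)
        self.api_db = {}    # sensor_id -> Core-level API record

    def discover(self, bus):
        """Enumerate sensors on an assumed bus object and register their Core-level APIs."""
        for sensor in bus.enumerate():
            self.sensors[sensor.sensor_id] = sensor
            self.api_db[sensor.sensor_id] = sensor.core_api()

    def set_power(self, sensor_id, on: bool):
        self.sensors[sensor_id].power(on)

    def calibrate(self, sensor_id):
        return self.sensors[sensor_id].calibrate()

    @abstractmethod
    def merge_measurements(self, timestamp):
        """Select and/or merge measurements from the group's sensors into a single
        group-level measurement reported to the Main Navigation Loop."""
```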
Additionally, to simplify the interfaces, and since most of the "aiding" to navigation will result from each vehicle's interaction with the immediate environment and from networking, a related encapsulated navigation function was added as a "cluster navigation object." Specifically, the Main Navigation Loop is composed, as per
To simplify the interfaces and the functions performed by Cluster Navigation even further, the Environment Features group object includes novel "inheritance" abstractions. These abstractions, as described in the next section, simplify the handling of multiple sensors (including heterogeneous sensors) distributed over multiple vehicles by converting and merging related measurements into a standard format within the Environment Features Sensing Group. This "inheritance abstraction" capability enables easy "plug-and-play" of environment-related aiding sensors into the overall navigation capability without requiring the same type of sensor (e.g., a camera) to be installed in all participating vehicles.
Also importantly, the architecture object of this invention is not centered on simultaneous localization and mapping (SLAM) map building. On the contrary, if "building a map" is included in the mission objective, and/or the building of such a map becomes relevant for navigation (i.e., reduces the position uncertainty and/or enables locating the vehicle within a mission-provided map or route), the system object of this invention can integrate SLAM as part of the Cluster Navigation capability.
In the following section, we provide a description of the detailed architecture, including the internal structure of the various groups and their interaction with the Main Navigation Loop, through an example CONOPS, while focusing on two objectives of our proposed research: 1) simplifying the aiding and integration of new sensors; and 2) achieving mission navigation goals for long-term position uncertainty while using aided sensing navigation.
Global Navigation Manager
The sensor API includes at least three levels: Core, Specific and Group. The Core-level API is common to all sensors and includes, in addition to sensor ID and type, information (or how to access information) about basic availability (e.g., ON/OFF and calibration status). The Specific-level API, as the name suggests, has information specific to the sensor (e.g., pixel sensitivity and shutter speed for a camera, maximum update rate and noise for an IMU, or inertial measurement unit), including abstractions and rules governing the transformation of sensor-specific measurements into position and/or navigation parameters that are common to all sensors in the group. The Group-level API includes parameters and rules that are common to the group (and meaningful for navigation), including rules relating power, processing and figures of merit (for navigation) and rules for merging group-level measurements with corresponding measurements from other sensors in the same group. The Core level is required, while the Specific and Group levels can be obtained from a database (local and/or accessible through the network). At initialization (or when a new sensor is plugged in), the sensor group reports to a Global Navigation Manager (GNM) every sensor that is available or becomes available. APIs may be missing and some sensors may not be calibrated. The GNM is aware of mission requirements and eventually becomes aware of all sensors in the system, including their availability and readiness (full API and calibrated) to be called upon for navigation. The mission is dynamic and may call for different sensors in different "situations" and at different times. The mission requirement includes cost/performance rules and, over the course of a mission, sensors may be activated, replaced or added as needed. Also, completing APIs and "calibrations" can be performed "on-the-fly."
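A hedged sketch of how the three API levels could be represented as data records is given below; the field names and the use of Python dataclasses are assumptions of this sketch, not a prescription of the invention.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class CoreAPI:
    """Required for every sensor: identity and basic availability."""
    sensor_id: str
    sensor_type: str          # e.g., "IMU", "camera", "GPS"
    powered_on: bool = False
    calibrated: bool = False

@dataclass
class SpecificAPI:
    """Sensor-specific parameters plus the rule that converts raw measurements
    into the position/navigation quantities common to the group."""
    parameters: Dict[str, float] = field(default_factory=dict)  # e.g., shutter speed, noise density
    to_group_measurement: Optional[Callable] = None             # raw measurement -> group-level quantity

@dataclass
class GroupAPI:
    """Group-wide rules: power/processing/figure-of-merit trade-offs and the rule
    for merging this sensor's output with others in the same group."""
    figure_of_merit: Callable   # estimates the navigational value of using this sensor
    merge_rule: Callable        # combines measurements within the group
```

Consistent with the initialization behavior described above, the Specific- and Group-level records would be looked up in a local or networked API database keyed on sensor ID and type, while the Core-level record is always reported by the sensor itself.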
Finally, and probably most importantly, the GNM reports the current positional accuracy of the vehicle to the Mission Computer. When the positional accuracy approaches the limit of acceptability (e.g., 9 m rms accuracy versus a 10 m rms maximum) and no GSG update is expected soon to reduce the uncertainty, the GNM advises the Mission Computer about possible vehicle actions. These actions may include requesting more sensors and/or corresponding APIs, calibration of not-yet-calibrated sensors, turning sensors ON/OFF (including powering currently unpowered sensors), slowing the vehicle's motion, going back along the trajectory to reduce uncertainty and repeating the last segment of the mission with added-on sensors to improve the local map, moving off the mission trajectory toward the Global Sensors Group (GSG) for a faster update, or meeting and communicating with another player to form a distributed vehicle.
Global Sensors Group
In general, any measurement from any sensor or source which reduces the global pose uncertainty can be interpreted as belonging to the Global Sensors Group. The navigation system receives geolocated updates at a certain rate (for example, once per hour), and each geolocated measurement allows the vehicle pose uncertainty to be reduced (e.g., from 10 m rms to 2 m rms).
This API mechanism is the same for all sensor groups, and the API database can be shared by all sensors on the vehicle, or each sensor group can have its own API database. In the case of the GSG, the Constraints Processing Software talks with any sensor through its API. During the enumeration phase (or when a new sensor is added), the corresponding API is retrieved from the API database. If the vehicle computer does not have a particular API, it can send a request through the network to obtain the missing API. Another way to handle this is to download new APIs each time the vehicle starts a new mission. Either of these methods represents a dynamic update of the API database.
In this architecture, in order to simplify the interfacing, we assume that each global sensor includes any hardware and/or software needed to determine the corresponding coordinates and uncertainties over time. This allows for easy combination of measurements from a multitude of sensors, enabling the GSG to interact with the Main Navigation Loop (MNL) as a single sensor through a single Group-level API. In our architecture, the Group-level API includes the merging of constraints that may arise from other types of sensors, such as environment sensing. In certain configurations, several such constraints can be received shortly one after another. For example, a vehicle may be seen simultaneously by two motion-detection sensors placed at known poses in the environment, enabling accurate vehicle location and/or reduction of the corresponding location uncertainty. In this architecture, such constraints from multiple sensors are combined and acted upon together through a common Group-level API.
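As one illustration, two near-simultaneous position constraints with Gaussian uncertainty could be merged inside the GSG by standard covariance-weighted fusion before a single group-level update is passed to the MNL. The sketch below assumes Gaussian errors and is only one possible merging rule, not the rule prescribed by the architecture.

```python
import numpy as np

def fuse_position_constraints(p1, P1, p2, P2):
    """Merge two position fixes (mean, covariance) into one group-level fix
    using information (inverse-covariance) weighting."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)          # fused covariance (never larger than either input)
    p = P @ (I1 @ p1 + I2 @ p2)         # fused mean
    return p, P

# Example: two motion-detection sensors at known poses each localize the vehicle (metres).
p_a, P_a = np.array([12.0, 4.0]), np.diag([4.0, 1.0])
p_b, P_b = np.array([11.0, 5.0]), np.diag([1.0, 4.0])
p_fused, P_fused = fuse_position_constraints(p_a, P_a, p_b, P_b)
```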
In general, it should be emphasized that some features (targets) can be installed on the vehicle (i.e., to be discovered by outside-in sensors). In this case, building a Vehicle Features Map may become relevant. In our architecture, such functionality, although typical of "environment sensing," is assumed to be an integral part of the outside-in sensor and is included in the Global Sensors Group. In our research, we assess the need for such functionality vis-a-vis mission requirements, as the dimensions of a typical vehicle may be smaller than the required position accuracy.
Vehicle Referenced Sensor Group
The Vehicle-Referenced (VR) Sensor Group, illustrated in
The majority of sensors in the VR sensor group generate information at a fixed rate determined by the specific hardware, and that rate may vary from sensor to sensor. The Timing Synchronizer handles the various update rates, performs prediction, down-sampling and interpolation as required, and provides synchronized measurements to the Measurements Merger. The Timing Synchronizer also uses the capabilities of the Group-level API to facilitate the merger of measurements from different sensors.
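As an illustration only, a minimal timing synchronizer could keep a short buffer per sensor and interpolate each stream to a common set of fusion timestamps. The interface below is an assumption of this sketch (scalar-valued channels, linear interpolation) rather than the required implementation.

```python
import numpy as np

class TimingSynchronizer:
    """Aligns measurement streams with different native rates to common fusion times."""

    def __init__(self):
        self.buffers = {}   # sensor_id -> ([timestamps], [scalar values])

    def push(self, sensor_id, t, value):
        """Append one time-stamped scalar measurement from a sensor."""
        times, values = self.buffers.setdefault(sensor_id, ([], []))
        times.append(t)
        values.append(value)

    def sample(self, sensor_id, t_query):
        """Linear interpolation inside the buffer; nearest-sample hold at the edges."""
        times, values = self.buffers[sensor_id]
        return np.interp(t_query, np.asarray(times), np.asarray(values))
```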
The Measurements Merger, facilitated by the Group-level APIs included in this architecture, merges measurements of similar type (e.g., angular velocity from a gyro and angular velocity from a wheel encoder) and produces a single angular velocity measurement for the group. Overall, for sensors rigidly attached to the vehicle, all measurements are converted into an 18-element vector in the vehicle coordinate frame containing 3 positions, 3 orientations, and their first and second derivatives. Non-rigid vehicles are typically described in terms of rigid segments (or parts) and joints, and a complete description of the vehicle's pose includes an 18-element measurement for each joint. Rigidity aspects of the vehicle may not be relevant at all times and may vary with mission requirements and environment. In our architecture we will use the capabilities of the proposed Group-level API to simplify the interface (and the dimensionality) of the measurements the VR group exchanges with the Main Navigation Loop. Specifically, and whenever possible, we will integrate measurements inside the VR sensor group to provide only the coordinates of the vehicle referenced to a selected point (e.g., the center of the body) while keeping the dimensionality of the output vector the same as in the case of a totally rigid vehicle. For this we will include descriptions, including abstractions and processing rules, relating to vehicle parts and rigidity in the corresponding Group-level API.
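A hedged sketch of one merge step (angular velocity from a gyro and from a wheel encoder) and of the 18-element group output described above is given below. The inverse-variance weighting rule and the variable names are illustrative assumptions.

```python
import numpy as np

def merge_channel(values, variances):
    """Inverse-variance weighted merge of redundant measurements of one quantity."""
    w = 1.0 / np.asarray(variances)
    merged = np.sum(w * np.asarray(values)) / np.sum(w)
    merged_var = 1.0 / np.sum(w)
    return merged, merged_var

def vr_group_output(pos, ori, vel, ang_vel, acc, ang_acc):
    """18-element vehicle-frame vector: 3 positions, 3 orientations,
    and their first and second derivatives."""
    return np.concatenate([pos, ori, vel, ang_vel, acc, ang_acc])   # shape (18,)

# Example: gyro and wheel encoder both observe yaw rate (rad/s) with different noise.
yaw_rate, yaw_var = merge_channel([0.11, 0.09], [1e-4, 4e-4])
```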
At the end, the VR sensor group contribution to the Main Navigation Loop of
Environment Features Sensing Group
Most of the navigation aiding capabilities (e.g., aiding to the INS, including updates and fixes from global sensors) will result from observations of and interaction with the immediate environment and from networking with other vehicles. In our architecture, the corresponding sensors, including interfacing, calibration, feature labeling, mapping, processing and "merging," are handled by the Environment Features (EF) sensing group. EF contains different types of sensors that can sense features located in the immediate environment, including cameras (IR, UV, visible), magnetometers, radars and laser detection and ranging (LADAR) sensors, RF sensors, and interaction with other vehicles through communications.
It makes sense for two vehicles to communicate with each other and exchange information if at least one of the following conditions holds:
The overall navigation capability may (and probably will) benefit from the two-way exchange of pose-related information, possibly including pose information from multiple features. The actual benefit of such an exchange, including the related communication and correlation-processing costs, has to be considered by the overall navigation capability. For this, in our architecture, we will structure the APIs for inter-vehicle communications to include rules that enable assessing the "navigational value" of each exchange for each of the sensors involved before any volume- (and power-) consuming transmission takes place. In the following paragraphs we present an exemplary scenario that we will analyze and expand in our research in order to derive general rules that can be integrated in our APIs and processing, and then used by each vehicle in decisions related to whether or not to communicate, including decisions about what to communicate and the update rates of such communications.
Let us assume two vehicles. In general, Vehicle #1 will decide on the navigational benefit of communicating with Vehicle #2 based on whether it could receive three different types of information from Vehicle #2:
The most elementary object is the one consisting of one sensor S1 and one feature F1 (shown on
If S1 and S2 are different types of sensors, the generation of information in a way that is relevant and usable for navigation may require merging different types of information. For example, a camera providing a bearing angle and a rangefinder providing range to a common feature together effectively provide the 3-D coordinates of that feature.
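For concreteness, the bearing-plus-range combination mentioned above reduces to a spherical-to-Cartesian conversion in the vehicle frame; a minimal sketch, with the frame convention (x forward, y left, z up) assumed:

```python
import numpy as np

def bearing_range_to_xyz(azimuth, elevation, rng):
    """Convert bearing angles (radians) and range (metres) to Cartesian coordinates
    of the feature in the sensor/vehicle frame (x forward, y left, z up assumed)."""
    x = rng * np.cos(elevation) * np.cos(azimuth)
    y = rng * np.cos(elevation) * np.sin(azimuth)
    z = rng * np.sin(elevation)
    return np.array([x, y, z])

# Feature seen 10 degrees left of boresight, 5 degrees up, at 25 m range.
f_xyz = bearing_range_to_xyz(np.radians(10.0), np.radians(5.0), 25.0)
```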
The next case is illustrated in
The above relationships can be described by constructs from graph theory. Specifically, they can be formalized using the following set of definitions and rules:
Assume that sensor S1 belongs to Vehicle #1 and sensor S2 belongs to Vehicle #2. Then Vehicles #1 and #2 will initially each have two objects (S1-F1-F2 and S1-F3 for Vehicle #1, and S2-F2 and S2-F3 for Vehicle #2). Then, by forming a Distributed Vehicle, in which Vehicle #1 inherits information from Vehicle #2, one can form more complex objects:
In our research, we will integrate both communications and graph-theory constructs to create a rule-based API and an object-based communications protocol that enable integrating the concept of "distributed vehicles" in a way that enhances both each individual vehicle's navigation and the overall combined vehicle navigation.
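A minimal sketch of the graph representation hinted at above is shown below: sensors and features are nodes, observations and feature-feature relations are edges, and forming a distributed vehicle amounts to taking the union of the two vehicles' graphs. The class and method names are assumptions of this sketch.

```python
from collections import defaultdict

class SensorFeatureGraph:
    """Nodes are sensor or feature IDs; edges record observations (sensor-feature)
    or known relations (feature-feature)."""

    def __init__(self):
        self.edges = defaultdict(set)

    def observe(self, sensor_id, feature_id):
        self.edges[sensor_id].add(feature_id)
        self.edges[feature_id].add(sensor_id)

    def relate(self, feature_a, feature_b):
        self.edges[feature_a].add(feature_b)
        self.edges[feature_b].add(feature_a)

    def merge(self, other):
        """Form a distributed-vehicle graph by inheriting another vehicle's edges."""
        merged = SensorFeatureGraph()
        for graph in (self, other):
            for node, nbrs in graph.edges.items():
                merged.edges[node] |= nbrs
        return merged

# Vehicle #1: S1 observes F1, F2 (related) and F3; Vehicle #2: S2 observes F2 and F3.
g1, g2 = SensorFeatureGraph(), SensorFeatureGraph()
g1.observe("S1", "F1"); g1.observe("S1", "F2"); g1.relate("F1", "F2"); g1.observe("S1", "F3")
g2.observe("S2", "F2"); g2.observe("S2", "F3")
g12 = g1.merge(g2)   # S1 and S2 are now linked through the shared features F2 and F3
```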
The described architecture framework allows the design of different navigation systems with different performance, cost and size requirements. The selection of specific sensors depends heavily on mission requirements. In subsection 1 we present an exemplary worst-case mission scenario. In subsection 2 we show a preliminary design of a multi-sensor system that can handle such a mission. In the remaining two subsections we present a short analysis of an exemplary IMU and of aiding sensors to be used in a navigation system.
IV.1 Exemplary Worst Case Mission Scenario and Uncertainty Propagation
Suppose that the mission for a certain Vehicle A, as per Specification, is to travel from point S (Start) to point F (Finish) as depicted on
The vehicle's pose uncertainty, 71, will grow until useful Globally Referenced sensor information (not from GPS) is received, for example, when Vehicle A meets Vehicle B (with positional uncertainty ellipse EB). Assume now that when Vehicle A reaches a generic intermediary point, Vehicle B measures its range to A, with uncertainty in range and angle as depicted by the yellow sector Y. The updated location of A, based on B's position error and the range measurement, can be calculated by adding the B-uncertainty ellipse EB at each point of sector Y (i.e., centered at that point). As a result we obtain the B-based set EA(B) as the new uncertainty (i.e., A-uncertainty) for Vehicle A. The intersection of this set with the original uncertainty set EA is the new, decreased uncertainty set EA for A. Formally, for a generic time "t", this can be written as
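(A minimal formal statement consistent with the description above; the Minkowski-sum notation ⊕ and the set symbols are assumptions of this restatement.)

```latex
E_A^{(B)}(t) = Y(t) \oplus E_B(t), \qquad
E_A(t) \leftarrow E_A(t) \cap E_A^{(B)}(t)
```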
In the worst case, however, there is no Globally Referenced sensor information available before arrival at point F. Also, there are no other vehicles participating in the mission (e.g., the path from S to F goes through a tunnel, and the tunnel is curved such that communication with points S and F is possible only at the very beginning and the very end of the mission, respectively), so Vehicle A cannot improve its pose during the mission by exchanging information with other vehicles. In addition, there is no tunnel map. Also, to make the problem more realistic (and closer to a "worst case"), the tunnel has a varying width that is wider than 10 m (i.e., larger than the maximum allowed uncertainty) and the tunnel is significantly curved, such that range and Doppler sensors can be used only for limited times. Let us also assume that the tunnel walls are relatively smooth and uniformly colored, so there are not many features available for cameras. Finally, the tunnel floor is wet and slippery, making wheel encoders not very accurate.
IV.2 Exemplary Preliminary System Design
To fulfill the above mission, the navigation system will have to have accurate VR sensors, which will allow the vehicle to cover as long a distance D as possible before reaching 10 m rms pose uncertainty. This distance D is calculated using only the VR sensors and, typically, for low-cost sensors, will be just a fraction of the entire tunnel length. In order to slow down the growth of positional uncertainty, we assume that the navigation system will have a number of EF sensors, which will provide accurate vehicle position updates relative to a segment of the local environment, such that pose accuracy will be nearly unchanged from the beginning to the end of each segment.
Specifically, we assume a preliminary configuration consisting of the following sensors:
0. GPS receiver (marked by 0 not to count it as one of “indoor” sensors)
1. IMU
2. Speedometer/wheel encoder
3. Laser rangefinder
4. Camera 1
5. Portable RF (radar) providing Range and Doppler
6a. Camera 2 factory mounted together with Camera 1
6b. Separate communication Channel
In what follows we provide an exemplary initial characterization of the above sensors.
IV.3 Inertial Measurement Unit (IMU)
The typical Inertial Navigation System consists of 3 gyroscopes measuring angular velocity and 3 accelerometers measuring linear acceleration. Often other sensors, such as a magnetometer, compass, inclinometer, altimeter, etc., are added either to provide some reference point or to compensate for local anomalies. Without going into the specifics of different manufacturing technologies for inertial sensors, their accuracy is approximately a function of their size, weight and cost. When everything is reduced to the level of "miniature IMUs," performance becomes an issue. Because of drifts in gyros and especially in accelerometers, typical miniature systems are capable of keeping reasonable positional accuracy for, at most, several seconds. As an example, in 2009, InterSense released the first single-chip IMU-INS unit (called NavChip), which at the time outperformed other existing miniature IMUs. For AIDED-NAV we will use the NavChip as an INS benchmark subsystem. Relevant NavChip specifications are summarized in Table 1.
With the NavChip angular random walk, after the one-hour mission time (i.e., crossing the tunnel), the vehicle will have accumulated an error of only 0.25 degrees of orientation uncertainty. However, it can easily be calculated that for a vehicle traveling for 1 hour at a linear speed of 10 km/h, this angular uncertainty translates into about 40 meters of position uncertainty. The accumulated uncertainty of the accelerometers of the exemplary NavChip can be expressed as
The accelerometers will reach 10 m rms uncertainty in about 10 seconds, and about 108 m rms after one hour. The accelerometer performance can be enhanced with internal filters and tight integration with the INS. In our system we assume such tight integration and internal filtering so that, with the aided accelerometers, we will be able to keep the vehicle within 10 m uncertainty for about a minute.
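The figures quoted above can be checked with a short back-of-the-envelope computation; the residual accelerometer bias used below is an illustrative assumption chosen only to reproduce the 10-second figure, not a NavChip specification.

```python
import numpy as np

speed_mps   = 10_000 / 3600.0        # 10 km/h expressed in m/s
mission_s   = 3600.0                 # one-hour tunnel crossing
heading_err = np.radians(0.25)       # accumulated orientation error quoted above

# Cross-track position error caused by a constant heading error over the mission:
cross_track = speed_mps * mission_s * np.sin(heading_err)   # ~ 44 m, i.e. "about 40 m"

# Time for an assumed constant residual accelerometer bias b to produce 10 m of error,
# using the double-integration growth 0.5 * b * t**2:
b = 0.2                              # m/s^2, illustrative value only
t_to_10m = np.sqrt(2 * 10.0 / b)     # = 10 s, consistent with the text above
```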
IV.4 Aiding Sensors
To reduce uncertainty one can consider adding extra accelerometers (and gyros)
Inexpensive speedometers in the form of encoders or other devices capturing wheel motion are widely available commercially. Those devices are typically reported to provide linear speed with about 10% accuracy, meaning that for a vehicle moving at 10 km/h it will take about a minute to reach 10 m rms positional error. For human motion, an accelerometer that counts steps can be considered as an additional sensor, also featuring about 10% accuracy in stride-length estimation. Speedometers will also benefit from tight integration with the INS. In this exercise we assume that the combined INS + speedometer system would enable the system to be kept within 10 m rms for several minutes.
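The "about a minute" figure follows from the same kind of estimate; a sketch, assuming the 10% speed error accumulates in the worst case as a constant bias:

```python
speed_mps     = 10_000 / 3600.0       # 10 km/h expressed in m/s
speed_err_mps = 0.10 * speed_mps      # 10% speedometer error
t_to_10m_rms  = 10.0 / speed_err_mps  # ~ 36 s, i.e. roughly "about a minute"
```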
In this example, the vehicle is located at point A at a given time, as depicted in
It is important to note that although the measurement updates from a simple rangefinder are typically instantaneous and generated at the same rate as the INS, they are typically not enough for an effective improvement of the vehicle's positional uncertainty. Their usability can be improved when combined with a camera, as described next.
Aiding Sensor #4: Camera
A single camera provides the bearing angle of a selected feature relative to the vehicle. If a rangefinder is slaved to the camera, then the range to every feature can be added to the bearing angle; this enables measuring all 6 coordinates of the feature F relative to the vehicle at time t and then recalculating the vehicle pose for time t+1, until AIDED-NAV can sense another feature. Of course, at least 4 features are needed to unambiguously calculate the vehicle pose, and the more features one uses, the more accurately the vehicle pose can be calculated.
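As an illustration of the pose computation from several features, the sketch below recovers the vehicle-to-map rigid transform from N non-collinear feature coordinates known in both frames using the standard Kabsch/Horn least-squares method; this is one common possibility, not the specific method prescribed by the invention.

```python
import numpy as np

def rigid_transform(feats_vehicle, feats_map):
    """Least-squares rotation R and translation t with feats_map ~= R @ feats_vehicle + t.
    Inputs are (N, 3) arrays of the same features expressed in the two frames."""
    A, B = np.asarray(feats_vehicle), np.asarray(feats_map)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                   # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t   # the vehicle pose in the map frame follows directly from (R, t)
```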
A typical modern VGA camera (640×480 pixels) is capable of taking about 60 frames per second. High-end models (for instance, cameras featuring the Kodak KAI-0340M CCD chip) can go as fast as 210 frames/sec, but they are heavy and require lots of power, making their use problematic on small platforms. CMOS cameras are considerably less expensive, ranging from below $100 to several hundred dollars. Exemplary high-quality CMOS chips are currently manufactured by Micron (sold by Aptina Imaging), with particularly inexpensive high-quality Micron-based cameras available from PointGrey, Unibrain, IMI and others. Given that frames need to be captured, transferred to computer memory and processed there, it is unlikely one will need cameras faster than 30 frames/sec. With a rate of 30 frames/sec or slower, one would expect to be able to have real-time processing on a PDA or cell-phone-class computer.
Aiding Sensor #5: Doppler and Range sensor
Portable radar (RF) sensors can be installed on the vehicle to simultaneously generate range and Doppler information. For short-range devices (with a maximum range of about 100 m), the required RF power is relatively low, so they are extensively used in commercial applications, including cell phones. In our navigation system, we will integrate radars of this type for two purposes:
Another exemplary promising sensor combination is to use two identical cameras mounted on the vehicle, one after the other along the main direction of motion and facing down and forward, as depicted in
This application is a non-provisional filing of U.S. provisional application No. 61/304,456, filed in February 2010.