The present disclosure generally describes a method and a system for achieving autonomous driving of vehicles in open-pit sites, like open-pit mines, without using any Global Navigation Satellite System. More particularly, it relates to autonomous driving using alternatives to Global Navigation Satellite Systems.
Robotics and automation technology is attracting increasing interest from the mining industry. For instance, the use of autonomous haulage trucks is growing worldwide because of their impact in reducing operating costs and increasing worker safety. In an open-pit mine, many mining operations, such as digging, excavating and transporting materials, would benefit from this technology. However, automating mining vehicles is challenging because the mining environment is harsh, continuously changing, dirty and bumpy.
Currently, most self-driving solutions rely on Global Navigation Satellite Systems (GNSS) to determine the position of a vehicle and to associate the position with the surroundings. The most common GNSS is the Global Positioning System (GPS), although other alternatives exist, such as Galileo, GLONASS and BeiDou.
Unfortunately, solutions based on GNSS have a problem of robustness. The GNSS signals may be interfered with by atmospheric disturbances, large buildings, canyons or power lines. In particular, ionospheric scintillations, which are atmospheric phenomena caused by solar activity, cause interruptions in satellite communications. When an autonomous vehicle loses the GNSS signal, it must stop as a cautionary measure and afterwards needs to be restarted with human intervention. Such a situation involves a high cost in the context of an open-pit mine and is not unusual. For instance, in northern Chile, almost all of the Chilean copper open-pit mines are located in a zone suffering from ionospheric scintillations. Therefore, there is a current need for new technological solutions to deal with situations where GNSS technology is not reliable.
The determination of the position and orientation (pose) of a vehicle, which in the robotics literature is known as self-localization, is essential for its autonomous navigation.
The self-localization of a vehicle inside a mining pit without using any GNSS information is challenging because of the high symmetry of the environment, which causes different sensors (e.g. cameras) to obtain similar measurements at different positions of the pit. Therefore, technologies to address this challenge are needed.
Solutions developed for autonomous vehicles moving along cities or highways are not applicable to open-pit environments. For instance, prior art document US2019146500 A1 discloses autonomous vehicle self-localization techniques using particle filters and visual odometry for place recognition within a digital map in near real time. The routes within the map can be characterized as a set of nodes, which can be augmented with feature vectors that represent the visual scenes captured using camera sensors and can be constantly updated on the map server and then provided to the vehicles driving the roadways. These techniques are discrete (images are taken at fixed positions) and unsuitable for an open-pit site like an open-pit mine, since visual place recognition would not work in such an environment due to the similarity of the different visual images acquired on the pit wall.
Prior art document SE201950554 discloses techniques for road shape estimation for the ahead driving path of an autonomous vehicle. The method includes the steps of obtaining sensor values concerning the vehicle surroundings and establishing a road model of the road ahead, comprising a number of waypoints, by linear mapping of the road model based on the obtained sensor values. As in the case of document US2019146500 A1, these techniques are discrete and unsuitable for an open-pit site due to the visual similarity of such an environment.
The present invention was made in view of the shortcomings of the state of the art in autonomous navigation in GNSS-denied environments, and addresses the demanding need for robustness in autonomous vehicles operating in open-pit sites. In particular, an open-pit site may comprise an open-pit mine, a construction site, and other related work sites. An open-pit site is composed of paths and junctions to be modeled as segments and intersections, respectively. Notably, the invention is applicable to any open-pit site that has an associated topological map, such as open-pit mines and other areas or zones having paths and junctions which can be modeled and represented in that topological map.
The present invention aims at a method and a system as defined by the independent claims. Several advantageous embodiments are defined in the dependent claims.
According to the invention, a vehicle may navigate an open-pit site in a robust way using an alternative to GNSS. The routes in the open-pit site are represented using a model with segments and intersections and neighborhood relations between these elements. An aspect of the invention relies on a proper detection of intersections. Advantageously, within each segment the invention avoids collisions (e.g. with a wall) and avoids falling off a cliff, without requiring a precise location, by simply moving along the segment at a safe distance (range) from the walls. The invention uses an observation map. The observation map stores surroundings information collected by sensors (like an odometer, a LIDAR, an altimeter, a magnetometer, a gyroscope, etc.) at discrete positions within each segment and each intersection.
The observation map is accessed to self-localize the vehicle within each segment. Thanks to the use of Gaussian processes, the vehicle's pose can be appropriately estimated by comparing the current observations with the observations stored in the map. The use of Gaussian processes allows treating the sensor data, acquired at discrete positions, as data acquired at continuous positions. This is because Gaussian processes estimate the mean and covariance of a data series in time or space (modeled as random variables) by incorporating prior knowledge (kernels) in the estimation. This represents a technical advantage because it increases the accuracy of the comparisons between the current observations and the stored observations, which can be managed as continuous variables.
Several aspects and embodiments of the present invention will be explained with reference to the appended drawings for a better understanding. In particular, a method and a system for navigating an autonomous vehicle are presented. The present invention is suitable for autonomous vehicles operating in open-pit sites, like open-pit mines, without the need for GNSS.
A vehicle 30 can move along a segment 11 that corresponds to a path or lane or the like. An intersection 12 corresponds to a place in which the vehicle can either change to another segment 11 (e.g. a junction) or can perform certain operations, like going out of the pit or loading material (e.g. a working area).
When a vehicle 30 travels along a segment 11, it is only allowed to move forward staying within its boundaries to avoid crashing into a wall or falling off a cliff. While traversing a segment 11, the exact longitudinal position of the vehicle is less relevant. Consequently, a highly precise localization estimation is not required, and less precise self-localization methods can be used.
On the other hand, when a vehicle 30 is approaching an intersection 12, it is key to correctly decide which of the different segments to take and consequently make the appropriate maneuvers. Thus, self-localization in intersections may demand higher precision.
The topological map 10 includes topological information such as neighborhood relationships among the segments and intersections of the graph representing the open-pit site.
An observation map 20 includes surroundings information, such as sensors data for each segment and for each intersection of the topological map 10.
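Purely by way of illustration, and without limiting the present disclosure, the topological map 10 and the observation map 20 could be represented in software along the lines of the following sketch; all class and field names are hypothetical assumptions, not part of the disclosed system.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Illustrative, hypothetical data structures; field names are assumptions.

@dataclass
class Segment:
    segment_id: str
    endpoints: Tuple[str, str]        # ids of the intersections at both ends (neighborhood relation)

@dataclass
class Intersection:
    intersection_id: str
    connected_segments: List[str]     # neighborhood relation to adjoining segments

@dataclass
class TopologicalMap:
    segments: Dict[str, Segment] = field(default_factory=dict)
    intersections: Dict[str, Intersection] = field(default_factory=dict)

@dataclass
class Observation:
    local_position: float             # discrete position within the segment or intersection
    altitude: float                   # e.g. altimeter reading
    heading: float                    # e.g. magnetometer reading
    ranges: List[float]               # e.g. LIDAR ranges towards the walls

@dataclass
class ObservationMap:
    # Past observations indexed by the segment/intersection identifier.
    observations: Dict[str, List[Observation]] = field(default_factory=dict)
```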
The sensor data is used by a local self-localization module 36 to locally estimate the vehicle's pose within a particular segment or intersection, that is, its local pose, comprising the position and orientation of the vehicle within that segment or intersection, which define a local position and a local orientation.
A global self-localization module 34 determines in which particular segment or intersection of the pit the vehicle is located. Thus, sensor data acquired while driving can be associated with a segment or intersection.
A segment navigation module 37 controls the vehicle's displacement along each segment, avoiding collisions. The segment navigation technique is principally reactive, that is, it is based on the sensor data for following a route without colliding with an obstacle; in this particular case, traversing a segment while driving within the segment boundaries. Consequently, the target is not to collide with the pit wall and not to fall off the cliff while moving forward. Advantageously, the navigation along segments does not require planning a path/route, as opposed to deliberative navigation. In fact, deliberative navigation usually requires knowing a target destination and generating a trajectory or path free of obstacles to that destination. Despite probably being more accurate, deliberative navigation is more complex and computationally expensive.
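Merely as an illustrative sketch of such a reactive behavior, and not as the actual navigation module, the following function keeps the vehicle moving forward while steering towards the side with more clearance; the thresholds and gains are hypothetical values.

```python
def reactive_segment_step(left_range_m: float, right_range_m: float,
                          min_safe_m: float = 5.0, gain: float = 0.05):
    """Illustrative reactive step: steer towards the side with more clearance
    (pit wall on one side, cliff edge on the other) and slow down when either
    boundary gets too close. All values are hypothetical."""
    throttle = 1.0
    # Positive steering is taken here as steering to the left, so the vehicle
    # turns towards whichever side offers the larger free range.
    steering = gain * (left_range_m - right_range_m)
    if min(left_range_m, right_range_m) < min_safe_m:
        throttle = 0.3  # slow down while the correction takes effect
    return throttle, steering
```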
An intersection detector module 39 compares the observations currently obtained by the sensors 31 with the ones stored in the observation map 20, and determines whether the vehicle is at the end of a segment and thus approaching an intersection. Then, once in the intersection, considering the target to which the vehicle is going and utilizing the topological map 10, maneuvers are made by means of the actuators 32 to take the appropriate subsequent segment.
An intersection navigation module 35 is used to control these vehicle maneuvers in the intersections, in order to take the new segment.
The intersection detector module 39 determines which navigation module 35, 37 is in charge of sending the control orders (navigation) to the actuators 32 of the vehicle 30. This selection is controlled using a multiplexor 38. There are two specific navigation modules because each one works under different conditions.
Actuators 32 command the vehicle to accelerate, brake, steer, etc. according to the navigation modules 35, 37.
The processing unit 33 generates actuators commands to drive the vehicle 30 along the segment and avoid collisions.
The topological map includes the following information.
For each segment 11:
For each intersection 12:
The pose of the vehicle refers to its position and orientation. The pose is divided into two components, a global pose and a local pose (internal to each segment or intersection).
The global pose is given by the specific intersection or segment where the vehicle is currently navigating. For example, if a vehicle is navigating through segment Segm 3, between intersections Int 3 and Int 4, the global pose is simply defined as “Segm 3”. If a vehicle is in Int 4, the global pose is simply defined as “Int 4”. The global pose does not include orientation information; only the local pose comprises orientation information.
On the other hand, the local pose is defined as the position and orientation of the vehicle in the local reference system of the current segment or intersection. For segments, the local pose of the vehicle is defined by a distance and a movement direction of the vehicle with respect to the local reference system of the segment. In intersections, the local pose is defined by the “X” and “Y” coordinates and the orientation of the vehicle with respect to the local reference system of the intersection. Notice that each segment and intersection has its own local reference system. This approach has technical advantages: it requires fewer computational resources and it avoids the accumulation of errors, because errors generated in past segments or intersections do not accumulate when entering a subsequent segment or intersection. Consequently, a less demanding precision and accuracy is needed to achieve a valid local pose for the vehicle. As a result, it eases computing specifications and saves processing power.
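As a minimal sketch of this pose decomposition, assuming hypothetical names, the following illustrates how the local pose could restart in the local reference system of each new segment or intersection, so that previously accumulated errors are not carried over.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    element_id: str     # global pose: identifier of the current segment or intersection
    x: float = 0.0      # local pose: distance along a segment, or X coordinate in an intersection
    y: float = 0.0      # local pose: Y coordinate (used in intersections)
    theta: float = 0.0  # local pose: orientation in the local reference system

def enter_element(new_element_id: str) -> Pose:
    """On a transition to a new segment or intersection, the local pose restarts
    in the new local reference system, so errors from the previous element do
    not accumulate."""
    return Pose(element_id=new_element_id)
```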
To enable the self-localization of the vehicle, prior knowledge of its environment is needed. A comparison of the current observations with past observations serves to characterize the place. Past observations are stored in an observation map. An observation comprises surroundings information mainly acquired using sensors, which allow differentiating places within a segment or an intersection.
Several types of sensors may be used to obtain the required surroundings information:
Furthermore, accelerometers, gyroscopes, and inertial measurement sensors allow estimating the local movements of the vehicle.
Even though a single sensor does not allow robustly determining the vehicle's pose, a collection of different sensors does. For instance, laser sensors, accelerometers, gyroscopes, electronic compasses and altimeters can be used together. Alternatively, measurement data obtained by laser sensors may be complemented or even replaced by radar data or images acquired using monocular or binocular cameras. However, it must be stressed that the use of images alone does not solve the self-localization problem by itself, due to the high symmetry of the pit's walls. Images need to be used together with other sensors.
By combining the information stored in the observation map and the current observations/measurements obtained from sensors, the vehicle's pose (position and orientation) is determined.
The techniques used to determine the vehicle's pose involve a state estimation algorithm that estimates a non-measurable internal state of a specific dynamic system. Since the vehicle's pose cannot be measured directly, it can be considered a hidden state of that dynamic system, and it can be estimated. There are different state estimation techniques/algorithms, depending on the adopted assumptions (linear system, Gaussian noise, etc.). Consequently, if the behavior of the system is modeled as linear, a Kalman Filter may be used. If it is modeled as non-linear, several options are available, like the Extended Kalman Filter, the Unscented Kalman Filter, the Particle Filter, etc. In the present case, considering that the system is non-linear and taking into account the required robustness of the estimation algorithm, a Particle Filter algorithm has preferably been selected.
In general terms, the estimation of the pose of the vehicle includes determining the segment or intersection where the vehicle is. At the beginning, this information is obtained considering that both the topological map data and the initial position where the vehicle starts its movement are known. After that, at each instant of time, several pieces of information are processed according to the particle filter algorithm: the current observation (e.g. sensor measurements), the observations stored in the observation map of the corresponding segment/intersection, and the previous pose of the vehicle. Afterwards, using this data and applying the particle filter algorithm, the vehicle's current pose within the segment/intersection is estimated.
The particle filter technique estimates the current position and the orientation of the vehicle by calculating the probability that current sensor data (observations) match previous sensor data (observations) obtained at certain positions and orientations within the particular segment/intersection. The possible position and orientation are continuous variables due to the use of Gaussian Processes, which allow transforming the previous discrete data into continuous data. The vehicle's pose is the one that maximizes the matching probability. The estimation gives useful information about the proximity to the end of the segment.
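As an illustrative sketch only, and not the claimed algorithm, the weighting of the particles could be computed as follows, assuming hypothetical gp_mean and gp_std callables that return the Gaussian Process mean and standard deviation of the stored observations at any continuous position within the current segment/intersection.

```python
import numpy as np

def update_particle_weights(particle_positions, weights, z, gp_mean, gp_std):
    """Illustrative particle-filter update: each particle is weighted by the
    likelihood of the current observation z under a Gaussian whose mean and
    standard deviation are given by a Gaussian process fitted to the
    observation map (gp_mean and gp_std are assumed, hypothetical callables)."""
    particle_positions = np.asarray(particle_positions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mu = gp_mean(particle_positions)      # expected observation at each particle position
    sigma = gp_std(particle_positions)    # predictive uncertainty at each particle position
    likelihood = np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    new_weights = weights * likelihood
    return new_weights / np.sum(new_weights)   # normalize so the weights sum to one
```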
In an acquisition step 72, observations are currently acquired from discrete positions while driving the autonomous vehicle. The observations include surroundings information obtained from sensors installed on the vehicle that measure properties of the environment, such as height and curvature.
In a topological information gathering step 74, a topological map of the open-pit site is accessed to gather information about intersections and segments of the open-pit site.
In an observational gathering step 76, an observational map of the open-pit site is accessed to gather past surroundings information associated with intersections and with segments of the topological map.
In a processing step 78, past observations, current observations, and odometry information are processed by a processing unit that applies a particle filtering technique and Gaussian processes. The processing step 78 may include two sub-steps for a pose prediction 78a and a pose update 78b. Observations, which are generated from discrete positions, are modeled as a continuous variable.
In a commanding step 79, the autonomous vehicle is maneuvered. When it is in a segment, a moving-forward instruction and/or occasionally a steering instruction for keeping the autonomous vehicle within boundaries is issued. When it is in an intersection, a steering instruction to take a subsequent segment is issued.
The vehicle's current state is given by several sensors 31, typically internal sensors (e.g. odometers and/or inertial sensors) that obtain the vehicle's velocity and the angle of its front wheels, which are used for computing its odometry, in other words its displacement (Cartesian pose difference) in each time step. The pose prediction sub-step 78a predicts the current pose based on the previous pose and the odometry. The pose update sub-step 78b takes as inputs the predicted pose together with the observation map and the current observations, performs a consistency analysis, and then makes a final estimation of the pose. This pose update sub-step 78b also determines whether it is necessary to make a transition in the topological map (global pose) from one intersection/segment to another.
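Purely for illustration, and under the assumption of a one-dimensional local pose along a segment and a Gaussian motion noise model with a hypothetical standard deviation, the pose prediction sub-step 78a could be sketched as follows.

```python
import numpy as np

def predict_particles(particle_positions, odometry_displacement,
                      motion_noise_std=0.5, rng=None):
    """Illustrative pose prediction (sub-step 78a): every particle advances by
    the odometry displacement plus a sample drawn from the motion noise model.
    The noise standard deviation is a hypothetical value."""
    rng = rng or np.random.default_rng()
    particle_positions = np.asarray(particle_positions, dtype=float)
    noise = rng.normal(0.0, motion_noise_std, size=len(particle_positions))
    return particle_positions + odometry_displacement + noise
```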
The observation map of each segment/intersection is updated periodically, considering that the characteristics of the pit are dynamic. For simplicity, this feature is not shown in the diagram.
There are two improvements in the particle filter algorithm used.
As mentioned, when utilizing a topological map and modeling the pose of each particle as a global and a local pose, it is not possible to utilize a traditional particle filter, because the filter has to be able to decide what to do with the particles at the intersections.
By motion noise model is meant the modeling of the errors that exist in the process of estimating the vehicle's movement.
By sample is meant that the new pose of a particle is obtained by sampling a probability distribution.
The correspondence may be expressed as:
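A plausible form of equation (1), consistent with the terms discussed in the following paragraphs and offered here as an illustrative reconstruction, is the Bayes relation:

\[
P(S \mid z_{k+1}, x_{k+1}) \;=\; \frac{P(z_{k+1} \mid S,\, x_{k+1})\; P(S \mid x_{k+1})}{P(z_{k+1} \mid x_{k+1})} \tag{1}
\]

where S denotes a candidate segment, x_{k+1} the predicted pose of the particle and z_{k+1} the current observation.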
The white circles represent the pose of the particles before applying the prediction step in the current time step.
The striped circles represent the pose of the particles after applying the prediction step in the current time step.
In equation (1) it is important to notice that the term P(z_{k+1}|x_{k+1}) is independent of the segment and is therefore common to both routes. On the other hand, it is assumed that the a priori probability of a segment does not depend on the current vehicle's pose, so that the term P(S|x_{k+1}) can be replaced by P(S). For now, it is assumed that the a priori probability is uniform (P(S)=0.5). Nonetheless, other decisions can be made, like giving higher probabilities to a planned segment. With these changes, (1) can be calculated as follows:
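Under those assumptions, a plausible form of equation (2), offered as an illustrative reconstruction with the divisor acting as the normalization term noted below, is:

\[
P(S \mid z_{k+1}, x_{k+1}) \;=\; \frac{P(z_{k+1} \mid S,\, x_{k+1})\; P(S)}{\sum_{S'} P(z_{k+1} \mid S',\, x_{k+1})\; P(S')} \tag{2}
\]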
It is noteworthy that the divisor term in equation (2) is a normalization term.
To calculate P(z_k|S, x_k), Gaussian Processes are used, which model the mean and covariance of the observations stored in the observation map along each segment. Gaussian Processes are completely defined by their covariance function K(x, x′), or kernel. Given a collection of observations X, Y, the mean and covariance at a point x* are given by equations (3) and (4):
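The standard Gaussian Process regression formulas, given here as a plausible rendering of equations (3) and (4), express the posterior mean and covariance at a query point x* in terms of the kernel K and the stored observations X, Y (in practice an observation-noise term may additionally be added to K(X, X)):

\[
\mu(x^{*}) \;=\; K(x^{*}, X)\, K(X, X)^{-1}\, Y \tag{3}
\]

\[
\sigma^{2}(x^{*}) \;=\; K(x^{*}, x^{*}) \;-\; K(x^{*}, X)\, K(X, X)^{-1}\, K(X, x^{*}) \tag{4}
\]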
Then, for each observation z in each segment S, a Gaussian Process is used to model the probability distribution of said observation as a function of the position x* (a continuous variable):
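Under this modeling, the probability of an observation z at a continuous position x* within a segment S would plausibly take the Gaussian form:

\[
P(z \mid S, x^{*}) \;=\; \mathcal{N}\!\left(z;\; \mu_{S}(x^{*}),\; \sigma_{S}^{2}(x^{*})\right)
\]

where \mu_{S} and \sigma_{S}^{2} are given by the Gaussian Process fitted to the observations of segment S.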
These and other features, functions, and advantages that have been discussed can be achieved independently in various embodiments or may be combined in yet other embodiments.
Experimental results obtained with a simplified prototype based on the present teachings are presented herein. The ability of the prototype to self-localize within the different segments was tested. The trial was validated in a particular region, utilizing an autonomous vehicle equipped with an array of sensors, including multiple lasers, video cameras, an altimeter, a magnetometer and an IMU.
Of the 27 segments considered, 21 have a length between 20 m and 50 m, 5 have lengths between 100 m and 300 m, and 1 has a length of 3,300 m. Of the 12 paths considered, 3 were utilized to train the system and the other 9 to validate it. For the construction of a database corresponding to the observation map, sensor data were obtained from the altimeter, the magnetometer and the IMU. Additionally, a differential GPS was included, which was only utilized to evaluate the performance.
To measure the performance of the prototype, the predicted pose is compared with the corresponding ground-truth pose (true position), obtained from the GPS and transformed to the graph coordinate system.
The GPS information was used in the global map generation process. This information was not utilized at any point during the operation of the prototype; it was only used for its experimental evaluation.
Promising results were obtained for the prototype, such as the following:
These outcomes were considered promising and demonstrate the feasibility of developing the present invention. The additional evidence and information are merely for illustration purposes and should not be considered limiting of the scope of the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2021/062307 | 12/24/2021 | WO |