There has been much research and progress in the field of unmanned ground vehicles (UGVs). From the increasing popularity of autonomous cars to package-delivering robots, the potential applications seem limitless. However, despite what the developers of UGVs would like you to believe, these systems are far from perfect. Many of these systems begin to fail in inclement weather, such as snow, which is why they are primarily tested in locations with good weather year-round. Additionally, some systems cannot navigate without localization via Global Navigation Satellite System (GNSS). In some environments, such as urban environments, tree canopy and buildings hinder the quality of GNSS reception, which can negatively impact the performance of the system. GNSS is often combined with inertial navigation systems (INS) or wheel encoders to supplement the localization in low quality GNSS areas. However, both INS and wheel encoders are susceptible to drift or slip, and the localization scheme will fail if the vehicle travels too far between receiving accurate GNSS positions. Thus, improved systems and methods for autonomous navigation in inclement weather and/or urban environments are desired.
The present disclosure addresses the aforementioned drawbacks by providing systems and methods for autonomous vehicles that can navigate sidewalks using GNSS as well as LIDAR sensors.
It is an aspect of the present disclosure to provide a method for determining a location of an unmanned ground vehicle (UGV). The method includes receiving LIDAR data with a computer system, the LIDAR data being received from at least one LIDAR sensor mounted to the UGV, receiving Global Navigation Satellite System (GNSS) data with the computer system, the GNSS data being received from at least one GNSS sensor mounted to the UGV, and computing location data with the computer system, the location data being computed by fusing the LIDAR data and the GNSS data to determine a location of the UGV.
It is another aspect of the present disclosure to provide a method for mapping a pathway. The method includes receiving LIDAR data with a computer system, the LIDAR data being received from at least one LIDAR sensor mounted to a vehicle as the vehicle moves along the pathway, receiving Global Navigation Satellite System (GNSS) data with a computer system, the GNSS data being received from at least one GNSS sensor mounted to the vehicle as the vehicle moves along the pathway, and generating a pathway map based on the LIDAR data and the GNSS data using the computer system, the pathway map including one or more segments each associated with one or more features of the pathway.
It is yet another aspect of the present disclosure to provide a method for navigating a sidewalk at least partially covered with snow. The method includes receiving LIDAR data from at least one LIDAR sensor mounted to a vehicle, determining a command to send to a control system of the vehicle based on the LIDAR data and a map including a sidewalk segment, a curb cut segment, and a grass segment, the map being previously generated based on LIDAR data of the sidewalk without snow cover, and outputting the command to the control system of the vehicle to advance the vehicle down the sidewalk.
It is still another aspect of the present disclosure to provide a system for navigating a sidewalk at least partially covered with snow. The system includes a vehicle including a control system, a LIDAR sensor coupled to the vehicle, and a controller coupled to the vehicle and the LIDAR sensor and including a memory and a processor. The controller is configured to execute instructions stored in the memory to receive LIDAR data from the LIDAR sensor, determine a command to send to the control system based on the LIDAR data and a map including a sidewalk segment, a curb cut segment, and a grass segment, the map being previously generated based on LIDAR data of the sidewalk without snow cover, and output the command to the control system to advance the vehicle down the sidewalk.
It is a further aspect of the present disclosure to provide a method for navigating a sidewalk. The method includes receiving LIDAR data from at least one LIDAR sensor mounted to a vehicle, determining a command to send to a control system of the vehicle based on the LIDAR data and a map including a sidewalk segment, a curb cut segment, and a grass segment, the map being previously generated based on LIDAR data of the sidewalk, and outputting the command to the control system of the vehicle to advance the vehicle down the sidewalk.
The foregoing and other aspects and advantages of the present disclosure will appear from the following description. In the description, reference is made to the accompanying drawings that form a part hereof, and in which there is shown by way of illustration a preferred embodiment. This embodiment does not necessarily represent the full scope of the invention, however, and reference is therefore made to the claims and herein for interpreting the scope of the invention.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Described here are systems and methods for autonomous vehicles that can navigate sidewalks or other pathways using GNSS as well as LIDAR sensors. For instance, the systems and methods described in the present disclosure relate to determining a location of an unmanned ground vehicle (“UGV”), mapping a pathway along which a UGV is moving, and/or navigating a UGV along a pathway. In each of these instances, LIDAR and GNSS data are measured and combined in order to provide highly accurate localization of the UGV, which enables localization and/or navigation of the UGV, in addition to mapping of the pathway using the UGV.
Simultaneous localization and mapping (SLAM) is commonly used for real-time 6 degree-of-freedom pose estimation. SLAM is performed using either vision-based systems or LIDAR-based systems. Both SLAM and adaptive Monte Carlo localization (AMCL) match the current perception of the environment, via LIDAR or vision, to a given environment model or map. No matter the system, both SLAM and AMCL algorithms are prone to limitations. Since SLAM and AMCL are based on identifying landmarks in the environment, the accuracy of the pose estimation is heavily dependent on these landmarks remaining static. If the environment changes, due to snow cover or construction, SLAM and AMCL will likely fail. To make localization more robust, SLAM has been fused with other sensors such as GNSS or INS. However, these do not solve the underlying constraint of needing a fixed environment, which is not the case when there are various levels of snow on the ground.
LIDAR odometry takes into account that there is a certain amount of sensor information overlap between consecutive LIDAR scans, allowing the pose transformation of the UGV to be estimated by matching adjacent frames. Common LIDAR odometry methods can produce adequate pose estimation results. However, these are computationally expensive and run at lower frame rates or require high powered computers. One group proposed a lightweight and ground-optimized LIDAR odometry and mapping method (LeGO-LOAM) that is capable of real-time 6 degree-of-freedom pose estimation on low power embedded systems. Due to its performance and low power requirements, LeGO-LOAM can be used as a localization scheme. Another group used a convolutional neural network (CNN) to fuse vision, GNSS, and IMU to create a localization system for UGVs.
Localization aside, it is advantageous to be able to map and understand the environment of the UGV because the type of terrain the vehicle is moving on impacts its mobility. When traveling from pt. A to pt. B, knowing where features such as roads, sidewalks, grass, and other obstacles are located can facilitate making smart decisions and planning a path between the two points. Still another group proposed using LIDAR to create a map of drivable space in real time; however, they do not classify the different segments of the ground. Still yet another group used both LIDAR and vision systems to segment the ground and label the road as well as identify street signs. The localization schemes in both of these rely solely on GNSS and will fail in low quality reception or GNSS denied areas. An additional group performed automated and detailed sidewalk assessment using both LIDAR and vision. However, their system relies heavily on vision, whose performance is highly susceptible to lighting, and they do not identify other areas of the ground such as roads and grass. In a further study, yet another group focused on autonomous terrain traversability mapping, including both mapping and classification of the ground according to whether the vehicle can or cannot travel on it. This may not always be desired because there are many situations where a vehicle has the physical capability to travel somewhere but it should not.
Deep learning is becoming an increasingly popular solution for object detection and semantic environment segmentation as well as road and road sign detection. However, these solutions are typically vision based (as opposed to LIDAR), require training data, and perform poorly in weather conditions such as rain and snow.
In this work, a multi-step approach to facilitate autonomous navigation by small vehicles in urban environments is proposed, allowing travel only on sidewalks and paved paths. More generally, the systems and methods described in the present disclosure enable localization and/or navigation along pathways other than sidewalks or paved paths, including trails such as hiking trails or unpaved biking trails. Similarly, the systems and methods described in the present disclosure also enable mapping of these other types of pathways.
It is desirable to have a vehicle autonomously navigate from point A on one urban block to point B on another block, crossing from one block to another only at curb cuts, and stopping when pedestrians get in the way. A small mobile platform is first manually driven along the sidewalks or other pathways to continuously record LIDAR and GNSS data when little to no snow is on the ground. The algorithm(s) described in the present disclosure can post-process the data to generate a labeled traversability map. During this automated process, areas such as grass, sidewalks, stationary obstacles, roads, and curb cuts are identified. The ability to classify the ground in an environment, including sidewalks, facilitates appropriate decisions during navigation, such as giving the robot information about where it is acceptable to travel. By differentiating between these areas using only LIDAR, the vehicle is later able to create a path for travel on suitable areas (e.g., sidewalks and/or roads), and not in other areas (e.g., grass).
An Extended Kalman Filter can be used to fuse the Lightweight and Ground-Optimized LIDAR Odometry and Mapping (LeGO-LOAM) approach with high accuracy GNSS where available, to allow for accurate localization even in areas with poor GNSS, which is often the case in cities and areas covered by tree canopy. This localization approach can be used during the data capture stage, prior to the post-processing stage when labeled segmentation is performed, and again during real time autonomous navigation. In some embodiments, the localization approach can be carried out during real time autonomous navigation using the ROS navigation stack.
There is a gap in previous research on robust localization and navigation; it has not been applied to poor weather conditions. Dynamic environments, such as those with snow, construction, or growing vegetation, cause problems for traditional scan matching localization systems such as AMCL. By using LeGO-LOAM combined with GNSS, the robot is able to localize under many different weather conditions, including snow and rain, where other algorithms (e.g., AMCL) will likely fail. A system that allows the vehicle to autonomously plan and navigate several kilometer-long paths in urban snow covered neighborhoods is described in the present disclosure. A potential application is autonomous wheelchair navigation that could be functional under most weather conditions. Another potential application is steering assist for manually steered motorized wheelchairs in varying weather conditions.
Referring now to
The Husky UGV platform from Clearpath Robotics was used as a base vehicle for mounting hardware and sensors in a number of real world studies for the proposed LIDAR and GNSS based system. It is understood that other platforms can be used as base vehicles in autonomous vehicle systems in accordance with this disclosure. The Husky comes equipped with four 330 mm diameter wheels. In some configurations, the Husky can include rotary encoders; however, the rotary encoders are not necessary for localization, positioning, and/or navigation using the techniques described in the present disclosure. Certain vehicles, such as skid-steers, may not be able to use rotary encoders for positioning, and the Extended Kalman Filter that fuses LeGO-LOAM with high accuracy GNSS can provide localization and/or positioning for skid-steers such as the Husky. That is, the methods described in the present disclosure are capable of determining a location of a UGV without the need of rotary encoder data, which measures the angular position and/or motion of the wheels on the UGV. This is advantageous for UGVs that implement tracks or skid-steers where rotary encoder data are not measured or are not reliably measurable. The vehicle has external dimensions of 990×670×390 mm and has a max payload of 75 kg with a top speed of 1 m/s. In the center of the vehicle is a weatherproof storage area where the electronics are stored, such as a computer and a Wi-Fi router. Dimensions of the Husky vehicle can be seen in
The on-board computer in the Husky can be a Mini-ITX single board computer with a 3.6 GHz Intel i7-770 processor, 16 GB DDR4-2400 RAM, and a 512 GB SSD. The sensors mounted on the vehicle for testing included a high accuracy Trimble GNSS antenna and RTK receiver, a Velodyne VLP-16 LIDAR, and a Phidgets IMU. The Real-time Kinematic (RTK) GNSS receiver and antenna exhibit sub-centimeter accuracy in GNSS areas with good reception. The Velodyne LIDAR (VLP-16) has 16 channels with a vertical field of view of −15° to +15°, with each channel separated by 2° vertically. There are 1800 LIDAR points returned for each channel in the LIDAR, which has a maximum measurement range of 100 m. The LIDAR returns up to 300,000 points/second. The Phidgets IMU has a 3-axis accelerometer, gyroscope, and compass. The hardware mounted on the Husky platform can include the GNSS antenna 204, the IMU sensor 208, the cell modem 212, the radar sensor 216, the LIDAR sensor 220, the camera 224, the E-Stop 228, the computational device 232, and/or the wheel encoders 236 in
The Husky UGV is a skid-steer platform (also called a differential drive platform), meaning the wheels on the right side of the vehicle rotate in the same direction and at the same velocity as each other, and the same can be said of the wheels on the left side. The set of wheels on one side spins independently of the wheels on the other side of the vehicle. Differential drive kinematics allows the vehicle to rotate about its center without having to travel forward or backward. This varies from Ackermann steering, in which the wheels on one axle pivot to steer while the wheels on the other axle are fixed, which is common in many vehicles today.
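For illustration only, the following is a minimal sketch of differential drive kinematics in Python, converting a body-frame velocity command into left and right wheel speeds. The wheel radius follows from the 330 mm wheel diameter noted above, but the track width is an assumed placeholder and is not presented as the Husky's exact geometry.

```python
def body_to_wheel_speeds(v, omega, track_width=0.55, wheel_radius=0.165):
    """Convert a body-frame command (v in m/s, omega in rad/s) into left and
    right wheel angular velocities (rad/s) for a differential drive platform."""
    v_left = v - omega * track_width / 2.0   # linear speed of the left side
    v_right = v + omega * track_width / 2.0  # linear speed of the right side
    return v_left / wheel_radius, v_right / wheel_radius

# Rotating in place: zero forward velocity with nonzero omega drives the two
# sides in opposite directions, letting the platform spin about its center.
left, right = body_to_wheel_speeds(0.0, 0.5)
print(left, right)
```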
As described above, the Extended Kalman Filter that fuses LeGO-LOAM with high accuracy GNSS can provide localization and/or positioning for skid-steer platforms. For skid-steer platforms, the wheels commonly slip during vehicle movement, and rotary encoders cannot be relied on for accurate positioning due to wheel slip. The Extended Kalman Filter can accurately localize skid-steer platforms because it relies on LIDAR and GNSS sensors rather than rotary encoders as inputs.
In some embodiments, the vehicle computer runs the Robot Operating System (ROS) for fusion of multiple sensor readings and facilitates implementation of custom control and software algorithms. ROS is a “collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms”. ROS is a powerful tool that allows researchers around the world to collaborate and build off previous work in an easy and efficient manner. Clearpath uses ROS to control all of its vehicles and supports its drivers through ROS. ROS allows the sensors (GNSS, LIDAR, IMU) to be fused together in order to localize and create a labeled map, as well as send velocity commands to the motors while traveling.
A cell modem provides internet access to the vehicle and allows for the GNSS system to wirelessly receive position corrections. An E-Stop receiver is mounted on the outside of the vehicle and allows for remotely pausing the system if needed.
Since one goal is to generate a labeled map of an urban sidewalk environment and autonomously localize and travel between two points, a neighborhood was desired that had many different characteristics that a vehicle might encounter in the urban world. A neighborhood near the University of Minnesota—Twin Cities campus was selected because it satisfied many characteristics which were desired for testing. A table of the desired characteristics can be seen in Table 1 below and an aerial view of the location obtained from Google Maps can be seen in
Since a major focus of this disclosure is the ability to travel in snow covered terrain, images showing the testing environment in the winter are shown below in
In this disclosure, for simplicity, a planar environment is assumed; however, it will be appreciated that the techniques described in the present disclosure can be extended to other environments as desired. This means that the vehicle's pose can be fully specified by its X, Y position as well as its yaw (i.e., heading). In order to autonomously navigate between two points, it is advantageous if the vehicle is able to accurately localize in the environment. The localization scheme proposed in this research uses an Extended Kalman Filter to fuse together high accuracy GNSS, LIDAR Odometry, and IMU data. The GNSS system allows for accurate global localization when there is sufficiently good reception, such as in road intersections and clearings in the tree canopy. Global localization allows the vehicle to know where it is in the world, in terms of latitude and longitude. This is advantageous when plotting a path between two global points in the world as it allows for the definition of positions in terms of latitude and longitude.
LIDAR Odometry (LeGO-LOAM) provides dead-reckoning localization in low quality GNSS areas. These areas include locations under tree canopy, indoors, and areas surrounded by buildings. The LIDAR odometry provides localization relative to the most recent accurate GNSS position. The IMU provides an initial yaw measurement of the vehicle upon startup.
The extended Kalman filter (EKF) used is provided by the ROS package robot_localization. The robot_localization package is an open-source collection of state estimation nodes which implement nonlinear state estimation for vehicles. The EKF uses an “omnidirectional motion model to project the state forward in time, and corrects that projected estimate using perceived sensor data”. The inputs and outputs of the EKF, which are constrained to a planar environment, can be seen in
The EKF process provided is a nonlinear dynamic system, with
$$x_k = f(x_{k-1}) + w_{k-1}$$
where x_k is the robot's state at time k, f(·) is the nonlinear state transition function, and w_{k-1} is the normally distributed process noise. Without any assumptions, the state vector x is 12-dimensional, which includes the 3D pose and orientation of the vehicle, as well as the respective velocities of the pose and orientation. In some embodiments described in the present disclosure, a planar environment can be assumed, which reduces the number of dimensions down to six. When the EKF receives a measurement, it is in the form
$$z_k = h(x_k) + v_k$$
where z_k is the input measurement at time k, h(·) is the nonlinear sensor model mapping the state into measurement space, and v_k is the measurement noise. The initial step of the EKF is called a prediction step, which projects the current state of the vehicle and its respective error covariance forward in time:
$$\hat{x}_k = f(x_{k-1})$$
$$\hat{P}_k = F P_{k-1} F^{T} + Q$$
where f(·) is a 3D kinematic model derived from Newtonian mechanics, P is the estimate error covariance, F is the Jacobian of f(·), and Q is the process noise covariance. Finally, a correction step is carried out to update the state vector and covariance matrix.
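As an illustrative sketch only, the prediction and correction steps above can be written as follows for a simplified planar state (X, Y, yaw, and their velocities). This is not the robot_localization implementation; the motion model, the noise matrices Q and R, the time step, and the finite-difference Jacobian are assumptions made for the example.

```python
import numpy as np

# Illustrative planar EKF sketch; state = [x, y, yaw, vx, vy, vyaw].
# Not the robot_localization implementation; Q, R, and dt are placeholders.

def f(x, dt):
    """Constant-velocity kinematic model projecting the state forward by dt."""
    px, py, yaw, vx, vy, vyaw = x
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([px + (c * vx - s * vy) * dt,
                     py + (s * vx + c * vy) * dt,
                     yaw + vyaw * dt,
                     vx, vy, vyaw])

def jacobian_of_f(x, dt, eps=1e-6):
    """Finite-difference Jacobian F of the transition function f."""
    n = len(x)
    F = np.zeros((n, n))
    fx = f(x, dt)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        F[:, i] = (f(x + dx, dt) - fx) / eps
    return F

def ekf_step(x, P, z, H, R, Q, dt):
    """One predict/correct cycle for a linear measurement z = H x + v
    (e.g., an X/Y GNSS fix or a LIDAR odometry pose estimate)."""
    # Prediction step: project the state and error covariance forward in time.
    F = jacobian_of_f(x, dt)
    x_pred = f(x, dt)
    P_pred = F @ P @ F.T + Q
    # Correction step: update the state vector and covariance with z.
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```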
ROS converts the latitude and longitude coordinates from the receiver to X and Y coordinates in the vehicle frame via a provided navsat_transform node in the ROS navigation stack. The navsat_transform node allows for quick conversions from GNSS coordinates to cartesian coordinates in the vehicle frame. It is assumed that the vehicle starts in a high accuracy GNSS location to ensure the vehicle gets a good initial global position estimate. However, if a high accuracy GNSS location is not available as a starting point, an equivalent LIDAR-based location can be used as a starting point, and the location of the vehicle in latitude and longitude coordinates can be determined from the LIDAR-based location. The first high accuracy GNSS location the vehicle receives is set as the (0,0) XY position of the vehicle. Any subsequent high accuracy GNSS positions are considered relative to this initial start point. A covariance matrix is calculated automatically in ROS based on the quality of GNSS reception. Before feeding the GNSS data into the EKF, coordinates above a certain covariance threshold are filtered out. This stops low accuracy GNSS positions from negatively affecting the localization in the EKF. Tests were performed with only low accuracy GNSS, and the results will be discussed below.
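The covariance-based gating described above can be sketched as follows. The threshold value and the data layout are illustrative assumptions, not the values used in the reported experiments.

```python
# Sketch of gating GNSS fixes by their reported covariance before they are
# fused in the EKF. The threshold is an assumed placeholder, not the value
# used in the reported experiments.
COVARIANCE_THRESHOLD = 0.001  # variance on the X/Y diagonal, in m^2 (assumed)

def accept_gnss_fix(x, y, position_covariance):
    """Return (x, y) when the fix is trustworthy, otherwise None so the EKF
    relies on LIDAR odometry until a better fix arrives."""
    if position_covariance[0][0] > COVARIANCE_THRESHOLD or \
            position_covariance[1][1] > COVARIANCE_THRESHOLD:
        return None
    return (x, y)
```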
LIDAR odometry provides localization estimates relative to a specific pose. That is, ΔX, ΔY, and Δyaw are estimated from the starting pose of the vehicle given by the GNSS and IMU. For simplicity, a static covariance of 0.01 is set for each pose estimation for LIDAR Odometry before it is fed into the EKF. In an urban environment, the vehicle can travel 500 meters on LIDAR odometry alone before the drift exceeds 0.5 meters. LIDAR odometry needs many distinct features for an accurate localization estimate; the fewer the features, the faster the drift will accumulate.
The heading of the vehicle can be used to help fully define its pose in a planar environment. Since the initial X and Y pose is given from the GNSS, an IMU with a magnetometer is used to get the initial yaw of the vehicle. LIDAR Odometry is then used to estimate the Δyaw from the initial start position.
This research proposes a multistep approach to mapping and classifying the environment that the vehicle is traveling in. These steps include data collection, automated post-processing of data, and autonomous navigation between points while traveling only on sidewalks, or other pathways, and intersections. An overview of the process flow 800 can be seen in
The first step towards generating a classified map of the region of interest is collecting the data 808 that will be processed to generate such a map. A user 804 steers the vehicle manually via commands from a joystick controller or arrows on a keyboard. The user 804 can steer the robot along all the sidewalks, alleys, or other paved or unpaved pathways that they want the vehicle to be able to travel on autonomously in the future. Preferably, the vehicle is driven through all of the potential destinations to which one may want the vehicle to navigate in the future, such as relevant building entrances or bus stops. This data collection is also preferably done when there is no snow on the ground, which can facilitate better segmentation of the environment, since snow can easily cover many distinctive features of objects.
The data 812 collected at 808 comes from the LIDAR, GNSS, and IMU. The LIDAR data is three-dimensional information about the environment around the vehicle. The field-of-view (FOV) of the LIDAR can be seen in
The GNSS data is also collected in order to identify where GNSS high-accuracy reception is available in the neighborhood. This data can also be used when generating a path. The IMU data allows for the initial yaw pose estimation. During the mapping procedure, the vehicle pose is identified using the localization scheme described above.
Despite previous attempts to autonomously explore urban environments, there is a lack of research in autonomous exploration in an urban environment that adheres to socially acceptable means of exploration, such as only using sidewalks and avoiding driving over grass or other people's lawns. In addition to this, autonomously exploring a neighborhood may take much longer than if it was explored without the assistance of a user steering the vehicle. As a result, in some implementations the use of autonomous exploration may not be desired, and instead the vehicle can be manually steered to collect the data. Alternatively, in other embodiments, autonomous exploration can be used. In some embodiments, the vehicle can be configured to follow a walking person and collect the data. In some embodiments, the vehicle can be pushed and/or pulled by a walking person and/or pulled by another vehicle (e.g., a bicycle, motorized wheelchair, etc.) and collect the data. If the vehicle is pulled, it is important that the front view of the sidewalk by the LIDAR sensor not be obstructed.
The second step of the flow 800 is to automatically analyze the data and generate 816 a segmented and labeled map 820 of the area of interest from the LIDAR point cloud. The LIDAR outputs approximately 30,000 points per frame and runs at 10 Hz, which results in 300,000 3-D points per second. Each of these points represents the distance to a point on the surface of objects in the environment that the vehicle is in. The LIDAR data points are sent through a custom algorithm that allows for the classification of the points into sidewalks, roads, grass, curbs and curb cuts, and obstacles. Obstacles are defined as trees, bike racks, stairs, etc. These types of objects can be broadly classified together because when traveling between two points, the vehicle just needs to know enough to avoid these objects and does not need to know explicitly what type of object it is. For example, knowing the distinction between grass and sidewalk may be more relevant than knowing the difference between a lamp-post and a tree. In many applications, it is desirable for the robot to drive on sidewalks and not on grass, so being able to distinguish between grass and sidewalk is advantageous. However, both the tree and lamp-post represent an object that should be avoided; distinguishing between a tree and a lamp-post is not as relevant as their shape.
In Algorithm 1 described below, the slope is taken between two column-wise points. In terms of the LIDAR point cloud, two column-wise points will be in separate channels of the LIDAR in the vertical direction. The grade of the road, in reference to the road curvature, is defined as the magnitude of the slope as it rises and falls along its width. Objects such as stairs and retaining walls will fall into the category of obstacle in this algorithm.
Determining where the sidewalk or other pathway is in the environment is a general aspect of map generation 816. It is used to give the vehicle the ability to travel using the pathway and not the surrounding environment, which may otherwise be traversable. For instance, the map may be used to travel on a sidewalk and not on the surrounding grass. A notable characteristic about sidewalks or other paved pathways is that a quality sidewalk (meaning without major holes, cracks, or tree roots) is smooth and flat. This means the LIDAR points falling on the sidewalk should return data that has similar elevation with little variation between consecutive points. Sidewalks are also typically surrounded by grass, curbs, walls, or vegetation which cause a discrete jump in the LIDAR scan. Taking these characteristics into consideration, the sidewalk can be extracted from the raw LIDAR scan. An example of the sidewalk LIDAR classification result can be seen in
Roads in urban environments are similar to sidewalks in that they are typically smooth. However, to differentiate them from sidewalks, roads also have a distinct cross slope to them. The cross slope is designed into roads so that the highest point of the road is in the center which causes water to drain from the road surface to the street gutters. Urban roads are also often surrounded by curbs, meaning the LIDAR scan (when the robot is on a sidewalk) will see an elevation jump from the sidewalk down to the road, the road back up to the sidewalk, or both.
Unlike sidewalks and roads, LIDAR points that fall on grass are noisy. The blades of grass create a high standard deviation of consecutive points that allow for easy classification in comparison to roads and sidewalks. An example of the road LIDAR classification can be seen in
Curbs are associated with the break in elevation between the road and sidewalk or grass median. In a typical urban neighborhood, sidewalks are positioned higher in elevation than roads. There are two common scenarios, the first one is where there is a sidewalk, a curb, then a road. The second being where there is sidewalk then a grass median, then the curb, then the road. Curb cuts are detected when the sidewalk merges with the road.
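A simplified sketch of these classification heuristics, applied to one column-wise strip of ground points, is given below. All thresholds are illustrative assumptions and do not reproduce Algorithm 1.

```python
import numpy as np

# Simplified sketch of the ground classification heuristics described above.
# All thresholds are illustrative assumptions, not the values in Algorithm 1.
SMOOTH_STD = 0.01       # max height standard deviation (m) for pavement (assumed)
CURB_JUMP = 0.08        # min elevation jump (m) treated as a curb (assumed)
CROSS_SLOPE_MIN = 0.01  # min cross slope separating road from sidewalk (assumed)

def classify_ground_strip(heights, cross_slope):
    """Classify one strip of ground points from a single LIDAR channel.

    heights: consecutive point heights (m) across the strip.
    cross_slope: magnitude of the slope fitted across the strip width.
    """
    heights = np.asarray(heights, dtype=float)
    jumps = np.abs(np.diff(heights))
    if jumps.size and jumps.max() > CURB_JUMP:
        return "curb"                      # discrete elevation jump
    if heights.std() > SMOOTH_STD:
        return "grass"                     # noisy returns from blades of grass
    if cross_slope > CROSS_SLOPE_MIN:
        return "road"                      # smooth but crowned surface
    return "sidewalk"                      # smooth and flat
```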
For the robot to travel only on sidewalks or other pathways, the centerlines of the sidewalks or other pathways can be determined and used to generate a path using the sidewalks or other pathways. More specifically, the GNSS coordinates of the sidewalk or other pathway centerline are desired. The first step to find the pathway centerline is to extract the sidewalk or other pathway directly in front of the robot from the first (or the lowest in elevation) channel of the LIDAR, as shown in
The midpoint of this sidewalk scan is archived, and a new centerline point is found as the robot moves forward. In some embodiments, every one meter of travel, all the centerline points archived over the past meter are averaged, and the X and Y points (which are the output of the EKF), which are relative to the starting pose, are saved as the centerline for that section of sidewalk. In some embodiments, centerline points can be generated for sections of lengths other than one meter (e.g., a half meter, two meters, etc.). This point is then converted using the ROS navigation stack to a latitude and longitude coordinate. A visual of this process can be seen in
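For illustration, the averaging of archived midpoints into per-section centerline points described above can be sketched as follows. The one meter section length follows the example above, while the input layout is an assumption made for the example.

```python
import numpy as np

# Sketch of reducing archived sidewalk midpoints to one centerline point per
# section of travel. The one meter section length follows the example above;
# the input layout is an assumption for illustration.

def centerline_markers(midpoints_xy, distances_m, section_len=1.0):
    """Average archived midpoints (x, y) in the EKF frame over each section.

    midpoints_xy: list of (x, y) sidewalk midpoints.
    distances_m:  cumulative distance traveled when each midpoint was archived.
    """
    markers, bucket = [], []
    next_cut = section_len
    for (x, y), d in zip(midpoints_xy, distances_m):
        bucket.append((x, y))
        if d >= next_cut:
            markers.append(tuple(np.mean(bucket, axis=0)))  # one marker per section
            bucket, next_cut = [], next_cut + section_len
    return markers
```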
Locations with High Accuracy GNSS
Areas with good GNSS reception in the neighborhood are recorded and mapped during the initial data collection. This is relevant information for several reasons. If the vehicle localization using LIDAR odometry were to drift too much while the vehicle is traveling, the vehicle can divert to the nearest area with good GNSS reception in order to zero its error. During path planning, if the vehicle knows it will be traveling for too long in an area relying solely on LIDAR Odometry and an unacceptable amount of drift will likely occur, the path can be modified to navigate to a known area with good GNSS reception to ‘check-in’ and reduce the localization error, and then continue on the rest of the path to the final destination. However, this functionality is not used in this research because in the testing environment all paths currently generated happen to cross through a high accuracy GNSS zone, such as those located at identified intersections.
Now that the environment has been mapped, the system knows where there is grass, sidewalks, roads, curb cuts, and obstacles. This information is used to create a path from pt. A to pt. B using only sidewalks and curb cuts to cross intersections. Consider an objective to determine a path along sidewalks from an origin 1400 to a destination 1404 as seen in
The Google Maps API is used to plot an initial GNSS coordinate path between the origin and destination positions. An advantage of using the Google Maps API is the capability to use landmarks such as street names or addresses directly instead of knowing exact coordinates of the locations to which you want the vehicle to travel. However, Google Maps API will plot a GNSS ‘breadcrumb’ path that lies on the road centerlines and not on the sidewalks. A visual of example paths plotted via Google Maps can be seen in
The road centerline GNSS points can be modified to handle sidewalks and can be shifted over to the appropriate sidewalk centerlines. To apply corrections to the initial path, generated by the Google Maps API, the path is overlaid onto the previously generated segmented map. The GNSS coordinates from the road are shifted to the sidewalk centerlines and curb cuts. Algorithm 2 (described below) will find the closest sidewalk/curb cut entry/departure points to the initial path determined with the road centerlines.
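A hypothetical sketch of this correction step is shown below; it is not Algorithm 2 itself, and only illustrates snapping each road-centerline waypoint to the nearest previously mapped sidewalk centerline point.

```python
import math

# Hypothetical sketch of shifting road-centerline waypoints onto the nearest
# mapped sidewalk centerline points. This is not Algorithm 2 itself; it only
# illustrates the nearest-point snapping described above.

def snap_to_sidewalk(road_waypoints, sidewalk_centerline):
    """Replace each road-centerline waypoint (x, y) with the closest sidewalk
    centerline point (x, y) from the previously generated segmented map."""
    snapped = []
    for rx, ry in road_waypoints:
        closest = min(sidewalk_centerline,
                      key=lambda p: math.hypot(p[0] - rx, p[1] - ry))
        snapped.append(closest)
    return snapped
```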
Given a destination coordinate, the vehicle will likely have several different valid path options to take to travel to the destination. An example of all the possible sidewalks determined previously in the area of interest is shown in
Now that a path has been generated between two points using only sidewalks and curb cuts at intersections, the vehicle has the ability to autonomously navigate the path using the ROS navigation stack. The ROS Navigation stack allows for path following by generating the velocity and heading used by the robot to follow and stay on the path. The velocity and heading can then be converted to motor commands to control the wheels on the Husky via a ROS package provided by Clearpath Robotics. The ROS Navigation stack also provides functionalities such as obstacle avoidance using LIDAR scans to avoid people and pets while traveling.
While the robot is following the ‘breadcrumb’ path, the vehicle's pose is identified using the localization scheme of combining high accuracy GNSS and LIDAR Odometry described above. The path planner may only allow the vehicle to cross the road at intersections, or the path planner may allow the vehicle to cross the road using other curb cuts such as driveways.
In order to quantify the accuracy of the system and see how well it performed, the results are divided into two sections: the mapping results and the localization results.
The automatically generated segmented map of the environment, described above, needs to be compared to a ‘ground truth’ to quantify the accuracy of the map. Since no such map already exists, a ‘ground truth’ map was created using an aerial orthorectified image of the neighborhood that was manually edited to add the proper labels. The raw orthorectified image can be seen in
In
To evaluate the accuracy, the true positive (TP) and false positive (FP) rates for the sidewalk are calculated. This was done by a pixel comparison between the two images using MATLAB. The pixels that are classified as a sidewalk by the LIDAR are checked against the manually labeled image to see whether each is a correct classification or a false classification. The results can be seen in Table 2 below. The sidewalk was 91.46% accurately classified and 8.54% falsely classified as sidewalk. As previously mentioned, the system classifies a portion of the road as ‘sidewalk’ while crossing an intersection, and consequently much of the false positive classification results below can be attributed to this.
To test the accuracy of the localization system, a ‘ground truth’ can be used. The high accuracy GNSS system described above was used to survey areas in the neighborhood where high accuracy results were available. This can only be done in GNSS areas with good reception, and since a majority of the area is covered in tree canopy or overshadowed by buildings, only seven consistent locations in the neighborhood were identified that could be accurately surveyed. These locations are indicated on the map in
To validate the accuracy in the areas with good GNSS reception, the vehicle was driven to each location with known high quality GNSS reception (clearings, intersections) and parked. GNSS coordinates were recorded over a one minute span. One minute was experimentally found to be enough time to get an accurate estimation of the true latitude and longitude. The GNSS coordinates were then converted to Universal Transverse Mercator (UTM) coordinates, which are in meters, and averaged over that period of time to generate a ‘ground truth’ latitude and longitude. The accuracy of the GNSS position at these locations was found by calculating the circular error probable (CEP). The CEP is a measure of the median error radius of all the location points recorded and allows for the quantification of the quality of the GNSS signal at these locations. A scatter plot showing the results of the GNSS latitude and longitude readings over a one minute period and the CEP error circle for location 1 is shown in
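For illustration, the CEP computation described above can be sketched as follows, assuming the third-party utm package for the latitude/longitude to UTM conversion and using the mean of the recorded positions as the surveyed estimate.

```python
import numpy as np
import utm  # third-party package assumed for the latitude/longitude to UTM conversion

# Sketch of computing the circular error probable (CEP) for one surveyed
# location from roughly one minute of GNSS readings. The median error radius
# definition follows the description above; the data format is an assumption.

def circular_error_probable(lat_lon_readings):
    """Return (mean easting, mean northing, CEP in meters)."""
    east_north = np.array([utm.from_latlon(lat, lon)[:2]
                           for lat, lon in lat_lon_readings])
    center = east_north.mean(axis=0)                 # averaged 'ground truth' position
    radii = np.linalg.norm(east_north - center, axis=1)
    return center[0], center[1], float(np.median(radii))  # median error radius
```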
As the robot travels between two points and passes through these surveyed areas, the estimated position of the vehicle from the localization scheme EKF can be compared to the true value as it passes through the surveyed area. The localization results were recorded for two runs. The first run was conducted during the summertime under clear sky conditions and no snow on the ground while the second run was conducted during winter under cloudy conditions with fresh snow cover on the ground.
A table showing the CEP GNSS quality values for all seven locations as well as the total distance from the start (location 1) that the vehicle traveled as it reached the respective location and the recorded error for both the summer and winter runs can be seen in Table 3. The vehicle was given these seven waypoints as consecutive destinations to which to navigate. The vehicle was also given five other intermediate waypoints to travel through, shown as red circles in
The average error of the localization system is 0.056 m in the summer and 0.073 m in the winter with snow on the ground. This metric was chosen to evaluate the localization system as opposed to the Success weighted by Path Length (SPL) measure proposed by Anderson et al. because in the example applications described in the present disclosure the shortest path between two points may not always be the best path. In the above Table 3, in order to calculate the error, the true latitude and longitude of each respective point is converted to X Y coordinates in the vehicle frame. As the vehicle passes through the location, the estimated X Y position from the EKF is compared to the true X Y location, and the error is calculated as the difference. Additionally, the distance traveled is the total distance the vehicle has moved from the start of the path (location 1). The locations in Table 3 are all locations with good GNSS reception, and as a result, it is likely that the vehicle will correct its position estimate as it travels through these points. The error of the position estimate before and after the vehicle travels through the respective points, both in summer and winter, can be seen in Table 4 below.
To demonstrate the system traveling a longer distance without feeding it waypoints or correction locations, such as in
As a check to determine if the proposed localization scheme performs better than using the LIDAR odometry alone, the vehicle was driven from location 1 to location 3 (
The error at the end of this 350 meter path was 0.153 meters. This is worse than the 0.032 meter error the vehicle incurred while traveling through location 3 during the summer time with GNSS actively being fed into the EKF (Table 3). This shows that fusing the high accuracy GNSS together with the LIDAR odometry does in fact provide better localization accuracy. The accuracy of the LIDAR odometry is a function of the number of distinctive features visible to the LIDAR in the nearby environment. This neighborhood provides many distinct features for the LIDAR to process; however, some parts of the neighborhood may provide more distinctive features than others. As a result, it is contemplated that the accuracy of the LIDAR odometry may vary slightly because of this.
In the examples described in the present disclosure, it was assumed that the vehicle starts in a high accuracy GNSS location. This constraint can also be resolved by creating high quality LIDAR based landmarks which allow the robot to localize accurately instead of relying on only GNSS. Also, it is assumed that the sidewalks are of reasonably good quality and are not covered in grass or leaves when collecting the initial data.
Some scenarios where the proposed system may encounter problems are during heavy snowfall or rainfall. The system relies on LIDAR when no GNSS is available and LIDAR data quality deteriorates under these weather conditions. The LIDAR odometry may also drift significantly due to lack of features in the area (as would be the case next to an open field) or finding too many similar features between the high accuracy GNSS locations. This latter situation of too many “similar” features was identified when operating next to a building with many identical window frames adjacent to the robot's path. As a result, the drift may cause the robot to veer off the sidewalk before it can correct for the error with GNSS. This scenario is highly unlikely in a residential neighborhood.
In some configurations, the system does not actively monitor traffic at intersections and as a result is not aware of oncoming vehicles when crossing the intersection. In alternative configurations, features such as the functionality of monitoring traffic at intersections can be added; for example, the LIDAR on the vehicle can be programmed to detect vehicles, bikes, and pedestrians on the roads. An algorithm could be designed to identify the flow of the traffic in the street and determine if the intersection is clear of any oncoming vehicles. This information could be used to autonomously pause the robot while crossing intersections to avoid the risk of collisions.
Generating localization results in a snowy neighborhood proved challenging. The issues encountered in the winter may have nothing to do with the sensors or the vehicle, but rather can involve piles of snow blocking sidewalks and leaving no viable path for the vehicle to travel. An example of a pile of snow blocking a curb cut that would normally allow the vehicle to get onto a sidewalk is shown in
The quality of snow removal on sidewalks and streets varies highly among neighborhoods and households. This is due to the fact that many households are responsible for the removal of snow on the sidewalks in front of their house. Some people do a good job removing the snow while others may not do it at all. This, in addition to snow plows creating snow piles that block many curb cuts, provides for a challenging environment for sidewalk path planning and navigation in snowy environments. Clearly the local jurisdiction or neighbors need to ensure that the curb cuts are also cleared of snow and not just the sidewalks.
Referring now to
At 2704, the process 2700 can receive data from sensors coupled to the vehicle. In some embodiments, the sensors can generate the data as the vehicle is piloted along a paved pathway such as a sidewalk and/or alley. In some embodiments, the vehicle can be piloted by a person pushing the vehicle and/or a person remotely controlling the vehicle to proceed along the paved pathway. In some embodiments, the vehicle can autonomously navigate along the paved pathway. The data can be generated when there is no snow on the paved pathway. In some embodiments, the sensors coupled to the vehicle can include a GNSS sensor and a LIDAR sensor. In some embodiments, the GNSS sensor can be a Trimble GNSS antenna and RTK receiver, and the LIDAR sensor can be a Velodyne VLP-16 LIDAR. In some embodiments, the sensors can include an IMU sensor such as a Phidgets IMU. The GNSS sensor can be a high-accuracy GNSS sensor (e.g., accurate to a centimeter or less). The data received from the sensors can include LIDAR scans (e.g., a three-dimensional point cloud) from the LIDAR sensor and a location value (e.g., coordinate values) and/or reception value (e.g., strength of signal value) from the GNSS sensor. In some embodiments, the data received from the sensors can include a yaw value from the IMU sensor. The process 2700 can then proceed to 2708.
At 2708, the process 2700 can generate at least one sidewalk segment based on the data. In some embodiments, the process 2700 can determine that a first portion of a LIDAR point cloud included in the data is located above a second portion of the LIDAR point cloud. The process 2700 can then determine that the first portion of the LIDAR point cloud is included in a sidewalk segment. In some embodiments, the process 2700 can determine that a third portion of the LIDAR point cloud has a roughness below a predetermined threshold, and is included in a sidewalk segment. Sidewalks are commonly surrounded by grass, curbs, walls, vegetation, etc. that are less smooth than the sidewalk. The process 2700 can determine that portions of the LIDAR cloud that are not below the predetermined threshold for roughness are not sidewalk segments. The third portion may be the first portion. In some embodiments, the process 2700 can determine that a fourth portion of the LIDAR point cloud does not have a sufficient curvature to be considered a roadway. Sidewalks are commonly flat, while roadways commonly are curved to allow for drainage. The fourth portion may be the third portion. In some embodiments, the process 2700 can generate alley segments using at least a portion of the same criteria as the sidewalk segment. For example, the process 2700 can determine a fifth portion of the LIDAR point cloud has a roughness below the predetermined threshold and that the fifth portion of the LIDAR point cloud does not have a sufficient curvature to be considered a roadway. The process 2700 can perform the above determinations for any number of LIDAR point clouds included in the data. The process 2700 can then proceed to 2712.
At 2712, the process 2700 can generate at least one roadway segment based on the data. In some embodiments, for a LIDAR point cloud, the process 2700 can determine a target portion of the LIDAR point cloud that includes points below the base of the vehicle. For example, the process 2700 can determine the location of the base of the vehicle based on the mounting height of the LIDAR sensor, which can be predetermined. The process 2700 can include all points in the LIDAR point cloud below the base of the vehicle in the target portion. The process 2700 can determine which subportion(s), if any, of the target portion have sufficient curvature to be included in a roadway segment. The process 2700 can include in the map, as roadway segments, any subportions of the target portion that exhibit sufficient curvature. The process 2700 can perform the above determinations for any number of LIDAR point clouds included in the data. The process 2700 can then proceed to 2716.
At 2716, the process 2700 can generate at least one grass segment based on the data. In some embodiments, the process 2700 can determine that one or more portions of a LIDAR point cloud are noisier than a predetermined threshold. In some embodiments, the process 2700 can determine, for a set of points included in the LIDAR point cloud and corresponding to a single channel of the LIDAR sensor, which points deviate substantially from a previous number of points. In some embodiments, the process 2700 can determine which points have heights that differ substantially (e.g., 1.85 standard deviations) from a previous number of points (e.g., twenty-five points). The number of standard deviations and/or the previous number of points can be selected based on the application. The process 2700 can classify points that deviate by more than the predetermined number of standard deviations as part of a grass segment. The process 2700 can perform the above determinations for any number of LIDAR point clouds included in the data. The process 2700 can then proceed to 2720.
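A sketch of this per-channel grass test is shown below, using the example window of twenty-five previous points and the 1.85 standard deviation threshold mentioned above; the data layout is an assumption made for illustration.

```python
import numpy as np

# Sketch of the per-channel grass test described above: a point is flagged when
# its height deviates from the preceding points by more than a chosen number of
# standard deviations. The window (25 points) and threshold (1.85 standard
# deviations) mirror the example values above; the data layout is assumed.

def grass_mask(channel_heights, window=25, n_std=1.85):
    """Return a boolean mask marking likely grass points in one LIDAR channel."""
    heights = np.asarray(channel_heights, dtype=float)
    mask = np.zeros(len(heights), dtype=bool)
    for i in range(window, len(heights)):
        recent = heights[i - window:i]
        sigma = recent.std()
        if sigma > 0 and abs(heights[i] - recent.mean()) > n_std * sigma:
            mask[i] = True  # noisy return consistent with grass
    return mask
```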
At 2720, the process 2700 can generate at least one curb segment and/or curb-cut segment based on the data. In some embodiments, the process 2700 can determine if a portion of a LIDAR point cloud associated with the ground (e.g., points below the LIDAR sensor) includes points that maintain a constant height and then jump to a higher or lower value. In some embodiments, the process 2700 can compare heights of points in the LIDAR point cloud across the width of the sidewalk, and determine that the height remains constant for a number of points, rapidly changes to a second height, and then remains at the second height for a number of points, indicating a curb. The process 2700 can then include all points between and/or including the edge of the street to the edge of the sidewalk in a curb segment. The process 2700 can generate one or more curb-cut segments by determining portions of the point cloud where the curb segment gradually slopes down to the street, indicating a curb-cut. At 2720, the process 2700 can also generate one or more obstacle segments by determining one or more portions of the LIDAR point cloud that do not meet the criteria for a sidewalk segment, an alley segment, a road segment, a grass segment, a curb segment, and/or a curb-cut segment. The process 2700 can perform the above determinations for any number of LIDAR point clouds included in the data. The process 2700 can then proceed to 2724.
At 2724, the process 2700 can generate at least one GNSS marker based on the data. Due to environmental factors such as tree cover, the accuracy of the GNSS sensor may vary while traveling along the paved pathway. The process 2700 can determine locations along the paved pathway at which the GNSS location is of high accuracy by ensuring that its covariance matrix stays below a predetermined threshold (e.g., that the first element of the diagonal stays below 0.00055 in metric units) and will generate a GNSS marker at these locations. When traveling along the paved pathway in the future, the location error of the vehicle can be “zeroed” (e.g., to reduce drift) at these GNSS markers. In some embodiments, the process 2700 can generate the GNSS markers based on fix type (e.g., a RTK fixed integer solution technique). Some GNSS receivers indicate the accuracy of the location based on fix type, which is determined by the receiver based on the number of satellites visible, the dilution of precision of the satellites, the type of GNSS receiver, and the GNSS technology being used. In some embodiments, fix type may not be available, and the overall quality of the GNSS signal can be determined based on the covariance matrix. The process 2700 can then proceed to 2728.
At 2728, the process 2700 can generate at least one centerline marker based on the data. For each LIDAR point cloud with a sidewalk segment and/or alley segment, the process 2700 can determine a portion of the sidewalk segment or alley segment corresponding to the first (or the lowest in elevation) channel of the LIDAR sensor (e.g., points generated by the first channel that are included in the sidewalk segment or alley segment), and determine the center of the portion. The process 2700 can average the centers from a number of LIDAR point clouds and generate a centerline marker based on the average location of the centers. In some embodiments, the process 2700 can average the centers every meter to generate a centerline marker. The process 2700 can convert the location of the centerline markers to GNSS coordinates based on the GNSS data. The process 2700 can then proceed to 2732.
At 2732, the process 2700 can output the map to a memory. In some embodiments, the process 2700 can save the map to a memory in the vehicle. The map can include at least a portion of the LIDAR point clouds included in the LIDAR data, the segments generated at 2708-2720, the GNSS markers generated at 2724, and/or the centerline markers generated at 2728. The process 2700 can then end.
It is understood that the process 2700 may not generate certain segments and/or markers at 2712-2724. In other words, the process 2700 may not generate at least one of a roadway segment, a grass segment, a curb segment, a curb-cut segment, and/or a GNSS marker. For example, the data received at 2704 may not be associated with an area, such as certain types of alleyways, including features such as roadways, grass, curb-cuts, and/or curbs. As another example, certain commercial business districts may not include grass, and the process 2700 may not generate any grass segments based on data associated with the commercial business districts. As yet another example, some areas, such as sidewalks with heavy tree coverage, may receive GNSS signals without sufficient quality for the process 2700 to generate any GNSS markers.
Referring now to
At 2804, the process 2800 can receive navigation data. The navigation data can include a map generated using the process 2700 in
At 2808, the process 2800 can receive location data from sensors located in the vehicle. The location data can be generated with or without the presence of snow on the paved pathway. In some embodiments, the sensors coupled to the vehicle can include a GNSS sensor and a LIDAR sensor. In some embodiments, the GNSS sensor can be a Trimble GNSS antenna and RTK receiver, and the LIDAR sensor can be a Velodyne VLP-16 LIDAR. In some embodiments, the sensors can include an IMU sensor such as a Phidgets IMU. The GNSS sensor can be a high-accuracy GNSS sensor (e.g., accurate to a centimeter or less). The location data received from the sensors can include LIDAR scans (e.g., a three-dimensional point cloud) from the LIDAR sensor and a location value (e.g., coordinate values) and/or reception value (e.g., strength of signal value) from the GNSS sensor. In some embodiments, the location data received from the sensors can include a yaw value from the IMU sensor. The process 2800 can then proceed to 2812.
At 2812, the process 2800 can determine the location of the vehicle based on the location data. In some embodiments, the process 2800 can determine the location of the vehicle based on the GNSS data and/or the LIDAR data using an Extended Kalman Filter (EKF). In some embodiments, the process 2800 can calculate Δx, Δy, and Δyaw values based on the LIDAR data using LeGO-LOAM. In some embodiments, the process 2800 can provide the Δx, Δy, and Δyaw values to an EKF along with x and y values included in the GNSS data if the GNSS data is accurate enough. In some embodiments, the process 2800 can determine a covariance matrix based on the quality of GNSS reception for the x and y values, and filter out GNSS coordinates if the covariance is above a threshold value. In other words, the process 2800 may only provide LeGO-LOAM data to the EKF if the GNSS data is not accurate enough. To initialize the EKF, the process 2800 may provide a yaw value received from the IMU sensor along with the LIDAR and/or GNSS data. The EKF can generate a global vehicle location value including an x value, a y value, and a yaw value. The global vehicle location value can be used as the location of the vehicle. The process 2800 can then proceed to 2816.
At 2816, the process 2800 can determine if the vehicle is at the destination. In some embodiments, the process 2800 can determine if the location value is within a predetermined margin of the destination (e.g., within five centimeters of the destination), indicating that the vehicle is at the destination. If the vehicle is not at the destination (e.g. “NO” at 2816), the process 2800 can proceed to 2820. If the vehicle is at the destination (e.g., “YES” at 2816), the process 2800 can end.
At 2820, the process 2800 can generate navigation instructions based on the location of the vehicle. In some embodiments, the process 2800 can determine the next centerline marker that the vehicle needs to travel to in order to proceed to the destination, and generate the navigation instructions to cause the vehicle to move from the current location to and/or in the direction of the next centerline marker. In some embodiments, the navigation instructions can include velocity commands. The process 2800 can then proceed to 2824.
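For illustration, a simplified velocity-command sketch toward the next centerline marker is given below. The system described above uses the ROS navigation stack, which also handles obstacle avoidance; the proportional turn gain here is an assumed placeholder, and the speed cap reflects the platform's 1 m/s top speed noted earlier.

```python
import math

# Simplified sketch of a velocity command toward the next centerline marker.
# The described system uses the ROS navigation stack, which also handles
# obstacle avoidance; the turn gain below is an assumed placeholder, and the
# speed cap reflects the platform's 1 m/s top speed noted above.
MAX_SPEED = 1.0   # m/s
TURN_GAIN = 1.5   # proportional gain on heading error (assumed)

def velocity_command(pose_xy_yaw, target_xy):
    """Return (linear m/s, angular rad/s) steering the vehicle toward target_xy."""
    x, y, yaw = pose_xy_yaw
    tx, ty = target_xy
    heading_error = math.atan2(ty - y, tx - x) - yaw
    # Wrap the heading error to [-pi, pi].
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    angular = TURN_GAIN * heading_error
    linear = MAX_SPEED * max(0.0, math.cos(heading_error))  # slow when facing away
    return linear, angular
```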
At 2824, the process 2800 can cause the vehicle to navigate based on the navigation instructions. In some embodiments, the process 2800 can provide the navigation instructions to a drive system of the vehicle. The process 2800 can then proceed to 2808.
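Putting the steps together, the loop from 2808 to 2824 might look like the following sketch, which reuses the PoseEKF, at_destination, and velocity_command helpers sketched above; read_sensors, lidar_odometry, next_marker, and drive are hypothetical stand-ins for the vehicle's sensor, odometry, planning, and drive interfaces, not disclosed interfaces.

```python
# Hedged outline of the 2808-2824 loop; the callables passed in are stand-ins.
def navigate(ekf, destination_xy, read_sensors, lidar_odometry, next_marker, drive):
    while True:
        data = read_sensors()                                 # 2808: receive location data
        dx, dy, dyaw = lidar_odometry(data.lidar_points)      # 2812: e.g., LeGO-LOAM deltas
        ekf.predict(dx, dy, dyaw)
        if data.gnss_xy is not None:                          # 2812: gated GNSS correction
            ekf.update_gnss(data.gnss_xy[0], data.gnss_xy[1], data.gnss_covariance)
        pose = ekf.x
        if at_destination(pose[:2], destination_xy):          # 2816: margin test
            break                                             # at the destination: end
        marker = next_marker(pose)                            # 2820: next centerline marker
        linear, angular = velocity_command(pose, marker)
        drive(linear, angular)                                # 2824: send velocity commands
```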
Referring now to FIG. 29, an example of a system for mapping and navigating a pathway is shown in accordance with some embodiments. In some embodiments, a computing device 2950 can receive data from sensors 2902 coupled to a vehicle and can execute at least a portion of a mapping and navigation system 2904 to analyze the data received from the sensors 2902.
Additionally or alternatively, in some embodiments, the computing device 2950 can communicate information about data received from the sensors 2902 to a server 2952 over a communication network 2954, and the server 2952 can execute at least a portion of the mapping and navigation system 2904 to analyze the data received from the sensors 2902. In such embodiments, the server 2952 can return information to the computing device 2950 (and/or any other suitable computing device) indicative of an output of the mapping and navigation system 2904 based on the data received from the sensors 2902.
In some embodiments, computing device 2950 and/or server 2952 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a tablet computer, a server computer, a virtual machine being executed by a physical computing device, and so on. In some embodiments, sensors 2902 can include an IMU sensor, a GNSS sensor, and a LIDAR sensor. In some embodiments, the sensors 2902 can be coupled to the vehicle.
In some embodiments, communication network 2954 can be any suitable communication network or combination of communication networks. For example, communication network 2954 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on. In some embodiments, communication network 2954 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks. Communications links shown in FIG. 29 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
Referring now to FIG. 30, an example of hardware that can be used to implement the computing device 2950 and the server 2952 is shown in accordance with some embodiments. In some embodiments, the computing device 2950 can include a processor 3002, a display 3004, one or more communications systems 3008, and/or memory 3010.
In some embodiments, communications systems 3008 can include any suitable hardware, firmware, and/or software for communicating information over communication network 2954 and/or any other suitable communication networks. For example, communications systems 3008 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 3008 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
In some embodiments, memory 3010 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 3002 to present content using display 3004, to communicate with server 2952 via communications system(s) 3008, and so on. Memory 3010 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 3010 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 3010 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 2950. In such embodiments, processor 3002 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 2952, transmit information to server 2952, and so on.
In some embodiments, server 2952 can include a processor 3012, a display 3014, one or more inputs 3016, one or more communications systems 3018, and/or memory 3020. In some embodiments, processor 3012 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, display 3014 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 3016 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
In some embodiments, communications systems 3018 can include any suitable hardware, firmware, and/or software for communicating information over communication network 2954 and/or any other suitable communication networks. For example, communications systems 3018 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 3018 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
In some embodiments, memory 3020 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 3012 to present content using display 3014, to communicate with one or more computing devices 2950, and so on. Memory 3020 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 3020 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 3020 can have encoded thereon a server program for controlling operation of server 2952. In such embodiments, processor 3012 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 2950, receive information and/or content from one or more computing devices 2950, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
In conclusion, the present disclosure provides a system that can automatically localize by fusing high-accuracy GNSS and LIDAR odometry, and an algorithm that can automatically detect and label relevant ground features, such as sidewalks, roads, grass, and curb cuts, when there is little to no snow on the ground. By using this information, the robot can travel from point A to point B using only sidewalks and curb cuts. The Google Maps API can be utilized to provide the advantage of being able to use street names. The API automatically generates a path along road centerlines between the two points, which can then be modified for sidewalks. The system can be used to travel under most weather conditions, even when the sidewalks are covered in snow (as long as the snow does not block travel). Vision is intentionally not used, so the system is not typically affected by snow cover.
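As a hedged illustration of the routing idea above, the sketch below requests a walking route between two street addresses from the public Google Maps Directions web service and treats the returned step endpoints as coarse waypoints along the road centerlines; shift_to_sidewalk is a hypothetical placeholder for the step that modifies the centerline path onto the mapped sidewalk segments.

```python
# Hedged sketch: fetch a road-centerline route between two addresses and hand its
# waypoints to the sidewalk map for adjustment. `shift_to_sidewalk` is hypothetical.
import requests


def centerline_waypoints(origin, destination, api_key):
    """Query the Google Maps Directions web service for a walking route."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json",
        params={"origin": origin, "destination": destination,
                "mode": "walking", "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    leg = resp.json()["routes"][0]["legs"][0]
    # Use each step's end location as a coarse waypoint along the road centerline.
    return [(step["end_location"]["lat"], step["end_location"]["lng"])
            for step in leg["steps"]]


def sidewalk_path(origin, destination, api_key, sidewalk_map):
    waypoints = centerline_waypoints(origin, destination, api_key)
    # Snap/offset each centerline waypoint onto the nearest mapped sidewalk segment.
    return [sidewalk_map.shift_to_sidewalk(p) for p in waypoints]
```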
An accuracy rating of 91.46% true positives for mapping sidewalks was achieved, along with an average localization error of 0.056 meters in the summer and 0.073 meters in the winter for paths as long as 1.5 km. The proposed localization scheme provided better accuracy than relying solely on LIDAR odometry, achieving 0.032 meters of error compared to 0.153 meters when moving between locations 1 and 3. Although there is less tree canopy above the vehicle in the winter, likely resulting in better GNSS reception throughout the path, the average error was slightly worse than in the summer. This may be attributed to the snow covering some features that LIDAR odometry uses for localization. Better sidewalk classification can be achieved by further research into filtering out vegetation in LIDAR scans; however, the accuracy achieved has proven sufficient for sidewalk navigation of UGVs in urban environments with or without snow cover. The level of localization accuracy achieved using the systems and methods described in the present disclosure allows for navigation on sidewalks, especially in the case of autonomous wheelchair navigation. If the accuracy were any worse, the vehicle would likely begin traveling on grass or vegetation next to the sidewalk and miss the curb cuts. For accurate navigation of wheelchairs on sidewalks, RTK GNSS is currently used because uncorrected GNSS may not provide the accuracy needed to stay on the sidewalk and reach the destination address. However, with new GNSS satellites being launched every year, high-accuracy GNSS will likely be available without the need for RTK within the next few years.
Additionally, LIDAR landmarks can be used for localization estimation in the case that the LIDAR odometry drifts between high-accuracy GNSS locations. LIDAR landmarks can also remove the limitation of needing high-accuracy GNSS for the initial position estimate, as the landmarks could provide this information instead.
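One way such landmarks could correct accumulated drift, sketched here only as an assumption and not as the disclosed method, is a 2-D rigid alignment (Kabsch/Procrustes) between landmark positions observed in the drifting odometry frame and their known positions in the map.

```python
# Hedged sketch: estimate a 2-D rigid correction (rotation R, translation t) that
# aligns landmarks observed in the drifted odometry frame with their known map positions.
import numpy as np


def landmark_correction(observed_xy, known_xy):
    """Return (R, t) such that R @ p + t maps observed landmark positions onto the
    known positions in a least-squares sense."""
    obs = np.asarray(observed_xy, dtype=float)
    ref = np.asarray(known_xy, dtype=float)
    obs_c, ref_c = obs - obs.mean(axis=0), ref - ref.mean(axis=0)
    U, _, Vt = np.linalg.svd(obs_c.T @ ref_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = (U @ Vt).T
    t = ref.mean(axis=0) - R @ obs.mean(axis=0)
    return R, t
```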
The automatically generated segmented map can also be used for in-depth sidewalk assessment of parameters such as pavement quality, grade, and width. This assessment can be updated over time as new data is collected. Introducing a high-accuracy IMU into the localization scheme and actively feeding its data into the EKF (as opposed to using it only for the initial heading) may further improve the localization accuracy. Nevertheless, in some embodiments, only LIDAR and GNSS may be needed to navigate sidewalks. In some embodiments, the systems and methods described herein can be utilized with a wheelchair (e.g., a motorized wheelchair) in order to provide automatic sidewalk navigation for a wheelchair user.
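As one illustration of the sidewalk-assessment idea above, the following sketch estimates the width and grade of a single labeled sidewalk segment from its LIDAR points; the point layout and the use of a principal-axis fit are assumptions for illustration, not the disclosed assessment method.

```python
# Hedged sketch: estimate width (m) and grade (%) of one labeled sidewalk segment.
import numpy as np


def segment_width_and_grade(points_xyz):
    """points_xyz: N x 3 array of LIDAR points labeled as one sidewalk segment."""
    pts = np.asarray(points_xyz, dtype=float)
    xy = pts[:, :2] - pts[:, :2].mean(axis=0)
    # The principal axis of the segment approximates the direction of travel.
    _, _, vt = np.linalg.svd(xy, full_matrices=False)
    along, across = xy @ vt[0], xy @ vt[1]
    width = float(across.max() - across.min())        # lateral extent of the segment
    # Grade: rise over run along the direction of travel, expressed as a percentage.
    run = float(along.max() - along.min())
    rise = float(pts[np.argmax(along), 2] - pts[np.argmin(along), 2])
    grade_pct = 100.0 * rise / run if run > 0 else 0.0
    return width, grade_pct
```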
In some embodiments, the autonomous navigation system can travel on the side of a road where there are no sidewalks, which would facilitate wheelchair travel in suburbs where fewer sidewalks are present.
In some embodiments, the systems and methods described herein can be used for autonomous removal of snow from sidewalks. Most urban centers expect that homeowners remove snow from the sidewalks in front of their properties. Many do not comply. The result is that persons with disabilities are often stuck at home and cannot make their way to nearby stores or to public transit. Furthermore, cities located in northern climates often do not have a maintenance budget sufficient to clear snow from both the roads and the sidewalks. Snowplowing of the sidewalks by small autonomous vehicles may be the answer.
The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.