The present invention relates to indoor mapping systems and, more specifically, to a system that employs tightly coupled decision-based fusion of light detection and ranging (LiDAR) and sonar data.
A 2D base map is one of the most essential elements for indoor mobile robots. LiDAR sensors, though very popular in indoor mapping, cannot generate precise indoor maps because they are unable to handle reflective objects such as glass doors and French windows. Although several approaches overcome this problem using high-end LiDAR sensors and signal processing techniques, the cost of these high-end LiDAR sensors can be prohibitive for large-scale deployment of indoor robots. Sonar sensors have likewise been used to construct indoor maps, but sonar-based maps suffer from inaccuracies caused by sonar crosstalk, corner effects, and high noise levels. Although combining the two approaches would seem logical, previous fusion attempts usually focus on one particular usage scenario and are unable to generate accurate maps or handle large areas.
When using sonar range finders to compensate for LiDAR scanning, especially for glass detection, fusion has been one of the main techniques for obtaining the location of glass materials. One approach is to fuse sonar readings and laser scans in a Kalman filter fashion, where line segments and corners are used as features for sonar and laser synergy. However, the precision and density of the resulting map are not sufficient to support robot navigation. Neither pre-fusion nor post-fusion methods for glass detection have solved these problems. Pre-fusion methods filter sonar and laser data before localization, while post-fusion methods conduct localization with laser data separately and then overlay the sonar results. For example, fusion has been used to detect glass by subtracting the detected ranges of sonar and LiDAR. This approach can produce a glass-aware map in small-area environments, but it cannot handle large-area environments because sonar noise in non-glass areas degrades the overall LiDAR mapping results, and it therefore cannot be used for ubiquitous deployment. Another distinct technique for glass detection is to analyze features of the reflected laser intensity, where different methods have been proposed to localize glass areas with pure LiDAR sensing. This method is limited by affordability, as it requires high-precision, and hence expensive, LiDAR to guarantee the sensitivity of detection, and its effectiveness in large-area mapping remains unknown. Accordingly, there is a need in the art for an approach that can employ LiDAR and sonar data to create a reliable map in large-scale indoor environments with a high proportion of reflective areas.
The present invention comprises tightly coupled decision-based fusion of LiDAR and sonar data that effectively detects glass walls and panels, eliminates unknown space caused by the range limits of LiDAR, and incorporates global optimization into the fusion. More specifically, the present invention uses a post-accumulation, decision-based map fusion strategy that obtains higher mapping quality by utilizing the precise localization results of the 2D LiDAR point cloud and the effective perception compensation of sonar range data. The present invention can produce a reliable and scalable map for mobile robot navigation in both small-scale and large-scale indoor environments. A revisit scan may be provided to fuse the LiDAR map and the sonar map at the pixel level to generate a highly accurate representation of both small-area and large-area real-world environments with various degrees of reflective material.
In a first embodiment, the present invention comprises a method for mapping an indoor space involving the steps of obtaining LiDAR sensor data from an indoor space to be mapped, obtaining sonar data from the indoor space to be mapped, performing pose estimation using the LiDAR sensor data to generate a LiDAR map, performing grid registration and updating using the sonar data and estimated poses to generate a sonar map, and fusing the LiDAR map and the sonar map to generate a final map of the indoor space. The step of performing pose estimation using the LiDAR sensor data may comprise performing local scan matching to transform the LiDAR sensor data to a map frame comprising a plurality of submaps using scan poses. The step of performing pose estimation using the LiDAR sensor data may comprise extracting an initial local pose from a predetermined motion model to identify a plurality of key nodes. The step of performing pose estimation using the LiDAR sensor data may comprise matching the plurality of key nodes to one of the plurality of submaps until the number of matched key nodes exceeds a predetermined threshold and then matching the plurality of key nodes to another of the plurality of submaps. The step of performing pose estimation using the LiDAR sensor data may comprise optimizing the plurality of submaps and corresponding matched key nodes to produce a final global pose. The step of fusing the LiDAR map and the sonar map may comprise performing trajectory fitting to generate a final fitted global pose. The step of performing grid registration and updating may comprise mapping the sonar data using the final fitted global pose. The step of fusing the LiDAR map and the sonar map may comprise performing a second scan at a pixel level of the LiDAR map and the sonar map following the final fitted global pose. The step of performing a second scan at a pixel level of the LiDAR map and the sonar map following the final fitted global pose may comprise casting a plurality of rays from a sensor origin to a boundary of the LiDAR map and the sonar map to record a first occupied grid positioned along each of the plurality of rays. The step of performing a second scan at a pixel level of the LiDAR map and the sonar map following the final fitted global pose may comprise determining distances between obstacles in the LiDAR map and the sonar map using the first occupied grid positioned along each of the plurality of rays. The step of fusing the LiDAR map and the sonar map may comprise fusing the LiDAR map and the sonar map based on differences in the distances between obstacles in the LiDAR map and the sonar map.
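By way of illustration only, the following is a minimal sketch of the ray casting and distance-difference decision described above; it is not the claimed implementation. The grid encoding (0 = free, 1 = occupied, -1 = unknown), the ray count, and the `glass_gap` threshold are hypothetical names and values introduced for the example:

```python
import numpy as np

def first_occupied(grid, origin, angle, max_range):
    """Cast one ray from `origin` and return the first occupied cell, or None."""
    r, c = origin
    dr, dc = np.sin(angle), np.cos(angle)
    for step in range(1, max_range):
        rr, cc = int(round(r + dr * step)), int(round(c + dc * step))
        if not (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]):
            return None                      # ray left the map boundary
        if grid[rr, cc] == 1:
            return (rr, cc)
    return None

def fuse_pose(lidar_map, sonar_map, fused, origin,
              n_rays=360, max_range=200, glass_gap=5):
    """Fuse one revisit-scan pose at the pixel level (illustrative only)."""
    for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        hit_l = first_occupied(lidar_map, origin, angle, max_range)
        hit_s = first_occupied(sonar_map, origin, angle, max_range)
        if hit_l and hit_s:
            d_l = np.hypot(hit_l[0] - origin[0], hit_l[1] - origin[1])
            d_s = np.hypot(hit_s[0] - origin[0], hit_s[1] - origin[1])
            # A sonar hit significantly closer than the LiDAR hit suggests a
            # reflective surface (e.g., glass) the laser passed through, so
            # the sonar cell is kept; otherwise the LiDAR cell is trusted.
            chosen = hit_s if d_l - d_s > glass_gap else hit_l
        else:
            chosen = hit_l or hit_s          # whichever sensor saw anything
        if chosen:
            fused[chosen] = 1
    return fused
```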
In another embodiment, the present invention may be a device capable of navigating within an indoor location, including a LiDAR sensor capable of outputting LiDAR sensor data, a sonar sensor capable of outputting sonar data, and a microcontroller coupled to the sonar sensor to receive the sonar data and to the LiDAR sensor to receive the LiDAR sensor data, wherein the microcontroller is programmed to construct a final map of the indoor location by performing pose estimation using the LiDAR sensor data to generate a LiDAR map, performing grid registration and updating using the sonar data and estimated poses to generate a sonar map, and fusing the LiDAR map and the sonar map to generate the final map of the indoor location. The microcontroller may be programmed to perform pose estimation using the LiDAR sensor data by performing local scan matching to transform the LiDAR sensor data to a map frame comprising a plurality of submaps using scan poses, extracting an initial local pose from a predetermined motion model to identify a plurality of key nodes, matching the plurality of key nodes to one of the plurality of submaps until the number of matched key nodes exceeds a predetermined threshold and then matching the plurality of key nodes to another of the plurality of submaps, and optimizing the plurality of submaps and corresponding matched key nodes to produce a final global pose. The microcontroller may be programmed to fuse the LiDAR map and the sonar map by performing trajectory fitting to generate a final fitted global pose. The microcontroller may be programmed to perform grid registration and updating by mapping the sonar data using the final fitted global pose. The microcontroller may be programmed to fuse the LiDAR map and the sonar map by performing a second scan at a pixel level of the LiDAR map and the sonar map following the final fitted global pose. The microcontroller may be programmed to perform the second scan by casting a plurality of rays from a sensor origin to a boundary of the LiDAR map and the sonar map to record a first occupied grid positioned along each of the plurality of rays. The microcontroller may be programmed to determine distances between obstacles in the LiDAR map and the sonar map using the first occupied grid positioned along each of the plurality of rays. The microcontroller may be programmed to fuse the LiDAR map and the sonar map based on differences in the distances between obstacles in the LiDAR map and the sonar map.
The present invention will be more fully understood and appreciated by reading the following Detailed Description in conjunction with the accompanying drawings.
Referring to the figures, wherein like numerals refer to like parts throughout, there is seen a system according to the present invention.
For pose estimation (PE) 16, the LiDAR observation is utilized for localization in our system, LiDAR being the more precise range finder. Precise localization is achieved by maximizing the probability of each individual grid on the map, given the LiDAR observation and other external signals. Referring to the figures, PE 16 proceeds in two stages.
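One conventional way to express this objective, offered purely as an illustration in assumed notation rather than the patent's own, is a scan-matching maximization over candidate scan poses:

```latex
\xi^{*} \;=\; \arg\max_{\xi} \sum_{k=1}^{K} M\!\left(T_{\xi}\, h_{k}\right)
```

where \(h_k\) is the k-th LiDAR scan point, \(T_{\xi}\) is the rigid transform induced by candidate pose \(\xi\), and \(M(\cdot)\) is the occupancy probability of the grid cell containing the transformed point.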
In Stage II 26, key node scans are first matched against a submap 30 sequentially. When the number of key nodes within one submap 30 reaches its limit, the matching target moves to the next candidate submap. A round of optimization is then launched. By following a Sparse Pose Adjustment method, a nonlinear optimization problem can be solved by considering constraints between key node poses and submap poses. With global loop closure involved, a final global pose (FGP) is generated for the stages that follow, and a LiDAR map 32 is constructed.
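For illustration only, the following minimal sketch shows one way such a constraint-based optimization over key node poses could be set up, assuming 2D poses (x, y, theta), unweighted residuals, and a generic least-squares solver; the constraint format and all function names are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def residuals(flat, constraints):
    """Residuals for (i, j, dx, dy, dtheta) relative-pose constraints,
    where (dx, dy, dtheta) is the pose of node j observed in node i's frame."""
    poses = flat.reshape(-1, 3)
    res = []
    for i, j, dx, dy, dth in constraints:
        xi, yi, thi = poses[i]
        xj, yj, thj = poses[j]
        c, s = np.cos(thi), np.sin(thi)
        # Express node j in node i's frame and compare with the constraint.
        rel_x = c * (xj - xi) + s * (yj - yi)
        rel_y = -s * (xj - xi) + c * (yj - yi)
        res += [rel_x - dx, rel_y - dy, wrap(thj - thi - dth)]
    return np.asarray(res)

def optimize_poses(initial_poses, constraints):
    """Jointly refine all poses; returns the final global poses (FGP)."""
    sol = least_squares(residuals, np.asarray(initial_poses).ravel(),
                        args=(constraints,))
    return sol.x.reshape(-1, 3)
```

A full Sparse Pose Adjustment implementation would exploit the sparsity of the constraint graph directly; the generic dense solver here is used only to keep the sketch short.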
For Grid Registering and Updating (GRU) 22, LiDAR map 32 is constructed simultaneously with the PE introduced in the previous step. All valid LiDAR scans are registered in LiDAR map 32 based on the final global pose (FGP). Mapping on the sonar side is thereby converted to mapping with known poses, that is, obtaining the maximum likelihood probability of each grid on the sonar map 36, given the known poses and sonar observations. Simple sonar mapping algorithms are sufficient to meet the system requirements.
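As a hedged illustration of one such simple algorithm, the sketch below applies a standard log-odds occupancy grid update per sonar reading using a known pose; the grid resolution, the log-odds increments, and the reduction of the sonar cone to a single ray are simplifying assumptions made for the example:

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # illustrative log-odds increments

def update_sonar_grid(logodds, pose, bearing, rng, resolution=0.05):
    """Mark cells before the sonar return as free and the return cell occupied."""
    x, y, theta = pose
    n_steps = int(rng / resolution)
    for k in range(1, n_steps + 1):
        d = k * resolution
        cx = int((x + d * np.cos(theta + bearing)) / resolution)
        cy = int((y + d * np.sin(theta + bearing)) / resolution)
        if not (0 <= cx < logodds.shape[0] and 0 <= cy < logodds.shape[1]):
            return                        # reading leaves the map boundary
        logodds[cx, cy] += L_OCC if k == n_steps else L_FREE

def probability(logodds):
    """Convert the accumulated log-odds back to per-cell occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))
```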
Automatic Decision-based Fusion (ADF) 20 comprises Trajectory Fitting (TF) 40 and Revisit Scan Fusion (Stage III) 42. As in Stage I of PE 16, only those scans surviving scan matching and the motion filter are cached as key nodes and fed to global optimization. However, Revisit Scan Fusion 42 is highly dependent on the quality of the final global pose (FGP), so trajectory fitting 40 is conducted on the trajectory to generate a final fitted pose (FGfit) of higher quality. Trajectory fitting 40 smooths the trajectory used by ADF 20 and can be fed back to GRU 22 to improve sonar map density by interpolating intermediate states between poses. Stage III fuses LiDAR map 32 and the sonar map 36, which are constructed separately in the previous stages. The fusion relies on a second scan performed at the pixel level of the map images by following FGfit.
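The following minimal sketch illustrates one plausible form of trajectory fitting, assuming the trajectory is a sequence of (x, y, theta) poses smoothed and upsampled with a spline; the smoothing factor, the upsampling rate, and the function name are assumptions, not the patent's method:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_trajectory(poses, upsample=4, s=0.1):
    """Smooth an (N, 3) pose sequence and resample it more densely (FGfit)."""
    poses = np.asarray(poses)
    theta = np.unwrap(poses[:, 2])                 # avoid +/-pi heading jumps
    tck, _ = splprep([poses[:, 0], poses[:, 1], theta], s=s)
    u_fine = np.linspace(0.0, 1.0, len(poses) * upsample)
    x, y, th = splev(u_fine, tck)
    # Re-wrap headings to (-pi, pi] after spline evaluation.
    return np.column_stack([x, y, np.arctan2(np.sin(th), np.cos(th))])
```

The denser resampling is what would provide interpolated intermediate poses for feedback to GRU 22 to densify the sonar map, as noted above.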
The present application claims priority to U.S. Provisional Application No. 63/076,508, filed on Sep. 10, 2020, which is hereby incorporated by reference in its entirety.