This application is the National Stage Application of PCT/CN2021/136356, filed on Dec. 8, 2021, which claims priority to Chinese Patent Application No. 202110735175.7, filed on Jun. 30, 2021, which is incorporated by reference for all purposes as if fully set forth herein.
The present invention pertains to the field of autonomous map building of mobile robots, and particularly relates to a vision-and-laser-fused 2.5D map building method.
With the development of technology, autonomous mobile robots are being applied in more and more scenarios. They are in great demand in industries such as industrial handling and site inspection, mainly because they save labor costs and improve safety. Perception of and adaptation to the environment are prerequisites for the autonomous intelligence of a robot; the simultaneous localization and mapping (SLAM) technology is considered a core link in realizing autonomous navigation, and a series of SLAM technologies with a laser radar and a camera as cores have been widely researched and applied. However, the frequent handling of cargo at industrial sites and the unknown environments of inspection sites present significant challenges to the SLAM technology. In addition, as robots surge into the service industry, more and more robots are required to work in home environments, which are highly dynamic; walking persons and randomly moved objects require the SLAM technology to be highly stable so that the robots can work reliably.
In the laser SLAM solutions commonly used in industry, scanning matching depends on accurate initial values in structured scenarios, while a wheel-type odometer usually has low precision, accumulates error, and is moreover difficult to initialize; visual SLAM solutions usually lack a navigation function and cannot be put into practical use. Sensor fusion is an effective way to overcome single-sensor failure, and no stable open-source solution that fuses 2D laser and visual information exists at present.
A 2.5D map is a scaled three-dimensional abstract description of one or more aspects of reality, or a part thereof, built on a three-dimensional electronic map database.
An object of the present invention is to provide a vision-and-laser-fused 2.5D map building method, which solves the problem of the insufficient expression dimensions of existing maps.
In order to achieve the above object, the following technical solution is adopted in the present invention.
A vision-and-laser-fused 2.5D map building method includes:

S1: calculating inter-image-frame pose transformation according to an RGB-D image sequence to establish a visual front-end odometer;

S2: taking the visual front-end initial estimate as an initial value of scanning matching, and performing laser front-end coarse-grained and fine-grained searches;

S3: performing loop closure detection, and performing back-end global optimization on a 2.5D map according to a detected closed loop; and

S4: performing incremental update on visual feature dimensions of the 2.5D map, and performing occupation probability update on grid dimensions.
Preferably, in S1, the inter-frame transformation includes: extracting an ORB feature of each image frame in the time sequence and establishing a corresponding feature matching relationship; and taking a previous frame as a reference frame, performing perspective-n-point (PnP) calculation on a current frame and the reference frame, building a least square problem according to the minimized reprojection error, and performing iterative optimization solving to obtain the inter-frame pose transformation.
Preferably, the least square problem is:
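As a sketch, assuming the usual minimized-reprojection-error formulation over n matched features, where u_i is the observed pixel, P_i the matched 3D point, s_i its depth, K the camera intrinsic matrix, and exp(ξ^) the pose expressed through the Lie-algebra exponential (this standard form is an assumption, not reproduced from the original):

$$\xi^{*}=\arg\min_{\xi}\ \frac{1}{2}\sum_{i=1}^{n}\left\|u_{i}-\frac{1}{s_{i}}K\exp\left(\xi^{\wedge}\right)P_{i}\right\|_{2}^{2}$$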
Preferably, in S2, the scanning matching includes: performing coarse-grained and fine-grained searches over a search space in a scan-to-map matching mode.
Preferably, the optimal pose ξ* is:
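As a sketch, assuming the scan-to-map formulation used by correlative matchers, where h_k is the k-th point of the laser scan, T_ξ the rigid transform of pose ξ, M_grid the laser grid map, and W the search space defined in the detailed description (again an assumed standard form):

$$\xi^{*}=\underset{\xi\in W}{\arg\max}\ \sum_{k=1}^{K}M_{\mathrm{grid}}\left(T_{\xi}h_{k}\right)$$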
Preferably, in S3, the loop closure detection includes:
Preferably, in S3, an optimization formula of the back-end global optimization is:
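As a sketch, assuming the standard sparse pose adjustment objective of scan-matching SLAM back ends, where ξ_i and ξ_j are node poses, ξ_ij and Σ_ij the relative pose and covariance of a constraint (including detected closed loops), E the relative-pose residual, and ρ a robust loss such as the Huber loss:

$$\underset{\{\xi_{i}\}}{\arg\min}\ \frac{1}{2}\sum_{ij}\rho\left(E^{2}\left(\xi_{i},\xi_{j};\Sigma_{ij},\xi_{ij}\right)\right)$$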
Preferably, in S4, the 2.5D map is M = {m(x,y)}; the 2.5D map includes a laser grid map M_grid = {m_g(x,y)} and a visual feature map M_feature = {m_f(x,y)}, and the 2.5D map M = {m(x,y)} is:

m(x,y) = {m_g(x,y), m_f(x,y)},

m_f(x,y) = {f(x,y,z_1), f(x,y,z_2), . . . , f(x,y,z_n)},
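As an illustrative sketch of this structure in Python (the names Cell, Feature, and map_2_5d are hypothetical, not from the original):

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Feature:
    """One visual feature f(x, y, z): a 3D position plus its ORB descriptor."""
    x: float
    y: float
    z: float
    descriptor: bytes  # 32-byte ORB descriptor

@dataclass
class Cell:
    """One 2.5D map element m(x, y) = {m_g(x, y), m_f(x, y)}."""
    occupancy: Optional[float] = None  # grid dimension m_g(x, y); None until first observed
    features: List[Feature] = field(default_factory=list)  # feature dimension m_f(x, y)

# The 2.5D map M = {m(x, y)}, indexed by discrete grid coordinates.
map_2_5d = {}  # (ix, iy) -> Cell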
Preferably, an update form of the visual feature dimension is:
M_feature^new = {M_feature^old; f_new, ξ_f}.
Preferably, the grid dimension includes an unobserved grid and an observed grid; a laser hit probability p_hit or miss probability p_miss is directly assigned to the unobserved grid, and an update form of the observed grid is:
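As a sketch, assuming the multiplicative odds update common in occupancy grid mapping, with p_miss used in place of p_hit for cells that a beam passes through:

$$\mathrm{odds}(p)=\frac{p}{1-p},\qquad m_{g}^{\mathrm{new}}(x,y)=\mathrm{odds}^{-1}\left(\mathrm{odds}\left(m_{g}^{\mathrm{old}}(x,y)\right)\cdot\mathrm{odds}\left(p_{hit}\right)\right)$$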
Due to the application of the above technical solution, the present invention has the following advantages over the prior art.
1. In the present invention, the 2.5D map is built by fusing the laser grid and the visual features; compared with a pure laser map or a pure visual map, the richness of the dimensions and the completeness of the information expression are improved.
2. The 2.5D map building method according to the present invention is not affected by single-sensor failure, and can still work stably in scenarios of sensor degradation.
3. The 2.5D map building method according to the present invention can avoid the laser initialization process of a positioning and repositioning system, and correct positioning can be quickly and accurately recovered in an erroneous-positioning scenario.
The technical solution of the present invention will be described clearly and completely below with reference to the accompanying drawings; it is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
As shown in the accompanying drawings, the vision-and-laser-fused 2.5D map building method according to the present embodiment includes the following steps:
S1: calculating inter-image-frame pose transformation according to an RGB-D image sequence to establish a visual front-end odometer, the calculating specifically including:
(1) extracting an ORB feature of each image frame in the time sequence, and then establishing a corresponding feature matching relationship, the ORB feature being rotation-invariant, and the feature extraction method being that of the open-source framework ORB_SLAM; and
(2) taking a previous frame as a reference frame, performing perspective-n-point (PnP) calculation on a current frame and the reference frame, building a least square problem according to the minimized reprojection error, and performing iterative optimization solving to obtain the inter-frame pose transformation.
In the present embodiment, the PnP problem is solved using a bundle adjustment (BA) method.
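A minimal sketch of this step in Python, using OpenCV's iterative PnP solver as a stand-in for the BA optimization (the original does not specify a library; the function name and data layout are illustrative):

import cv2
import numpy as np

def inter_frame_pose(pts3d_ref, pts2d_cur, K):
    """Estimate the pose of the current frame relative to the reference frame.

    pts3d_ref: (N, 3) matched ORB features back-projected from the reference
               RGB-D frame into 3D; pts2d_cur: (N, 2) pixel coordinates of the
               same features in the current frame; K: (3, 3) camera intrinsics.
    """
    ok, rvec, tvec = cv2.solvePnP(
        pts3d_ref.astype(np.float64), pts2d_cur.astype(np.float64),
        K, distCoeffs=None,
        flags=cv2.SOLVEPNP_ITERATIVE,  # iteratively minimizes the reprojection error
    )
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T  # 4x4 inter-frame pose transformation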
S2: taking the visual front-end initial estimate as the initial value of scanning matching, and performing laser front-end coarse-grained and fine-grained searches, a scan-to-map matching mode being adopted in the laser front-end scanning matching in the present embodiment, as shown in the accompanying drawings.
Specifically, as shown in the accompanying drawings, the search space for the scanning matching is defined as follows:
W = {ξ_0 + (r·j_x, r·j_y, δ_θ·j_θ) : (j_x, j_y, j_θ) ∈ W̄},

where ξ_0 is the initial pose estimate, r is the linear search resolution, δ_θ is the angular search resolution, and W̄ = {−w_x, . . . , w_x} × {−w_y, . . . , w_y} × {−w_θ, . . . , w_θ} is the discrete search window.
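A minimal Python sketch of searching this space (a brute-force version of the coarse-to-fine strategy; the window sizes and the score function are illustrative assumptions):

import numpy as np

def scan_match(score, xi0, r, delta_theta, wx, wy, wt):
    """Exhaustively evaluate W = {xi0 + (r*jx, r*jy, delta_theta*jt)}.

    score(xi) should return the sum of grid occupancy probabilities at the
    scan points projected by pose xi (higher is better).
    """
    best, best_xi = -np.inf, xi0
    for jx in range(-wx, wx + 1):
        for jy in range(-wy, wy + 1):
            for jt in range(-wt, wt + 1):
                xi = xi0 + np.array([r * jx, r * jy, delta_theta * jt])
                s = score(xi)
                if s > best:
                    best, best_xi = s, xi
    return best_xi

# Coarse-to-fine: run once at a coarse resolution, then again at a fine
# resolution with the window centered on the coarse optimum.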
S3: performing loop closure detection, and performing back-end global optimization on the 2.5D map according to a detected closed loop, so as to solve the problem of the accumulated error caused by front-end scanning matching and to prevent a possible interleaving phenomenon in a map established only by the front end, specifically:
An optimization formula of the back-end global optimization is as given above.
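A minimal 2D pose-graph optimization sketch in Python, consistent with such an objective (the solver choice and the edge and pose layouts are assumptions, not from the original):

import numpy as np
from scipy.optimize import least_squares

def residuals(flat_poses, edges):
    """Stack the relative-pose residuals E(xi_i, xi_j; xi_ij) of all constraints.

    flat_poses: node poses (x, y, theta), flattened; edges: list of tuples
    (i, j, xi_ij, sqrt_info) from odometry and detected closed loops.
    """
    poses = flat_poses.reshape(-1, 3)
    res = []
    for i, j, xi_ij, sqrt_info in edges:
        dx, dy = poses[j, :2] - poses[i, :2]
        c, s = np.cos(poses[i, 2]), np.sin(poses[i, 2])
        rel = np.array([c * dx + s * dy,           # pose of node j expressed
                        -s * dx + c * dy,          # in node i's frame
                        poses[j, 2] - poses[i, 2]])
        err = rel - xi_ij
        err[2] = (err[2] + np.pi) % (2 * np.pi) - np.pi  # wrap the angle residual
        res.append(sqrt_info @ err)
    return np.concatenate(res)

def optimize_graph(poses, edges):
    sol = least_squares(residuals, poses.ravel(), args=(edges,),
                        loss="huber")  # robust loss guards against bad closed loops
    return sol.x.reshape(-1, 3)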
S4: performing incremental update on visual feature dimensions of the 2.5D map, and performing occupation probability update on grid dimensions, specifically:
In the present embodiment, the 2.5D map is updated as shown in the accompanying drawings.
For the update of the visual feature dimension, a feature pose obtained by PnP optimization is incrementally inserted into the map at the front end, the pose of the robot is optimized again at the back end, and an update form of the visual feature dimension is:
M_feature^new = {M_feature^old; f_new, ξ_f}.
The grid dimension includes an unobserved grid and an observed grid; for the update of the grid dimension, a laser hit probability p_hit or miss probability p_miss is directly assigned to the unobserved grid, and the observed grid is updated in the odds form given above.
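A minimal Python sketch of this update, reusing the hypothetical Cell structure sketched above; the odds-based update is the same assumption as before:

def odds(p):
    return p / (1.0 - p)

def inv_odds(o):
    return o / (1.0 + o)

def update_cell(cell, p, clamp=(0.12, 0.97)):
    """Update one grid cell with p = p_hit (beam endpoint) or p = p_miss
    (cell traversed by the beam)."""
    if cell.occupancy is None:
        cell.occupancy = p  # unobserved grid: assign p_hit or p_miss directly
    else:
        cell.occupancy = inv_odds(odds(cell.occupancy) * odds(p))  # observed grid
    cell.occupancy = min(max(cell.occupancy, clamp[0]), clamp[1])  # keep odds invertible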
The above-mentioned embodiments are merely illustrative of the technical concepts and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention are intended to be covered by the protection scope of the present invention.