The present disclosure is directed to systems and methods for solving Graph SLAM problems, and, in particular, to systems and methods for solving Graph SLAM problems for HD maps.
The aim of Graph SLAM is to find the set of poses {x1, x2, . . . , xN} that minimizes the cost functional
χ2(x)=Σeij∈ε(eij(x))TΩijeij(x), (1)
where x=(x1T, x2T, . . . , xNT). This is typically solved in an iterative manner whereby each iteration k entails approximating the cost functional, finding the optimal set of pose updates Δxk, and computing the next set of poses via
xk+1=xk⊕Δxk, (2)
where ⊕ denotes a pose composition operator. Let d denote the dimensionality of the pose updates (i.e., Δxk∈ℝdN). Substituting (2) into (1), we compute the gradient bk∈ℝdN and the Hessian matrix Hk∈ℝdN×dN of the χ2 error as
The updates Δxk are obtained by linearizing the edges as
and substituting into (3), yielding the quadratic approximation
χ2(xk+1)≈½(Δxk)THkΔxk+(bk)TΔxk+χ2(xk) (7)
This is minimized by
Δxk=−(Hk)−1bk (8)
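As a hedged illustration of a single Gauss-Newton style iteration implied by (2) and (8), the following sketch solves a tiny one-dimensional pose graph (d = 1, so the composition operator reduces to ordinary addition). The measurements, the information weights, and the gauge prior on x0 are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

# Toy 1-D pose graph (d = 1, so the composition operator is plain addition).
# Each edge (i, j, m_ij, omega_ij) encodes the constraint x_j - x_i ≈ m_ij
# with scalar information omega_ij. All values here are made up.
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.1, 1.0), (0, 2, 2.0, 0.5)]
N = 3
x = np.zeros(N)                       # initial pose estimates

for k in range(10):                   # Gauss-Newton iterations
    H = np.zeros((N, N))              # Hessian of the chi^2 error
    b = np.zeros(N)                   # gradient of the chi^2 error
    for i, j, m, omega in edges:
        e = (x[j] - x[i]) - m         # edge error e_ij(x)
        J = np.zeros(N)
        J[i], J[j] = -1.0, 1.0        # Jacobian of e_ij w.r.t. x
        H += omega * np.outer(J, J)
        b += omega * e * J
    # Gauge prior on x_0 (an extra unary edge with error x_0) so that H is
    # invertible; without it the cost is invariant to global translations.
    H[0, 0] += 1.0
    b[0] += x[0]
    dx = -np.linalg.solve(H, b)       # eq. (8): Δx = -(H)^{-1} b
    x = x + dx                        # eq. (2) with ⊕ reduced to +

chi2 = sum(om * ((x[j] - x[i]) - m) ** 2 for i, j, m, om in edges)
```

Because the toy errors are linear in x, the cost is exactly quadratic and a single iteration already lands on the minimizer; the loop merely confirms the fixed point.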
Most previously known solvers solve this linear system iteratively on a single node. However, such single-node solvers do not scale well with the size of the mapping region, necessitating scalable systems and algorithms for building HD maps of larger areas.
In accordance with one embodiment of the present disclosure, a method of solving a graph simultaneous localization and mapping (Graph SLAM) problem for HD maps using a computing system includes partitioning a graph into a plurality of subgraphs, each of the subgraphs having all of the vertices of the graph and a subset of the edges of the graph. Constrained and non-constrained vertices are defined for the subgraphs. An alternating direction method of multipliers (ADMM) formulation for Graph SLAM is defined using the partitioned graph. A distributed Graph SLAM algorithm is then defined in terms of the constrained and non-constrained vertices based on the ADMM formulation. The distributed Graph SLAM algorithm is then used to solve the Graph SLAM problem for HD maps.
For the purposes of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiments illustrated in the drawings and described in the following written specification. It is understood that no limitation to the scope of the disclosure is thereby intended. It is further understood that the present disclosure includes any alterations and modifications to the illustrated embodiments and includes further applications of the principles of the disclosure as would normally occur to a person of ordinary skill in the art to which this disclosure pertains.
There are a variety of frameworks available for large-scale computation and storage. Such frameworks can be broadly categorized as follows:
In one embodiment, the Apache Spark computing framework, with the data stored in HDFS, is used to implement the scalable Graph SLAM system. This offers several advantages. First, compared to existing single-node solutions, the distributed architecture allows the computation to be scaled out for large-scale mapping. Further, compared to other distributed solutions such as Hadoop, Spark provides distributed in-memory computation with lazy execution. This makes it fast and fault-tolerant, especially for iterative tasks, compared to Hadoop (MapReduce), which must write all intermediate data to disk. In alternative embodiments, any suitable computing framework may be used.
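The map-reduce pattern such a Spark implementation could follow can be sketched in plain Python (standing in for pyspark's rdd.map / rdd.reduce for readability): each worker builds a partial gradient and Hessian from its own edge partition, and the master sums them. The toy graph, the partitioning, and the helper name local_system are illustrative assumptions.

```python
import numpy as np

# Plain-Python sketch of the map-reduce pattern a Spark implementation could
# follow: each worker builds a partial gradient and Hessian from its own edge
# partition, and the master sums them. All names and data are illustrative.
N = 3
partitions = [                                   # one edge subset per worker
    [(0, 1, 1.0, 1.0)],
    [(1, 2, 1.1, 1.0), (0, 2, 2.0, 0.5)],
]

def local_system(edges, x):
    """Map step: build the partial (H_s, b_s) for one edge partition."""
    H, b = np.zeros((N, N)), np.zeros(N)
    for i, j, m, omega in edges:
        e = (x[j] - x[i]) - m                    # 1-D edge error
        J = np.zeros(N)
        J[i], J[j] = -1.0, 1.0
        H += omega * np.outer(J, J)
        b += omega * e * J
    return H, b

x = np.zeros(N)                                  # current linearization point
mapped = [local_system(p, x) for p in partitions]   # ~ rdd.map(...)
H, b = map(sum, zip(*mapped))                       # ~ rdd.reduce(add)
```

In an actual Spark job, the partitions would live in an RDD cached in memory across iterations, which is where Spark's advantage over disk-based MapReduce shows up.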
Utilizing such distributed architectures necessitates the adoption of advanced optimization algorithms that can handle problem (1) at scale. At its core, problem (1) entails solving a quadratic program (QP) iteratively. Recent research on solving such QPs in a distributed setting typically adopts one of the following approaches:
Another broader class of algorithms adopts a splitting technique, which decouples the objective function into smaller sub-problems and solves them iteratively. The resulting sub-problems are easier to tackle and can be readily solved in a distributed fashion using the approaches discussed above. Popular algorithms of this kind include proximal gradient methods, proximal (quasi-)Newton methods, and the alternating direction method of multipliers (ADMM). Of these approaches, ADMM has been widely adopted to solve several large-scale practical problems. There are several adaptations of the ADMM algorithm customized to exploit the structure of the problem. One such version, well-suited for parallel and distributed settings, is consensus ADMM.
Consensus ADMM in its basic form solves the following problem:
minimize Σs=1S fs(ws)
subject to ws=z ∀s=1 . . . S,
where z is the consensus variable.
The consensus ADMM steps are
wsl+1=argminws fs(ws)+(ρ/2)‖ws−zl+usl‖2
zl+1=(1/S)Σs=1S(wsl+1+usl) (10)
usl+1=usl+wsl+1−zl+1
Here, ρ>0 is the penalty parameter, us are the scaled dual variables, and l indexes the ADMM iterations.
Note that such a form is well-suited for distributed settings using the map-reduce paradigm in Spark. In this case, the w-step involves distributing (mapping) the computation over several nodes ∀s=1 . . . S, and the (consensus) z-step involves aggregating (reducing) the distributed computations at the master node. Further, for (1), the ws-update admits a closed-form solution and can be efficiently computed using standard linear solvers. Although (10) is guaranteed to converge, the convergence rate of the algorithm heavily depends on the conditioning of problem (1) and on the selection of the parameter ρ.
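A minimal numeric sketch of these consensus ADMM steps follows, with hypothetical local objectives fs(w) = ½‖w − as‖², chosen so the consensus solution is simply the mean of the as; none of this data is from the disclosure.

```python
import numpy as np

# Minimal consensus ADMM demo with made-up local objectives
# f_s(w) = 0.5*||w - a_s||^2; for these, the consensus solution z
# is the mean of the a_s vectors.
a = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
S, rho = len(a), 1.0
z = np.zeros(2)
u = [np.zeros(2) for _ in range(S)]

for _ in range(100):
    # w-step (map): closed form of argmin_w f_s(w) + (rho/2)*||w - z + u_s||^2
    w = [(a[s] + rho * (z - u[s])) / (1.0 + rho) for s in range(S)]
    # z-step (reduce): consensus averaging at the master node
    z = sum(w[s] + u[s] for s in range(S)) / S
    # dual (u) update
    u = [u[s] + w[s] - z for s in range(S)]
```

Here the w-step has a simple closed form for the toy objectives; in the Graph SLAM setting it would instead be the closed-form linear solve discussed above.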
In order to utilize consensus ADMM for Graph SLAM, the graph G=(ν, ε) is partitioned into a set of subgraphs {G1, G2, . . . , GS}, where each subgraph retains all of the vertices, i.e., ν1=ν2= . . . =νS=ν, and the edge sets satisfy ∪s=1SεS=ε and εS∩εt=Ø if s≠t. Define
Observe that the gradient and the Hessian matrix, respectively, are given by
Under the condition that Δx1k=Δx2k= . . . =ΔxSk, observe that (17) is equal to (7). Thus, our consensus ADMM formulation for Graph SLAM is
subject to Δx1k=Δx2k= . . . =ΔxSk, and the resulting over-relaxed ADMM updates are
ΔxSk,l+1=−(HSk+ρI)−1(bSk+ρuSk,l−ρzk,l)
zk,l+1=αΔx̄k,l+1+(1−α)zk,l+ūk,l
uSk,l+1=uSk,l+αΔxSk,l+1+(1−α)zk,l−zk,l+1
where the overbar denotes the average over the subgraphs s=1 . . . S, uSk,l are the scaled dual variables, and α∈(0,2) is the over-relaxation parameter.
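The over-relaxed consensus updates can be exercised on a toy pair of quadratic subgraph objectives fs(Δx) = ½ΔxᵀHsΔx + bsᵀΔx. The matrices, ρ, and α below are illustrative assumptions; the consensus variable z should approach the joint minimizer −(ΣHs)⁻¹(Σbs).

```python
import numpy as np

# Toy over-relaxed consensus ADMM on two quadratic subgraph objectives
# f_s(dx) = 0.5*dx^T H_s dx + b_s^T dx (all data made up for illustration).
Hs = [np.diag([2.0, 1.0]), np.diag([1.0, 2.0])]
bs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
S, rho, alpha = 2, 1.0, 1.6
z = np.zeros(2)
u = [np.zeros(2) for _ in range(S)]

for _ in range(200):
    # local step: dx_s = -(H_s + rho*I)^{-1}(b_s + rho*u_s - rho*z)
    dx = [np.linalg.solve(Hs[s] + rho * np.eye(2),
                          -(bs[s] + rho * u[s] - rho * z)) for s in range(S)]
    # over-relaxation, then consensus averaging and dual updates
    h = [alpha * dx[s] + (1.0 - alpha) * z for s in range(S)]
    z = sum(h[s] + u[s] for s in range(S)) / S
    u = [u[s] + h[s] - z for s in range(S)]

target = -np.linalg.solve(sum(Hs), sum(bs))   # joint minimizer (-1/3, -1/3)
```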
Henceforth, the superscript k will be omitted on the ADMM variables Δx, z, and u, and it should be understood that the ADMM iterations are performed as part of the kth Graph SLAM iteration.
While each subgraph GS contains all N vertices (i.e., νS=ν), it contains only a subset of the edges (i.e., εS⊆ε). With that in mind, we define three different classes of vertices (see the figures):
A simplified example of graph partitioning is depicted in the figures.
To simplify notation, let us define v̆Sl:=D̆Svl∈ℝdN and z̆Sl:=D̆Szl∈ℝdN as the restrictions of vl and zl, respectively, to the entries that are constrained in subgraph GS. Let us define a primal residual for subgraph GS as
ϕSl:=ΔxSl−zl∈ℝdN (22)
and, as before, let
ϕ̆Sl:=Δx̆Sl−z̆Sl=D̆SϕSl∈ℝdN, (23)
denote the restriction to entries that correspond to constrained vertices.
With these definitions in place, we can now break down the updates in terms of constrained and non-constrained vertices and present our distributed algorithm for solving Graph SLAM via ADMM:
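One ingredient of this split, classifying vertices by how many subgraphs their incident edges touch, can be sketched as follows. The disclosure's full three-way classification is not reproduced here; this sketch only separates shared (constrained) vertices from single-subgraph (non-constrained) ones, and the toy partition is an illustrative assumption.

```python
# Hedged sketch of the constrained / non-constrained split: a vertex touched
# by edges of more than one subgraph needs consensus in the z-step, while a
# vertex native to a single subgraph can be updated purely locally.
# The toy graph and names are illustrative, not from the source.
subgraph_edges = [[(0, 1), (1, 2)], [(2, 3), (3, 4)]]

touching = {}                              # vertex -> set of subgraph ids
for s, edges in enumerate(subgraph_edges):
    for i, j in edges:
        for v in (i, j):
            touching.setdefault(v, set()).add(s)

constrained = sorted(v for v, ss in touching.items() if len(ss) > 1)
non_constrained = sorted(v for v, ss in touching.items() if len(ss) == 1)
# Only the constrained entries of z must be aggregated at the master;
# non-constrained entries are simply copied from their owning subgraph.
```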
The algorithm presented above will converge; however, as noted earlier, the convergence rate depends heavily on the conditioning of the problem. To improve it, a symmetric, positive-definite matrix Ek is obtained which can be used to precondition the objective function of (18) as
The matrix Ek should be selected such that it can be computed and applied in a distributed manner using the subgraphs. Preferably, EkHkEk=ΣS=1SEkHSkEk≈I. In order to achieve this, the Hessian HSk is approximated for each subgraph using only the contributions to the Hessian in rows and columns that correspond to vertices which are native to the subgraph (see the figures),
where δ is the Kronecker delta. Note that eij(x)=e[ij](x), so the χ2 error and the optimization problem remain unchanged. The native Hessian HS,nativek for a subgraph is defined as
Note that the supports of H1,nativek, . . . , HS,nativek are disjoint. This allows for fast, distributed computations, and so the preconditioning matrix is defined as
Ek:=(ΣS=1SHS,nativek)−1/2=ΣS=1S(HS,nativek)−1/2, (35)
where the second equality holds because the native Hessians have disjoint supports.
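A small numerical check of (35) under the disjoint-support property (the 4×4 native Hessians below are illustrative assumptions, not from the disclosure): because the supports do not overlap, the inverse square root of the sum equals the sum of the per-subgraph inverse square roots, and the resulting Ek whitens the total Hessian.

```python
import numpy as np

def inv_sqrt_psd(M, eps=1e-12):
    """Inverse square root of a PSD matrix on its support (i.e., the
    pseudo-inverse of the matrix square root), via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    inv = np.where(vals > eps, 1.0 / np.sqrt(np.clip(vals, eps, None)), 0.0)
    return (vecs * inv) @ vecs.T

# Illustrative native Hessians with disjoint supports: subgraph 1 touches
# rows/columns 0-1 only, subgraph 2 touches rows/columns 2-3 only.
H1 = np.zeros((4, 4)); H1[:2, :2] = [[4.0, 1.0], [1.0, 3.0]]
H2 = np.zeros((4, 4)); H2[2:, 2:] = [[2.0, 0.0], [0.0, 5.0]]

# Disjoint supports make the two sides of eq. (35) agree.
E_sum = inv_sqrt_psd(H1 + H2)
E_dist = inv_sqrt_psd(H1) + inv_sqrt_psd(H2)
```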
By redefining the edges as in (33), more constraints can be accounted for, which in turn enables the Hessian to be better approximated, yielding a better preconditioning matrix.
It is worth mentioning that a subgraph's Hessian HSk and its native Hessian HS,nativek can each contain contributions that the other lacks. This is illustrated in the figures.
The processing system is operably connected to the memory system 104. The memory system 104 may include any suitable type of non-transitory computer readable storage medium. The memory system 104 may be distributed across multiple devices and/or locations. Programmed instructions are stored in the memory system. The programmed instructions include the instructions for implementing the operating system and various functionality of the system. The programmed instructions also include instructions for implementing the scalable Graph SLAM and ADMM algorithms described herein.
The components of the system may be communicatively connected by one or more networks 106. Any suitable type of network(s) may be used including wired and wireless types of networks. The computing devices include the appropriate network interface devices that enable transmitting and receiving of data via the network(s).
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, the same should be considered as illustrative and not restrictive in character. It is understood that only the preferred embodiments have been presented and that all changes, modifications and further applications that come within the spirit of the disclosure are desired to be protected.
This application is a 35 U.S.C. § 371 National Stage Application of PCT/EP2019/054467, filed on Feb. 22, 2019, which claims priority to U.S. Provisional Application Ser. No. 62/634,327, filed on Feb. 23, 2018, the disclosures of which are incorporated herein by reference in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2019/054467 | 2/22/2019 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2019/162452 | 8/29/2019 | WO | A |
Number | Date | Country | |
---|---|---|---|
20210003398 A1 | Jan 2021 | US |
Number | Date | Country | |
---|---|---|---|
62634327 | Feb 2018 | US |