The present disclosure is directed to systems and methods for solving Graph SLAM problems, and, in particular, to systems and methods for solving Graph SLAM problems for HD maps.
The aim of Graph SLAM is to find the set of poses $\{x_1, x_2, \ldots, x_N\}$ that minimizes the cost functional

$$\chi^2(x) = \sum_{e_{ij} \in \varepsilon} e_{ij}(x)^T\, \Omega_{ij}\, e_{ij}(x), \qquad (1)$$
where $x = (x_1^T, x_2^T, \ldots, x_N^T)$. This is typically solved in an iterative manner whereby each iteration $k$ entails approximating the cost functional, finding the optimal set of pose updates $\Delta x^k$, and computing the next set of poses via

$$x^{k+1} = x^k \oplus \Delta x^k, \qquad (2)$$
where $\oplus$ denotes a pose composition operator. Let $d$ denote the dimensionality of the pose updates (i.e., $\Delta x^k \in \mathbb{R}^{dN}$). Substituting (2) into (1), we compute the gradient $b^k \in \mathbb{R}^{dN}$ and the Hessian matrix $H^k \in \mathbb{R}^{dN \times dN}$ of the $\chi^2$ error as
The updates $\Delta x^k$ are obtained by linearizing the edges and substituting into (3), yielding the quadratic approximation
$$\chi^2(x^{k+1}) \approx \tfrac{1}{2}\,(\Delta x^k)^T H^k \Delta x^k + (b^k)^T \Delta x^k + \chi^2(x^k). \qquad (7)$$
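For reference, these quantities follow the standard Gauss-Newton treatment of (1). Writing $J_{ij}^k$ for the Jacobian of the edge error with respect to the pose update, evaluated at $x^k$ (a notation introduced here only for illustration), one has

$$b^k = \nabla_{\Delta x}\,\chi^2\!\bigl(x^k \oplus \Delta x\bigr)\Big|_{\Delta x = 0} = 2\sum_{e_{ij}\in\varepsilon} \bigl(J_{ij}^k\bigr)^T \Omega_{ij}\, e_{ij}(x^k), \qquad H^k \approx 2\sum_{e_{ij}\in\varepsilon} \bigl(J_{ij}^k\bigr)^T \Omega_{ij}\, J_{ij}^k,$$

where the Hessian uses the Gauss-Newton approximation obtained from the linearization $e_{ij}(x^k \oplus \Delta x^k) \approx e_{ij}(x^k) + J_{ij}^k\, \Delta x^k$; substituting that linearization into (1) reproduces the quadratic approximation (7).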
This is minimized by

$$\Delta x^k = -\bigl(H^k\bigr)^{-1} b^k. \qquad (8)$$
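On a single node, one Graph SLAM iteration therefore reduces to assembling $H^k$ and $b^k$ and solving a sparse linear system. The following minimal sketch (the block-dictionary input format is an assumption made for illustration) shows such a solve with standard SciPy tools:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def solve_pose_updates(H_blocks, b, d, N):
    """Assemble the sparse Hessian from its d x d blocks and solve
    H * dx = -b, i.e. equation (8). Assumes H^k is non-singular
    (e.g., at least one pose is anchored by a prior edge).

    H_blocks: dict mapping vertex-index pairs (i, j) to d x d blocks of H^k
    b:        gradient vector b^k of length d * N
    """
    rows, cols, vals = [], [], []
    for (i, j), block in H_blocks.items():
        r, c = np.meshgrid(np.arange(d) + d * i, np.arange(d) + d * j, indexing="ij")
        rows.extend(r.ravel())
        cols.extend(c.ravel())
        vals.extend(np.asarray(block).ravel())
    H = coo_matrix((vals, (rows, cols)), shape=(d * N, d * N)).tocsr()
    return spsolve(H, -np.asarray(b))  # dx^k = -(H^k)^{-1} b^k
```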
Most previously known solvers solve this linear system iteratively on a single node. However, such single-node solvers do not scale well with the size of the mapping region, which creates a need for scalable systems and algorithms to build HD maps for larger areas.
In accordance with one embodiment of the present disclosure, a method of solving a graph simultaneous localization and mapping (Graph SLAM) problem for HD maps using a computing system includes partitioning a graph into a plurality of subgraphs, each of the subgraphs having all of the vertices of the graph and a subset of the edges of the graph. Constrained and non-constrained vertices are defined for the subgraphs. An alternating direction method of multipliers (ADMM) formulation for Graph SLAM is defined using the partitioned graph. A distributed Graph SLAM algorithm is then defined in terms of the constrained and non-constrained vertices based on the ADMM formulation. The distributed Graph SLAM algorithm is then used to solve the Graph SLAM problem for HD maps.
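Purely as an illustrative sketch (the disclosure does not prescribe a particular partitioning policy; the round-robin assignment below is an assumption), such a partition can be produced by giving every subgraph the full vertex set and distributing the edges into disjoint subsets:

```python
def partition_graph(vertices, edges, S):
    """Split a graph into S subgraphs: every subgraph keeps all of the
    vertices, while the edges are divided into S disjoint subsets.
    The round-robin assignment below is only an example policy."""
    subgraph_edges = [[] for _ in range(S)]
    for idx, edge in enumerate(edges):
        subgraph_edges[idx % S].append(edge)
    return [(list(vertices), edge_subset) for edge_subset in subgraph_edges]
```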
For the purposes of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiments illustrated in the drawings and described in the following written specification. It is understood that no limitation to the scope of the disclosure is thereby intended. It is further understood that the present disclosure includes any alterations and modifications to the illustrated embodiments and includes further applications of the principles of the disclosure as would normally occur to a person of ordinary skill in the art to which this disclosure pertains.
There are a variety of frameworks available for large-scale computation and storage. Such frameworks can be broadly categorized as follows:
In one embodiment, the Apache Spark computing framework, with the data stored in HDFS, is used to implement the scalable Graph SLAM system. This offers several advantages. First, compared to existing single-node solutions, the distributed architecture allows the computation to be scaled out for large-scale mapping. Further, compared to other distributed solutions such as Hadoop, Spark provides distributed in-memory computation with lazy execution. This makes it very fast and fault-tolerant, especially for iterative tasks, compared to Hadoop (MapReduce), which needs to write all intermediate data to disk. In alternative embodiments, any suitable computing framework may be used.
Utilizing such distributed architectures necessitates the adoption of advanced optimization algorithms that can handle the problem (1) at scale. At its core, the problem in (1) entails solving a quadratic program (QP) iteratively. Recent research towards solving such QPs in a distributed setting typically adopts one of the following approaches:
Another, broader class of algorithms adopts a splitting technique, which decouples the objective function into smaller sub-problems and solves them iteratively. The resulting sub-problems are easier to tackle and can be readily solved in a distributed fashion using the approaches discussed above. Popular algorithms of this type include proximal gradient methods, proximal (quasi-)Newton methods, and the alternating direction method of multipliers (ADMM). Of these approaches, ADMM has been widely adopted to solve several large-scale practical problems. There are several adaptations of the ADMM algorithm customized to utilize the structure of the problem. One such version, well-suited for parallel and distributed settings, is consensus ADMM.
Consensus ADMM in its basic form solves a problem of the form

$$\min_{w_1, \ldots, w_S,\, z} \;\sum_{s=1}^{S} f_s(w_s) \quad \text{subject to} \quad w_s = z \;\; \forall\, s = 1, \ldots, S.$$
The consensus ADMM steps are
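For reference, in the commonly used scaled form (written generically here; particular variants differ in details such as over-relaxation), each ADMM iteration $l$ performs

$$
\begin{aligned}
w_s^{l+1} &= \operatorname*{arg\,min}_{w_s}\; f_s(w_s) + \tfrac{\rho}{2}\,\bigl\| w_s - z^{l} + u_s^{l} \bigr\|_2^2, \qquad s = 1, \ldots, S,\\
z^{l+1} &= \frac{1}{S}\sum_{s=1}^{S}\bigl( w_s^{l+1} + u_s^{l} \bigr),\\
u_s^{l+1} &= u_s^{l} + w_s^{l+1} - z^{l+1},
\end{aligned}
$$

where $\rho > 0$ is the penalty parameter and the $u_s$ are scaled dual variables.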
Here, Iw
Note that such a form is well-suited for distributed settings using the map-reduce paradigm in Spark. In this case, the $w$-step involves distributing (mapping) the computation over several nodes for $s = 1, \ldots, S$, and the (consensus) $z$-step involves aggregating (reducing) the distributed computations at the master node. Further, for (1), the $w_s$-update admits a closed-form solution and can be efficiently computed using standard linear solvers. Although (10) is guaranteed to converge, the convergence rate of the algorithm depends heavily on the conditioning of the problem (1) and on the selection of the parameter $\rho$.
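As an illustration of this mapping onto Spark, the following self-contained sketch runs generic consensus ADMM on small random quadratic subproblems. The record layout, the helper `local_update`, and the deferred dual update are assumptions made for this example, not the disclosed implementation:

```python
import numpy as np
from pyspark import SparkContext

# Illustrative only: each "subgraph" record carries a small quadratic
# subproblem f_s(w) = 0.5 * w^T H_s w + b_s^T w plus its ADMM state (w_s, u_s).
def make_record(H_s, b_s, dim):
    return {"H": H_s, "b": b_s, "w": np.zeros(dim), "u": np.zeros(dim)}

def local_update(rec, z, rho):
    # Deferred dual step: u_s <- u_s + w_s - z, using the z just broadcast.
    u = rec["u"] + rec["w"] - z
    # Closed-form w-step for a quadratic f_s:
    #   w_s <- argmin f_s(w) + (rho / 2) * ||w - z + u_s||^2
    w = np.linalg.solve(rec["H"] + rho * np.eye(len(z)), rho * (z - u) - rec["b"])
    return {"H": rec["H"], "b": rec["b"], "w": w, "u": u}

if __name__ == "__main__":
    sc = SparkContext(appName="consensus-admm-sketch")
    rng = np.random.default_rng(0)
    dim, S, rho = 4, 3, 1.0
    data = []
    for _ in range(S):
        A = rng.standard_normal((dim, dim))
        data.append(make_record(A @ A.T + np.eye(dim), rng.standard_normal(dim), dim))

    subgraphs = sc.parallelize(data, numSlices=S).cache()
    z = np.zeros(dim)
    for l in range(50):
        z_bc = sc.broadcast(z)
        # map: distribute the w-step (and deferred u-step) over the workers
        subgraphs = subgraphs.map(lambda r: local_update(r, z_bc.value, rho)).cache()
        # reduce: aggregate (w_s + u_s) at the master node, then average
        z = subgraphs.map(lambda r: r["w"] + r["u"]).reduce(lambda a, b: a + b) / S
    print("consensus solution:", z)
```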
In order to utilize consensus ADMM for Graph SLAM, the graph $G = (\nu, \varepsilon)$ is partitioned into a set of subgraphs $\{G_1, G_2, \ldots, G_S\}$, where each subgraph retains all of the vertices, i.e., $\nu_1 = \nu_2 = \cdots = \nu_S = \nu$, and the sets of edges satisfy $\bigcup_{s=1}^{S} \varepsilon_s = \varepsilon$ and $\varepsilon_s \cap \varepsilon_t = \emptyset$ if $s \neq t$. Define
Observe that the gradient and the Hessian matrix, respectively, are given by
Under the condition that $\Delta x_1^k = \Delta x_2^k = \cdots = \Delta x_S^k$, observe that (17) is equal to (7). Thus, our consensus ADMM formulation for Graph SLAM is
subject to $\Delta x_1^k = \Delta x_2^k = \cdots = \Delta x_S^k$, and the resulting over-relaxed ADMM updates are
$$\Delta x_s^{k,l+1} = -\bigl(H_s^k + \rho I\bigr)^{-1}\bigl(b_s^k + \rho\, u_s^{k,l} - \rho\, z^{k,l}\bigr), \qquad (19)$$

$$z^{k,l+1} = \alpha\, \overline{\Delta x}^{\,k,l+1} + (1-\alpha)\, z^{k,l}, \qquad (20)$$

$$u_s^{k,l+1} = u_s^{k,l} + \alpha\, \Delta x_s^{k,l+1} + (1-\alpha)\, z^{k,l} - z^{k,l+1}, \qquad (21)$$

where $\overline{\Delta x}^{\,k,l+1}$ denotes the average of $\Delta x_1^{k,l+1}, \ldots, \Delta x_S^{k,l+1}$, $\rho > 0$ is the ADMM penalty parameter, and $\alpha$ is the over-relaxation parameter.
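A compact single-node illustration of the inner updates (19) to (21), using dense NumPy arrays and assuming the per-subgraph quantities $H_s^k$ and $b_s^k$ are available, might look as follows (a sketch, not the disclosed implementation):

```python
import numpy as np

def admm_inner_iteration(H_list, b_list, z, u_list, rho, alpha):
    """One over-relaxed consensus ADMM pass over all subgraphs,
    following the structure of (19)-(21). Dense arrays for illustration."""
    n = z.size
    dx_list = []
    for H_s, b_s, u_s in zip(H_list, b_list, u_list):
        # (19): per-subgraph update, closed form since the subproblem is quadratic
        dx_s = -np.linalg.solve(H_s + rho * np.eye(n), b_s + rho * u_s - rho * z)
        dx_list.append(dx_s)
    # (20): over-relaxed consensus step (average of the subgraph updates)
    z_new = alpha * np.mean(dx_list, axis=0) + (1.0 - alpha) * z
    # (21): scaled dual updates
    u_list = [u_s + alpha * dx_s + (1.0 - alpha) * z - z_new
              for u_s, dx_s in zip(u_list, dx_list)]
    return dx_list, z_new, u_list
```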
Henceforth, the superscript k will be omitted on the ADMM variables Δx, z, and u, and it should be understood that the ADMM iterations are performed as part of the kth Graph SLAM iteration.
While each subgraph $G_s$ contains all $N$ vertices (i.e., $\nu_s = \nu$), it contains only a subset of the edges (i.e., $\varepsilon_s \subseteq \varepsilon$). With that in mind, we define three different classes of vertices (see the drawings):
A simplified example of graph partitioning is depicted in the drawings.
To simplify notation, let us define $\breve{v}_s^l := \breve{D}_s v^l \in \mathbb{R}^{dN}$ and $\breve{z}_s^l := \breve{D}_s z^l \in \mathbb{R}^{dN}$ as the restrictions to the entries of $v^l$ and $z^l$, respectively, that are constrained in subgraph $G_s$. Let us define a primal residual for subgraph $G_s$ as

$$\phi_s^l := \Delta x_s^l - z^l \in \mathbb{R}^{dN} \qquad (22)$$

and, as before, let

$$\breve{\phi}_s^l := \Delta \breve{x}_s^l - \breve{z}_s^l = \breve{D}_s \phi_s^l \in \mathbb{R}^{dN}, \qquad (23)$$
denote the restriction to entries that correspond to constrained vertices.
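For example, the norms of these restricted residuals provide a natural convergence check for the inner ADMM loop; the sketch below uses boolean masks as a stand-in for the selection operators $\breve{D}_s$ (the stopping-criterion use is a standard ADMM practice assumed here, not a requirement of the disclosure):

```python
import numpy as np

def constrained_primal_residual_norm(dx_list, z, masks):
    """Aggregate norm of the restricted primal residuals of (22)-(23):
    phi_s = dx_s - z, restricted to the entries flagged as constrained
    by masks[s] (a boolean stand-in for the selection operator D-breve_s)."""
    total = 0.0
    for dx_s, mask in zip(dx_list, masks):
        phi_restricted = (dx_s - z)[mask]
        total += float(phi_restricted @ phi_restricted)
    return np.sqrt(total)
```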
With these definitions in place, we can now break down the updates in terms of constrained and non-constrained vertices and present our distributed algorithm for solving Graph SLAM via ADMM:
The algorithm presented above will converge . . . To improve the conditioning of the problem, a symmetric, positive-definite matrix $E^k$ is obtained which can be used to precondition the objective function of (18) as
The matrix $E^k$ should be selected such that it can be computed and applied in a distributed manner using the subgraphs. Preferably, $E^k H^k E^k = \sum_{s=1}^{S} E^k H_s^k E^k \approx I$. In order to achieve this, the Hessian $H_s^k$ is approximated for each subgraph using only the contributions to the Hessian in rows and columns that correspond to vertices which are native to the subgraph (see the drawings). Let there be a vector in $\mathbb{R}^{dN}$ which is 1 in the entries that correspond to vertex $\nu_i$ and 0 elsewhere, and define $\Delta x_j^k$ accordingly. The edge may be defined as
where $\delta$ is the Kronecker delta. Note that $e_{ij}(x) = e_{[ij]}(x)$, so the $\chi^2$ error and the optimization problem remain unchanged. The native Hessian $H_{s,\mathrm{native}}^k$ for a subgraph is defined as
Note that the supports of $H_{1,\mathrm{native}}^k, \ldots, H_{S,\mathrm{native}}^k$ are disjoint. This allows for fast, distributed computations, and so the preconditioning matrix is defined as
$$E^k := \Bigl(\sum_{s=1}^{S} H_{s,\mathrm{native}}^k\Bigr)^{-1/2} = \sum_{s=1}^{S} \bigl(H_{s,\mathrm{native}}^k\bigr)^{-1/2} \qquad (35)$$
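Because the native Hessians have disjoint supports, each subgraph can form its own contribution to $E^k$ independently. A minimal dense sketch (assuming each native Hessian is handed over as the symmetric positive-definite block on its own native index set) is:

```python
import numpy as np

def inverse_sqrt_spd(M, eps=1e-12):
    """Inverse matrix square root of a symmetric positive (semi-)definite
    block via its eigendecomposition; tiny eigenvalues are clipped
    for numerical stability."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

def preconditioner_blocks(native_hessian_blocks):
    """Per-subgraph contributions to E^k in (35): the disjoint supports
    allow the inverse square roots to be computed independently and
    then summed (or block-placed) into the full preconditioner."""
    return [inverse_sqrt_spd(H_native) for H_native in native_hessian_blocks]
```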
By redefining the edges as in (33), more constraints can be accounted for, which in turn enables the Hessian to be better approximated, yielding a better preconditioning matrix.
It is worth mentioning that a subgraph's Hessian $H_s^k$ and its native Hessian $H_{s,\mathrm{native}}^k$ can each contain contributions that the other lacks. This is illustrated in the drawings.
The processing system is operably connected to the memory system 104. The memory system 104 may include any suitable type of non-transitory computer readable storage medium. The memory system 104 may be distributed across multiple devices and/or locations. Programmed instructions are stored in the memory system. The programmed instructions include the instructions for implementing the operating system and various functionality of the system. The programmed instructions also include instructions for implementing the scalable Graph SLAM and ADMM algorithms described herein.
The components of the system may be communicatively connected by one or more networks 106. Any suitable type of network(s) may be used including wired and wireless types of networks. The computing devices include the appropriate network interface devices that enable transmitting and receiving of data via the network(s).
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, the same should be considered as illustrative and not restrictive in character. It is understood that only the preferred embodiments have been presented and that all changes, modifications and further applications that come within the spirit of the disclosure are desired to be protected.
This application claims priority to U.S. Provisional Application Ser. No. 62/634,327 entitled “SCALABLE GRAPH SLAM FOR HD MAPS” by Irion et al., filed Feb. 23, 2018, the disclosure of which is hereby incorporated herein by reference in its entirety.