Scalable graph SLAM for HD maps

Information

  • Patent Grant
  • 11892297
  • Patent Number
    11,892,297
  • Date Filed
    Friday, February 22, 2019
  • Date Issued
    Tuesday, February 6, 2024
Abstract
A method of solving a graph simultaneous localization and mapping (graph SLAM) for HD maps using a computing system includes partitioning a graph into a plurality of subgraphs, each of the subgraphs having all of the vertices of the graph and a subset of the edges of the graph. Constrained and non-constrained vertices are defined for the subgraphs. An alternating direction method of multipliers (ADMM) formulation for Graph SLAM is defined using the partitioned graph. A distributed Graph SLAM algorithm is then defined in terms of the constrained and non-constrained vertices based on the ADMM formulation. The distributed Graph SLAM algorithm is then used to solve the Graph SLAM problem for HD maps.
Description
TECHNICAL FIELD

The present disclosure is directed to systems and methods for solving Graph SLAM problems, and, in particular, to systems and methods for solving Graph SLAM problems for HD maps.


BACKGROUND

The aim of Graph SLAM is to find the set of poses {x1, x2, . . . , xN} that minimizes the cost functional

χ2(x)=Σeij∈ε(eij(x))TΩijeij(x),  (1)

where x=(x1T, x2T, . . . , xNT). This is typically solved in an iterative manner whereby each iteration k entails approximating the cost functional, finding the optimal set of pose updates Δxk, and computing the next set of poses via

xk+1=xk⊞Δxk,  (2)

where ⊞ denotes a pose composition operator. Let d denote the dimensionality of the pose updates (i.e., Δxk∈ℝdN). Substituting (2) into (1), we compute the gradient bk∈ℝdN and the Hessian matrix Hk∈ℝdN×dN of the χ2 error as












χ2(xk+1)=Σeij∈ε(eij(xk⊞Δxk))TΩijeij(xk⊞Δxk),  (3)

bk=2Σeij∈ε(eij(xk))TΩij∂eij/∂Δxk,  (4)

Hk=2Σeij∈ε(∂eij/∂Δxk)TΩij∂eij/∂Δxk.  (5)







The updates Δxk are obtained by linearizing the edges as











eij(xk+1)≈eij(xk)+(∂eij/∂Δxk)Δxk  (6)








and substituting into (3), yielding the quadratic approximation

χ2(xk+1)≈½(Δxk)THkΔxk+(bk)TΔxk+χ2(xk)  (7)

This is minimized by

Δxk=−(Hk)−1bk  (8)
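For concreteness, one iteration of the procedure in (2)-(8) on a single node can be sketched as follows. This is a minimal numpy illustration only, not the patented implementation; the residual, jacobian, and compose routines and the (i, j, omega) edge tuples are hypothetical placeholders for the edge error eij, its Jacobian with respect to Δx, the information matrix Ωij, and the pose composition ⊞.

    import numpy as np

    def graph_slam_iteration(x, edges, d, N, residual, jacobian, compose):
        """One iteration per (2)-(8): accumulate the gradient b and Hessian H
        over all edges, solve the normal equations, and compose the update."""
        b = np.zeros(d * N)
        H = np.zeros((d * N, d * N))
        for (i, j, omega) in edges:              # omega: information matrix of edge e_ij
            e = residual(i, j, x)                # e_ij(x^k)
            J = jacobian(i, j, x)                # d e_ij / d(Δx), shape (len(e), d*N); sparse in practice
            b += 2.0 * J.T @ (omega @ e)         # gradient contribution, cf. (4)
            H += 2.0 * J.T @ omega @ J           # Hessian contribution, cf. (5)
        dx = -np.linalg.solve(H, b)              # cf. (8); a sparse solver would be used in practice
        return compose(x, dx)                    # cf. (2)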


Most previously known solvers solve this linear system iteratively on a single node. However, such single-node solvers do not scale well with the size of the mapping region. This creates a need for scalable systems and algorithms to build HD maps for larger areas.


SUMMARY

In accordance with one embodiment of the present disclosure, a method of solving a graph simultaneous localization and mapping (graph SLAM) for HD maps using a computing system includes partitioning a graph into a plurality of subgraphs, each of the subgraphs having all of the vertices of the graph and a subset of the edges of the graph. Constrained and non-constrained vertices are defined for the subgraphs. An alternating direction method of multipliers (ADMM) formulation for Graph SLAM is defined using the partitioned graph. A distributed Graph SLAM algorithm is then defined in terms of the constrained and non-constrained vertices based on the ADMM formulation. The distributed Graph SLAM algorithm is then used to solve the Graph SLAM problem for HD maps.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1(a) shows how the vertices of subgraphs G1 and G2 of graph G are defined;



FIG. 1(b) shows the constrained vertices of subgraphs G1 and G2.



FIG. 2 is a schematic depiction of a computing system for implementing the distributed Graph SLAM algorithm for HD maps described herein.





DETAILED DESCRIPTION

For the purposes of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiments illustrated in the drawings and described in the following written specification. It is understood that no limitation to the scope of the disclosure is thereby intended. It is further understood that the present disclosure includes any alterations and modifications to the illustrated embodiments and includes further applications of the principles of the disclosure as would normally occur to a person of ordinary skill in the art to which this disclosure pertains.


There are a variety of frameworks available for large-scale computation and storage. Such frameworks can be broadly categorized as:

    • a. Single-node in-memory systems, where the entire data is loaded and processed in the memory of a single computer.
    • b. In-disk systems, where the entire data resides on disk, and chunks of it are loaded and processed in memory. Although there are not many available tools that adopt this approach for solving (1), most of the single-node solutions can be extended through the standard in-disk capabilities available in standard computation platforms, for example, memmapfile in MATLAB, mmap in Python, etc.
    • c. In-database systems, where the data is stored in a database and the processing is taken to the database where the data resides.
    • d. Distributed Storage and Computing systems, where the data resides in multiple nodes, and the computation is distributed among those nodes. Typical frameworks include Hadoop and Apache Spark (distributed in-memory analytics with storage on HDFS).


In one embodiment, the Apache Spark computing framework with the data stored in HDFS is used to implement the scalable Graph SLAM system. This offers several advantages. Firstly, compared to the existing single-node solutions, the distributed architecture allows the computation to be scaled out for large-scale mapping. Further, compared to other distributed solutions like Hadoop, Spark provides distributed in-memory computation with lazy execution. This makes it very fast and fault-tolerant, especially for iterative tasks, compared to Hadoop (MapReduce), which needs to write all intermediate data to disk. In alternative embodiments, any suitable computing framework may be used.


Utilizing such distributed architectures necessitates the adoption of advanced optimization algorithms that can handle the problem (1) at scale. Basically, the problem in (1) entails solving a quadratic program (QP) iteratively. Recent research towards solving such QPs in a distributed setting typically adopts one of the following approaches:

    • a. First Order methods, which use first order information from the objective, such as gradient estimates, secant approximations, etc. Such methods incur low per-iteration computation complexity and provide dimension-independent convergence. However, the convergence rates heavily depend on the step-size selection and the conditioning of the problem.
    • b. Second Order methods, which use additional curvature information, such as the Hessian (or approximations such as L-BFGS), and improve the conditioning of the problem. Such methods provide improved convergence rates but result in higher per-step computation costs.
    • c. Randomized algorithms, which use a subsampled or low-rank representation of the original large-scale data to solve a small-scale equivalent problem.


Another broader class of algorithms adopts a splitting technique, which decouples the objective function into smaller sub-problems and solves them iteratively. The resulting sub-problems are easier to tackle and can be readily solved in a distributed fashion using the approaches discussed above. Some popular such algorithms are Proximal Gradient methods, Proximal (quasi-)Newton methods, the Alternating Direction Method of Multipliers (ADMM), etc. Of these approaches, ADMM has been widely adopted to solve several large-scale practical problems. There are several adaptations of the ADMM algorithm customized to utilize the structure of the problem. One such version, well-suited for parallel and distributed settings, is consensus ADMM.


Consensus ADMM in its basic form solves the following problem:











minw1, . . . ,wS,z Σs=1Sfs(ws)  (9)








subject to ws=z ∀s=1 . . . S.


The consensus ADMM steps are











wsl+1=argminws fs(ws)+(ρ/2)‖ws−zl+usl‖22
zl+1=argminz Iw1l+1, . . . ,wSl+1(z)+(ρ/2)Σs=1S‖wsl+1−z+usl‖22=(1/S)Σs=1Swsl+1
usl+1=usl+wsl+1−zl+1.  (10)








Here, Iw1, . . . ,wS(z)={z|w1= . . . =wS=z}.


Note that such a form is well-suited for distributed settings using the map-reduce paradigm in Spark. In this case, the w-step involves distributing (mapping) the computation over several nodes ∀s=1 . . . S, and the (consensus) z-step involves aggregating (reducing) the distributed computations at the master node. Further, for (1), the ws-update has a closed-form solution and can be efficiently computed using standard linear solvers. Although (10) is guaranteed to converge, the convergence rate of the algorithm heavily depends on the conditioning of the problem (1) and the selection of the parameter ρ.
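This map-reduce pattern can be expressed directly in Spark's Python API. The following is an illustrative sketch only (assuming a Spark installation with pyspark available), not the patented implementation: local_problems is a hypothetical list with one record per subgraph, each record a dict holding the subgraph's data together with its current w and u, and local_w_and_u_step is a hypothetical routine that performs the w- and u-updates of (10) for one record.

    from pyspark.sql import SparkSession
    import numpy as np

    def consensus_admm_on_spark(local_problems, local_w_and_u_step, dim, rho=1.0, num_iters=50):
        """Sketch of the map-reduce pattern of (10): the w- and u-steps run on the
        executors, and the consensus z-step is a reduce at the driver."""
        spark = SparkSession.builder.appName("consensus-admm-sketch").getOrCreate()
        sc = spark.sparkContext
        subgraphs = sc.parallelize(local_problems)      # one record per subgraph
        z = np.zeros(dim)                               # consensus variable
        for _ in range(num_iters):
            z_bc = sc.broadcast(z)
            # map: per-subgraph w- and u-updates, independent across nodes
            updated = subgraphs.map(lambda p: local_w_and_u_step(p, z_bc.value, rho)).cache()
            # reduce: consensus z-step, z = (1/S) * sum_s w_s
            w_sum, count = updated.map(lambda p: (p["w"], 1)) \
                                  .reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]))
            z = w_sum / count
            subgraphs = updated                         # carry w_s, u_s into the next iteration
        return z

Broadcasting z and caching the mapped RDD keeps the per-iteration work on the executors; only the small consensus vector travels back to the driver.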


In order to utilize consensus ADMM for Graph SLAM, the graph G=(ν, ε) is partitioned into a set of subgraphs {G1, G2, . . . , GS}, where each subgraph Gs=(νs, εs) contains all of the vertices, i.e., ν1=ν2= . . . =νS=ν, and the edge sets satisfy ∪s=1Sεs=ε and εs∩εt=Ø if s≠t. Define












χs2(xsk+1):=Σeij∈εs(eij(xk⊞Δxsk))TΩijeij(xk⊞Δxsk),  (11)

bsk:=2Σeij∈εs(eij(xk))TΩij∂eij/∂Δxk∈ℝdN,  (12)

Hsk:=2Σeij∈εs(∂eij/∂Δxk)TΩij∂eij/∂Δxk∈ℝdN×dN.  (13)







Observe that the gradient and the Hessian matrix, respectively, are given by












bk=Σs=1Sbsk,  (14)
Hk=Σs=1SHsk.  (15)
















If Δx1k=Δx2k= . . . =ΔxSk, then

χ2(xk+1)=Σs=1Sχs2(xsk+1)  (16)
≈½Σs=1S(Δxsk)THskΔxsk+Σs=1S(bsk)TΔxsk+Σeij∈ε(eij(xk))TΩijeij(xk).  (17)







Under the condition that Δx1k=Δx2k= . . . =ΔxSk, observe that (17) is equal to (7). Thus, our consensus ADMM formulation for Graph SLAM is













argminΔx1k, . . . ,ΔxSk Σs=1S½(Δxsk)THskΔxsk+(bsk)TΔxsk,  (18)








subject to Δx1k=Δx2k= . . . =ΔxSk, and the resulting over-relaxed ADMM updates are

ΔxSk,l+1=−(HSk+ρI)−1(bSk+ρuSk,l−ρΔxk,l),  (19)
zk,l+1=αΔxk,l+(1−α)zk,l,  (20)
uSk,l+1=uSk,l+αΔxk,l+1+(1−α)zk,l−zk,l+1  (21)

Henceforth, the superscript k will be omitted on the ADMM variables Δx, z, and u, and it should be understood that the ADMM iterations are performed as part of the kth Graph SLAM iteration.
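For illustration, the inner loop (18)-(21) for a fixed Graph SLAM iteration k can be sketched with dense numpy arrays. This is a sketch only, written in the standard over-relaxed consensus form; the per-subgraph Hessians and gradients would be sparse and distributed in practice, and ρ and α are assumed tuning parameters.

    import numpy as np

    def admm_inner_loop(H_s, b_s, rho=1.0, alpha=1.6, num_iters=50):
        """Over-relaxed consensus ADMM for the quadratic subproblems of (18).
        H_s: list of dN x dN per-subgraph Hessians, b_s: list of dN gradients.
        Returns the consensus update direction (the Δx^k of equation (2))."""
        S = len(H_s)
        n = b_s[0].shape[0]
        z = np.zeros(n)                           # consensus update direction
        u = [np.zeros(n) for _ in range(S)]       # scaled dual variables
        I = np.eye(n)
        for _ in range(num_iters):
            # per-subgraph solves with the proximal term, cf. (19)
            dx = [np.linalg.solve(H_s[s] + rho * I, -(b_s[s] + rho * (u[s] - z)))
                  for s in range(S)]
            z_old = z
            z = alpha * (sum(dx) / S) + (1.0 - alpha) * z_old             # consensus step, cf. (20)
            for s in range(S):
                u[s] = u[s] + alpha * dx[s] + (1.0 - alpha) * z_old - z   # dual step, cf. (21)
        return z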


While each subgraph GS contains all N vertices (i.e., νS=ν), it contains only a subset of the edges (i.e., εS⊆ε). With that in mind, we define three different classes of vertices (see FIG. 1(a) for an illustration); a short sketch of this bookkeeping follows the list below.

    • 1) A constrained vertex of subgraph GS is a vertex which is constrained by at least one edge e∈εS. Let NS denote the number of constrained vertices in subgraph GS. The entries in ΔxSl and uSl that correspond to constrained vertices are denoted as Δx̆Sl∈ℝdNS and ŭSl∈ℝdNS, respectively, and the corresponding entries in bSk and rows/columns in HSk are denoted by b̆Sk∈ℝdNS and H̆Sk∈ℝdNS×dNS. Furthermore, let D̆S∈ℝdNS×dN denote the matrix which down-samples to the entries constrained in subgraph GS (i.e., D̆SΔxSl=Δx̆Sl∈ℝdNS and D̆STΔx̆Sl∈ℝdN).
    • 2) A non-constrained vertex of subgraph GS is a vertex which is not constrained by any edges in εS. Let D̂S∈ℝd(N−NS)×dN denote the matrix which down-samples to the entries not constrained in subgraph GS. Let vl∈ℝdN denote the values assumed by the non-constrained entries in u1l, . . . , uSl; in other words, D̂Svl=D̂SuSl∈ℝd(N−NS) for all s.
    • 3) When partitioning the graph, each vertex is designated to be a native vertex of exactly one subgraph for which it is a constrained vertex; when multiple subgraphs contain edges that constrain a vertex, the choice of its native subgraph must be decided. We use νS,native to denote the set of vertices which are native to subgraph GS.
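A minimal sketch of this bookkeeping, using plain index sets in place of the down-sampling matrices D̆S and D̂S; the helper name and the tie-break rule for the native assignment are illustrative assumptions.

    def classify_vertices(num_vertices, subgraph_edges):
        """subgraph_edges: one list of (i, j) vertex-index pairs per subgraph.
        Returns per-subgraph constrained and non-constrained vertex sets and a
        native-subgraph assignment for every constrained vertex."""
        constrained = [set() for _ in subgraph_edges]
        for s, edges in enumerate(subgraph_edges):
            for (i, j) in edges:
                constrained[s].update((i, j))           # constrained by at least one edge in ε_s
        non_constrained = [set(range(num_vertices)) - c for c in constrained]
        native = {}                                     # vertex -> its single native subgraph
        for v in range(num_vertices):
            candidates = [s for s, c in enumerate(constrained) if v in c]
            if candidates:
                native[v] = candidates[0]               # the tie-break rule is a design choice
        return constrained, non_constrained, native

The set constrained[s] plays the role of D̆S (it selects the constrained entries), its complement plays the role of D̂S, and the native assignment feeds the native-Hessian construction used below for preconditioning.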


A simplified example of graph partitioning is depicted in FIG. 1. The graph G is partitioned into subgraphs G1 and G2. The graph G includes four vertices, so its Hessian has four rows and columns of blocks. Referring to FIG. 1(b), the first subgraph G1 contributes to rows and columns 1-3 of the Hessian, and the second subgraph G2 contributes to rows and columns 3 and 4. The darker shaded regions indicate native, constrained vertices for subgraph G1, and the lighter shaded areas indicate native, constrained vertices for subgraph G2. The subgraphs' non-constrained vertices are not labeled.


To simplify notation, let us define v̆Sl:=D̆Svl∈ℝdNS and z̆Sl:=D̆Szl∈ℝdNS as the restrictions to the entries of vl and zl, respectively, that are constrained in subgraph GS. Let us define a primal residual for subgraph GS as

ϕSl:=ΔxSl−zl∈ℝdN  (22)

and, as before, let

ϕ̆Sl:=Δx̆Sl−z̆Sl=D̆SϕSl∈ℝdNS,  (23)

denote the restriction to entries that correspond to constrained vertices.


With these definitions in place, we can now break down the updates in terms of constrained and non-constrained vertices and present our distributed algorithm for solving Graph SLAM via ADMM:










Δx̆sl+1=−(H̆sk+ρI)−1(b̆sk+ρŭsl−ρz̆sl)  (24)
zl+1=zl−αvl+(α/S)Σs=1SD̆sT(Δx̆sl+1−z̆sl+v̆sl)  (25)
ψl+1=zl+1−zl  (26)
vl+1=(1−α)vl−ψl+1  (27)
ϕ̆sl+1=Δx̆sl+1−z̆sl+1  (28)
ψ̆sl+1=z̆sl+1−z̆sl  (29)
ŭsl+1=ŭsl+αϕ̆sl+1+(α−1)ψ̆sl+1  (30)
Σs=1S‖ϕsl+1‖22=S‖ψl+1+vl‖22+Σs=1S(‖ϕ̆sl+1‖22−‖ψ̆sl+1+v̆sl‖22).  (31)
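A minimal numpy sketch of one pass over the updates (24)-(30), offered as an illustration of the update structure rather than the patented implementation: cons_idx[s] is an assumed integer index array selecting the entries constrained in subgraph Gs (playing the role of D̆s), and H_br[s], b_br[s], u_br[s] are the corresponding restricted blocks.

    import numpy as np

    def distributed_admm_pass(H_br, b_br, u_br, cons_idx, z, v, rho, alpha):
        """One inner iteration of the distributed updates (24)-(30); all
        per-subgraph work touches only that subgraph's constrained entries."""
        S = len(cons_idx)
        dx_br = []
        for s in range(S):
            idx = cons_idx[s]
            rhs = b_br[s] + rho * u_br[s] - rho * z[idx]
            dx_br.append(-np.linalg.solve(H_br[s] + rho * np.eye(len(idx)), rhs))   # (24)
        acc = np.zeros_like(z)
        for s in range(S):
            idx = cons_idx[s]
            acc[idx] += dx_br[s] - z[idx] + v[idx]       # D̆_s^T (Δx̆_s − z̆_s + v̆_s)
        z_new = z - alpha * v + (alpha / S) * acc        # (25)
        psi = z_new - z                                  # (26)
        v_new = (1.0 - alpha) * v - psi                  # (27)
        for s in range(S):
            idx = cons_idx[s]
            phi_br = dx_br[s] - z_new[idx]               # (28)
            psi_br = z_new[idx] - z[idx]                 # (29)
            u_br[s] = u_br[s] + alpha * phi_br + (alpha - 1.0) * psi_br   # (30)
        return z_new, v_new, u_br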







The algorithm presented above will converge . . . A symmetric, positive-definite matrix Ek can be used to precondition the objective function of (18) as














Σs=1S½(Δxsk)THskΔxsk+(bsk)TΔxsk=Σs=1S½(Δxsk)TEkHskEkΔxsk+(bsk)TEkΔxsk=Σs=1S½(Δx̃sk)TH̃skΔx̃sk+(b̃sk)TΔx̃sk.  (32)







The matrix Ek should be selected such that it can be computed and applied in a distributed manner using the subgraphs. Preferably, Σs=1SEkHskEk≈I. In order to achieve this, the Hessian Hsk is approximated for each subgraph using only contributions to the Hessian in rows and columns that correspond to vertices which are native to the subgraph (see FIG. 1(b)). For example, suppose eij is a binary edge which constrains vertices νi and νj. Let Δxik:=1i⊙Δxk, where ⊙ denotes element-wise multiplication and 1i∈ℝdN denotes a vector which is 1 in entries that correspond to vertex νi and 0 elsewhere, and define Δxjk accordingly. The edge may be defined as











e[ij](xk⊞Δxk):=eij(xk)+δνi∈νs,native,νj∉νs,native(∂eij(xk⊞Δx)/∂Δxi|Δx=0)Δxk
+δνi∉νs,native,νj∈νs,native(∂eij(xk⊞Δx)/∂Δxj|Δx=0)Δxk
+δνi∈νs,native,νj∈νs,native(∂eij(xk⊞Δx)/∂Δx|Δx=0)Δxk,  (33)








where δ is the Kronecker delta. Note that eij(x)=e[ij](x), so the χ2 error and optimization problem remain unchanged. The native Hessian HS,nativek for a subgraph is defined as











Hs,nativek:=2Σe[ij]∈εs(∂e[ij]/∂Δxk)TΩij∂e[ij]/∂Δxk.  (34)







Note that the supports of H1,nativek, . . . , HS,nativek are disjoint. This allows for fast, distributed computations, and so the preconditioning matrix is defined as

Ek:=(Σs=1SHs,nativek)−1/2=Σs=1S(Hs,nativek)−1/2  (35)

By redefining the edges as in (33), more constraints can be accounted for, which in turn enables the Hessian to be better approximated, yielding a better preconditioning matrix.
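Because the native Hessians have disjoint supports, the inverse square root in (35) can be formed subgraph by subgraph. A minimal numpy sketch, assuming H_native[s] is the dense, positive-definite native Hessian of subgraph Gs restricted to its native entries and native_idx[s] is the matching index array (both hypothetical names); in practice this would be done per d×d vertex block and in a distributed manner.

    import numpy as np

    def build_preconditioner(H_native, native_idx, dim):
        """Assemble E^k = (sum_s H_{s,native}^k)^(-1/2) per (35), block by block,
        exploiting the disjoint supports of the native Hessians."""
        E = np.zeros((dim, dim))
        for s in range(len(H_native)):
            idx = native_idx[s]
            w, V = np.linalg.eigh(H_native[s])           # symmetric eigendecomposition
            inv_sqrt = (V * (1.0 / np.sqrt(w))) @ V.T    # inverse square root of this block
            E[np.ix_(idx, idx)] = inv_sqrt
        return E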


It is worth mentioning that a subgraph's Hessian HSk and its native Hessian HS,nativek can both contain contributions that the other lacks. This is illustrated in FIG. 1, where edge e23∈ε1 constrains vertices ν2∈ν1,native and ν3∈ν2,native. The Hessian H1k contains contributions from edge e23 in the (2, 2), (2, 3), (3, 2), and (3, 3) blocks, but e23 only contributes to the (2, 2) block of H1,nativek. On the other hand, the native Hessian H2,nativek contains the (3, 3) block contribution from e23 but H2k does not contain any contributions from e23. As vertices ν2 and ν3 are native to different subgraphs, the (2, 3) and (3, 2) block contributions from edge e23 do not contribute to any native Hessians—such is the price that must be paid to ensure that the algorithm remains distributed. This also demonstrates the importance of partitioning the graph in such a way that the number of edges that span different subgraphs is minimized.
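To make the native-Hessian bookkeeping of (33)-(34) concrete, the following sketch assembles the native Hessian by masking the Jacobian columns of vertices that are not native to the subgraph; the edge container, the jacobian routine, and the assumption that vertex νv occupies columns v·d to (v+1)·d are illustrative placeholders.

    import numpy as np

    def native_hessian(edges_s, x, d, N, native_vertices_s, jacobian):
        """Assemble the native Hessian per (33)-(34): an edge contributes only
        through the columns of vertices that are native to this subgraph."""
        H_native = np.zeros((d * N, d * N))
        for (i, j, omega) in edges_s:
            J = jacobian(i, j, x)                        # Jacobian of e_ij, shape (len(e), d*N)
            J_masked = np.zeros_like(J)
            for vtx in (i, j):
                if vtx in native_vertices_s:             # keep only native columns, cf. (33)
                    J_masked[:, vtx * d:(vtx + 1) * d] = J[:, vtx * d:(vtx + 1) * d]
            H_native += 2.0 * J_masked.T @ omega @ J_masked   # cf. (34)
        return H_native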



FIG. 2 depicts an exemplary system 100 for implementing the scalable Graph SLAM framework described above. The system 100 includes a processing system 102 that is operatively connected to a memory system 104. The processing system 102 includes one or more processors which may be located in the same device/location or may be distributed across multiple devices at one or more locations. Any suitable type of processor(s) may be used.


The processing system is operably connected to the memory system 104. The memory system 104 may include any suitable type of non-transitory computer readable storage medium. The memory system 104 may be distributed across multiple devices and/or locations. Programmed instructions are stored in the memory system. The programmed instructions include the instructions for implementing the operating system and various functionality of the system. The programmed instructions also include instructions for implementing the scalable Graph SLAM and ADMM algorithms described herein.


The components of the system may be communicatively connected by one or more networks 106. Any suitable type of network(s) may be used including wired and wireless types of networks. The computing devices include the appropriate network interface devices that enable transmitting and receiving of data via the network(s).


While the disclosure has been illustrated and described in detail in the drawings and foregoing description, the same should be considered as illustrative and not restrictive in character. It is understood that only the preferred embodiments have been presented and that all changes, modifications and further applications that come within the spirit of the disclosure are desired to be protected.

Claims
  • 1. A computer-implemented method of solving a graph simultaneous localization and mapping (graph SLAM) problem for HD maps using a distributed computing system, the method comprising: partitioning a graph into a plurality of subgraphs using at least one processor of the distributed computing system, the graph including a plurality of vertices and a plurality of edges that extend between vertices, each of the vertices corresponding to a pose during mapping, each of the edges defining a spatial constraint between two vertices, wherein each of the subgraphs includes all of the vertices of the graphs, and wherein each of the subgraphs include a subset of the edges of the graph;defining constrained, non-constrained and native vertices for each of the subgraphs using the at least one processor;defining an alternating direction method of multipliers (ADMM) formulation for Graph SLAM based on the partitioned graph using the at least one processor;defining updates for the ADMM formulation based on the ADMM algorithm using the at least one processor;defining updates for a distributed Graph SLAM algorithm based on the updates for the ADMM formulation and in terms of the constrained and the non-constrained vertices of the subgraphs using the at least one processor; andsolving the updates for the distributed Graph SLAM algorithm to solve the Graph SLAM problem for HD maps using the at least one processor to provide a distributed computing solution for the Graph SLAM problem for HD maps,wherein the ADMM formulation is a consensus ADMM formulation given by
  • 2. The method of claim 1, wherein the updates for the consensus ADMM formulation are given by ΔxSk,l+1=−(HSk+ρI)−1(bSk+ρuSk,l−ρΔxk,l),zk,l+1=αΔxk,l+(1−α)zk,l,uSk,l+1=uSk,l+αΔxk,l+1+(1−α)zk,l−zk,l+1, wherein ρ is a parameter,wherein Iw1, . . . ,wS(z)={z|w1= . . . =wS=z},wherein z and u are variables of the consensus ADMM formulation.
  • 3. The method of claim 2, wherein v̆Sl:=D̆Svl∈dN define restriction of entries of vl and zl respectively, that are constrained in subgraph Gs, wherein a primal residual for subgraph Gs is defined as ϕsl:=Δxsl−zl∈dN, and wherein ϕ̆sl:=Δx̆sl−z̆sl=D̆sϕsl∈dN denotes restriction to entries that correspond to constrained vertices.
  • 4. The method of claim 3, wherein the updates for the distributed Graph SLAM algorithm are defined in terms of constrained and non-constrained vertices such that
  • 5. The method of claim 1, wherein the computing system comprises a distributed storage and computing system where the data resides in multiple nodes, and the computation is distributed among those nodes.
  • 6. The method of claim 5, wherein the computing system has an Apache Spark framework with data stored on a Hadoop Distributed File System (HDFS).
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a 35 U.S.C. § 371 National Stage Application of PCT/EP2019/054467, filed on Feb. 22, 2019, which claims priority to U.S. Provisional Application Ser. No. 62/634,327, filed on Feb. 23, 2018, the disclosures of which are incorporated herein by reference in their entirety.

PCT Information
Filing Document Filing Date Country Kind
PCT/EP2019/054467 2/22/2019 WO
Publishing Document Publishing Date Country Kind
WO2019/162452 8/29/2019 WO A
US Referenced Citations (2)
Number Name Date Kind
20170019315 Tapia Jan 2017 A1
20180218088 Fischer Aug 2018 A1
Non-Patent Literature Citations (29)
Entry
Choudhary et al., “Exactly Sparse Memory Efficient SLAM using the Multi-Block Alternating Direction Method of Multipliers”, Sep. 2015, IEEE, 8 pages (Year: 2015).
Kümmerle, R. et al., “g20: A General Framework for Graph Optimization,” IEEE International Conference on Robotics and Automation (ICRA), May 2011, pp. 3607-3613 (7 pages).
Forster, C. et al., “IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation,” Georgia Institute of Technology, 2015 (10 pages).
HP Vertica, Website, About, Internet Archive version dated Sep. 8, 2015, available at https://web.archive.org/web/20150908100849/https://www.vertica.com/about/ (2 pages).
Pivotal Web Services, Website, Internet Archive version dated Feb. 10, 2018, available at https://web.archive.org/web/20180210201347/http://run.pivotal.io/ (5 pages).
Apache Spark, Website, Internet Archive version dated Feb. 21, 2018, available at https://web.archive.org/web/20180221194730/https://spark.apache.org/ (2 pages).
Zinkevich, M. et al., “Parallelized Stochastic Gradient Descent,” Advances in Neural Information Processing Systems 23 (NIPS 2010), 2010 (9 pages).
Niu, F. et al., “Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent,” Advances in Neural Information Processing Systems 24 (NIPS 2011), 2011 (9 pages).
Zhang, Y. et al., “Splash: User-friendly Programming Interface for Parallelizing Stochastic Algorithms,” arXiv:1506.07552v2 [cs.LG] Sep. 23, 2015 (27 pages).
Shamir, O. et al., “Communication-Efficient Distributed Optimization using an Approximate Newton-type Method,” Proceedings of the 31st International Conference on Machine Learning, 2014, PMLR 32(2) (9 pages).
Zhang, Y. et al., “DiSCO: Distributed Optimization for Self-Concordant Empirical Loss,” Proceedings of the 32nd International Conference on Machine Learning, 2015, PMLR 37 (9 pages).
Mahoney, M. W., “Randomized Algorithms for Matrices and Data,” Foundations and Trends in Machine Learning, 2010, vol. 3, No. 2 (104 pages).
Heinze, C. et al., “LOCO: Distributing Ridge Regression with Random Projections,” arXiv:1406.3469v4 [stat.ML] Jun. 8, 2015 (37 pages).
Parikh, N. et al., “Proximal Algorithms,” Foundations and Trends in Optimization, 2013, vol. 1, No. 3 (113 pages).
Becker, S. et al., “A quasi-Newton proximal splitting method,” Advances in Neural Information Processing Systems 25 (NIPS 2012), 2012 (9 pages).
Boyd, S. et al., “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” Foundations and Trends in Machine Learning, 2010, vol. 3, No. 1, 1-122 (125 pages).
Carlone, L. et al., “Eliminating Conditionally Independent Sets in Factor Graphs: A Unifying Perspective based on Smart Factors,” IEEE International Conference on Robotics & Automation (ICRA), May-Jun. 2014, pp. 4290-4297 (8 pages).
Dhar, S. et al., “ADMM based Scalable Machine Learning on Spark,” IEEE International Conference on Big Data (Big Data), Oct.-Nov. 2015, pp. 1174-1182 (9 pages).
Keuper, J. et al., “Asynchronous Parallel Stochastic Gradient Descent A Numeric Core for Scalable Distributed Machine Learning Algorithms,” Proceedings of the Workshop on Machine Learning in High-Performance Computing Environments (MLHPC2015), Nov. 2015 (11 pages).
Bonnans, J. F. et al., “A family of variable metric proximal methods,” Mathematical Programming, vol. 68, 1995, pp. 15-47 (33 pages).
Oliphant, J. et al. “SciPy: Open Source Scientific Tools for Python,” 2001, Internet Archive available at https://web.archive.org/web/20180624105556/www.scipy.org/citing.html (archive date Jun. 24, 2018) (3 pages).
Parente, L.A., et al., “A class of inexact variable metric proximal point algorithms,” SIAM Journal on Optimization, vol. 19, No. 1, pp. 240-260, 2008 (21 pages).
Lee, J. D. et al., “Proximal newton-type methods for minimizing composite functions,” SIAM Journal on Optimization, vol. 24, No. 3, pp. 1420-1443, 2014 (24 pages).
International Search Report corresponding to PCT Application No. PCT/EP2019/054467, dated Jun. 5, 2019 (3 pages).
Choudhary, S. et al., “Exactly Sparse Memory Efficient SLAM using the Multi-Block Alternating Direction Method of Multipliers,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep. 2015 (8 pages).
Grisetti, G. et al., “A Tutorial on Graph-Based SLAM,” IEEE Intelligent Transportation Systems Magazine, vol. 2, No. 4, pp. 31-43, Feb. 4, 2011 (13 pages).
Huang, G. et al., “Consistent Sparsification for Graph Optimization,” 2013 European Conference on Mobile Robots (ECMR), 150-157, Sep. 25, 2013 (8 pages).
Paull, L. et al., “A Unified Resource-Constrained Framework for Graph SLAM,” 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1346-1353, May 16, 2016 (8 pages).
Zhao, L. et al., “Linear SLAM: A Linear Solution to the Feature-based and Pose Graph SLAM based on Submap Joining,” 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 24-30, Nov. 3, 2013 (7 pages).
Related Publications (1)
Number Date Country
20210003398 A1 Jan 2021 US
Provisional Applications (1)
Number Date Country
62634327 Feb 2018 US