MULTI-SENSOR NAVIGATION

Information

  • Patent Application
  • Publication Number
    20240045041
  • Date Filed
    October 21, 2022
  • Date Published
    February 08, 2024
Abstract
This disclosure relates to a method executed by a processor for navigating a moving vehicle through an environment. The processor captures the environment with a first sensor mounted on the moving vehicle to create first sensor data and with a second sensor mounted on the moving vehicle to create second sensor data. The processor then determines a first trajectory of the first sensor and a second trajectory of the second sensor. The processor then estimates the spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory and uses the estimated spatial relationship to combine the first sensor data and the second sensor data into a combined multi-sensor representation of the environment. Finally, the processor navigates the moving vehicle based on the combined multi-sensor representation of the environment.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims priority from Australian Provisional Patent Application No 2021903403 filed on 25 Oct. 2021, the contents of which are incorporated herein by reference in their entirety.


TECHNICAL FIELD

This disclosure relates to navigation using multiple sensors, such as, but not limited to, the use of camera and LiDAR sensors for navigation.


BACKGROUND

Cameras and light detection and ranging (LiDAR) sensors (or “LiDARs” for short) can be mounted on a moving vehicle to provide complementary information about the environment of the vehicle. While a camera captures an essentially 2D pixel representation of the environment in the form of a colour image, a LiDAR captures a 3D point cloud, which comprises distance information for each sampling point at respective zenith and azimuth angles. So essentially, a LiDAR provides a 3D representation of the surroundings in a polar coordinate system, which can be converted to a Cartesian coordinate system. However, LiDARs typically do not provide colour information but only the location of sampling points.
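For illustration, the following is a minimal sketch of the polar-to-Cartesian conversion mentioned above, assuming zenith angles measured from the vertical axis and azimuth angles in the horizontal plane; the exact angle conventions vary between LiDAR models and are an assumption here.

```python
import numpy as np

def polar_to_cartesian(r, zenith, azimuth):
    # Convert LiDAR range samples (polar coordinates) to Cartesian points.
    # r, zenith and azimuth are arrays of equal length; the angle
    # conventions used here are an assumption and differ between sensors.
    x = r * np.sin(zenith) * np.cos(azimuth)
    y = r * np.sin(zenith) * np.sin(azimuth)
    z = r * np.cos(zenith)
    return np.stack([x, y, z], axis=0)  # 3 x P point cloud
```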


For effective navigation of the vehicle, it is desirable to obtain the location and orientation of the vehicle relative to its environment from the data generated by the camera and the LiDAR. However, the relative spatial relationship between the camera and the LiDAR is often not known, which makes combining these two data sources difficult.


Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each of the appended claims.


Throughout this specification the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.


SUMMARY

This disclosure provides a method for estimating the spatial relationship between a first sensor, such as a camera, and a second sensor, such as a LiDAR. This spatial relationship can then be used to combine the data from both sensors. The combined data can then be used for navigation or other applications. Importantly, the spatial relationship between the two sensors is based on respective trajectories of the two sensors, which can be determined based on the respective sensor data.


A method for navigating a moving vehicle through an environment comprises:

    • capturing the environment with a first sensor mounted on the moving vehicle to create first sensor data;
    • capturing the environment with a second sensor mounted on the moving vehicle to create second sensor data;
    • determining a first trajectory of the first sensor over time using the first sensor data, the first trajectory comprising multiple locations relative to a first world coordinate system for the first sensor;
    • determining a second trajectory of the second sensor over time using the second sensor data, the second trajectory comprising multiple locations relative to a second world coordinate system for the second sensor;
    • estimating the spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory;
    • using the estimated spatial relationship to combine the first sensor data and the second sensor data into a combined multi-sensor representation of the environment; and
    • navigating the moving vehicle based on the combined multi-sensor representation of the environment.


A method for estimating a spatial relationship between a first sensor and a second sensor comprises:

    • determining a first trajectory of the first sensor over time using sensor data indicative of a scene captured over time by the first sensor moving relative to the scene;
    • determining a second trajectory of the second sensor over time using sensor data indicative of a scene captured over time by the second sensor moving relative to the scene; and
    • calculating the spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory.


Since the calculation is based on both trajectories, the determination implicitly incorporates the mapping of the sensors relative to their environments.


In some embodiments, estimating the spatial relationship comprises optimising an objective function that includes:

    • two or more of the multiple locations of the first trajectory,
    • two or more of the multiple locations of the second trajectory, and
    • the spatial relationship.


In some embodiments, a reference transformation transforms the first world coordinate system to the second coordinate system, and the objective function is independent from the reference transformation.


In some embodiments, the objective function is based on locations on the first trajectory and the second trajectory that correspond in time.


In some embodiments, a first location of the first trajectory temporally corresponds to a first location of the second trajectory, a second location of the first trajectory temporally corresponds to a second location of the second trajectory; the objective function represents a difference between:

    • a result of a transformation from the first point of the first trajectory to the second point of the first trajectory, and
    • a result of a transformation from the first point on the second trajectory to the second point on the second trajectory subject to the spatial relationship.


In some embodiments, the objective function represents the difference multiple times for further temporally corresponding locations to improve accuracy under noise.


In some embodiments, the method further comprises synchronising the first sensor data with the second sensor data to determine temporally corresponding locations.


In some embodiments, synchronising comprises interpolation of the first trajectory.


In some embodiments, the interpolation is based on quaternion representation of the locations of the first trajectory.


In some embodiments, optimising the objective function comprises optimising a modified objective function subject to an equality constraint.


In some embodiments, the objective function is modified by replacing, in the second transformation, a subject transformed by the spatial relationship with a single variable and the equality constraint is a constraint on the single variable to be equal to the subject transformed by the spatial relationship.


In some embodiments, optimising the objective function comprises iteratively performing the steps of:

    • optimising a first sub-problem to update an estimate of the spatial relationship given a current value of the single variable; and
    • optimising a second sub-problem to update an estimate of the single variable given the current estimate of the spatial relationship.


In some embodiments, the method comprises optimising the objective function using alternating direction method of multipliers (ADMM).


In some embodiments, the first trajectory and the second trajectory each comprise multiple Euclidean transformations, each of the multiple Euclidean transformations including a rotation and translation, representing one of the locations.


Software, when executed by a computer, causes the computer to perform any of the above methods.


A system comprises:

    • a first sensor and a second sensor; and
    • a processor configured to:
      • determine a first trajectory of the first sensor over time using sensor data indicative of a scene captured over time by the first sensor moving relative to the scene;
      • determine a second trajectory of the second sensor over time using sensor data indicative of a scene captured over time by the second sensor moving relative to the scene, and
      • calculate a spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory.


Optional features disclosed above as optional features of the method are also optional features of the system.





BRIEF DESCRIPTION OF DRAWINGS

An example will now be described with reference to the following drawings:



FIG. 1 illustrates a system for navigating a moving vehicle.



FIG. 2 is a simplified representation of FIG. 1.



FIG. 3 illustrates a method for navigating a moving vehicle through an environment.



FIG. 4 is a visual representation of an equality between various transformations.



FIG. 5 illustrates a computer system for estimating a spatial relationship between a first sensor and a second sensor and then navigating a vehicle.



FIG. 6 illustrates a method for estimating a spatial relationship between a first sensor and a second sensor.





DESCRIPTION OF EMBODIMENTS


FIG. 1 illustrates a system 100 for navigating a moving vehicle (not shown for simplicity). System 100 comprises a LiDAR 101 and a camera 102. LiDAR 101 captures a 3D point cloud 103 of an object 104 by emitting laser beams and measuring the time of flight of reflected light sensed back at LiDAR 101. This measurement is repeated for multiple different angles of the laser beam to create the point cloud 103.


Camera 102 captures an image 105 of another object 106 by capturing illuminating light reflected from object 106. This way, camera 102 captures multiple pixel values in one image 105. These pixel values may represent red/green/blue (RGB) channels, a single intensity channel or other channels, such as hyperspectral images.


A processor 107 receives data from the LiDAR 101 and the camera 102 and performs the method disclosed herein to determine the spatial relationship between LiDAR 101 and camera 102.


As the vehicle moves, LiDAR 101 moves along a LiDAR trajectory 110 and camera 102 moves along a camera trajectory 120. The trajectories 110 and 120 are defined by respective sequences of locations.



FIG. 2 is a simplified representation of FIG. 1 and shows a first location 111 of LiDAR trajectory 110 and a second location 112 of LiDAR trajectory 110. Similarly, FIG. 2 shows a first location 121 of camera trajectory 120 and second location 122 of camera trajectory 120. In this context, the terms ‘first’ and ‘second’ are meant in a temporal sense such that the LiDAR 101 and camera 102 are at the first location before they come to the second location. Here, the first location 111 of the LiDAR trajectory 110 and the first location 121 of the camera trajectory have an identical time stamp. This means locations 111 and 121 have been captured simultaneously. Similarly, second locations 112 and 122 have an identical time stamp as they have been captured simultaneously. In cases where no two locations with identical time stamps can be found, one of the two trajectories can be interpolated to generate the missing locations.


Each of the locations is relative to a respective world coordinate system, also referred to as “global reference”. More specifically, LiDAR locations 111 and 112 are relative to a world LiDAR coordinate system 201, while camera locations 121 and 122 are relative to a world camera coordinate system 202. It is noted that in some examples disclosed herein, the locations 111, 112, 121, 122 are referred to as Euclidean transformations or simply transformations. These transformations include a translation (move) and a rotation (orientation). For location 112, for example, the translation is indicated at 203, which essentially transforms the origin of the LiDAR world coordinate system 201 onto the location 112. There may also be a rotation to reflect the orientation (direction of view) of the LiDAR at location 112. Similarly, transformation 204 transforms the camera world coordinate system 202 to location 122, potentially including a rotation to represent the orientation (direction of view) of the camera.


The world coordinate systems 201 and 202 are arbitrary and set at the beginning of performing the method. For example, the respective origins and orientation of the world coordinate systems may be set as the location and orientation when the LiDAR and camera are first turned on.


The trajectories 110 and 120 may be determined based on data generated by the LiDAR 101 and camera 102, respectively. These trajectory determinations may be performed by providing reference objects in the environment that the respective sensor can easily recognise. The reference objects may have a known and stationary position and orientation in the environment such that processor 107 can calculate the trajectories based on point clouds and images, respectively. It is noted that the two sensors (e.g., LiDAR 101 and camera 102) may capture different reference objects to calculate the respective trajectories, which has the advantage that the optimal reference can be chosen for each sensor.


Since LiDAR 101 and camera 102 are mounted on a vehicle (or another moving platform) it can be assumed that the LiDAR 101 and camera 102 have a fixed spatial relationship between each other. In other words, there exists a Euclidean transformation (shown at 205 in FIG. 2 and later referred to as ‘p’) that maps the first location 121 of the camera onto the first location 111 of the LiDAR. As the LiDAR 101 and camera 102 move along respective trajectories 110 and 120, this transformation 205 should remain constant. It is now an aim to calculate this transformation 205. In a basic sense, it can be said that due to the fixed transformation 205, any change between locations 111 and 112 should also be the change between locations 121 and 122. However, in cases where noise is present in the sensor data, the observed change will not be equal in general. Therefore, processor 107 can optimise an objective function to minimise the difference between the two changes (which can again be represented as respective transformations).


For completeness, it is also noted that there is a transformation that maps the LiDAR world coordinate system 201 to the camera world coordinate system 202 indicated at 206 and later referred to as ‘q’.



FIG. 3 illustrates a method 300, as performed by processor 107, for navigating a moving vehicle through an environment. First, the processor 107 captures 301 the environment with a first sensor, such as LiDAR 101, mounted on the moving vehicle to create first sensor data and captures 302 the environment with a second sensor, such as camera 102 mounted on the moving vehicle to create second sensor data. Then, processor 107 determines 303 the first trajectory 110 of the first sensor over time using the first sensor data. As described above, the first trajectory 110 comprises multiple locations 111/112 relative to a first world coordinate system 201 for the first sensor 101. The processor 107 further determines 304 a second trajectory 120 of the second sensor over time using the second sensor data. As also described above, the second trajectory 120 comprises multiple locations relative to a second world coordinate system 202 for the second sensor.


Based on the first trajectory 110 and the second trajectory 120, processor 107 estimates the spatial relationship 205 between the first sensor and the second sensor and then uses the estimated spatial relationship to combine the first sensor data and the second sensor data into a combined multi-sensor representation of the environment. Finally, processor 107 navigates the moving vehicle based on the combined multi-sensor representation of the environment.
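As an illustration of the combining step, one possible multi-sensor representation is a coloured point cloud obtained by projecting the LiDAR points into the camera image using the estimated spatial relationship. The sketch below is not prescribed by the method; it assumes a pinhole camera with a known intrinsic matrix K and a 4x4 matrix form of the estimated extrinsic transform, both of which are assumptions for illustration only.

```python
import numpy as np

def colourise_points(points_lidar, image, T_p, K):
    # points_lidar: 3xP LiDAR points; T_p: estimated 4x4 transform from
    # the LiDAR frame to the camera frame (the spatial relationship p);
    # K: 3x3 pinhole intrinsic matrix (assumed known, not part of the method).
    pts_cam = T_p[:3, :3] @ points_lidar + T_p[:3, 3:4]   # points in camera frame
    in_front = pts_cam[2] > 0                              # keep points in front of the camera
    uv = K @ pts_cam[:, in_front]
    uv = (uv[:2] / uv[2]).round().astype(int)              # pixel coordinates
    h, w = image.shape[:2]
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    colours = image[uv[1, valid], uv[0, valid]]            # colour per projected point
    return pts_cam[:, in_front][:, valid], colours
```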


The following explanation provides a more mathematical description of method 300:


Preliminaries

The function ƒ(x; p) denotes a Euclidean transformation of the 3D point x according to the parameters p. This applies to the transform 203 in the sense that location 112 can be seen as a transform from the world coordinate system 201. This also applies to locations 111, 121, 122 and to transforms 205 and 206. The mathematical formulation of this transform is





ƒ(x;p)=R(p)x+t(p)  (1)


where R(p) is a function which computes an orthogonal 3×3 matrix from the parameters p and t(p) is a function which computes a vector of length 3 from the parameters p. The matrix R(p) is said to rotate the 3D point x and t(p) is said to translate (or move) the point.


The parameter vector p contains six values p=[θ1, θ2, θ3, tx, ty, tz]T where θ1, θ2 and θ3 denote the parameters of the rotation matrix R(p) and t(p)=[tx, ty, tz]T is the 3D translation.


The transformation ƒ(x; p) can be generalised to an arbitrary 3D shape X=[x1, x2, x3, . . . , xP] consisting of P points via





ƒ(X;p)=R(p)X+t(p)1T  (2)


where 1T is a row vector of length P containing only ones. By definition, X has dimension 3×P.
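A minimal sketch of Equations (1) and (2) follows, assuming an Euler-angle parameterisation of the rotation; the text leaves the exact parameterisation of θ1, θ2, θ3 open, so the "xyz" convention below is an illustrative choice.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def R(p):
    # Rotation matrix from the first three parameters of p; an "xyz"
    # Euler-angle parameterisation is assumed here.
    return Rotation.from_euler("xyz", p[:3]).as_matrix()

def t(p):
    # Translation vector from the last three parameters of p.
    return np.asarray(p[3:6], dtype=float).reshape(3, 1)

def f(X, p):
    # Equation (2): rotate and translate a 3xP shape X; the translation
    # broadcasts over the P columns, which plays the role of t(p) 1^T.
    X = np.asarray(X, dtype=float).reshape(3, -1)
    return R(p) @ X + t(p)
```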


Trajectory Estimation

As set out above with reference to LiDAR trajectory 110 and camera trajectory 120, the term trajectory represents the motion of a sensor through 3D space. It is represented as a sequence of Euclidean transforms {p1, . . . , pN} estimated at various times {τ1, . . . , τN} (including 111, 112, 121, 122). The 3D space the sensor moves through is termed the sensor's world coordinate system (201/202) and each Euclidean transform pi transforms shapes observed by the sensor at time τi to the sensor's world coordinate system.


For example, assume a checkerboard pattern with square dimensions 20 mm×20 mm is in a fixed position in a room. A camera observes the pattern in an image acquired at time τ1 and the 3D shape of the checkerboard relative to the camera at time τ1 is computed using an algorithm g(τ1). The camera is then moved and the 3D shape of the checkerboard g (τ2) is computed from a new image captured at time τ2. Since the checkerboard is in a fixed position in the world coordinate system, then the trajectory {p1, p2} for the camera must satisfy





ƒ(g(τ1); p1)=ƒ(g(τ2); p2)  (3)


There is an ambiguity in Equation 3 with respect to the definition of the sensor's world coordinate system. It can be shown that





ƒ(ƒ(g(τ1); p1); q)=ƒ(ƒ(g(τ2); p2); q)  (4)


for all Euclidean transform parameters q, indicating that the world coordinate system is arbitrary. The convention is that one of the Euclidean transforms in the trajectory (e.g. p1) is fixed to the identity transform p1=0. The identity transform has the property ƒ(X; p=0)=ƒ(X; 0)=X, i.e. it neither translates nor rotates the input shape X.


With p1=0, p2 can be computed by solving





ƒ(g(τ1); 0)=g(τ1)=ƒ(g(τ2); p2)  (5)






g(τ1)=R(p2)g(τ2)+t(p2)1T  (6)


In practice, Equation 6 does not hold with equality as the estimate of the 3D shape g(τi) is noisy. The algorithm outlined in Olga Sorkine-Hornung and Michael Rabinovich, “Least-squares rigid motion using SVD”, https://igl.ethz.ch/projects/ARAP/svd_rot.pdf, 2017, can be used to solve for p2 if the noise is normally distributed.
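A sketch of that least-squares rigid alignment (a weighted Kabsch/Procrustes solution via SVD) is given below; it solves Equation (6) for the rotation and translation when the shape estimates are noisy. The function name and the optional per-point weights are my own additions.

```python
import numpy as np

def rigid_align(A, B, w=None):
    # Least-squares R, t such that B ≈ R A + t, with A and B of size 3xP
    # (Equation 6 with A = g(tau_2) and B = g(tau_1)); w are optional
    # per-point weights.
    w = np.ones(A.shape[1]) if w is None else np.asarray(w, dtype=float)
    a_bar = (A * w).sum(axis=1, keepdims=True) / w.sum()
    b_bar = (B * w).sum(axis=1, keepdims=True) / w.sum()
    H = (A - a_bar) @ np.diag(w) @ (B - b_bar).T        # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = b_bar - R @ a_bar
    return R, t
```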


It should be noted that using a calibration target such as the checkerboard pattern is one of many ways to estimate the trajectory of the sensor. Alternative methods such as odometry estimation or Simultaneous Localisation and Mapping (SLAM) algorithms assume the environment is fixed and identify correspondences between different observations of the same structure whilst making assumptions on the trajectory or the observations, e.g. that the sensor moves smoothly through 3D space. One example of a SLAM algorithm is the Wildcat algorithm by Data61 of the Commonwealth Scientific and Industrial Research Organisation, Australia.


This disclosure provides a method that uses the estimation of a sensor's trajectory to align two trajectories under the assumption that the transformation between sensors is constant for all time, and the transform between the world coordinate system of each sensor trajectory is unknown.


Trajectory Alignment

Let {ci} be the trajectory 120 of camera 102. As before, each ci denotes a Euclidean transform, such as 121 or 122, that transforms shapes observed by the camera at time τi to their position in the camera's world coordinate system.


Let {dj} be the trajectory 110 of LiDAR 101, where each dj denotes a Euclidean transform, such as 111 or 112, by which observations made by the LiDAR can be transformed to positions in the LiDAR's world coordinate system.


We know from Equation 4 that the camera and LiDAR world coordinate systems are related by an unknown Euclidean transformation q 206.


It is assumed that the camera 102 is rigidly connected to the LiDAR 101. A rigid relationship means that the position and orientation of the camera 102 relative to the LiDAR 101 is constant. Mathematically, this means that a 3D shape X observed by the LiDAR 101 is always observed at the position ƒ(X; p) by the camera 102. The parameter vector p is also referred to as the extrinsic parameters.


To derive the trajectory alignment algorithm there is another assumption: the 3D shape X observed by the LiDAR 101 is rigidly attached to the LiDAR 101 and to the camera 102 and the rank of the matrix X representing the 3D shape is 3.


With these assumptions, it is now possible to state the following relationship





ƒ(ƒ(X;di);q)=ƒ(ƒ(X;p);ci)  (7)


which holds for all i. Here, ƒ(ƒ(X; di); q) is the position of X in the LiDAR trajectory's world coordinate system 201 transformed to the camera trajectory's world coordinate system 202 using the unknown transform parameters q 206. The value ƒ(ƒ(X; p); ci) is the position of X in the camera trajectory's world coordinate system 202 using the unknown extrinsic parameters p 205.


Thus, trajectory alignment requires solving for p and q given the arbitrary non-planar shape X and the trajectories {ci} and {di}.


Eliminating q

The transform q 206 can be eliminated by considering how the 3D shape X in the local coordinate system at time i changes when transformed into the local coordinate system at time j. The following shows how this formulation eliminates the Euclidean transform q.


The proof requires the following notational conventions





ƒ(ƒ(X;a);b)=ƒ(X;a∘b)  (8)





ƒ−1(X;a)=ƒ(X;a−1)  (9)





ƒ−1(ƒ(X;a);b)=ƒ(X;a∘b−1)  (10)





ƒ−1(ƒ(X;a);a)=ƒ(X;a∘a−1)=X  (11)


The proof begins with the noise free case (Equation 7).





ƒ(ƒ(X;di);q)=ƒ(ƒ(X;p);ci)  (12)





ƒ(X;di∘q)=ƒ(X;p∘ci)  (13)





ƒ(X;di∘q∘q−1∘dj−1)=ƒ(X;p∘ci∘cj−1∘p−1)  (14)





ƒ(X;di∘dj−1)=ƒ(X;p∘ci∘cj−1∘p−1)  (15)


For succinctness, we introduce uk=ci∘cj−1 and vk=di∘dj−1 to denote the k-th pairing of two transforms, giving





ƒ(X;vk)=ƒ(X;p∘uk∘p−1)  (16)


From Equation 16 we see that the parameter q has been eliminated.
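The sketch below illustrates how the pairings uk=ci∘cj−1 and vk=di∘dj−1 could be assembled from two synchronised trajectories, representing each pose as a 4x4 homogeneous matrix; this matrix form is an assumption, since the disclosure uses the parameter-vector form. Under the composition convention of Equation (8), a∘b applies a first and then b, so the matrix of a∘b is the matrix of b times the matrix of a.

```python
import numpy as np

def apply(T, X):
    # Apply a 4x4 homogeneous transform T to a 3xP shape X.
    return T[:3, :3] @ X + T[:3, 3:4]

def compose(a, b):
    # a ∘ b applies a first, then b (Equation 8); with 4x4 matrices the
    # composed matrix is therefore b @ a.
    return b @ a

def pairings(cam_poses, lidar_poses, pairs):
    # cam_poses, lidar_poses: temporally corresponding lists of 4x4
    # transforms c_i and d_i; pairs: list of index pairs (i, j).
    u = [compose(cam_poses[i], np.linalg.inv(cam_poses[j]))
         for i, j in pairs]                                    # u_k = c_i ∘ c_j^-1
    v = [compose(lidar_poses[i], np.linalg.inv(lidar_poses[j]))
         for i, j in pairs]                                    # v_k = d_i ∘ d_j^-1
    return u, v
```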



FIG. 4 illustrates the equality in Equation 16 that enables the calculation of p and will later form the objective function of an optimisation problem. Equation 16 states that on the left hand side a virtual object X is transformed 401 to the LiDAR's world coordinate system 201 from the LiDAR's location 111 at time i, and then transformed back 402 to the local LiDAR location at time j 112. On the right hand side, the same object X is transformed 403 from the LiDAR's point of view at time i 111 to the camera's point of view at time i 121 and then transformed 404 to the camera's world coordinate system 202. The virtual object X is then transformed back 405 to the camera's point of view at time j 122 and finally transformed back 406 to the LiDAR's point of view at time j 112.


In that sense, it is now clear that calculating the spatial relationship uses an equation with two locations di, dj 111, 112 on the first trajectory 110, two locations ci, cj 121, 122 on the second trajectory 120 and the spatial relationship p itself. Again, it is noted that Equation (16) is independent from the parameter q that transforms the LiDAR's world coordinate system 201 to the camera's world coordinate system 202.


Isotropic Gaussian Model

In most real-world applications, Equation 16 will not hold with equality due to noise present in the sensor trajectories {ci} and {dj}. In this section it is assumed that there is an error






ek=ƒ(X;vk)−ƒ(X;p∘uk∘p−1)  (17)


representing the difference between the left and right hand side of Equation 16. In other words, the objective function that is based on Equation (16) represents a difference between (a) the result of a transformation from the first point 111 to the second point 112 on the first trajectory 110 and (b) the result of a transformation from the first point 121 to the second point 122 on the second trajectory but subject to the spatial relationship p between the LiDAR 101 and the camera 102. “Subject to the spatial relationship p” means here that the object is first transformed 403 to the local camera point of view 121 and the result of the transformation is then transformed back 406 to the local LiDAR point of view as explained above.


This shows that the determination of the spatial relationship relies on four locations including two locations on each of the two trajectories. Multiple transformations are performed between these four locations and the two world coordinate systems. These transformations include essentially two groups of transformations, which both should ideally result in the same endpoint. The first group of transformations combined includes two LiDAR locations on the LiDAR trajectory and the LiDAR world coordinate system 201 while the second group of transformations includes two camera locations on the camera trajectory and the camera world coordinate system 202. The second group also includes the spatial relationship between the LiDAR and the camera. Since the points from both trajectories are included, the determination incorporates both trajectories as a basis for finding the spatial relationship between both sensors. The determination of the spatial relationship now aims to minimise this difference between the result of the two groups of transformations.
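A short sketch of the residual of Equation (17) for one pairing follows, reusing the 4x4 representation and the apply() helper from the previous sketch:

```python
import numpy as np

def residual(T_p, u_k, v_k, X):
    # e_k of Equation (17): the difference between the two groups of
    # transformations for a candidate extrinsic transform T_p (4x4).
    lhs = apply(v_k, X)                                    # f(X; v_k)
    rhs = apply(np.linalg.inv(T_p) @ u_k @ T_p, X)         # f(X; p ∘ u_k ∘ p^-1)
    return lhs - rhs
```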


It is further assumed that ek is a random variable that can be modelled using an isotropic multivariate Gaussian distribution






P(ek|p)=𝒩(ek;0,λk−1I)  (18)


where the scalar λk>0 is known for all k.


The joint probability of all correspondences assuming independence between correspondences is therefore










P(e_1, e_2, \ldots, e_N \mid p) = \prod_{k=1}^{N} P(e_k \mid p)  (19)







The maximum likelihood solution of the extrinsic parameters p is therefore











\arg\min_{p} \sum_{i=1}^{N} \frac{\lambda_i}{2} \left\| f(X; v_i) - f(X; p \circ u_i \circ p^{-1}) \right\|_F^2  (20)







where ∥·∥F2 is the squared Frobenius norm, i.e. the sum of the absolute squares of the elements of a matrix. Equation (20) essentially means that the difference between the two groups of transformations is minimised over further temporally corresponding locations (in addition to 111/121, 112/122) to improve accuracy under noise. One potential difficulty with Equation (20) is that it may be relatively hard to solve directly, since the orthogonality of the rotations must be enforced.


Therefore, an equivalent objective can be obtained using the knowledge that





∥X−Y∥F2=∥ƒ(X;p)−ƒ(Y;p)∥F2  (21)


giving











\arg\min_{p} \sum_{i=1}^{N} \frac{\lambda_i}{2} \left\| f(X; v_i \circ p) - f(X; p \circ u_i) \right\|_F^2  (22)







Optimisation Using the ADMM

Introducing the dummy variable Y into Equation 22, we arrive at









\min_{p, Y} \sum_{i=1}^{N} \frac{\lambda_i}{2} \left\| f(X; v_i \circ p) - f(Y; u_i) \right\|_F^2  (23)

\text{s.t. } Y = f(X; p)





which can be optimised using the alternating direction method of multipliers (ADMM) as described in Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers”, Found. Trends Mach. Learn., 3(1):1-122, January 2011. Other methods include dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others.


ADMM takes the form of a decomposition-coordination procedure, in which the solutions to small local subproblems are coordinated to find a solution to a large global problem. ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization.


The scaled augmented Lagrangian ℒ for the current problem is as follows










\mathcal{L}(p, Y, Z) = \sum_{i=1}^{N} \frac{\lambda_i}{2} \left\| f(X; v_i \circ p) - f(Y; u_i) \right\|_F^2 + \frac{\rho}{2} \left\| Y - f(X; p) + Z \right\|_F^2  (24)







Subproblem p

The following objective is the subproblem for the variable p










p^* = \arg\min_{p} \mathcal{L}(p, Y, Z)  (25)







where p denotes variable p for iteration k and p* denotes the updated variable for iteration k+1. That is, the proposed algorithm iteratively optimises the solution by repeatedly calculating an update on p and then on Y (see below).


This problem has a closed form solution as shown in Olga Sorkine-Hornung and Michael Rabinovich, “Least-squares rigid motion using SVD”, https://igl.ethz.ch/projects/ARAP/svd_rot.pdf, 2017.


Subproblem Y

The following is the subproblem involving the variable Y










Y^* = \arg\min_{Y} \mathcal{L}(p, Y, Z)  (26)

= \arg\min_{Y} \sum_{i=1}^{N} \frac{\lambda_i}{2} \left\| f(X; v_i \circ p) - f(Y; u_i) \right\|_F^2 + \frac{\rho}{2} \left\| Y - f(X; p) + Z \right\|_F^2  (27)

= \arg\min_{Y} \sum_{i=1}^{N} \frac{\lambda_i}{2} \left\| f(X; v_i \circ p \circ u_i^{-1}) - Y \right\|_F^2 + \frac{\rho}{2} \left\| Y - f(X; p) + Z \right\|_F^2  (28)

= \left[ \sum_{i=1}^{N} \lambda_i + \rho \right]^{-1} \left[ \sum_{i=1}^{N} \lambda_i f(X; v_i \circ p \circ u_i^{-1}) + \rho \left( f(X; p) - Z \right) \right]  (29)







Again, Y denotes variable Y for iteration k while Y* denotes variable Y for iteration k+1. So the proposed algorithm calculates update p*, then uses that updated value to calculate update Y*, and uses that updated value to calculate update p*, and so on to iteratively optimise the solution until a termination criterion is met.


After p* and Y* are computed, the scaled dual variables Z* are updated using






Z*=Z+Y*−ƒ(X;p*).  (30)


The ADMM algorithm repeats the steps in equations (25) to (30) until convergence. The termination criterion for convergence can be formulated as a threshold on the primal residual r*=vec(Y*−ƒ(X; p*)) and the dual residual










s^* = \rho \begin{bmatrix} X^{T} \otimes I & \mathbf{1} \otimes I \end{bmatrix} \begin{bmatrix} \operatorname{vec}\left( R(p^*) - R(p) \right) \\ t(p^*) - t(p) \end{bmatrix}  (31)







whereby ∥r*∥2≤ϵpri and ∥s*∥2≤ϵdual. The value of ϵpri is typically very small to ensure the equality constraint Y=ƒ(X; p) is satisfied, leaving ϵdual to control the quality of the solution p*, with example values for both ϵ of 10−3 or 10−4.


The operator A⊗B computes the Kronecker product of the matrices A and B.
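A condensed sketch of the ADMM iteration of Equations (23) to (30) is given below, reusing rigid_align() and apply() from the earlier sketches and the 4x4 representation of the transforms. The number of iterations and the penalty ρ are illustrative choices, and a practical implementation would also monitor the primal and dual residuals of Equation (31) as a termination criterion.

```python
import numpy as np

def admm_extrinsics(u, v, X, lam, rho=1.0, iters=200):
    # u, v: lists of 4x4 transforms u_k, v_k; X: 3xP non-planar shape;
    # lam: per-correspondence weights lambda_k. Returns the estimated
    # extrinsic transform p as a 4x4 matrix.
    P = X.shape[1]
    T_p = np.eye(4)                 # current estimate of p
    Y = X.copy()                    # dummy variable of Equation (23)
    Z = np.zeros_like(X)            # scaled dual variable
    for _ in range(iters):
        # p-subproblem (Equation 25): a weighted rigid alignment sending
        # f(X; v_k) onto f(Y; u_k) with weight lambda_k, and X onto Y + Z
        # with weight rho; solved in closed form by rigid_align (SVD).
        src = np.hstack([apply(v_k, X) for v_k in v] + [X])
        dst = np.hstack([apply(u_k, Y) for u_k in u] + [Y + Z])
        w = np.concatenate([np.repeat(lam, P), np.full(P, rho)])
        R_p, t_p = rigid_align(src, dst, w)
        T_p = np.eye(4)
        T_p[:3, :3], T_p[:3, 3:4] = R_p, t_p
        # Y-subproblem, closed form of Equation (29).
        num = sum(l_k * apply(np.linalg.inv(u_k) @ T_p @ v_k, X)
                  for l_k, u_k, v_k in zip(lam, u, v))
        num = num + rho * (apply(T_p, X) - Z)
        Y = num / (np.sum(lam) + rho)
        # Dual update (Equation 30).
        Z = Z + Y - apply(T_p, X)
    return T_p
```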


Choosing λi

There are a number of strategies for choosing the values of λi. One basic strategy is λ1= . . . =λN=1. This unfortunately treats each correspondence equally which can lead to undesirable outcomes when a small number of correspondences are erroneous.


To handle outliers, an iterative strategy can be employed whereby the current estimate of the extrinsic parameters p is used to calculate the new values of λi. One such strategy is to use the function





λi=𝒩(ei;0,σ−2I)=𝒩(ƒ(X;vi∘p)−ƒ(X;p∘ui);0,σ−2I)  (32)


where σ is a constant.


An extension to the above strategy is to shrink σ after each iteration





σ←cσ  (33)


where 0<c<1. A typical value for c is in the range 0.95<c<1. Guaranteed convergence of the extrinsic parameters p is achieved by enforcing a lower bound on σ.
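A sketch of the reweighting strategy of Equations (32) and (33) follows, reading the isotropic Gaussian as a density over the stacked residual vector (one possible interpretation) and reusing the helpers above; the lower bound sigma_min is an assumed value.

```python
import numpy as np

def update_weights(u, v, X, T_p, sigma):
    # Recompute lambda_i (Equation 32) from the current estimate of p.
    lam = []
    for u_k, v_k in zip(u, v):
        e = apply(T_p @ v_k, X) - apply(u_k @ T_p, X)   # f(X; v_i∘p) − f(X; p∘u_i)
        d = e.size
        # isotropic Gaussian density with covariance sigma^-2 * I
        lam.append((sigma ** 2 / (2 * np.pi)) ** (d / 2)
                   * np.exp(-0.5 * sigma ** 2 * np.sum(e ** 2)))
    return np.array(lam)

# After each outer iteration, shrink sigma (Equation 33) with a floor to
# guarantee convergence, e.g.:
#   sigma = max(c * sigma, sigma_min)   # 0.95 < c < 1, sigma_min assumed
```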


Synchronisation

An implicit assumption introduced above is that the time ηi at which the LiDAR transform di was estimated is equal to the time τi at which the camera transform ci was estimated. This is referred to herein as a temporal correspondence. Unfortunately, this requires the LiDAR and camera to share the same clock, which may be difficult to achieve.


From this point forward we represent trajectories by the set {(ci, τi)} where ci is the i-th transform and τi is the timestamp at which the i-th transform was estimated. In order to successfully perform trajectory alignment, processor 107 uses a pair of trajectories which are synchronised, i.e. {(ci, ηi)} and {(di, ηi)} form temporally corresponding points because the same ηi is used as a timestamp in both transforms.


If it is assumed that the relationship between a LiDAR timestamp η and a camera timestamp τ is known (i.e. τ=g(η) for some function g) then processor 107 samples the acquired camera trajectory {(ci, τi)} at the timestamps {g(ηi)} in order to create the synchronised camera trajectory {(ĉi, ηi)}.


Sampling a trajectory {(ci, τi)} at time τ is achieved by performing the following operations (a code sketch follows this list):

    • Find the transform cα which has the largest timestamp τα such that τα<τ.
    • Find the transform cβ which has the smallest timestamp τβ such that τ<τβ.
    • Let qα be the quaternion representation of transform cα.
    • Let qβ be the quaternion representation of transform cβ.
    • Let λ=(τ−τα)/(τβ−τα).
    • Linearly interpolate between qα and qβ to obtain q









q=qα+λ(qβ−qα)  (34)

    • Normalise q such that ∥q∥=1 i.e. q←∥q∥−1 q.
    • Convert q back to the transform representation used in the trajectory.
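A sketch of the sampling procedure above, using 4x4 poses and scipy for the quaternion conversions; interpolating the translation linearly is an assumption, since the listed steps only cover the rotation, and the sign flip that keeps the two quaternions in the same hemisphere before interpolation is a common addition not stated in the list.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sample_pose(stamps, poses, tau):
    # Sample a trajectory {(c_i, tau_i)} at time tau; stamps must be sorted
    # and tau must lie strictly between two of them.
    stamps = np.asarray(stamps, dtype=float)
    a = np.searchsorted(stamps, tau) - 1         # index with largest tau_alpha < tau
    b = a + 1                                    # index with smallest tau_beta > tau
    lam = (tau - stamps[a]) / (stamps[b] - stamps[a])
    q_a = Rotation.from_matrix(poses[a][:3, :3]).as_quat()
    q_b = Rotation.from_matrix(poses[b][:3, :3]).as_quat()
    if np.dot(q_a, q_b) < 0:                     # same-hemisphere convention
        q_b = -q_b
    q = q_a + lam * (q_b - q_a)                  # Equation (34)
    q = q / np.linalg.norm(q)                    # normalise so that ||q|| = 1
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(q).as_matrix()
    T[:3, 3] = (1 - lam) * poses[a][:3, 3] + lam * poses[b][:3, 3]
    return T
```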


      Temporal Relationship with Unknown Parameters


In practice, the function g which relates a timestamp of the LiDAR to a camera timestamp has unknown parameters Δη. For example





τ=g(η;Δη)=η+Δη  (35)


This requires an algorithm for estimating Δη simultaneously with the extrinsic parameters p. It is typical for bounds on possible values of Δη to exist and the tolerance/accuracy of the possible values to be known. This allows a discrete number of possible values for Δη to be created and a brute force search can then be performed.


The brute force search involves applying the steps below to each candidate value of Δη

    • Create the synchronised camera trajectory {(ĉi, ηi+Δη)} from the acquired camera trajectory {(ci, τi)} using the algorithm described in the previous section.
    • Estimate the extrinsic parameters p as described in the trajectory alignment sections above.
    • Compute the joint likelihood (Equation 19) for the optimised extrinsic parameters.


Once completed, the optimal value of Δη is the value which produced the maximum joint likelihood.
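A sketch of the brute-force search, chaining the earlier sketches; the consecutive pairings, the unit initial weights and the use of the Gaussian residual densities of Equation (32) as a stand-in for the joint likelihood of Equation (19) are all illustrative choices.

```python
import numpy as np

def search_offset(cam_stamps, cam_poses, lidar_stamps, lidar_poses, X, candidates):
    # Returns (score, delta_eta, T_p) for the best candidate clock offset.
    best = None
    for delta in candidates:
        # Resample the camera trajectory at the shifted LiDAR timestamps.
        synced = [sample_pose(cam_stamps, cam_poses, eta + delta)
                  for eta in lidar_stamps]
        pairs = [(i, i + 1) for i in range(len(synced) - 1)]   # consecutive pairings (assumed)
        u, v = pairings(synced, lidar_poses, pairs)
        T_p = admm_extrinsics(u, v, X, lam=np.ones(len(u)))
        # Proxy for the joint likelihood of Equation (19).
        score = np.sum(np.log(update_weights(u, v, X, T_p, sigma=1.0) + 1e-300))
        if best is None or score > best[0]:
            best = (score, delta, T_p)
    return best
```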


Computer System


FIG. 5 illustrates a computer system 500 for estimating a spatial relationship between a first sensor and a second sensor and then navigating a vehicle. The computer system 500 comprises processor 107, which already featured in FIG. 1, connected to a program memory 501, a data memory 502, a communication port 503 and a control output port 504. The program memory 501 is a non-transitory computer readable medium, such as a hard drive, a solid state disk or CD-ROM. Software, that is, an executable program stored on program memory 501, causes the processor 107 to perform the methods in FIG. 3 and FIG. 6, that is, it calculates the spatial relationship between two sensors based on respective trajectories of those sensors. The term “calculating a spatial relationship” refers to calculating a value that is indicative of the spatial relationship. This also applies to related terms.


The processor 107 may store the sensor data, trajectories and spatial relationship on data store 502, such as on RAM or a processor register. Processor 107 may also send the determined spatial relationship via communication port 503 to a server or other computer, such as a computer in a central control room.


The processor 107 may receive data, such as LiDAR and camera data, from data memory 502 as well as from the communications port 503. In one example, the processor 107 receives sensor data from LiDAR 101 and camera 102 via communications port 503, such as by using a Wi-Fi network according to IEEE 802.11. The Wi-Fi network may be a decentralised ad-hoc network, such that no dedicated management infrastructure, such as a router, is required or a centralised network with a router or access point managing the network.


In one example, the processor 107 receives and processes the sensor data in real time. This means that the processor 107 calculates or updates the spatial relationship every time sensor data is received from LiDAR 101 and camera 102 and completes this calculation before the LiDAR 101 and camera 102 send the next sensor data update.


Although communications port 503 and control port 504 are shown as distinct entities, it is to be understood that any kind of data port may be used to receive data, such as a network connection, a memory interface, a pin of the chip package of processor 107, or logical ports, such as IP sockets or parameters of functions stored on program memory 501 and executed by processor 107. These parameters may be stored on data memory 502 and may be handled by-value or by-reference, that is, as a pointer, in the source code.


The processor 107 may receive data through all these interfaces, which includes memory access of volatile memory, such as cache or RAM, or non-volatile memory, such as an optical disk drive, hard disk drive, storage server or cloud storage. The computer system 500 may further be implemented within a cloud computing environment, such as a managed group of interconnected servers hosting a dynamic number of virtual machines.


It is to be understood that any receiving step may be preceded by the processor 107 determining or computing the data that is later received. For example, the processor 107 determines a point cloud and stores the point cloud in data memory 502, such as RAM or a processor register. The processor 107 then requests the data from the data memory 502, such as by providing a read signal together with a memory address. The data memory 502 provides the data as a voltage signal on a physical bit line and the processor 107 receives the point cloud via a memory interface.



FIG. 3 and FIG. 6 are to be understood as a blueprint for the software program and may be implemented step-by-step, such that each step in FIGS. 3 and 6 is represented by a function in a programming language, such as C++ or Java. The resulting source code is then compiled and stored as computer executable instructions on program memory 501.


Method for Determining the Spatial Relationship


FIG. 6 illustrates a method 600 for estimating a spatial relationship p 403 between a first sensor and a second sensor. Typically, processor 107 performs method 600 and in that sense determines 601 a first trajectory of the first sensor, such as LiDAR 101, over time using sensor data indicative of a scene captured over time by the first sensor moving relative to the scene. Processor 107 also determines 602 a second trajectory of the second sensor, such as camera 102, over time using sensor data indicative of a scene captured over time by the second sensor moving relative to the scene. Finally, processor 107 calculates 603 the spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory.


Since the calculation is based on both trajectories, the determination implicitly incorporates the mapping of the sensors relative to their environments. As a result, the calculation is accurate and robust to sensor noise. It is noted that the more specific features discussed above with reference to method 300 in FIG. 3 and in particular with reference to Equations (20) and (23) and optimisation using an ADMM, also apply to method 600 in FIG. 6.


It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims
  • 1. A method for navigating a moving vehicle through an environment, the method comprising: capturing the environment with a first sensor mounted on the moving vehicle to create first sensor data;capturing the environment with a second sensor mounted on the moving vehicle to create second sensor data;determining a first trajectory of the first sensor over time using the first sensor data, the first trajectory comprising multiple locations relative to a first world coordinate system for the first sensor;determining a second trajectory of the second sensor over time using the second sensor data, the second trajectory comprising multiple locations relative to a second world coordinate system for the second sensor;estimating the spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory;using the estimated spatial relationship to combine the first sensor data and the second sensor data into a combined multi-sensor representation of the environment; andnavigating the moving vehicle based on the combined multi-sensor representation of the environment.
  • 2. The method of claim 1, wherein estimating the spatial relationship comprises optimising an objective function that includes: two or more of the multiple locations of the first trajectory,two or more of the multiple locations of the second trajectory, andthe spatial relationship.
  • 3. The method of claim 2, wherein a reference transformation transforms the first world coordinate system to the second coordinate system, andthe objective function is independent from the reference transformation.
  • 4. The method of claim 2 or 3, wherein the objective function is based on locations on the first trajectory and the second trajectory that correspond in time.
  • 5. The method of claim 4, wherein a first location of the first trajectory temporally corresponds to a first location of the second trajectory,a second location of the first trajectory temporally corresponds to a second location of the second trajectory;the objective function represents a difference between:a result of a transformation from the first point of the first trajectory to the second point of the first trajectory, anda result of a transformation from the first point on the second trajectory to the second point on the second trajectory subject to the spatial relationship.
  • 6. The method of any one of claims 2 to 5, wherein the objective function represents the difference multiple times for further temporally corresponding locations to improve accuracy under noise.
  • 7. The method of any one of claims 2 to 6, wherein optimising the objective function comprises optimising a modified objective function subject to an equality constraint.
  • 8. The method of claim 7, wherein the objective function is modified by replacing, in the second transformation, a subject transformed by the spatial relationship with a single variable and the equality constraint is a constraint on the single variable to be equal to the subject transformed by the spatial relationship.
  • 9. The method of claim 8, wherein optimising the objective function comprises iteratively performing the steps of: optimising a first sub-problem to update an estimate of the spatial relationship given a current value of the single variable; andoptimising a second sub-problem to update an estimate of the single variable given the current estimate of the spatial relationship.
  • 10. The method of any one of claims 2 to 9, wherein the method comprises optimising the objective function using alternating direction method of multipliers (ADMM).
  • 11. The method of any one of the preceding claims, further comprising synchronising the first sensor data with the second sensor data to determine temporally corresponding locations.
  • 12. The method of claim 11, wherein synchronising comprises interpolation of the first trajectory.
  • 13. The method of claim 12, wherein the interpolation is based on quaternion representation of the locations of the first trajectory.
  • 14. The method of any one of the preceding claims, wherein the first trajectory and the second trajectory each comprise multiple Euclidean transformations, each of the multiple Euclidean transformations including a rotation and translation, representing one of the locations.
  • 15. A method for estimating a spatial relationship between a first sensor and a second sensor, the method comprising: determining a first trajectory of the first sensor over time using sensor data indicative of a scene captured over time by the first sensor moving relative to the scene;determining a second trajectory of the second sensor over time using sensor data indicative of a scene captured over time by the second sensor moving relative to the scene; andcalculating the spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory.
  • 16. The method of claim 15, wherein estimating the spatial relationship comprises optimising an objective function that includes: two or more of the multiple locations of the first trajectory,two or more of the multiple locations of the second trajectory, andthe spatial relationship.
  • 17. The method of claim 16, wherein a reference transformation transforms the first world coordinate system to the second coordinate system, andthe objective function is independent from the reference transformation.
  • 18. The method of claim 16 or 17, wherein the objective function is based on locations on the first trajectory and the second trajectory that correspond in time.
  • 19. The method of claim 18, wherein a first location of the first trajectory temporally corresponds to a first location of the second trajectory,a second location of the first trajectory temporally corresponds to a second location of the second trajectory;the objective function represents a difference between:a result of a transformation from the first point of the first trajectory to the second point of the first trajectory, anda result of a transformation from the first point on the second trajectory to the second point on the second trajectory subject to the spatial relationship.
  • 20. The method of any one of claims 16 to 19, wherein the objective function represents the difference multiple times for further temporally corresponding locations to improve accuracy under noise.
  • 21. The method of any one of claims 16 to 20, wherein optimising the objective function comprises optimising a modified objective function subject to an equality constraint.
  • 22. The method of claim 21, wherein the objective function is modified by replacing, in the second transformation, a subject transformed by the spatial relationship with a single variable and the equality constraint is a constraint on the single variable to be equal to the subject transformed by the spatial relationship.
  • 23. The method of claim 22, wherein optimising the objective function comprises iteratively performing the steps of: optimising a first sub-problem to update an estimate of the spatial relationship given a current value of the single variable; andoptimising a second sub-problem to update an estimate of the single variable given the current estimate of the spatial relationship.
  • 24. The method of any one of claims 16 to 23, wherein the method comprises optimising the objective function using alternating direction method of multipliers (ADMM).
  • 25. The method of any one of the claims 15 to 24, further comprising synchronising the first sensor data with the second sensor data to determine temporally corresponding locations.
  • 26. The method of claim 25, wherein synchronising comprises interpolation of the first trajectory.
  • 27. The method of claim 26, wherein the interpolation is based on quaternion representation of the locations of the first trajectory.
  • 28. The method of any one of claims 15 to 27, wherein the first trajectory and the second trajectory each comprise multiple Euclidean transformations, each of the multiple Euclidean transformations including a rotation and translation, representing one of the locations.
  • 29. Software that, when executed by a computer, causes the computer to perform the method of any one of the preceding claims.
  • 30. A system comprising: a first sensor and a second sensor; anda processor configured to: determine a first trajectory of the first sensor over time using sensor data indicative of a scene captured over time by the first sensor moving relative to the scene;determine a second trajectory of the second sensor over time using sensor data indicative of a scene captured over time by the second sensor moving relative to the scene, andcalculate a spatial relationship between the first sensor and the second sensor based on the first trajectory and the second trajectory.
Priority Claims (1)
Number: 2021903403; Date: Oct 2021; Country: AU; Kind: national
PCT Information
Filing Document: PCT/AU2022/051268; Filing Date: 10/21/2022; Country: WO