HIGH-PRECISION ODOMETRY ESTIMATION METHOD BASED ON DOUBLE-LAYER FILTERING FRAMEWORK

Information

  • Patent Application
  • Publication Number: 20240312061
  • Date Filed: April 26, 2024
  • Date Published: September 19, 2024
Abstract
A high-precision odometry estimation method based on a double-layer filtering framework, comprising the following steps: acquiring sensing information of a binocular camera, a main body IMU, a legged auxiliary IMU, legged joint motors and foot end force sensors; fusing the rotation angles and rotational velocities of the legged joint motors with the measurement information of the legged auxiliary IMU and the foot end force sensors in a bottom-layer legged filter to obtain a velocity observation in the legged auxiliary IMU coordinate system; then inputting the velocity observation into an upper-layer filter, which fuses it with the acquired observation information of the main body IMU and the binocular camera to complete the information fusion and output an odometry estimation result of the robot. The method provides an effective solution for the autonomous positioning of a quadruped mobile robot during rapid movement.
Description
TECHNICAL FIELD

The present disclosure belongs to the technical field of autonomous positioning of mobile robots, and in particular to a high-precision odometry estimation method based on a double-layer filtering framework.


BACKGROUND

With the rapid development of artificial intelligence, robots, unmanned vehicles, virtual reality, augmented reality and other fields have become research hotspots. In particular, when the environment is unknown and GPS signals are unavailable, accurate autonomous positioning and map building are prerequisites for research in this field.


An inertial measurement unit (IMU) can measure the acceleration and rotational velocity of a mobile platform at a very high frequency, and can therefore provide relatively high-precision position and attitude estimation over short periods of time. However, the IMU suffers from zero drift and is affected by noise. As the running time increases, the cumulative error introduced by computing the relative pose through integration grows steadily and cannot be corrected. Therefore, other sensors that can sense the external environment are usually used for auxiliary positioning to reduce the influence of IMU noise accumulation and zero drift on the accuracy of odometry estimation. Visual-aided inertial odometry (VIO) corrects the noise and zero drift of the IMU through the camera. Due to its low cost and high robustness, it has become one of the earliest and most common positioning methods successfully applied in aerospace, VR/AR, autonomous driving and other fields. However, in some challenging application scenarios, the VIO algorithm framework has its limitations: the image frequency is relatively low, and discretization and linearization errors lead to a significant decline in odometry estimation accuracy, or even positioning failure, in highly dynamic scenes; image blur in scenes with severe jitter causes conventional feature point matching algorithms to fail; and when the mobile platform performs special motions such as planar motion, uniformly accelerated motion or rotation around a fixed axis, the VIO positioning method suffers from serious degeneracy, which makes it impossible to correctly estimate the motion scale. All the above scenes can be found in practical applications, and some mobile platforms even exhibit several of these conditions at the same time. For example, when a quadruped mobile robot runs fast, the robot body shakes violently and moves quickly (the fastest quadruped mobile robots can currently reach 4 m/s). Fortunately, these problems can be addressed by fusing other sensors on the platform through multi-sensor fusion.


In recent years, localization has become a hot research topic in the field of mobile robots, largely due to the invention and application of many new sensors, such as laser radar and cameras, which have driven the vigorous development of laser SLAM and visual SLAM. Multi-sensor fusion is one of the main research directions in the field of mobile robot perception. According to the fusion framework, it can be divided into loose coupling and tight coupling. A loosely coupled algorithm processes the measurements of each sensor separately to obtain independent pose estimates, and then fuses the multiple pose estimates. A tightly coupled framework uses the measurements of all sensors to predict and update one group of state variables, and finally obtains an optimal position and attitude estimate that is consistent with all sensor observations. The loosely coupled method is simple in design and can easily fuse multiple sensor observations. However, when fusing multiple independent pose estimates, linearization errors are introduced into the observation noise, so that the residual model no longer obeys an approximately Gaussian distribution, resulting in large pose estimation errors. It is generally accepted in the research field that the precision of tightly coupled multi-sensor fusion algorithms is clearly higher than that of loosely coupled methods, but their algorithm design is also more complicated. Choosing an appropriate sensor combination for the application platform and scene and designing the corresponding tightly coupled algorithm is one of the main problems to be solved in the practical application of robot perceptual positioning algorithms.


According to the back-end fusion algorithm, visual inertial odometry estimation algorithms can be divided into filtering-based and optimization-based algorithms. The optimization-based method aims at minimizing the re-projection error of feature point observations and iteratively optimizes the system poses. It relies on optimization frameworks such as g2o and Ceres and can obtain more accurate pose estimation. However, iterative optimization requires more computing resources. In practical applications, most mobile robots are systems with scarce computing resources (such as microcontrollers and on-board chips based on an ARM architecture), and it is difficult to meet the real-time requirements of pose estimation with iterative optimization methods. Algorithms based on the Multi-state Constrained Kalman filter (MSC-KF) mostly make Markov assumptions and only need a relatively small number of matrix operations to complete fairly accurate state estimation, which requires fewer computing resources and can better meet the needs of practical applications.


At present, common visual inertial odometry estimation algorithms cannot estimate the position and attitude with high precision during the fast movement of a quadruped mobile robot, and laser-based positioning algorithms are difficult to deploy in practical applications because of the extremely high sensor price. Therefore, how to fully consider the platform characteristics of the quadruped mobile robot and reasonably select additional auxiliary sensors for fusion positioning is an urgent technical problem in the application field.


SUMMARY

The object of the present disclosure is to provide a high-precision odometry estimation method based on a double-layer filtering framework, which realizes high-precision odometry estimation in extremely challenging scenes such as violent shaking and fast movement by tightly coupling the IMU with visual features and external sensors such as a legged odometer.


The object of the present disclosure is achieved through the following technical solution: a high-precision odometry estimation method based on a double-layer filtering framework, including the following steps:


S1, setting intrinsic parameters of a binocular camera and an IMU, and setting extrinsic parameters between a main body IMU and the binocular camera, extrinsic parameters between a main body IMU and a legged odometer, and extrinsic parameters between a legged auxiliary IMU and a legged joint motor; acquiring feedback information of the legged joint motor, measurement information of a foot end force sensor, measurement information of the main body IMU and the legged auxiliary IMU, and binocular vision observation information;


S2, fusing the information of a rotation angle and a rotational velocity of the legged joint motor, the measurement information of the legged auxiliary IMU and the foot end force sensor acquired in S1 by a bottom-layer legged filter, wherein a state vector of the bottom-layer legged filter is firstly defined, an average velocity observation of the legged and a corresponding variance thereof are calculated, then a Kalman gain corresponding to the variance is calculated, and the state vector of the bottom-layer legged filter is updated according to the Kalman gain to obtain a velocity observation in a legged auxiliary IMU coordinate system; and


S3, inputting the velocity observation obtained in step S2 into an upper-layer filter, and fusing the observation information of the main body IMU, the camera and the legged odometer acquired in S1 by the upper-layer filter, wherein a state vector of the upper-layer filter is firstly defined, the state vector is predicted by a motion model of the main body IMU to obtain a prior of the state vector, and then the prior of the state vector is updated by using binocular vision observation to obtain a vision-main body IMU fusion state estimated value; finally, the vision-main body IMU fusion state estimated value is updated by using the velocity observation of the legged odometer to obtain an estimated value of the state vector of the upper-layer filter, thereby completing information fusion and outputting an odometry estimation result of a robot.


The present disclosure has the following beneficial effects: the high-precision odometry estimation method based on the double-layer filtering framework can efficiently integrate various sensors such as the IMU, camera, foot motor feedback and foot end sensor, and effectively solves the problem that existing methods can hardly obtain high-precision pose estimation when the mobile platform moves rapidly and shakes violently; at the same time, an MSC-KF framework is selected as the main body, and the multiple states are linearly interpolated to realize the time alignment of multiple sensors; by considering both fusion accuracy and algorithm efficiency, real-time relative pose estimation can be carried out on an onboard platform with limited computing resources.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic diagram of a two-layer filtering fusion framework according to the present disclosure.



FIG. 2 is a schematic diagram of sensor distribution applied to the sensing system of a quadruped mobile robot.





DESCRIPTION OF EMBODIMENTS

The present disclosure will be described in detail with reference to examples and drawings. The inventive principle of the high-precision odometry estimation method based on the double-layer filtering framework is as follows:


A Multi-state Constrained Kalman filter (MSC-KF) algorithm is used to fuse the observation data of an IMU, a legged odometer and a binocular camera to obtain a high-precision pose estimate of the mobile platform; that is, the observations of multiple sensors are fused to realize high-precision pose estimation. Among them, the relative pose obtained by numerically integrating (to first approximation) the IMU measurement data is used as a prediction, the camera observations correct the state through the update strategy of the Multi-state Constrained Kalman Filter algorithm, and the legged odometer further updates the state through the update strategy of an Extended Kalman Filter (EKF) algorithm. The overall framework of the system is consistent with that of the MSC-KF algorithm, that is, the camera poses over a recent period of time are stored in the state vector in the form of a dynamic window, so as to form image feature constraints for updating the state. The observation frequency of the legged odometer is often high, and updating the state at the observation frequency of the legged odometer would noticeably increase the time needed for state estimation; therefore the present disclosure updates the adjacent states in the dynamic window by linear interpolation. The MSC-KF filtering framework is used as the main body, and the states in the dynamic window are linearly interpolated to assist the update. This combined framework can effectively improve the operating efficiency of the fusion algorithm and ensure its real-time performance.
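The data flow described above can be summarized by a short structural sketch. This is an assumed skeleton for illustration only; the class and method names, and the split of responsibilities between a `leg_filter` object and an `msckf` object, are hypothetical and are not taken from the disclosure:

```python
# Assumed structure of the double-layer data flow: the bottom-layer leg filter turns
# joint, force and auxiliary-IMU data into a velocity observation, which the upper-layer
# MSC-KF fuses with main-IMU propagation and camera updates.
class DoubleLayerOdometry:
    def __init__(self, leg_filter, msckf):
        self.leg_filter = leg_filter   # bottom layer: EKF over [p, R, v, b_a, b_w]
        self.msckf = msckf             # upper layer: MSC-KF with a cloned pose window

    def on_leg_data(self, joint_angles, joint_rates, foot_forces, aux_imu):
        self.leg_filter.predict(aux_imu)
        v_body = self.leg_filter.update(joint_angles, joint_rates, foot_forces)
        # the velocity observation is time-aligned to the clone window inside the MSC-KF
        self.msckf.update_with_leg_velocity(v_body)

    def on_main_imu(self, accel, gyro, dt):
        self.msckf.propagate(accel, gyro, dt)

    def on_stereo_image(self, left, right):
        self.msckf.clone_pose()
        self.msckf.update_with_features(left, right)
        return self.msckf.current_pose()
```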


Example 1

This example provides a high-precision odometry estimation method based on a double-layer filtering framework. FIG. 1 is a schematic diagram of a double-layer filtering fusion framework of the present disclosure; FIG. 2 is a schematic diagram of sensor distribution applied to the sensing system of a quadruped mobile robot, which mainly includes the following steps:


S1, a binocular vision inertial odometry estimation system is constructed, intrinsic parameters of the binocular camera and the IMU and extrinsic parameters among the binocular camera, the IMU and the legged odometer are input, and the measurement information of the binocular camera, the main body IMU, the legged auxiliary IMU, the legged joint motor (rotation angle and rotational velocity feedback) and the foot end force sensor is acquired.


The sensing system of the quadruped robot is constructed as follows: FIG. 2 is a schematic diagram of the sensor distribution of the quadruped robot sensing system. As shown in FIG. 2, the sensing system of the quadruped robot includes foot end force sensors 1, a binocular camera 2, a main body IMU 3, a legged auxiliary IMU 4, and legged joint motors 5. The binocular camera 2 is fixedly arranged on the head of the quadruped robot; the main body IMU 3 is installed on the head of the quadruped robot; the legged auxiliary IMU 4 is installed at the body center of the quadruped robot; a legged joint motor 5 is installed on the second movable joint of each leg of the robot and can feed back the rotation angle and rotational velocity information of the joint; the foot end force sensors 1 are respectively installed at the ends of the four legs of the robot.


Description of intrinsic and extrinsic parameters: the intrinsic parameters of the IMU include the variance of the accelerometer measurement noise, the variance of the accelerometer random walk noise, the variance of the gyroscope measurement noise and the variance of the gyroscope random walk noise. The intrinsic parameters of the binocular camera include the optical center coordinates, focal length and distortion parameters of the camera. The extrinsic parameters required by the double-layer filtering fusion algorithm include the extrinsic parameters between the main body IMU and the binocular camera, the extrinsic parameters between the main body IMU and the legged odometer, and the extrinsic parameters between the legged auxiliary IMU and the legged joints. The extrinsic parameters between the main body IMU and the binocular camera include the rotation matrix from the IMU coordinate system to the camera coordinate system and the position of the camera coordinate system in the main body IMU coordinate system. The extrinsic parameters between the main body IMU and the legged odometer include the rotation matrix from the main body IMU coordinate system to the legged odometer coordinate system and the position of the legged odometer coordinate system in the main body IMU coordinate system. The extrinsic parameters between the legged auxiliary IMU and the legged joints include the rotation matrix from the legged auxiliary IMU coordinate system to the legged coordinate system and the position of the legged coordinate system in the legged auxiliary IMU coordinate system.


Description of sensor measurement information: the sensor information includes feedback information of the legged joint motor, the measurement information of the foot end force sensor, the measurement information of the main body IMU and legged auxiliary IMU, and binocular vision observation information; the feedback information of the legged joint motor includes motor rotation angle information and rotational velocity information, and the acquisition frequency is 1000 Hz; the measurement information of the foot end force sensor includes the force when the foot end contacts the ground, and the acquisition frequency is 1000 Hz; the measurement information of the main body IMU and the legged auxiliary IMU includes the accelerations along the three axes (x,y,z) of the IMU coordinate system and the rotational velocities around the three axes, and the signal acquisition frequency is 400 Hz; the binocular vision observation information includes a left eye picture and a right eye picture acquired by a binocular camera at the same time.


S2, information fusion is carried out by a bottom-layer legged filter; the core idea of this step is to regard the contact point of the quadruped robot's foot as the end effector of a manipulator, so that, with the motor velocity of each joint known, the relative position between the leg contact point and the robot body can be obtained by solving the forward kinematics with the DH parameter method; the upper-layer filter has two working stages: a prediction stage and an update stage; in the prediction stage, the upper-layer filter performs a first-approximation integration of the acceleration and rotational velocity measured by the main body IMU between two adjacent images, and thereby obtains the displacement and rotation of the quadruped robot during this period; in the update stage, the upper-layer filter uses the Multi-state Constrained Kalman Filter fusion method to fuse and update the result of the IMU prediction stage with the camera observations, and uses the Extended Kalman Filter fusion method to further fuse and update the camera-updated odometry estimate with the velocity observation of the legged odometer, which specifically includes the following sub-steps:


S201, the state vector of the bottom-layer legged filter is defined, and the predicted state vector is obtained by using the observation model of the legged auxiliary IMU, specifically:


the state vector of the bottom-layer legged filter is defined as







$$x^{T} = \begin{bmatrix} p & R & v & b_a & b_w \end{bmatrix}$$





where p represents the position of the robot body in the world coordinate system, R represents the rotation (attitude) of the robot body in the world coordinate system, v represents the velocity of the robot body in the world coordinate system, $b_a$ represents the zero-offset bias of the accelerometer of the legged auxiliary IMU, and $b_w$ represents the zero-offset bias of the gyroscope of the legged auxiliary IMU.


The following is obtained according to the observation model of legged auxiliary IMU:












$${}^{I}\tilde{\omega}_{WI} = {}^{I}\omega_{WI} + b_w + n_\omega$$
$${}^{I}\tilde{a}_{WI} = {}^{I}a_{WI} + b_a + n_a$$









where ${}^{I}\tilde{\omega}_{WI}$ represents the measured angular velocity of the legged auxiliary IMU in the IMU coordinate system, ${}^{I}\tilde{a}_{WI}$ represents the measured acceleration of the legged auxiliary IMU in the IMU coordinate system, ${}^{I}\omega_{WI}$ and ${}^{I}a_{WI}$ represent the true angular velocity and acceleration of the legged auxiliary IMU in the IMU coordinate system, and $n_a$ and $n_\omega$ are the noises of the acceleration and rotational velocity observations, all of which adopt a zero-mean Gaussian white noise model.


The observation of the legged auxiliary IMU is transformed into the body coordinate system as a control input uk of the system, and a corresponding input at tk is obtained:







$$u_k = \begin{bmatrix} \omega_k \\ a_k \end{bmatrix} = \begin{bmatrix} R_{IB}\left({}^{I}\tilde{\omega}_{WI} - (b_w + n_\omega)\right) \\ R_{IB}\left({}^{I}\tilde{a}_{WI} - (b_a + n_a)\right) \end{bmatrix}$$






where $R_{IB}$ is the rotation transformation from the legged auxiliary IMU coordinate system to the body coordinate system, $\omega_k$ is the angular velocity in the body coordinate system at time $t_k$, and $a_k$ is the acceleration in the body coordinate system at time $t_k$.


The following prediction relationship is obtained according to an IMU motion model:







$$\bar{x}_k = \begin{bmatrix} p_{k-1} \\ v_{k-1} \\ 0 \\ b^{a}_{k-1} \\ b^{w}_{k-1} \end{bmatrix} + \begin{bmatrix} v_{k-1}\,\Delta t \\ \left(-\tilde{\omega}_k^{\wedge}\, v_{k-1} + (R_{k-1})^{T} g + a_k\right)\Delta t \\ R_{k-1}\exp\!\left(\tilde{\omega}_k^{\wedge}\,\Delta t\right) \\ 0 \\ 0 \end{bmatrix}$$






where $\bar{x}_k$ represents the state vector predicted at time $t_k$, the subscripts $k-1$ and $k$ respectively denote the corresponding variables at $t_{k-1}$ and $t_k$, $\Delta t = t_k - t_{k-1}$, $\tilde{\omega}_k^{\wedge}$ denotes the skew-symmetric matrix of the angular velocity vector in the body coordinate system at time $t_k$, $R_{k-1}$ represents the rotation (attitude) of the robot body in the world coordinate system at time $t_{k-1}$, $\exp(\cdot)$ represents the exponential mapping operation, and $g$ is the acceleration of gravity.
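A minimal sketch of this prediction step, assuming a simple Euler integration of the bias-corrected auxiliary-IMU measurements; the helper names and the state layout are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Matrix exponential of a rotation vector (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)
    a = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * a + (1.0 - np.cos(theta)) * (a @ a)

def predict_leg_filter(p, R, v, b_a, b_w, omega_meas, acc_meas, R_IB, g, dt):
    """Propagate [p, R, v, b_a, b_w] with bias-corrected auxiliary-IMU measurements."""
    omega_k = R_IB @ (omega_meas - b_w)      # body-frame angular velocity (control input)
    a_k = R_IB @ (acc_meas - b_a)            # body-frame acceleration (control input)
    p_pred = p + v * dt
    v_pred = v + (-skew(omega_k) @ v + R.T @ g + a_k) * dt
    R_pred = R @ so3_exp(omega_k * dt)
    return p_pred, R_pred, v_pred, b_a, b_w  # biases follow a random-walk model
```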


S202, the velocity of the contact point of a single leg $\ell$ in relation to the body coordinate system is calculated;


for each leg $\ell$, a contact state $s_k^{\ell}\in\{0,1\}$ is defined, where 1 means that the leg $\ell$ touches the ground at time $t_k$; next, according to the forward kinematics model of the serial structure, and taking the interference of noise into account, the velocity of the contact point of the single leg $\ell$ in relation to the body coordinate system at time $t_k$ is obtained as follows:








$$\tilde{v}_k^{\,\ell} = -J(\tilde{\theta}_k)\,\dot{\tilde{\theta}}_k - \omega^{\wedge} f_k(\tilde{\theta}_k) + \eta_v$$






where $f_k(\cdot)$ and $J(\cdot)$ are the forward kinematics model of the legged robot and the corresponding Jacobian matrix; $\tilde{\theta}_k$ and $\dot{\tilde{\theta}}_k$ are the measured joint angle and rotational velocity of the second joint of the leg; $\omega$ is the body rotational velocity in the coordinate system of the legged odometer of the robot and is measured by the legged auxiliary IMU; and $\eta_v$ denotes the velocity measurement noise, modeled as zero-mean Gaussian white noise with a variance of around 1e−3.
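A hedged sketch of this single-leg velocity observation; `forward_kinematics` and `jacobian` stand in for the robot-specific DH model and are assumptions:

```python
import numpy as np

def leg_velocity(theta, theta_dot, omega_body, forward_kinematics, jacobian):
    """Velocity of the foot contact point expressed in the body frame:
    v = -J(theta) * theta_dot - omega x f(theta)."""
    foot_pos = forward_kinematics(theta)   # f_k(theta): foot position in the body frame
    J = jacobian(theta)                    # 3 x n_joints kinematic Jacobian
    return -J @ theta_dot - np.cross(omega_body, foot_pos)
```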


S203, considering that a plurality of legs may touch the ground at the same time, the probability of each leg touching the ground is taken as the weight of the corresponding velocity obtained in step S202 (the velocity of the contact point of the single leg $\ell$ in relation to the body coordinate system), the body velocity at the corresponding moment, that is, the weighted average velocity over the plurality of legs, is calculated, and the average velocity observation is obtained as:








$$\tilde{v}_k = \frac{\displaystyle\sum_{\ell\in C} P_k\!\left(s_k^{\ell}=1 \,\middle|\, f_{\ell_k}\right)\tilde{v}_k^{\,\ell}}{\displaystyle\sum_{\ell\in C} P_k\!\left(s_k^{\ell}=1 \,\middle|\, f_{\ell_k}\right)} + \eta_v$$






where $P_k(\cdot)$ is the relationship between the contact force and the touchdown probability (which can be obtained by fitting experimental test data), $C$ is the set of all legs, and $f_{\ell_k}$ represents the contact force with the ground when the leg $\ell$ touches the ground at time $t_k$, which is obtained from the following dynamic model of a serial robot:







$$f_{\ell_k} = -\left(J^{T}(\tilde{\theta}_k)\right)^{-1}\left(\tau_{\ell} - h_{\ell}\!\left(\tilde{\theta}_k,\, \dot{\tilde{\theta}}_k,\, g\right)\right)$$






where $h_{\ell}(\cdot)$ represents the continuous Newton-Euler formulation, $g$ represents the acceleration of gravity, and $\tau_{\ell}$ is the joint torque of each leg.
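A minimal sketch of the probability-weighted velocity averaging above, assuming a hypothetical logistic fit `contact_prob` for the force-to-probability relationship (the disclosure only states that this relationship is fitted experimentally):

```python
import numpy as np

def contact_prob(force, f0=30.0, k=0.2):
    """Map foot-end contact force [N] to a touchdown probability (assumed logistic fit)."""
    return 1.0 / (1.0 + np.exp(-k * (force - f0)))

def weighted_body_velocity(leg_velocities, contact_forces):
    """Probability-weighted average of the per-leg velocity observations."""
    v = np.asarray(leg_velocities)                       # shape (n_legs, 3)
    w = np.array([contact_prob(f) for f in contact_forces])
    if w.sum() < 1e-9:
        return None                                      # no leg reliably in contact
    return (w[:, None] * v).sum(axis=0) / w.sum()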


S204, a variance corresponding to the average velocity observation in step S203 is calculated, a corresponding Kalman gain is calculated from the variance, and the state vector predicted in step S201 is updated by using the Kalman gain; an updated state vector is obtained; wherein a velocity vector in the updated state vector is the velocity observation required by the upper-layer filter; specifically:


the variance $P_k^v$ corresponding to the average velocity observation in step S203 is calculated as:







$$P_k^{v} = P_0^{v} + \left[\frac{1}{2}\left(\Lambda(\sigma_x^2,\,\sigma_y^2,\,\sigma_z^2) + I_3\,\frac{\Delta f}{\alpha}\right)\right]^{2}$$






where $P_0^v$ is the initial variance at time 0, $\alpha$ is a normalization coefficient determined empirically, $I_3$ is the identity matrix, $\Delta f$ represents the average per-leg difference of the contact force per unit time, i.e., the discontinuity of the force caused by the impact when touching the ground, and $c_k$ denotes the total number of legs touching the ground at $t_k$; then:







$$\Delta f = \frac{1}{c_k}\sum_{\ell\in C}\left|\,f_{\ell_k} - f_{\ell_{k-1}}\right|$$








$\Lambda(\sigma_x^2,\,\sigma_y^2,\,\sigma_z^2)$ represents the variance of the velocity corresponding to each leg:









$$\Lambda(\sigma_x^2,\,\sigma_y^2,\,\sigma_z^2) = \frac{1}{c_k}\sum_{\ell\in C}\left(\tilde{v}_k - \tilde{v}_k^{\,\ell}\right)\left(\tilde{v}_k - \tilde{v}_k^{\,\ell}\right)^{T}$$








the corresponding Kalman gain Kk is obtained as follows by combining the above formulas:







$$K_k = P_k H^{T}\left(H P_k H^{T} + P_k^{v}\right)^{-1}$$







where $H$ represents the Jacobian matrix corresponding to the velocity observation; because the velocity observation here is $z_k = \tilde{v}_k$, the corresponding Jacobian matrix is simply the selection matrix of the linear velocity.


Then the corresponding updated state vector $x_k$ is obtained by updating the prediction with the measurement and the previously calculated Kalman gain:







$$x_k = \bar{x}_k + K_k\left(z_k - H\,\bar{x}_k\right)$$






where $x_k$ contains the body velocity $v_k$ at the corresponding time $t_k$; the velocity vector in the updated state vector is the velocity observation required by the upper-layer filter.
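A sketch of this bottom-layer update step under the equations above; the flat error-state layout and the velocity slice are illustrative assumptions, not the disclosure's concrete implementation:

```python
import numpy as np

def leg_filter_update(x_pred, P_pred, v_obs, P_v, vel_slice=slice(6, 9)):
    """Kalman update of the predicted state with the averaged leg velocity observation."""
    dim = P_pred.shape[0]
    H = np.zeros((3, dim))
    H[:, vel_slice] = np.eye(3)              # z_k = v_k, so H selects the velocity block
    S = H @ P_pred @ H.T + P_v               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain K_k
    x_upd = x_pred + K @ (v_obs - H @ x_pred)
    P_upd = (np.eye(dim) - K @ H) @ P_pred
    return x_upd, P_upd                      # x_upd[vel_slice] feeds the upper-layer filter
```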


S3, the velocity observation obtained in step S2 is input into the upper-layer filter, and the observation information of the main body IMU, the camera and the legged odometer acquired in S1 is fused by the upper-layer filter, wherein a state vector of the upper-layer filter is firstly defined, the state vector is predicted by the motion model of the main body IMU to obtain a prior of the state vector, and then the prior of the state vector is updated by using binocular vision observation to obtain a vision-main body IMU fusion state estimated value; finally, the vision-main body IMU fusion state estimated value is updated by using the velocity observation of the legged odometer to obtain an estimated value of the state vector of the upper-layer filter, thereby completing information fusion and outputting the odometry estimation result of the robot, specifically:


S301, firstly, the state vector of the upper-layer filter is defined; the state vector $x_k^{*}$ of the upper-layer filter at time $t_k$ is defined as below:







$$x_k^{*} = \begin{bmatrix} x_{I,k}^{T} & x_{cl,k}^{T} \end{bmatrix}^{T}$$





where $x_{I,k}$ represents the state vector of the main body IMU at time $t_k$; $x_{cl,k}$ is the historical state window saved at time $t_k$, with the specific form:








$$x_{I,k} = \begin{bmatrix} {}^{I_k}_{G}\bar{q}^{\,T} & {}^{G}p_{I_k}^{T} & {}^{I_k}v^{T} & b_{g,k}^{T} & b_{a,k}^{T} \end{bmatrix}^{T}$$
$$x_{cl,k} = \begin{bmatrix} {}^{I_k}_{G}\bar{q}^{\,T} & {}^{G}p_{I_k}^{T} & {}^{G}v_{I_k}^{T} & \cdots & {}^{I_{k-M+1}}_{G}\bar{q}^{\,T} & {}^{G}p_{I_{k-M+1}}^{T} & {}^{G}v_{I_{k-M+1}}^{T} \end{bmatrix}$$





where ${}^{I_k}_{G}\bar{q}$ represents the rotation quaternion from the world coordinate system to the main body IMU coordinate system at time $t_k$; ${}^{G}p_{I_k}$ represents the position of the main body IMU coordinate system in the world coordinate system at time $t_k$; ${}^{I_k}v$ represents the velocity of the robot in the main body IMU coordinate system at time $t_k$; $b_{g,k}$ is the bias of the rotational velocity measured by the main body IMU; $b_{a,k}$ is the bias of the acceleration measured by the main body IMU; and $\{{}^{I_{k-i}}_{G}\bar{q},\ {}^{G}p_{I_{k-i}},\ {}^{G}v_{I_{k-i}}\}\ (i=0,\ldots,M-1)$ are the poses and velocities at times $t_{k-i}$ cloned and saved in the historical state window.


S302, the state vector of the upper-layer filter is predicted by using the motion model of the main body IMU, and the prior and variance of the state vector of the upper-layer filter are obtained, specifically:


S3021, the rotational velocity and linear acceleration are acquired by the main body IMU, and the observation model of the main body IMU is constructed; the main body IMU includes a triaxial gyroscope and a triaxial accelerometer, and in the IMU coordinate system, the observation model of the main body IMU at time $t_k$ is expressed as:







$$\omega_{m,k} = {}^{I_k}\omega + b_{g,k} + n_{g,k}$$
$$a_{m,k} = {}^{I_k}_{G}R\left({}^{G}a_k + {}^{G}g\right) + b_{a,k} + n_{a,k}$$






where $\omega_{m,k}$ is the angular velocity measured by the IMU at time $t_k$; $a_{m,k}$ is the linear acceleration measured by the main body IMU at time $t_k$; ${}^{I_k}\omega$ is the true value of the angular velocity of the main body IMU at time $t_k$; ${}^{G}a_k$ is the true value of the linear acceleration of the main body IMU at time $t_k$, expressed in the world coordinate system; ${}^{G}g$ is the acceleration of gravity expressed in the world coordinate system; in the present disclosure, the biases of the acceleration and angular velocity of the IMU are modeled as random walks in the classical way, so $n_{a,k}$ and $n_{g,k}$ are zero-mean Gaussian white noises.


S3022, the motion model of the main body IMU is constructed:











$${}^{I}_{G}\dot{\bar{q}}(t) = \tfrac{1}{2}\,\Omega\!\left({}^{I}\omega(t)\right){}^{I}_{G}\bar{q}(t), \qquad {}^{G}\dot{p}_{I}(t) = {}^{G}v_{I}(t), \qquad {}^{G}\dot{v}_{I}(t) = {}^{G}a_{I}(t), \qquad \dot{b}_{g}(t) = n_{\omega g}(t), \qquad \dot{b}_{a}(t) = n_{\omega a}(t)$$







where ${}^{I}\omega = [\omega_1\ \omega_2\ \omega_3]^{T}$ is the angular velocity measured by the main body IMU; ${}^{G}a_{I}$ is the linear acceleration measured by the IMU, expressed in the world coordinate system; in the present disclosure, the biases of the acceleration and angular velocity of the IMU are modeled as random walks in the classical way, so $n_{\omega g}$ and $n_{\omega a}$ are both zero-mean Gaussian white noises; finally








$$\Omega(\omega) = \begin{bmatrix} -\lfloor\omega\rfloor_{\times} & \omega \\ -\omega^{T} & 0 \end{bmatrix}$$

where $\lfloor\omega\rfloor_{\times}$ represents the skew-symmetric (antisymmetric) matrix of $\omega$.


S3023, the prior of the state vector is calculated by using the observation model of the main body IMU obtained in step S3021 and the motion model of the main body IMU constructed in step S3022:


assuming that the IMU measurement values in the time interval [tk, tk+1] are numerically integrated, the pose change of the robot during this time interval can be obtained, and then the prior pose of the state vector of the robot at the time tk+1 can be obtained:









$$\hat{x}_{I}(t_{k+1}) = f\!\left(\hat{x}_{I}(t_k),\; u_m(t_k{:}\,t_{k+1})\right)$$





where $u_m(t_k{:}\,t_{k+1})$ represents all measured values of the IMU in the time interval $[t_k, t_{k+1}]$, and $\hat{x}_I(t_k)$ and $\hat{x}_I(t_{k+1})$ are the state estimates at $t_k$ and $t_{k+1}$ respectively; the function $f(\cdot)$ performs discrete-time numerical integration of the IMU measurements and superimposes the result on the current state vector to obtain the predicted value of the state vector at the next moment.


S3024, the error state vector is defined and the covariance matrix of the state vector is calculated, specifically:


in order to combine the prior of the state vector with visual observation, it is necessary to calculate the covariance matrix of the prior state vector, therefore, at the same time of state prediction, it is also necessary to fuse the covariance matrix of the last moment and the IMU measurement noise, and the error state vector is defined:





$$\tilde{x}_k^{*} = \begin{bmatrix} {}^{I_k}_{G}\tilde{\theta}^{T} & {}^{G}\tilde{p}_{I_k}^{T} & {}^{I_k}\tilde{v}^{T} & \tilde{b}_{g,k}^{T} & \tilde{b}_{a,k}^{T} & \tilde{x}_{cl}^{T} \end{bmatrix}^{T}$$


where, except for the error state of the attitude angle ${}^{I_k}_{G}\tilde{\theta}$, the error state corresponding to any other state vector $x$ is defined as $\tilde{x} = x - \hat{x}$, where $\hat{x}$ is the estimated value of the state vector $x$; the attitude error vector is defined by using








$$\delta\bar{q} = \bar{q}\otimes\hat{\bar{q}}^{-1} \approx \begin{bmatrix} \tfrac{1}{2}\tilde{\theta}^{T} & 1 \end{bmatrix}^{T};$$




the motion model of the main body IMU is linearized at a current state estimated value:










$$\dot{\tilde{x}}(t) = F_c(t)\,\tilde{x}(t) + G_c(t)\,n(t)$$







where $n(t) = \begin{bmatrix} n_g^{T} & n_{\omega g}^{T} & n_a^{T} & n_{\omega a}^{T} \end{bmatrix}^{T}$ is the system noise vector, and its covariance matrix is $Q_c = \mathbb{E}\!\left[n(t)\,n(t)^{T}\right]$; $F_c$ is the error state transition matrix; $G_c$ is the input noise matrix; the IMU dynamic model can be transformed from a nonlinear time-invariant model to a linear time-varying model by linearization; $\Phi(t_{k+1}, t_k)$ is defined as the state transition matrix of the linearized model and is used to propagate the covariance matrix of the state from time $t_k$ to time $t_{k+1}$, and $\Phi(t_{k+1}, t_k)$ can be solved from the following formula:








$$\dot{\Phi}(t, t_k) = F_c(t)\,\Phi(t, t_k)$$






where the initial condition of the above formula is $\Phi(t_k, t_k) = I_{15+6M}$, and $M$ is the number of historical states cloned and saved in the historical state window; the standard EKF framework is used to calculate the covariance matrix of the prior pose:







$$P_{k+1|k} = \Phi(t_{k+1}, t_k)\,P_{k|k}\,\Phi^{T}(t_{k+1}, t_k) + Q_{d,k}$$







where $Q_{d,k}$ is the noise covariance matrix of the discrete-time system:







$$Q_{d,k} = \int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau)\,G_c(\tau)\,Q_c\,G_c^{T}(\tau)\,\Phi^{T}(t_{k+1}, \tau)\,d\tau.$$







After each state prediction, the MSC-KF clones the pose components of the current state and saves them in the historical pose window $x_{cl}$ for subsequent observation updates.
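A sketch of the covariance propagation and stochastic cloning described above, under an assumed MSC-KF layout of a 15-dimensional IMU error state followed by the clone block; `Phi_imu` and `Qd_imu` would come from the linearization over $[t_k, t_{k+1}]$, and the slicing is an illustrative assumption:

```python
import numpy as np

def propagate_covariance(P, Phi_imu, Qd_imu, imu_dim=15):
    """P_{k+1|k} = Phi P Phi^T + Qd, with Phi acting as identity on the clone block."""
    dim = P.shape[0]
    Phi = np.eye(dim)
    Phi[:imu_dim, :imu_dim] = Phi_imu
    Qd = np.zeros((dim, dim))
    Qd[:imu_dim, :imu_dim] = Qd_imu
    return Phi @ P @ Phi.T + Qd

def clone_state(P, clone_rows):
    """Augment P with a copy of the current pose/velocity rows (stochastic cloning)."""
    J = np.vstack([np.eye(P.shape[0]), np.zeros((len(clone_rows), P.shape[0]))])
    for i, r in enumerate(clone_rows):
        J[P.shape[0] + i, r] = 1.0           # the new clone error equals the current error
    return J @ P @ J.T
```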


S303, the prior of the state vector is updated using binocular vision observation to obtain the vision-main body IMU fusion state estimated value, specifically including the following sub-steps:


S3031, a visual observation residual is constructed; a Jacobian matrix of visual observation is derived;


according to an embodiment of the present disclosure, visual observation based on feature points is used; firstly, feature extraction is carried out on each image and feature matching is carried out among multiple frames; the feature points observed by multiple frames of images are triangulated; finally, the triangulated feature points are re-projected onto the image plane, and the error between the re-projected feature points and the actual observation in each frame is the observation residual.


The MSC-KF algorithm saves the poses corresponding to the past M image observations in the state vector $x_{cl,k}$ at time $t_k$, and saves the feature points at the same time. Assume that the camera observes a feature point whose coordinate in the camera frame is ${}^{C_j}p_f = [x_j\ y_j\ z_j]^{T}$ at time $t_j\ (j=k, k-1, \ldots, k-M+1)$. The relationship between the three-dimensional coordinate of this point and its two-dimensional pixel coordinate on the image plane (assuming that the intrinsic and extrinsic parameters of the camera have been calibrated in advance, so this refers to the normalized camera observation), that is, the visual observation model, is:










$$z_{n,j} = \frac{1}{z_j}\begin{bmatrix} x_j \\ y_j \end{bmatrix} + n_{z,j}$$
$${}^{C_j}p_f = {}^{C}_{I}R\;{}^{I_j}_{G}R\left({}^{G}p_f - {}^{G}p_{I,j}\right) + {}^{C}p_{I}$$











where ${}^{C_j}p_f = [x_j\ y_j\ z_j]^{T}$ is the three-dimensional coordinate of the feature point in the camera coordinate system; ${}^{G}p_f$ is the three-dimensional coordinate of the observed feature point in the world coordinate system; $n_{z,j}$ is zero-mean Gaussian white noise; and $\{{}^{C}_{I}R,\ {}^{C}p_{I}\}$ are the rotation matrix and relative position between the camera and the IMU, calibrated off-line as extrinsic parameters. As shown in the above state definition, unlike the traditional EKF-SLAM algorithm, the MSC-KF algorithm does not keep the position of the feature point in the world coordinate system in the state; instead, it triangulates each commonly observed feature point using the historical images in the sliding window to obtain the estimate ${}^{G}\hat{p}_f$ of the feature point position. ${}^{G}p_{I,j}$ and ${}^{I_j}_{G}R$ in the above formula are kept in the historical state $x_{cl,k}$ of the sliding window, and ${}^{C}_{I}R$ and ${}^{C}p_{I}$ are extrinsic parameters calibrated in advance; therefore, after obtaining ${}^{G}\hat{p}_f$ by triangulation, all the variables needed to calculate the observation estimate $\hat{z}_{n,j}$ are available. For each feature point, $\hat{z}_{n,j}$ can be calculated from the triangulation and the observation model, the actual observation of the feature point is recorded as $z^{m}_{n,j}$, and the observation residual of the visual feature is $\tilde{z}_{n,j} = z^{m}_{n,j} - \hat{z}_{n,j}$.
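A hedged sketch of this visual observation model: it projects a triangulated feature into one historical camera pose and forms the residual against the measured normalized image coordinates; the parameter names are assumptions for illustration:

```python
import numpy as np

def reprojection_residual(p_f_G, R_GtoI, p_I_G, R_ItoC, p_I_C, z_meas):
    """z_tilde = z_meas - hat(z): residual of one feature in one historical camera frame."""
    p_f_C = R_ItoC @ (R_GtoI @ (p_f_G - p_I_G)) + p_I_C   # {C_j}p_f from the model above
    z_hat = p_f_C[:2] / p_f_C[2]                          # normalized pinhole projection
    return z_meas - z_hat
```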


As can be seen from the expression, the two-dimensional coordinates of the feature points on the image are nonlinear functions of the state vector and the global coordinates of the feature points, and the observation model can be linearized by taking the estimate $\hat{x}_{k+1|k}$ of the current state and the estimate ${}^{G}\hat{p}_f$ of the feature point position obtained by triangulation as linearization points:








$$\tilde{z}_{n,j} = H_{x,j}\,\tilde{x}_{k+1|k} + H_{f,j}\,{}^{G}\tilde{p}_f + n_{z,j}$$







where $\tilde{z}_{n,j}$ is the observation residual, and $H_{x,j}$ and $H_{f,j}$ are the Jacobian matrices of the observation model with respect to the prior value of the state vector and the estimated position of the feature point, respectively. Because this feature point is observed in many states of the history window, there are many such equations, and arranging them into matrix form gives:







$$\begin{bmatrix} \tilde{z}_{n,k+1} \\ \vdots \\ \tilde{z}_{n,k-m} \end{bmatrix} = \begin{bmatrix} H_{x,k+1} \\ \vdots \\ H_{x,k-m} \end{bmatrix}\tilde{x}_{k+1|k} + \begin{bmatrix} H_{f,k+1} \\ \vdots \\ H_{f,k-m} \end{bmatrix}{}^{G}\tilde{p}_f + \begin{bmatrix} n_{z,k+1} \\ \vdots \\ n_{z,k-m} \end{bmatrix}$$
$$\Rightarrow\quad \tilde{z} = H_x\,\tilde{x}_{k+1|k} + H_f\,{}^{G}\tilde{p}_f + n_z$$
$$N^{T}H_f = 0 \;\Rightarrow\; N^{T}\tilde{z} = N^{T}H_x\,\tilde{x}_{k+1|k} + N^{T}n_z$$









where, in the last step, the left null space mapping is used: both sides of the equation are multiplied by the left null space of the $H_f$ matrix, thus making full use of the visual observation while avoiding adding the feature point position to the state vector. By variable replacement, letting $\tilde{z}_k = N^{T}\tilde{z}$, $H_{x,k} = N^{T}H_x$ and $n_{z,k} = N^{T}n_z$, the system can be transformed into the form $\tilde{z}_k = H_{x,k}\,\tilde{x}_{k+1|k} + n_{z,k}$, where $\tilde{z}_k$ is the final observation residual and $H_{x,k}$ is the final visual observation Jacobian matrix.
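A sketch of the left null-space projection just described; computing the null-space basis with a full SVD is one simple (assumed) way to realize it:

```python
import numpy as np

def nullspace_project(r, H_x, H_f):
    """Eliminate the feature-position Jacobian: return N^T r and N^T H_x with N^T H_f = 0."""
    U, s, _ = np.linalg.svd(H_f, full_matrices=True)
    rank = int((s > 1e-10).sum())
    N = U[:, rank:]                 # columns span the left null space of H_f
    return N.T @ r, N.T @ H_x       # reduced residual and state Jacobian
```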


S3032, the prior of the state vector obtained in step S3023, the covariance matrix of the state vector obtained in step S3024, and the visual observation residual and the Jacobian matrix of visual observation obtained in step S3031 are fused by using the MSC-KF algorithm to obtain the vision-main body IMU fusion state estimated value.


S304, the state of the vision-main body IMU fusion state estimated value obtained in step S303 is updated using the velocity observation of the legged odometer in step S2 to obtain the estimated value of the state vector, thereby completing the information fusion and outputting the odometry estimation result of the robot.


S3041, a velocity observation model is calculated and a velocity observation residual is constructed; specifically:


assuming that the extrinsic parameters between the legged odometer and the main body IMU are $\{{}^{O}_{I}\bar{q},\ {}^{I}p_{O}\}$, the velocity observation model is:













$${}^{O}v_{m,k} = {}^{O}_{I}R\;{}^{I}_{G}R\left({}^{G}v_{I} + {}^{I}_{G}R^{T}\,\lfloor{}^{I}\omega\rfloor_{\times}\,{}^{I}p_{O}\right) + n_v$$
$${}^{O}\omega_{m,k} = {}^{O}_{I}R\;{}^{I}\omega + n_\omega$$









where ${}^{O}v_{m,k}$ and ${}^{O}\omega_{m,k}$ are the body velocity and rotational velocity expressed in the legged odometer coordinate system $\{O\}$, ${}^{O}_{I}R$ represents the rotation matrix from the main body IMU coordinate system $\{I\}$ to $\{O\}$, ${}^{G}v_{I}$ is the velocity of $\{I\}$ expressed in the $\{G\}$ coordinate system, ${}^{I}_{G}R$ is the rotation matrix from the world coordinate system $\{G\}$ to $\{I\}$, ${}^{I}\omega$ is the body angular velocity in the coordinate system $\{I\}$, $\lfloor{}^{I}\omega\rfloor_{\times}$ is the skew-symmetric matrix corresponding to the vector ${}^{I}\omega$, ${}^{I}p_{O}$ is the position of $\{O\}$ in $\{I\}$, and $n_v$ and $n_\omega$ are the noises of the velocity and rotational velocity observations, which all adopt the zero-mean Gaussian white noise model.


In order to update the pose in the state vector by using the velocity observation of the legged odometer, it is necessary to align it in time with the pose to be updated in the state vector. In order to improve the operating efficiency of the EKF algorithm, the present disclosure linearly interpolates over the pose sliding window $x_{cl}$ in the state vector; considering that the rotation matrix in the robot pose lies on the three-dimensional orthonormal manifold, the linear interpolation is likewise completed on the corresponding manifold. Suppose the velocity observation of the legged odometer is acquired at time $t_k$, and the timestamps of the two adjacent poses in the pose sliding window are $t_{k1}$ and $t_{k2}$ respectively, with $t_{k1} < t_k < t_{k2}$; then the linear manifold interpolation of the pose is:










$$\lambda_k = \frac{t_k - t_{k1}}{t_{k2} - t_{k1}}$$
$${}^{I_k}_{G}R = \mathrm{Exp}\!\left(\lambda_k\cdot\mathrm{Log}\!\left({}^{I_{k2}}_{G}R\;{}^{I_{k1}}_{G}R^{T}\right)\right){}^{I_{k1}}_{G}R$$
$${}^{G}v_{I_k} = (1-\lambda_k)\,{}^{G}v_{I_{k1}} + \lambda_k\,{}^{G}v_{I_{k2}}$$













where the timestamps $t_{k1}$ and $t_{k2}$ are obtained from the image observations; $\lambda_k$ is the interpolation coefficient; $t_k$ is the observation timestamp of the legged odometer; ${}^{I_{ki}}_{G}R\ (i=1,2)$ is the rotation matrix from the world coordinate system to the IMU coordinate system at time $t_{ki}$, and ${}^{G}v_{I_{ki}}\ (i=1,2)$ is the velocity of the IMU in the world coordinate system at time $t_{ki}$. Therefore, after the observation of the legged odometer is obtained at time $t_k$, $\{{}^{I_k}_{G}R,\ {}^{G}v_{I_k}\}$ can be calculated from the observation information and the existing information in the state vector, and then the estimate ${}^{O}\hat{v}$ of the velocity ${}^{O}v_m$ can be calculated by using the following formula:









$${}^{O}\hat{v} = {}^{O}_{I}R\;{}^{I_k}_{G}R\left({}^{G}v_{I} + {}^{I_k}_{G}R^{T}\,\lfloor{}^{I}\hat{\omega}\rfloor_{\times}\,{}^{I}p_{O}\right)$$






where $\lfloor{}^{I}\hat{\omega}\rfloor_{\times}$ is the skew-symmetric matrix of ${}^{I}\hat{\omega}$, and ${}^{I}\hat{\omega}$ is the estimate of the angular velocity ${}^{I}\omega$ in the IMU coordinate system. ${}^{I}p_{O}$ and ${}^{O}_{I}R$ are the relative pose transformation between the IMU and the legged odometer coordinate system calibrated in advance. From this, the observation residual of the linear velocity of the legged odometer is obtained as ${}^{O}\tilde{v} = {}^{O}v_m - {}^{O}\hat{v}$, where ${}^{O}v_m$ is the actually observed linear velocity of the legged odometer.


The estimated value of the angular velocity can be calculated by IMU observation and the bias in the current state:



$${}^{O}\hat{\omega} = {}^{O}_{I}R\;{}^{I}\hat{\omega}$$


where ${}^{I}\hat{\omega}$ is the estimate of the angular velocity ${}^{I}\omega$ in the IMU coordinate system, and ${}^{O}_{I}R$ is the relative rotation between the IMU and the legged odometer coordinate system calibrated in advance. From this, the observation residual of the rotational velocity of the legged odometer is defined as ${}^{O}\tilde{\omega} = {}^{O}\omega_m - {}^{O}\hat{\omega}$, where ${}^{O}\omega_m$ is the actually observed rotational velocity of the legged odometer.


According to the fusion updating framework of EKF, in order to update the state vector by the linear velocity and rotational velocity observation of the legged odometer, it is only necessary to linearize the observation model and arrange it into the following form:







$$\tilde{z} = H_x\,\tilde{x}^{*} + n$$






where $\tilde{z}$ is the observation residual, $\tilde{x}^{*}$ is the error state vector, $H_x$ is the Jacobian matrix of the observation model with respect to the state vector, and $n$ is the observation noise. The observation residual $\tilde{z}$ has been defined and modeled in the first half of this section; the error state vector $\tilde{x}^{*}$ was defined in the motion model and state prediction part of the IMU; the observation noise $n$ is an externally set parameter.
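A hedged sketch of the time alignment and residual construction just described: it interpolates the two bracketing window states to the leg-odometer timestamp (on SO(3) for the rotation) and then forms the linear and angular velocity residuals. SciPy's Rotation class is used for the Exp/Log maps, and the function and parameter names are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def interpolate_state(t_k, t_k1, t_k2, R1_GtoI, v1_G, R2_GtoI, v2_G):
    """On-manifold interpolation of the bracketing clone states at the odometer time t_k."""
    lam = (t_k - t_k1) / (t_k2 - t_k1)
    delta = Rotation.from_matrix(R2_GtoI @ R1_GtoI.T).as_rotvec()    # Log(R_k2 R_k1^T)
    R_k = Rotation.from_rotvec(lam * delta).as_matrix() @ R1_GtoI    # Exp(lam * ...) R_k1
    v_k = (1.0 - lam) * v1_G + lam * v2_G
    return R_k, v_k

def leg_odometer_residual(v_meas_O, w_meas_O, R_k_GtoI, v_k_G, w_hat_I, R_ItoO, p_O_I):
    """Residuals of the leg-odometer linear and angular velocity observations."""
    v_hat_O = R_ItoO @ (R_k_GtoI @ v_k_G + skew(w_hat_I) @ p_O_I)    # predicted {O}-frame velocity
    w_hat_O = R_ItoO @ w_hat_I
    return v_meas_O - v_hat_O, w_meas_O - w_hat_O
```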


S3042, the Jacobian matrix of velocity observation is deduced; specifically:


the Jacobian matrix of the observation model with respect to the components to be updated in the state vector can be calculated by using the chain rule. The observation vector $\Gamma_k$ of the legged odometer at time $t_k$ is expressed by the following formula:







$$\Gamma_k = \begin{bmatrix} {}^{O}v_k \\ {}^{O}\omega_k \end{bmatrix}$$





then the Jacobian matrix expression for updating the poses at two adjacent times tk1 and tk2 with tk is:














$$\frac{\partial\tilde{\Gamma}_k}{\partial\tilde{x}_{k1}} = \frac{\partial\tilde{\Gamma}_k}{\partial\tilde{x}_k}\,\frac{\partial\tilde{x}_k}{\partial\tilde{x}_{k1}}, \qquad \frac{\partial\tilde{\Gamma}_k}{\partial\tilde{x}_{k2}} = \frac{\partial\tilde{\Gamma}_k}{\partial\tilde{x}_k}\,\frac{\partial\tilde{x}_k}{\partial\tilde{x}_{k2}}$$












To calculate the expression $\dfrac{\partial\tilde{\Gamma}_k}{\partial\tilde{x}_{ki}},\ i=1,2$, it is necessary to differentiate the legged odometer observation equation with respect to each state quantity in the state vector, and most of the resulting derivatives are zero. Because the definition of the Jacobian matrix is clear, the full matrices are not listed here; only the non-zero terms after differentiation are given:










$$\frac{\partial\,{}^{O}\tilde{v}(t_k)}{\partial\,{}^{I_k}_{G}\tilde{\theta}} = {}^{O}_{I}R\,\left\lfloor {}^{I_k}_{G}R\;{}^{G}v_{I_k} \right\rfloor_{\times}$$
$$\frac{\partial\,{}^{O}\tilde{v}(t_k)}{\partial\,\tilde{b}_{g,k}} = {}^{O}_{I}R\,\left\lfloor {}^{I}p_{O} \right\rfloor_{\times}$$
$$\frac{\partial\,{}^{O}\tilde{v}(t_k)}{\partial\,{}^{G}\tilde{v}_{I_k}} = {}^{O}_{I}R\;{}^{I_k}_{G}R$$
$$\frac{\partial\,{}^{O}\tilde{\omega}(t_k)}{\partial\,\tilde{b}_{g,k}} = -\,{}^{O}_{I}R$$






where ${}^{O}_{I}R$ and ${}^{I}p_{O}$ are the extrinsic parameters between the IMU and the legged odometer coordinate system calibrated in advance; ${}^{I_k}_{G}R$ and ${}^{G}v_{I_k}$ are calculated by the linear interpolation algorithm of the timestamp alignment part; and $\lfloor x\rfloor_{\times}$ denotes the skew-symmetric matrix corresponding to any vector $x$. So far, the derivation of $\dfrac{\partial\tilde{\Gamma}(t_k)}{\partial\tilde{x}_k}$ has been completed.


Before calculating $\dfrac{\partial\tilde{x}_k}{\partial\tilde{x}_{k1}}$ and $\dfrac{\partial\tilde{x}_k}{\partial\tilde{x}_{k2}}$, two operation symbols need to be explained. The Lie algebra element corresponding to the rotation matrix $R\in SO(3)$ is denoted as $\omega\in\mathfrak{so}(3)$, where $\mathfrak{so}(3)$ represents the Lie algebra corresponding to $SO(3)$. The following equations are inverse operations of each other:






$$R = \mathrm{Exp}(\omega) = \exp\!\left(\lfloor\omega\rfloor_{\times}\right)$$
$$\omega = \mathrm{Log}(R) = \left\lfloor \log(R) \right\rfloor^{\vee}$$








where $\lfloor x\rfloor_{\times}$ is the anti-symmetric matrix corresponding to any vector $x$, and $\lfloor X\rfloor^{\vee}$ is the inverse operation that recovers the vector corresponding to the anti-symmetric matrix $X$; $\exp(\cdot)$ and $\log(\cdot)$ represent the matrix exponential and matrix logarithm operations respectively.
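A small numerical illustration of the hat/vee and Exp/Log operators defined here, implemented with Rodrigues' formula; this is a generic utility sketch, not code from the disclosure:

```python
import numpy as np

def hat(w):
    """Map a vector to its skew-symmetric (anti-symmetric) matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    """Inverse of hat: recover the vector from a skew-symmetric matrix."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def Exp(w):
    """Exp(w) = exp(hat(w)) via Rodrigues' formula."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3) + hat(w)
    a = hat(w / t)
    return np.eye(3) + np.sin(t) * a + (1.0 - np.cos(t)) * (a @ a)

def Log(R):
    """Log(R): rotation vector such that Exp(Log(R)) == R."""
    t = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if t < 1e-12:
        return vee(R - np.eye(3))
    return vee(t / (2.0 * np.sin(t)) * (R - R.T))
```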


For $\dfrac{\partial\tilde{x}_k}{\partial\tilde{x}_{k1}}$ and $\dfrac{\partial\tilde{x}_k}{\partial\tilde{x}_{k2}}$, only the non-zero items are listed:










$$\frac{\partial\,{}^{I_k}_{G}\tilde{\theta}}{\partial\,{}^{I_{k1}}_{G}\tilde{\theta}} = \mathrm{Exp}\!\left(\lambda_k\cdot{}^{k2}_{k1}\theta\right) - \lambda_k\,J_l\!\left(\lambda_k\cdot{}^{k2}_{k1}\theta\right)J_r^{-1}\!\left(\lambda_k\cdot{}^{k2}_{k1}\theta\right)$$
$$\frac{\partial\,{}^{I_k}_{G}\tilde{\theta}}{\partial\,{}^{I_{k2}}_{G}\tilde{\theta}} = \lambda_k\,J_l\!\left(\lambda_k\cdot{}^{k2}_{k1}\theta\right)J_l^{-1}\!\left({}^{k2}_{k1}\theta\right)$$
$$\frac{\partial\,{}^{G}\tilde{p}_{I_k}}{\partial\,{}^{G}\tilde{p}_{I_{k1}}} = (1-\lambda_k)\,I_3, \qquad \frac{\partial\,{}^{G}\tilde{p}_{I_k}}{\partial\,{}^{G}\tilde{p}_{I_{k2}}} = \lambda_k\,I_3$$






where ${}^{k2}_{k1}\theta = \mathrm{Log}\!\left({}^{I_{k2}}_{G}R\;{}^{I_{k1}}_{G}R^{T}\right)$, $J_l(\cdot)$ and $J_r(\cdot)$ are the left and right Jacobians of $SO(3)$, and $I_3$ is the three-dimensional identity matrix.


At this point, the derivation of the non-zero items in the Jacobian matrix has been completed. Together with the definition of the error state vector and the definition and derivation of the observation residual, the observation model of the legged odometer can be transformed into the form $\tilde{z} = H_x\,\tilde{x}^{*} + n$ and then updated by the EKF fusion framework.


S3043, the vision-main body IMU fusion state estimated value and covariance matrix obtained in step S303, and the velocity observation residual and the Jacobian matrix of velocity observation obtained in steps S3041 and S3042, are fused by the EKF algorithm to obtain the estimated value of the state vector of the upper-layer filter.


In this application, the term “controller” and/or “module” may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components (e.g., op amp circuit integrator as part of the heat flux data module) that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.


The term memory is a subset of the term computer-readable medium. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium may therefore be considered tangible and non-transitory. Non-limiting examples of a non-transitory, tangible computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc).


The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general-purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer.


To sum up, the present disclosure effectively overcomes various shortcomings in the prior art and has high industrial utilization value. The high-precision odometry estimation method based on a double-layer filtering framework can efficiently integrate various sensors such as the IMU, camera, legged motor feedback and foot end sensor, and effectively solves the problem that existing methods can hardly obtain high-precision pose estimation when the mobile platform moves rapidly and shakes violently. At the same time, an MSC-KF framework is selected as the main body, and multiple states are linearly interpolated to realize the time alignment of multiple sensors; by considering both fusion accuracy and algorithm efficiency, real-time relative pose estimation can be carried out on an onboard platform with limited computing resources.

Claims
  • 1. A high-precision odometry estimation method based on a double-layer filtering framework, comprising the following steps: S1, setting intrinsic parameters of a binocular camera and an inertial measurement unit (IMU), extrinsic parameters between a main body IMU and the binocular camera, extrinsic parameters between the main body IMU and a legged odometer, and extrinsic parameters between a legged auxiliary IMU and a legged joint motor; and acquiring feedback information of the legged joint motor, measurement information of a foot end force sensor, measurement information of the main body IMU and the legged auxiliary IMU, and binocular vision observation information; S2, fusing, by a bottom-layer legged filter, information of a rotation angle and a rotational velocity of the legged joint motor, the measurement information of the legged auxiliary IMU and the foot end force sensor obtained in step S1, comprising: defining a state vector of the bottom-layer legged filter, calculating an average velocity observation of the legs and a variance corresponding to the average velocity observation, calculating a Kalman gain corresponding to the variance, and updating the state vector of the bottom-layer legged filter according to the Kalman gain to obtain a velocity observation in a legged auxiliary IMU coordinate system; and S3, inputting the velocity observation in the legged auxiliary IMU coordinate system obtained in step S2 into an upper-layer filter and fusing, by the upper-layer filter, the observation information of the main body IMU, the camera and the legged odometer acquired in S1, comprising: defining a state vector of the upper-layer filter, predicting the state vector using a motion model of the main body IMU to obtain a prior of the state vector, updating, by binocular vision observation, the prior of the state vector to obtain a vision-main body IMU fusion state estimated value, and updating, by the velocity observation in the legged auxiliary IMU coordinate system, a state of the vision-main body IMU fusion state estimated value to obtain an estimated value of the state vector of the upper-layer filter and completing information fusion, and outputting an odometry estimation result of a robot.
  • 2. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 1, wherein the step S2 comprises the following sub-steps: S201, defining the state vector of the bottom-layer legged filter, and obtaining a predicted state vector by an observation model of the legged auxiliary IMU; S202, calculating a velocity of a touch point of a single leg in relation to a body coordinate system; S203, setting a possibility of each leg touching the ground as a weight corresponding to the obtained velocity using the velocity of the touch point of the leg in relation to the body coordinate system obtained in step S202, and calculating a body velocity at a corresponding moment, that is, a weighted average velocity value of a plurality of legs, to obtain the average velocity observation; and S204, calculating the variance corresponding to the average velocity observation obtained in step S203, calculating the Kalman gain corresponding to the average velocity from the variance, and updating the predicted state vector obtained in step S201 using the Kalman gain, to obtain an updated state vector; wherein a velocity vector in the updated state vector is the velocity observation required by the upper-layer filter.
  • 3. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 2, wherein the step S201 further comprises: defining the state vector of the bottom-layer legged filter as:
  • 4. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 2, wherein step S202 further comprises: defining a state of contact ∈{0,1} for each leg, where 1 means that the leg touches the ground at the time tk; obtaining the velocity of the touch point of the leg in relation to the body coordinate system at the time tk as follows:
  • 5. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 2, wherein step S203 further comprises: setting the possibility of each leg touching the ground as the weight corresponding to the obtained velocity using the velocity of the touch point of the leg in relation to the body coordinate system obtained in step S202, calculating the velocity of the body at the corresponding moment, that is, the weighted average velocity value of the plurality of legs, and obtaining the average velocity observation as follows:
  • 6. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 2, wherein step S204 further comprises: calculating the variance Pkv corresponding to the average velocity observation in S203 as follows:
  • 7. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 1, wherein step S3 further comprises the following sub-steps: S301, defining the state vector of the upper-layer filter; S302, predicting the state vector of the upper-layer filter using the motion model of the main body IMU to obtain the prior and variance of the state vector of the upper-layer filter; S303, updating the prior of the state vector of the upper-layer filter according to binocular vision observation, using a Multi-State Constraint Kalman Filter (MSC-KF) algorithm, to obtain the vision-main body IMU fusion state estimated value; and S304, updating the state of the vision-main body IMU fusion state estimated value obtained in step S303, using the velocity observation of the legged odometer in step S2, to obtain the estimated value of the state vector of the upper-layer filter, completing information fusion by using an Extended Kalman Filter algorithm, and outputting the odometry estimation result of the robot.
  • 8. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 7, wherein step S301 further comprises: defining the state vector of the upper-layer filter at the time tk:
  • 9. The high-precision odometry estimation method based on the double-layer filtering framework according to claim 7, wherein sub-step S302 further comprises the following sub-steps: S3021, acquiring the rotational velocity and linear acceleration by the main body IMU, and constructing an observation model of the main body IMU, wherein in the IMU coordinate system, the observation model of the main body IMU at the time tk is expressed as:
  • 10. The high-precision odometry estimation method based on a double-layer filtering framework according to claim 9, wherein sub-step S303 further comprises the following sub-steps: S3031, constructing a visual observation residual, and deriving a Jacobian matrix of visual observation; and S3032, fusing the prior of the state vector obtained in sub-step S3023, the covariance matrix of the state vector obtained in sub-step S3024, the visual observation residual obtained in sub-step S3031 and the Jacobian matrix of the visual observation by using the MSC-KF algorithm, and obtaining the vision-main body IMU fusion state estimated value.
Parent Case Info

The present application is a continuation of International Application No. PCT/CN2021/129572, filed on Nov. 9, 2021, the content of which is incorporated herein by reference in its entirety.

Continuations (1)
Number Date Country
Parent PCT/CN2021/129572 Nov 2021 WO
Child 18646793 US