COMPLEX NETWORK COGNITION-BASED FEDERATED REINFORCEMENT LEARNING END-TO-END AUTONOMOUS DRIVING CONTROL SYSTEM, METHOD, AND VEHICULAR DEVICE

Information

  • Patent Application
  • Publication Number
    20250128720
  • Date Filed
    August 23, 2023
  • Date Published
    April 24, 2025
Abstract
Provided are a federated reinforcement learning (FRL) end-to-end autonomous driving control system and method, as well as vehicular equipment, based on complex network cognition. An FRL algorithm framework is provided, designated as FLDPPO, for dense urban traffic. This framework combines rule-based complex network cognition with end-to-end FRL through the design of a loss function. FLDPPO employs a dynamic driving guidance system to assist agents in learning rules, thereby enabling them to navigate complex urban driving environments and dense traffic scenarios. Moreover, the provided framework utilizes a multi-agent FRL architecture, whereby models are trained through parameter aggregation to safeguard vehicle-side privacy, accelerate network convergence, reduce communication consumption, and achieve a balance between sampling efficiency and high robustness of the model.
Description
TECHNICAL FIELD

The present disclosure belongs to the field of transportation and automated driving, and relates to a complex network cognition-based federated reinforcement learning (FRL) end-to-end automated driving control system, method, and vehicle device under dense urban traffic.


BACKGROUND

Urban driving involves a wide range of interactions between driving subjects, traffic participants and road infrastructure, and is an open problem in transportation. It is difficult to model the environment and subject cognition during dynamic interactions that present high dimensionality and diversity. Although traditional rule-based approaches are good at handling simple driving scenarios, rule-based solutions are difficult to apply when facing complex traffic situations.


On the other hand, end-to-end approaches do not rely on manually formulated rules, but rather on data-driven training methods to obtain competitive results. Deep reinforcement learning (DRL) is a typical data-driven algorithm that generates samples by interacting with the environment. The generated samples are stored in a replay buffer and used for model training through mini-batch sampling. There are drawbacks to the application of DRL due to its characteristics. First, DRL requires a large number of samples, making it more difficult to converge when training large networks compared to supervised learning. If there are not enough samples, the robustness of the model will be limited. Second, using a replay buffer imposes limitations on the network inputs, such as the size of the image inputs. The computational load and communication consumption of DRL algorithms with image inputs will increase linearly with training. In addition, end-to-end DRL as a black-box algorithm makes it difficult to understand the motivation of an agent to make decisions and lacks interpretability. Due to these limitations, DRL is mainly applied to simpler driving scenarios in its early stages.


It is possible to deconstruct complex environments by dividing complex research scenarios into multiple single objectives. However, single-objective research approaches fall short when it comes to understanding scenarios where multiple elements interact. Despite the positive impact of deconstructing complex environments, simple splitting limits the development of automated driving towards complex environment applications.


SUMMARY

To solve the above technical problem, the present disclosure provides a complex network cognition-based FRL end-to-end autonomous driving algorithm framework, federated learning based distributed proximal policy optimization (FLDPPO), under dense urban traffic. FLDPPO realizes the combination of rule-based complex network cognition and end-to-end FRL by designing a loss function. FLDPPO uses dynamic driving suggestions to guide the agent as it learns the rules, enabling it to cope with complex urban driving environments and dense traffic scenarios. Moreover, the proposed framework uses a multi-agent FRL architecture to train the model through parameter aggregation, accelerates network convergence, reduces communication consumption, and achieves a balance between sampling efficiency and high robustness of the model while protecting the privacy of the vehicle side.


In the present disclosure, a technical solution for a complex network cognition-based FRL end-to-end autonomous driving control system includes five parts: a measurement encoder, an image encoder, a complex network cognition module, a reinforcement learning module, and a federated learning module.


The measurement encoder is configured to obtain the state quantities required by the complex network cognition module and the reinforcement learning module. The state quantities required by the complex network cognition module comprise the x-coordinate, y-coordinate, heading angle change and speed of the driving agent. These cognition state quantities are handed over to the complex network cognition module as input. The state quantities required by the reinforcement learning module include the steering wheel angle, throttle, brake, gear, lateral speed and longitudinal speed. These RL state quantities are given to the reinforcement learning module as part of the inputs after features are extracted by a two-layer fully connected network.


The image encoder is configured to obtain the image implicit state required by the reinforcement learning module. The image used is a 15-channel semantic bird's eye view (BEV), iRL∈[0,1]192*192*15, where 192 is the image size in pixels and the BEV resolution is 5 px/m. The 15 channels contain a drivable domain, a desired path, a road edge, 4 frames of other vehicles, 4 frames of pedestrians, and 4 frames of traffic signs. The desired path is calculated using the A* algorithm. Implicit features are extracted from the semantic BEV by multilayer convolutional layers and then passed to the reinforcement learning module as another part of the inputs.
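For concreteness, the following is a minimal sketch of the two encoders described above, assuming PyTorch as the implementation framework; the hidden width, channel widths, kernel sizes and strides are illustrative assumptions, since the present disclosure only fixes the input shapes, the two fully connected layers of the measurement encoder, the six convolutional layers of the image encoder and the ReLU activations.

```python
# Sketch of the measurement and image encoders; layer widths are assumptions.
import torch
import torch.nn as nn


class MeasurementEncoder(nn.Module):
    """Two-layer fully connected encoder for steering, throttle, brake, gear, lateral/longitudinal speed."""
    def __init__(self, in_dim: int = 6, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, m_rl: torch.Tensor) -> torch.Tensor:
        return self.net(m_rl)


class ImageEncoder(nn.Module):
    """Six-layer convolutional encoder for the 15-channel, 192x192 semantic BEV."""
    def __init__(self, in_ch: int = 15):
        super().__init__()
        chans = [in_ch, 32, 64, 128, 256, 256, 256]   # assumed channel widths
        layers = []
        for i in range(6):
            layers.append(nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, stride=2, padding=1))
            if i < 5:                                  # no activation after the last convolution
                layers.append(nn.ReLU())
        self.conv = nn.Sequential(*layers)

    def forward(self, i_rl: torch.Tensor) -> torch.Tensor:
        return self.conv(i_rl).flatten(start_dim=1)    # implicit feature vector


# Example: one BEV frame and one measurement vector.
feat_img = ImageEncoder()(torch.rand(1, 15, 192, 192))
feat_meas = MeasurementEncoder()(torch.rand(1, 6))
```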


The complex network cognition module is configured to model the driving situation of the driving subject, and to obtain the maximum risk value of the driving subject in the current driving situation according to the state quantity provided by the measurement encoder, and finally to output dynamic driving suggestions based on the risk value through an activation function.


The modeling process of the complex network cognition module constructs a dynamic complex network model with traffic participants and road infrastructure as nodes:







Gt=(P, E, W, Θ)t



Where, Gt denotes the dynamic complex network at the moment t, P={p1, p2, . . . , pN} is the set of nodes, and the number of nodes is N; E={e1,2, e1,3, . . . , ei,j} is the set of edges, and the number of edges is








N(N−1)/2,


and ei,j stands for the connectivity between nodes pi and pj; W={w1,2, w1,3, . . . , wi,j} is the set of weights of the edges, wi,j represents the coupling strength between nodes pi and pj; Θ is the active region of the nodes, representing the dynamic constraints on the nodes in the network. Model Θ as a smooth bounded surface:









FΘ(x, y, z)=0, s.t. (x, y)∈Ω





Where, Ω is the boundary of the slip surface. Consider a continuous time dynamic network with N nodes on Θ with a node state equation of the form:








Ẋi=AiXi+BiUi





Where, Xi∈Rm denotes the state vector of node pi, Rm denotes the vector space consisting of m-dimensional real numbers R, Ui∈Rq is the input vector, Rq denotes the vector space consisting of q-dimensional real numbers R, Ai denotes the dynamics matrix, Bi denotes the input matrix. Based on the node state equation, the output vector of node pi can be obtained:







Yi=fi(Xi)



Where, fi denotes the output function of the node. Then the weight function between nodes pi and pj is:







wi,j=F(Yi, Yj)



Where, F denotes the weight function between the nodes. A Gaussian function is used here to reveal the static properties between nodes:







Ssta=Ca·exp(−(x−x0)²/ax²−(y−y0)²/by²)




Where, Ssta denotes the static field strength, Ca denotes the field strength coefficient, x0 and y0 denote the coordinates of the risk center O(x0, y0), and ax and by denote the vehicle appearance coefficients, specifically the length and width, respectively. The safety field is characterized by shape anisotropy:






ε=(ax²−by²)/(ax²+by²)=(ϕ²−1)/(ϕ²+1)

ϕ=ax/by=lv/wv





Where, ϕ is the aspect ratio, lv denotes the vehicle length, and wv denotes the vehicle width. The safety field is delineated by a series of isofields, and the top view projection is the region covered by a series of ellipses, with the region covered by the smallest center ellipse being the core domain, the region between the smallest center ellipse and the second ellipse being the limited domain, and the region between the second ellipse and the largest ellipse being the extended domain. The size and shape of the region are determined by the isofield lines, are related to the vehicle shape and motion state and are described based on a Gaussian function. The direction of the safety field is aligned with the direction of vehicle motion.
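As an illustration of the static field defined above, the following is a minimal numeric sketch of Ssta and the shape anisotropy ε; the field strength coefficient Ca and the vehicle dimensions used in the example call are arbitrary illustrative values, not values fixed by the present disclosure.

```python
# Static Gaussian safety field and shape anisotropy, following the formulas above.
import math

def static_field_strength(x, y, x0, y0, a_x, b_y, C_a=1.0):
    """S_sta = Ca * exp(-(x - x0)^2 / ax^2 - (y - y0)^2 / by^2)."""
    return C_a * math.exp(-((x - x0) ** 2) / a_x ** 2 - ((y - y0) ** 2) / b_y ** 2)

def anisotropy(l_v, w_v):
    """epsilon = (ax^2 - by^2)/(ax^2 + by^2) = (phi^2 - 1)/(phi^2 + 1), with phi = lv/wv."""
    phi = l_v / w_v
    return (phi ** 2 - 1.0) / (phi ** 2 + 1.0)

# Field strength 3 m ahead of a 4.5 m x 1.8 m vehicle centred at the origin.
print(static_field_strength(3.0, 0.0, 0.0, 0.0, a_x=4.5, b_y=1.8))
print(anisotropy(4.5, 1.8))
```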


When the vehicle is in motion, the risk center O(x0, y0) of the safety field will be transferred to a new risk center O′(x′0, y′0):






x′0=x0+kv|v⃗|·cos β

y′0=y0+kv|v⃗|·sin β







Where, kv denotes the moderating factor with kv∈(−1,0)∪(0,1); the sign of kv is related to the direction of motion, and β denotes the angle between the transfer vector kv|v⃗| and the axes in the Cartesian coordinate system. A virtual vehicle, with length l′v and width w′v, is formed under the risk center transfer. The dynamic safety field:







Sdyn=Ca·exp(−(x−x′0)²/(a′x)²−(y−y′0)²/(b′y)²)





Where, Sdyn denotes the dynamic field strength and the new aspect ratio is denoted as ϕ′=a′x/b′y=l′v/w′v. As the motion state of the vehicle changes, the shape of the Gaussian safety field changes, thus changing the three fields covered by the safety field: the core region, the limited region and the extended region.
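The risk center transfer and the resulting dynamic field can be sketched as follows; the moderating factor kv and the virtual vehicle semi-axes a′x and b′y passed in are illustrative values, not values fixed by the present disclosure.

```python
# Risk-center transfer and dynamic field strength, following the formulas above.
import math

def transfer_risk_center(x0, y0, k_v, speed, beta):
    """O'(x'0, y'0): x'0 = x0 + kv*|v|*cos(beta), y'0 = y0 + kv*|v|*sin(beta)."""
    return x0 + k_v * speed * math.cos(beta), y0 + k_v * speed * math.sin(beta)

def dynamic_field_strength(x, y, x0p, y0p, a_xp, b_yp, C_a=1.0):
    """S_dyn = Ca * exp(-(x - x'0)^2 / (a'x)^2 - (y - y'0)^2 / (b'y)^2)."""
    return C_a * math.exp(-((x - x0p) ** 2) / a_xp ** 2 - ((y - y0p) ** 2) / b_yp ** 2)

# A vehicle moving at 5 m/s along the x axis shifts its risk center forward.
x0p, y0p = transfer_risk_center(0.0, 0.0, k_v=0.3, speed=5.0, beta=0.0)
print(dynamic_field_strength(3.0, 0.0, x0p, y0p, a_xp=5.0, b_yp=2.0))
```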


The present disclosure categorizes risk perception into three layers on a planar scale based on different levels of human driving reaction time: the first cognitive domain, the second cognitive domain and the extra-domain space.


The first cognitive domain:







ax≤sth1

sth1=tc1·ve




The second cognitive domain:







sth1<ax≤sth2

sth2=tc2·ve




The extra-domain space:







sth2<ax




Where, sth1 denotes the first cognitive domain threshold, obtained from the first human driving reaction time tc1 and the maximum approach speed ve of the other nodes relative to the self-vehicle. sth2 denotes the second cognitive domain threshold, obtained from the second human driving reaction time tc2 and the maximum approach speed ve of the other nodes relative to the self-vehicle.
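A minimal sketch of this three-layer partition is given below; the reaction times tc1 and tc2 are example values, since the present disclosure does not fix them at this point.

```python
# Classify the safety-field semi-axis a_x against the reaction-time thresholds.
def cognitive_domain(a_x, v_e, t_c1=0.5, t_c2=1.5):
    s_th1 = t_c1 * v_e          # first cognitive domain threshold, sth1 = tc1 * ve
    s_th2 = t_c2 * v_e          # second cognitive domain threshold, sth2 = tc2 * ve
    if a_x <= s_th1:
        return "first cognitive domain"
    if a_x <= s_th2:
        return "second cognitive domain"
    return "extra-domain space"

print(cognitive_domain(a_x=4.0, v_e=6.0))   # -> "second cognitive domain"
```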


Establish a risk perception function between nodes under a variable safety field model:







Risk(pi, pj)=|S⃗i,j|·exp(−kc·|v⃗j|·cos θi,j)




Where, |S⃗i,j| denotes the field strength of node pi at node pj, kc denotes the risk-adjustment cognitive coefficient, |v⃗j| denotes the scalar velocity of node pj, and θi,j denotes the angle (positive in the clockwise direction) between the velocity vector v⃗j of node pj and the field strength vector S⃗i,j. The risk value Risk(pi, pj), obtained through the risk perception function, indicates the coupling strength between the nodes, and the higher the risk value, the higher the coupling strength, implying the higher correlation between the nodes.
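The risk perception function can be sketched directly from the formula above; the risk-adjustment cognitive coefficient kc is an example value.

```python
# Inter-node risk perception under the variable safety field.
import math

def risk(field_strength_ij, speed_j, theta_ij, k_c=0.1):
    """Risk(pi, pj) = |S_ij| * exp(-kc * |v_j| * cos(theta_ij))."""
    return abs(field_strength_ij) * math.exp(-k_c * speed_j * math.cos(theta_ij))

# A node approaching head-on (theta = pi) raises the perceived risk.
print(risk(field_strength_ij=0.8, speed_j=5.0, theta_ij=math.pi))
```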


The activation function is configured to map the risk value, Activate(Risk) represents different activation functions according to different driving suggestions, and the mapped risk value will be used as the basis for guiding the output strategy of the reinforcement learning module:








Activatego(Risk)=4(1+exp(−300/Risk))−1

Activatestop(Risk)=4(1+exp(−0.2*Risk))−1




Where, Activatego(Risk) denotes the activation function when the suggestion is forward, Activatestop(Risk) denotes the activation function when the suggestion is stop, and Risk denotes the current risk value of the self-vehicle. The dynamic risk suggestion Brisk:








Brisk=B(Activatego(Risk), βgo), go

Brisk=B(αstop, Activatestop(Risk)), stop


Where, B denotes the beta distribution with αstop=βgo=1.
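The following sketch maps a risk value to a dynamic driving suggestion. The grouping 4/(1+exp(·))−1 used below is one possible reading of the published activation expressions and is flagged as an assumption, as is the use of torch.distributions for the beta distribution.

```python
# Risk-to-suggestion mapping; the activation grouping is an assumed reading.
import math
import torch
from torch.distributions import Beta

def activate_go(risk_value):
    # Assumed grouping of: Activate_go(Risk) = 4(1 + exp(-300/Risk)) - 1
    return 4.0 / (1.0 + math.exp(-300.0 / risk_value)) - 1.0

def activate_stop(risk_value):
    # Assumed grouping of: Activate_stop(Risk) = 4(1 + exp(-0.2*Risk)) - 1
    return 4.0 / (1.0 + math.exp(-0.2 * risk_value)) - 1.0

def dynamic_suggestion(risk_value, suggestion, alpha_stop=1.0, beta_go=1.0):
    """B_risk: B(Activate_go(Risk), beta_go) for 'go'; B(alpha_stop, Activate_stop(Risk)) for 'stop'."""
    if suggestion == "go":
        return Beta(torch.tensor(activate_go(risk_value)), torch.tensor(beta_go))
    return Beta(torch.tensor(alpha_stop), torch.tensor(activate_stop(risk_value)))

# A high risk value under a 'stop' suggestion concentrates the suggestion near 0 (brake).
print(dynamic_suggestion(risk_value=20.0, suggestion="stop").mean)
```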


The reinforcement learning module is configured to integrate the state quantities output from the measurement encoder and the image encoder, output the corresponding strategies according to the integrated network inputs, and interact with the environment to generate experience stored in the local replay buffer in the federated learning module. When the number of samples reaches a certain threshold, a batch of samples is taken from the local replay buffer for training, and finally the parameters of the trained neural network are uploaded to the federated learning module.
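The interaction-and-training cycle described in this paragraph can be summarized by the following high-level sketch; the env, policy, buffer and fl_client objects and their methods are hypothetical placeholders rather than APIs defined by the present disclosure or by CARLA, and using the 12288-step rollout size (given later among the training parameters) as the buffer threshold is an assumption.

```python
# High-level interaction/training cycle; all objects are hypothetical placeholders.
ROLLOUT_THRESHOLD = 12288   # assumed to match the total step size given later

def collect_and_train(env, policy, buffer, fl_client):
    state = env.reset()
    while True:
        action, suggestion = policy.act(state)                  # beta-policy action + dynamic driving suggestion
        next_state, reward, done = env.step(action)
        buffer.add(state, action, reward, next_state, suggestion)
        state = env.reset() if done else next_state
        if len(buffer) >= ROLLOUT_THRESHOLD:                    # enough samples: mini-batch updates
            for batch in buffer.iterate_minibatches(batch_size=256):
                policy.update(batch)                            # loss = L_ppo + L_exp + L_risk
            buffer.clear()
            fl_client.upload(policy.parameters())               # send local weights for aggregation
            policy.load_parameters(fl_client.download())        # receive aggregated global weights
```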


The interaction environment is the CARLA simulator, which realizes vehicle control by inputting the control quantities of steering, throttle and brake, where steering ∈[−1,1], throttle ∈[0,1] and brake ∈[0,1]. Based on CARLA's control method, the reinforcement learning action space ∈[−1,1]2 is categorized into steering wheel angle and throttle-brake. When outputting the throttle-brake, [−1,0] denotes the brake and [0,1] denotes the throttle. The present disclosure outputs the two parameters of the beta distribution by reinforcement learning, and then obtains the policy actions by sampling:







Beta=B(α, β), α, β>0



In contrast to Gaussian distributions, which are commonly used for model-free reinforcement learning, beta distributions are bounded and do not require mandatory constraints.
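A minimal sketch of sampling a bounded beta policy and mapping it to CARLA-style controls is given below; torch.distributions is assumed as the sampling backend, and the split of the second action dimension into throttle and brake follows the description above.

```python
# Sample a two-dimensional beta policy and map it to steering / throttle / brake.
import torch
from torch.distributions import Beta

def sample_controls(alpha: torch.Tensor, beta: torch.Tensor):
    """alpha, beta: shape (2,) positive parameters for [steering, throttle-brake]."""
    a = Beta(alpha, beta).sample()                 # samples lie in [0, 1]
    a = 2.0 * a - 1.0                              # rescale to the action space [-1, 1]^2
    steering = float(a[0])
    throttle = float(a[1]) if a[1] > 0 else 0.0    # [0, 1] part of the second axis
    brake = float(-a[1]) if a[1] < 0 else 0.0      # [-1, 0] part of the second axis
    return steering, throttle, brake

print(sample_controls(torch.tensor([2.0, 3.0]), torch.tensor([2.0, 1.0])))
```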


The interaction process produces an experience, described by a tuple, containing a previous moment state quantity, an action, a reward function, a next moment state quantity, and a dynamic driving suggestion. Calculate the weighted reward function with the mapped risk value as a weight for termination state-related reward:








r=rspeed+rposition+raction+Activate(Risk)*rterminal

rspeed=1−|v−vdesire|/vmax

rposition=−0.5*Δd−Δθ




Where, rspeed denotes the speed-related reward function, rposition denotes the position-related reward function, raction denotes the action-related reward function, rterminal denotes the termination state-related reward function, v denotes the vehicle speed, vdesire denotes the desired speed, and vmax=6 m/s denotes the maximum speed. Δd denotes the vehicle lateral distance from the desired path, and Δθ denotes the angle between the vehicle traveling direction and the tangent line of the desired path. Table 1 describes the values of raction and rterminal in detail, where Δsteering denotes the amount of steering wheel angle change in two frames.













TABLE 1

Reward       Condition             Value
raction      Δsteering ≥ 0.01      −1 − v
rterminal    Run red light         −1 − v
             Run stop sign         −1 − v
             Collision             −1 − v
             Route deviation       −1
             Blocked               −1
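Following the reward formulas and the Table 1 values above, a minimal sketch of the combined reward computation might look as follows; the terminal_event flag is a hypothetical input used only for illustration.

```python
# Weighted reward: speed, position, action and risk-weighted termination terms.
def compute_reward(v, v_desire, delta_d, delta_theta, delta_steering,
                   activate_risk, terminal_event=None, v_max=6.0):
    r_speed = 1.0 - abs(v - v_desire) / v_max
    r_position = -0.5 * delta_d - delta_theta
    r_action = (-1.0 - v) if delta_steering >= 0.01 else 0.0          # Table 1
    terminal_values = {"red_light": -1.0 - v, "stop_sign": -1.0 - v,  # Table 1
                       "collision": -1.0 - v, "route_deviation": -1.0, "blocked": -1.0}
    r_terminal = terminal_values.get(terminal_event, 0.0)
    return r_speed + r_position + r_action + activate_risk * r_terminal

print(compute_reward(v=4.0, v_desire=5.0, delta_d=0.3, delta_theta=0.05,
                     delta_steering=0.02, activate_risk=1.5, terminal_event="collision"))
```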










The training process of the reinforcement learning module performs parameter updating through the following loss function:







θk+1=arg maxθ Eτ~πθk[ℒppo+ℒexp+ℒrisk]

ℒexp=−λexp*H(πθ(·|iRL, mRL))

H(πθ)=−KL(πθ ∥ 𝒰(−1,1))

ℒrisk=λrisk*𝟙{T−Nz+1, . . . , T}(k)*KL(πθ(·|iRL,k, mRL,k) ∥ Brisk)





Where, ℒppo denotes the clipped policy gradient loss with advantages estimated using generalized advantage estimation. ℒexp denotes the maximum entropy loss, H(πθ(·|iRL, mRL)) denotes the entropy of the policy πθ under the image input iRL and the measurement input mRL, and 𝒰(−1,1) denotes the uniform distribution. ℒexp encourages the agent to explore by pushing the action distribution toward the uniform distribution, and λexp denotes the weight of the maximum entropy loss. ℒrisk denotes the loss based on the dynamic risk suggestions; the indicator 𝟙{T−Nz+1, . . . , T}(k) restricts the KL-divergence between the strategy output by the driving subject and the dynamic driving suggestions to the Nz=100 steps before the termination state, realizing the guidance of the agent, and λrisk denotes the weight of the dynamic suggestions loss.
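A minimal sketch of the dynamic-suggestion term ℒrisk is given below, assuming PyTorch beta distributions for both the policy output and the driving suggestion; the PPO and entropy terms are omitted, and λrisk=0.05 is taken from the training parameters given later.

```python
# Risk-guidance loss: KL between the policy and the dynamic driving suggestion.
import torch
from torch.distributions import Beta, kl_divergence

def risk_guidance_loss(policy_ab, suggestion_ab, step_idx, terminal_step,
                       n_z=100, lambda_risk=0.05):
    """lambda_risk * 1_{T-Nz+1,...,T}(k) * KL(pi_theta || B_risk)."""
    within_window = step_idx > terminal_step - n_z     # indicator over the last Nz steps
    if not within_window:
        return torch.tensor(0.0)
    policy = Beta(policy_ab[0], policy_ab[1])
    suggestion = Beta(suggestion_ab[0], suggestion_ab[1])
    return lambda_risk * kl_divergence(policy, suggestion)

print(risk_guidance_loss(torch.tensor([2.0, 1.0]), torch.tensor([1.0, 3.0]),
                         step_idx=980, terminal_step=1000))
```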


The federated learning module is configured to receive the neural network parameters uploaded by the reinforcement learning module of each agent, and to aggregate the global parameters based on the plurality of neural network parameters, and finally to send the global parameters to each agent until the network converges. The global parameter aggregation is performed by the following equation:







ϕ*m=(1/N)Σn ϕmn





Where, ϕ*m denotes the global parameters at time m, N denotes the number of agents, and ϕmn denotes the neural network parameters at time m of the nth agent.
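The parameter-averaged aggregation can be sketched as follows, assuming each agent exposes its parameters as a PyTorch state_dict.

```python
# Federated aggregation: phi*_m = (1/N) * sum_n phi^n_m over the agents' parameters.
import torch

def aggregate(state_dicts):
    """Average N local parameter sets into one set of global parameters."""
    n = len(state_dicts)
    return {key: sum(sd[key] for sd in state_dicts) / n for key in state_dicts[0]}

# Example with two agents and a single weight tensor each.
agents = [{"w": torch.tensor([1.0, 2.0])}, {"w": torch.tensor([3.0, 4.0])}]
print(aggregate(agents))   # {'w': tensor([2., 3.])}
```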


Overall, the FLDPPO algorithm realizes the combination of rule-based complex network cognition and end-to-end FRL by designing the loss function. Moreover, using the multi-agent FRL architecture, the model is trained by parameter aggregation. The multi-agent FRL architecture accelerates the network convergence and reduces the communication consumption on the basis of protecting the privacy of the vehicle, and realizes the balance between high robustness and sample efficiency of the model.


The technical solution of the complex network cognition based FRL end-to-end automatic driving control method of the present disclosure includes the following steps:


Step 1: Build an urban dense traffic simulation environment in the CARLA simulator. The simulation environment contains the driving subject, the traffic participants, and the transportation infrastructure.


The driving subject is a plurality of agents, each modeled as a Markov decision process and using a reinforcement learning module for steering wheel, throttle and brake control. The Markov decision process is described by the tuple (S, A, P, R, γ). S denotes the state set, corresponding to the state quantities acquired by the measurement encoder and the image encoder in the present disclosure, and contains the steering wheel angle, the throttle, the brake, the gear, the lateral and longitudinal speeds, and the 15-channel semantic BEV; A denotes the action set, corresponding to the steering wheel, the throttle, and the brake control quantities of the driving subject in the present disclosure; P denotes the state transfer equation p: S×A→P(S), where for each state-action pair (s, α)∈S×A there is a probability distribution p(·|s, α) of entering a new state after adopting an action α in state s; R denotes the reward function R: S×S×A→R, where R(st+1, st, αt) denotes the reward obtained after entering a new state st+1 from the original state st, and in the present disclosure the goodness of performing the action is defined by the reward function; γ denotes the discount factor, γ∈[0, 1], which is configured to compute the cumulative reward η(πθ)=Σi=0Tγiri, where T denotes the current moment, γi denotes the discount factor of moment i, and ri denotes the immediate reward of moment i. The solution to the Markov decision process is to find a strategy π: S→A such that the cumulative reward is maximized, π*:=argmaxθ η(πθ). In the present disclosure, the reinforcement learning module integrates the implicit state quantities output by the measurement encoder and the image encoder and outputs the corresponding optimal control policy;


Step 2: Build a complex network cognition module to model the driving situation of the driving subject, establish a complex network model, and output dynamic driving suggestions based on state quantities provided by the measurement encoder through an activation function. The complex network model represents the dynamic relationship between nodes within the field range through a variable Gaussian safety field based on risk center transfer. The nodes contain driving subjects, traffic participants and transportation infrastructure.


Step 3: Construct an end-to-end neural network, comprising 2 fully connected layers used by the measurement encoder, 6 convolutional layers used by the image encoder and 6 fully connected layers used by the reinforcement learning module. The neural network has two output heads, action head and value head. The action head outputs two parameters of the beta distribution and the value head outputs the value of the action.


Step 4: The driving subjects interact with the CARLA simulation environment and the experiences are stored in their respective local replay buffers. When the number of samples reaches a certain threshold, samples are drawn from the respective local replay buffer in mini-batches, and then the neural network parameters are updated according to the designed loss function.


Step 5: The neural network parameters corresponding to each driving subject are uploaded to the federated learning module. The federated learning module aggregates the global parameters based on the multiple neural network parameters according to the aggregation interval and sends global parameters to each agent until the network converges.


Preferably, in step 1, the traffic participants comprise other vehicles and pedestrians, the number of other vehicles is 100 and the number of pedestrians is 250, and both are controlled using CARLA's roaming model.


Preferably, in step 1, the transportation infrastructure contains traffic lights and traffic signs (stops), represented in the image encoder input using a 4-frame semantic BEV.


Preferably, in step 2, the complex network cognition module takes different activation functions according to different driving suggestions:








Activatego(Risk)=4(1+exp(−300/Risk))−1

Activatestop(Risk)=4(1+exp(−0.2*Risk))−1




Preferably, in step 2, the dynamic driving suggestions are represented using a beta distribution:








Brisk=B(Activatego(Risk), βgo), go

Brisk=B(αstop, Activatestop(Risk)), stop



Preferably, in step 3, the measurement encoder uses the ReLU activation function for its 2 fully connected layers; the image encoder uses the ReLU activation function for the first 5 convolutional layers, while the last convolutional layer, which spreads the state volume, does not use an activation function; in the reinforcement learning module, the last layer of the action head uses the Softplus activation function and the last layer of the value head does not use an activation function. The other fully connected layers use the ReLU activation function.


Preferably, in step 4, the parameters used in the training process are as follows: the learning rate is 0.00001; the total step size is 12288; the mini-batch size is 256; the loss function weights λppo, λexp, and λrisk are 0.5, 0.01, and 0.05, respectively; the range of the PPO clip is 0.2; and the parameter γ for the generalized advantage estimation is 0.99 and λ is 0.9.
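For reference, the training parameters listed above (together with the aggregation interval and guidance window given in the related preferences) can be collected into a single configuration dictionary; this is only a restatement of the stated values.

```python
# Training configuration restating the preferred values from the disclosure.
TRAIN_CONFIG = {
    "learning_rate": 1e-5,
    "total_steps": 12288,
    "mini_batch_size": 256,
    "lambda_ppo": 0.5,
    "lambda_exp": 0.01,
    "lambda_risk": 0.05,
    "ppo_clip_range": 0.2,
    "gae_gamma": 0.99,
    "gae_lambda": 0.9,
    "aggregation_interval": 256,   # federated aggregation preference in step 5
    "n_z_guidance_steps": 100,     # steps before termination used by the risk loss
}
```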


Preferably, in step 4, the loss function used for training uses the loss ℒrisk based on the dynamic risk suggestions. The guidance of the agent is realized by calculating the KL-divergence between the strategy output by the driving subject and the dynamic driving suggestions over the Nz=100 steps before the termination state.


Preferably, in step 5, the federated learning module is a multi-agent framework that uses a local replay buffer architecture between agents.


Preferably, in step 5, the aggregation process uses a parameter-averaged aggregation method with an aggregation interval of 256.


The present disclosure also proposes a vehicular device, the vehicular device being capable of executing the contents of the complex network cognition-based FRL end-to-end autonomous driving control system, or complex network cognition-based FRL end-to-end autonomous driving control method.


The present disclosure has advantages as follows:

(1) The present disclosure proposes FLDPPO, a complex network cognition-based FRL end-to-end automated driving algorithm framework, to realize the combination of rule-based complex network cognition and end-to-end FRL by designing a loss function. FLDPPO uses dynamic driving suggestions to guide agents to learn the rules, enabling agents to cope with complex urban driving environments and dense traffic scenarios.


(2) The proposed framework of the present disclosure uses a multi-agent FRL architecture to train models by the method of parameter aggregation. The multi-agent FRL architecture accelerates network convergence, reduces communication consumption, and achieves a balance between sampling efficiency and high robustness of the model while protecting the privacy of the vehicle side.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is the complex network cognition based FRL framework.



FIGS. 2A-2D show the schematic of risk perception based on dynamic safety field.



FIG. 3 is the schematic of the reward function.



FIG. 4 is the schematic of the neural network.



FIG. 5 is the diagram of the federated learning architecture.



FIG. 6 is the diagram of the FLDPPO framework.





DETAILED DESCRIPTION OF THE EMBODIMENTS

The technical solution of the present disclosure is described in detail below in conjunction with the drawings, but is not limited to the contents of the present disclosure.


The present disclosure provides a complex network cognition-based FRL end-to-end algorithmic framework that enables autonomous driving under dense urban traffic, specifically including the following steps:


(1) The framework of the FRL algorithm based on complex network cognition is built in the CARLA simulator, as shown in FIG. 1 and FIG. 6. The framework includes a measurement encoder, an image encoder, a complex network cognition module, a reinforcement learning module, and a federated learning module. The measurement encoder is configured to obtain the x-coordinate, y-coordinate, heading angle change and speed of the driving agent. These cognition state quantities are handed over to the complex network cognition module as input. The measurement encoder is also used to obtain the steering wheel angle, throttle, brake, gear, lateral speed and longitudinal speed. These RL state quantities are given to the reinforcement learning module as part of the inputs after features are extracted by a two-layer fully connected network. The image encoder uses a 15-channel semantic BEV, iRL∈[0,1]192*192*15, where 192 is the image size in pixels and the BEV resolution is 5 px/m. The 15 channels contain a drivable domain, a desired path, a road edge, 4 frames of other vehicles, 4 frames of pedestrians, and 4 frames of traffic signs. The desired path is calculated using the A* algorithm. Implicit features are extracted from the semantic BEV by multilayer convolutional layers and then passed to the reinforcement learning module as another part of the inputs.


(2) Model the driving situation of the driving subject, as shown in FIGS. 2A-2D, a dynamic complex network model is constructed with traffic participants and road infrastructure as nodes:







Gt=(P, E, W, Θ)t



Where, Gt denotes the dynamic complex network at the moment t, P={p1, p2, . . . , pN} is the set of nodes, and the number of nodes is N; E={e1,2, e1,3, . . . , ei,j} is the set of edges, and the number of edges is








N(N−1)/2,


and ei,j stands for the connectivity between nodes pi and pj; W={w1,2, w1,3, . . . , wi,j} is the set of weights of the edges, wi,j represents the coupling strength between nodes pi and pj; Θ is the active region of the nodes, representing the dynamic constraints on the nodes in the network. Model Θ as a smooth bounded surface:









FΘ(x, y, z)=0, s.t. (x, y)∈Ω




Where, Ω is the boundary of the slip surface. Consider a continuous time dynamic network with N nodes on Θ with a node state equation of the form:








Ẋi=AiXi+BiUi





Where, Xi∈Rm denotes the state vector of node pi, Rm denotes the vector space consisting of m-dimensional real numbers R, Ui∈Rq is the input vector, Rq denotes the vector space consisting of q-dimensional real numbers R, Ai denotes the dynamics matrix, Bi denotes the input matrix. Based on the node state equation, the output vector of node pi can be obtained:







Yi=fi(Xi)



Where, fi denotes the output function of the node. Then the weight function between nodes pi and pj is:







wi,j=F(Yi, Yj)



Where, F denotes the weight function between the nodes. The present disclosure uses a Gaussian function to reveal the static properties between nodes:







Ssta=Ca·exp(−(x−x0)²/ax²−(y−y0)²/by²)




Where, Ssta denotes the static field strength, Ca denotes the field strength coefficient, x0 and y0 denote the coordinates of the risk center O(x0, y0), and ax and by denote the vehicle appearance coefficients, respectively. The safety field is characterized by shape anisotropy:






ε=(ax²−by²)/(ax²+by²)=(ϕ²−1)/(ϕ²+1)

ϕ=ax/by=lv/wv





Where, ϕ is the aspect ratio, lv denotes the vehicle length, and wv denotes the vehicle width. The safety field is delineated by a series of isofields, and the top view projection is the region covered by a series of ellipses as shown in FIGS. 2A-2D, with the region covered by the smallest center ellipse being the core domain, the region between the smallest center ellipse and the second ellipse being the limited domain, and the region between the second ellipse and the largest ellipse being the extended domain. The size and shape of the region are determined by the isofield lines, are related to the vehicle shape and motion state and are described based on a Gaussian function. The direction of the safety field is aligned with the direction of vehicle motion.


When the vehicle is in motion, the risk center O(x0, y0) of the safety field will be transferred to a new risk center O′(x′0, y′0):






x′0=x0+kv|v⃗|·cos β

y′0=y0+kv|v⃗|·sin β







Where, kv denotes the moderating factor with kv∈(−1,0)∪(0,1); the sign of kv is related to the direction of motion, and β denotes the angle between the transfer vector kv|v⃗| and the axes in the Cartesian coordinate system. A virtual vehicle, with length l′v and width w′v, is formed under the risk center transfer. The dynamic safety field:







Sdyn=Ca·exp(−(x−x′0)²/(a′x)²−(y−y′0)²/(b′y)²)




Where, Sdyn denotes the dynamic field strength and the new aspect ratio is denoted as ϕ′=a′x/b′y=l′v/w′v. As the motion state of the vehicle changes, the shape of the Gaussian safety field changes, thus changing the three fields covered by the safety field: the core region, the limited region and the extended region.


The present disclosure categorizes risk perception into three layers on a planar scale based on different levels of human driving reaction time: the first cognitive domain, the second cognitive domain and the extra-domain space.


The first cognitive domain:







ax≤sth1

sth1=tc1·ve




The second cognitive domain:







sth1<ax≤sth2

sth2=tc2·ve




The extra-domain space:







sth2<ax




Where, sth1 denotes the first cognitive domain threshold, obtained from the first human driving reaction time tc1 and the maximum approach speed ve of the other nodes relative to the self-vehicle. sth2 denotes the second cognitive domain threshold, obtained from the second human driving reaction time tc2 and the maximum approach speed ve of the other nodes relative to the self-vehicle.


Establish a risk perception function between nodes under a variable safety field model:







Risk(pi, pj)=|S⃗i,j|·exp(−kc·|v⃗j|·cos θi,j)




Where, |S⃗i,j| denotes the field strength of node pi at node pj, kc denotes the risk-adjustment cognitive coefficient, |v⃗j| denotes the scalar velocity of node pj, and θi,j denotes the angle (positive in the clockwise direction) between the velocity vector v⃗j of node pj and the field strength vector S⃗i,j. The risk value Risk(pi, pj), obtained through the risk perception function, indicates the coupling strength between the nodes, and the higher the risk value, the higher the coupling strength, implying the higher correlation between the nodes.


The activation function is configured to map the risk value, Activate(Risk) represents different activation functions according to different driving suggestions, and the mapped risk value will be used as the basis for guiding the output strategy of the reinforcement learning module:








Activatego(Risk)=4(1+exp(−300/Risk))−1

Activatestop(Risk)=4(1+exp(−0.2*Risk))−1




Where, Activatego(Risk) denotes the activation function when the suggestion is forward, Activatestop(Risk) denotes the activation function when the suggestion is stop, and Risk denotes the current risk value of the self-vehicle. The dynamic risk suggestion Brisk:








Brisk=B(Activatego(Risk), βgo), go

Brisk=B(αstop, Activatestop(Risk)), stop


Where, B denotes the beta distribution with αstop=βgo=1.


(3) Construct the reinforcement learning model of the driving subject. Based on CARLA's control method, the reinforcement learning action space ∈[−1,1]2 is categorized into steering wheel angle and throttle-brake. When outputting the throttle-brake, [−1,0] denotes the brake and [0,1] denotes the throttle. The present disclosure outputs the two parameters of the beta distribution by reinforcement learning, and then obtains the policy actions by sampling:







Beta=B(α, β), α, β>0



In contrast to Gaussian distributions, which are commonly used for model-free reinforcement learning, beta distributions are bounded and do not require mandatory constraints.


The reward function setting, as shown in FIG. 3, considers four aspects: velocity, coordinates, action and termination state. Calculate the weighted reward function with the mapped risk value as a weight for the termination state-related reward:







r=rspeed+rposition+raction+Activate(Risk)*rterminal

rspeed=1−|v−vdesire|/vmax

rposition=−0.5*Δd−Δθ




Where, rspeed denotes the speed-related reward function, rposition denotes the position-related reward function, raction denotes the action-related reward function, rterminal denotes the termination state-related reward function, v denotes the vehicle speed, vdesire denotes the desired speed, and vmax=6 m/s denotes the maximum speed. Δd denotes the vehicle lateral distance from the desired path, and Δθ denotes the angle between the vehicle traveling direction and the tangent line of the desired path. Table 1 describes the values of raction and rterminal in detail, where Δsteering denotes the amount of steering wheel angle change in two frames.


Construct an end-to-end neural network, as shown in FIG. 4, comprising 2 fully connected layers used by the measurement encoder, 6 convolutional layers used by the image encoder and 6 fully connected layers used by the reinforcement learning module. The neural network has two output heads, action head and value head. The action head outputs two parameters of the beta distribution and the value head outputs the value of the action.


(4) The driving subjects interact with the CARLA simulation environment and the experiences are stored in their respective local replay buffers. As shown in FIG. 5, when the number of samples reaches a certain threshold, samples are drawn from the respective local replay buffer in mini-batches, and then the neural network parameters are updated according to the designed loss function:








θk+1=arg maxθ Eτ~πθk[ℒppo+ℒexp+ℒrisk]

ℒexp=−λexp*H(πθ(·|iRL, mRL))

H(πθ)=−KL(πθ ∥ 𝒰(−1,1))

ℒrisk=λrisk*𝟙{T−Nz+1, . . . , T}(k)*KL(πθ(·|iRL,k, mRL,k) ∥ Brisk)





Where, ℒppo denotes the clipped policy gradient loss with advantages estimated using generalized advantage estimation. ℒexp denotes the maximum entropy loss, H(πθ(·|iRL, mRL)) denotes the entropy of the policy πθ under the image input iRL and the measurement input mRL, and 𝒰(−1,1) denotes the uniform distribution. ℒexp encourages the agent to explore by pushing the action distribution toward the uniform distribution, and λexp denotes the weight of the maximum entropy loss. ℒrisk denotes the loss based on the dynamic risk suggestions; the indicator 𝟙{T−Nz+1, . . . , T}(k) restricts the KL-divergence between the strategy output by the driving subject and the dynamic driving suggestions to the Nz=100 steps before the termination state, realizing the guidance of the agent, and λrisk denotes the weight of the dynamic suggestions loss.


(5) The federated learning module is configured to receive the neural network parameters uploaded by the reinforcement learning module of each agent, and to aggregate the global parameters based on the plurality of neural network parameters, and finally to send the global parameters to each agent until the network converges. The global parameter aggregation is performed by the following equation:







ϕ*m=(1/N)Σn ϕmn





Where, ϕ*m denotes the global parameters at time m, N denotes the number of agents, and ϕmn denotes the neural network parameters at time m of the nth agent.


Overall, the present disclosure proposes FLDPPO, a complex network cognition based FRL algorithmic framework for urban autonomous driving in dense traffic scenarios. The FLDPPO algorithm realizes the combination of rule-based complex network cognition and end-to-end FRL by designing the loss function. The dynamic driving suggestions guide the agents to learn the rules, enabling them to cope with complex urban driving environments and dense traffic scenarios. The present disclosure introduces federated learning to train models by the method of parameter aggregation. The federated learning architecture accelerates network convergence and reduces communication consumption. The multi-agent architecture in the algorithmic framework not only improves the sample efficiency, but the trained models also exhibit high robustness and generalization.


The present disclosure also proposes a vehicular device, the vehicular device being capable of executing the contents of the complex network cognition-based FRL end-to-end autonomous driving control system, or complex network cognition-based FRL end-to-end autonomous driving control method.


The series of detailed descriptions above is only a specific description of feasible implementations of the present disclosure and is not intended to limit the scope of protection of the present disclosure. Any equivalent mode or modification that does not depart from the technology of the present disclosure is included in the scope of protection of the present disclosure.

Claims
  • 1. A complex network cognition-based federated reinforcement learning (FRL) end-to-end autonomous driving control system, comprising a measurement encoder, an image encoder, a complex network cognition module, a reinforcement learning module, and a federated learning module, wherein: the measurement encoder is configured to obtain state quantities required by the complex network cognition module and the reinforcement learning module, the state quantities required by the complex network cognition module comprise a x-coordinate, a y-coordinate, a heading angle change and a speed of a driving agent, the state quantities are handed over to the complex network cognition module as an input, the state quantities required by the reinforcement learning module comprise a steering wheel angle, a throttle, a brake, a gear, a lateral speed and a longitudinal speed, the state quantities are given to the reinforcement learning module as part of the inputs after extracting features from a two-layer fully connected network;the image encoder is configured to obtain an amount of image implicit state required by the reinforcement learning module, an image used is a 15-channel semantic bird's eye view (BEV), iRL∈[0,1]192*192*15, 192 is in pixels and the BEV used is 5px/m, 15 channels contain a drivable domain, a desired path, a road edge, 4 frames of other vehicles, 4 frames of pedestrians, and 4 frames of traffic signs, wherein the desired path is calculated using a A* algorithm, the semantic BEV is extracted by multilayer convolutional layers to extract implicit features and then passed to the reinforcement learning module as another part of the inputs;the complex network cognition module is configured to model a driving situation of a driving subject, and to obtain a maximum risk value of the driving subject in a current driving situation according to the state quantity provided by the measurement encoder, and finally to output dynamic driving suggestions based on the risk value through an activation function;the reinforcement learning module is configured to integrate the state quantities output from the measurement encoder and the image encoder, output corresponding strategies according to integrated network inputs, and interact with an environment to generate experience samples stored in a local replay buffer in the federated learning module, when the number of experience samples reaches a certain threshold, a batch of sample is taken from the local replay buffer for training, and finally trained neural network parameters are uploaded to the federated learning module; andthe federated learning module is configured to receive the neural network parameters uploaded by the reinforcement learning module of the driving agents, and to aggregate a set of global parameters based on the plurality of neural network parameters, and finally to send the global parameters to the driving agents until a neural network converges, a global parameter aggregation is performed by a following equation:
  • 2. The complex network cognition-based FRL end-to-end autonomous driving control system according to claim 1, wherein a modeling process of the complex network cognition module constructs a dynamic complex network model with a traffic participant and a road infrastructure as nodes:
  • 3. The complex network cognition-based FRL end-to-end autonomous driving control system according to claim 2, wherein a Gaussian function is used in the complex network to reveal a static property between nodes:
  • 4. The complex network cognition-based FRL end-to-end autonomous driving control system according to claim 3, wherein a series of isofield lines are used to delineate the safety field, a top view projection of the series of isofield lines is a region covered by a series of ellipses, with a region covered by a smallest center ellipse being a core domain, a region between the smallest center ellipse and a second ellipse being a limited domain, and a region between the second ellipse and a largest ellipse being an extended domain, size and shape of the region are determined by the isofield lines, are related to a vehicle shape and a motion state, and are described based on the Gaussian function, a direction of the safety field is aligned with a direction of vehicle motion, when a vehicle is in motion, the risk center O(x0, y0) of the safety field will be transferred to a new risk center O′(x′0, y′0):
  • 5. The complex network cognition-based FRL end-to-end autonomous driving control system according to claim 4, wherein a risk perception is categorized into three types on a planar scale based on different levels of human driving reaction time: a first cognitive domain, a second cognitive domain and an extra-domain space, wherein: the first cognitive domain:
  • 6. (canceled)
  • 7. The complex network cognition-based FRL end-to-end autonomous driving control system according to claim 1, wherein a CARLA simulator is used as an interaction environment, the CARLA simulator realizes vehicle control by inputting control quantities of a steering, a throttle and a brake, wherein steering ∈[−1,1], throttle ∈[0,1] and brake ∈[0,1], based on a CARLA simulator's control method, a reinforcement learning action space ∈[−1,1]2, is categorized into the steering and a throttle-brake, when outputting the throttle-brake, [−1,0] denotes the brake and [0,1] denotes the throttle, the driving control system outputs two parameters of the beta distribution by the reinforcement learning module, and then obtains policy actions by sampling:
  • 8. The complex network cognition-based FRL end-to-end autonomous driving control system according to claim 1, wherein in a training process, a parameter updating is performed through following loss functions for the reinforcement learning module:
  • 9. A complex network cognition-based FRL end-to-end autonomous driving control method, comprising the following steps: step 1: building an urban dense traffic simulation environment in a CARLA simulator, wherein the simulation environment contains a driving subject, a traffic participants, and a road infrastructure,the driving subject is a plurality of agents, modeled as Markov decision processes, respectively, and using a reinforcement learning module for a steering wheel, a throttle and a brake control, the Markov decision process is described by a tuple (S, A, P, R, γ), wherein S denotes a state set, corresponding to state quantities acquired by a measurement encoder and an image encoder, and contains a steering wheel angle, a throttle, a brake, a gear, lateral and longitudinal speeds, and a 15-channel semantic BEV; A denotes an action set, corresponding to the steering wheel, the throttle, and the brake control quantities of the driving subject; P denotes a state transfer equation p: S×A→P(S), each state-action pair (s, α)∈S×A has a probability distribution p(·|s, α) of entering a new state after adopting an action α, in a state s; R denotes a reward function R: S×S×A→R, R (St+1, st, αt) denotes a reward obtained after entering the new state st+1 from an original state st, a goodness of performing the action is defined by the reward function; γ denotes a discount factor, γ∈[0, 1], is configured to compute a cumulative reward η(πθ)=Σi=0Tγiri, wherein T denotes a current moment, γi denotes a discount factor of moment i, and ri denotes an immediate reward of moment i, a solution to the Markov decision process is to find a strategy π: S→A maximize the cumulative reward π*: =argmaxθ π(πθ), the reinforcement learning module integrates implicit state quantities output by the measurement encoder and the image encoder and outputs a corresponding optimal control policy;step 2: building the complex network cognition module to model a driving situation of the driving subject, establish a complex network model, and output dynamic driving suggestions based on state quantities provided by the measurement encoder through an activation function, the complex network model represents a dynamic relationship between nodes within a field range through a variable Gaussian safety field based on risk center transfer, the nodes contain the driving subject, the traffic participants, and the road infrastructure;step 3: constructing an end-to-end neural network, comprising 2 fully connected layers used by the measurement encoder, 6 convolutional layers used by the image encoder and 6 fully connected layers used by the reinforcement learning module, the neural network has two output heads, an action head and a value head, the action head outputs two parameters of a beta distribution and the value head outputs a value of the action;step 4: interacting the driving subject with a CARLA simulation environment and storing experiences in respective local replay buffers, wherein when the number of samples reaches a certain threshold, the samples are sampled from the respective local replay buffer according to a mini-batch, and then neural network parameters are updated according to a designed loss function;step 5: uploading the neural network parameters corresponding to each driving subject to a federated learning module, aggregating global parameters based on the multiple neural network parameters according to an aggregation interval, and sending the global parameters to each agent until the network converges;wherein 
in step 1, the traffic participants comprise other vehicles and pedestrians, the road infrastructure contains traffic lights and traffic signs, represented in the image encoder input using a 4-frame semantic BEV;in step 2, the complex network cognition module takes different activation functions according to different driving suggestions:
  • 10. A vehicular device, wherein the vehicular device is capable of executing contents of the complex network cognition-based FRL end-to-end autonomous driving control system of claim 1, or a complex network cognition-based FRL end-to-end autonomous driving control method, wherein the complex network cognition-based FRL end-to-end autonomous driving control method comprises the following steps:step 1: building an urban dense traffic simulation environment in a CARLA simulator, wherein the simulation environment contains a driving subject, a traffic participants, and a road infrastructure,the driving subject is a plurality of agents, modeled as Markov decision processes, respectively, and using a reinforcement learning module for a steering wheel, a throttle and a brake control, the Markov decision process is described by a tuple (S, A, P, R, γ), wherein S denotes a state set, corresponding to state quantities acquired by a measurement encoder and an image encoder, and contains a steering wheel angle, a throttle, a brake, a gear, lateral and longitudinal speeds, and a 15-channel semantic BEV; A denotes an action set, corresponding to the steering wheel, the throttle, and the brake control quantities of the driving subject; P denotes a state transfer equation p: S×A→P(S), each state-action pair (s, α)∈S×A has a probability distribution p(·|s, α) of entering a new state after adopting an action α, in a state s; R denotes a reward function R: S×S×A→R, R(st+1, st, αt) denotes a reward obtained after entering the new state st+1 from an original state st, a goodness of performing the action is defined by the reward function; γ denotes a discount factor, γ∈[0, 1], is configured to compute a cumulative reward η(πθ)=Σi=0Tγiri, wherein T denotes a current moment, yi denotes a discount factor of moment i, and ri denotes an immediate reward of moment i, a solution to the Markov decision process is to find a strategy π: S→A maximize the cumulative reward π*=argmaxθ η(πθ), the reinforcement learning module integrates implicit state quantities output by the measurement encoder and the image encoder and outputs a corresponding optimal control policy;step 2: building the complex network cognition module to model a driving situation of the driving subject, establish a complex network model, and output dynamic driving suggestions based on state quantities provided by the measurement encoder through an activation function, the complex network model represents a dynamic relationship between nodes within a field range through a variable Gaussian safety field based on risk center transfer, the nodes contain the driving subject, the traffic participants, and the road infrastructure;step 3: constructing an end-to-end neural network, comprising 2 fully connected layers used by the measurement encoder, 6 convolutional layers used by the image encoder and 6 fully connected layers used by the reinforcement learning module, the neural network has two output heads, an action head and a value head, the action head outputs two parameters of a beta distribution and the value head outputs a value of the action;step 4: interacting the driving subject with a CARLA simulation environment and storing experiences in respective local replay buffers, wherein when the number of samples reaches a certain threshold, the samples are sampled from the respective local replay buffer according to a mini-batch, and then neural network parameters are updated according to a designed loss function;step 5: uploading the neural network 
parameters corresponding to each driving subject to a federated learning module, aggregating global parameters based on the multiple neural network parameters according to an aggregation interval, and sending the global parameters to each agent until the network converges;wherein in step 1, the traffic participants comprise other vehicles and pedestrians, the road infrastructure contains traffic lights and traffic signs, represented in the image encoder input using a 4-frame semantic BEV;in step 2, the complex network cognition module takes different activation functions according to different driving suggestions:
  • 11. The complex network cognition-based FRL end-to-end autonomous driving control system according to claim 7, wherein in a training process, a parameter updating is performed through following loss functions for the reinforcement learning module:
  • 12. The vehicular device according to claim 10, wherein in the complex network cognition-based FRL end-to-end autonomous driving control system, a modeling process of the complex network cognition module constructs a dynamic complex network model with the traffic participant and the road infrastructure as nodes:
  • 13. The vehicular device according to claim 12, wherein in the complex network cognition-based FRL end-to-end autonomous driving control system, a Gaussian function is used in the complex network to reveal a static property between nodes:
  • 14. The vehicular device according to claim 13, wherein in the complex network cognition-based FRL end-to-end autonomous driving control system, a series of isofield lines are used to delineate the safety field, a top view projection of the series of isofield lines is a region covered by a series of ellipses, with a region covered by a smallest center ellipse being a core domain, a region between the smallest center ellipse and a second ellipse being a limited domain, and a region between the second ellipse and a largest ellipse being an extended domain, size and shape of the region are determined by the isofield lines, are related to a vehicle shape and a motion state, and are described based on the Gaussian function, a direction of the safety field is aligned with a direction of vehicle motion, when a vehicle is in motion, the risk center O(x0, y0) of the safety field will be transferred to a new risk center O′(x′0, y′0):
  • 15. The vehicular device according to claim 14, wherein in the complex network cognition-based FRL end-to-end autonomous driving control system, a risk perception is categorized into three types on a planar scale based on different levels of human driving reaction time: a first cognitive domain, a second cognitive domain and an extra-domain space, wherein: the first cognitive domain:
  • 16. The vehicular device according to claim 10, wherein in the complex network cognition-based FRL end-to-end autonomous driving control system, the CARLA simulator is used as an interaction environment, the CARLA simulator realizes vehicle control by inputting control quantities of a steering, the throttle and the brake, wherein steering ∈[−1,1], throttle ∈[0,1] and brake ∈[0,1], based on a CARLA simulator's control method, a reinforcement learning action space ∈[−1,1]2, is categorized into the steering and a throttle-brake, when outputting the throttle-brake, [−1,0] denotes the brake and [0,1] denotes the throttle, the driving control system outputs two parameters of the beta distribution by the reinforcement learning module, and then obtains policy actions by sampling:
  • 17. The vehicular device according to claim 10, wherein in the complex network cognition-based FRL end-to-end autonomous driving control system, in a training process, a parameter updating is performed through following loss functions for the reinforcement learning module:
Priority Claims (1)
Number Date Country Kind
202310902155.3 Jul 2023 CN national
CROSS-REFERENCE TO THE RELATED APPLICATIONS

This application is the national phase entry of International Application No. PCT/CN2023/114349, filed on Aug. 23, 2023, which is based upon and claims priority to Chinese Patent Application No. 202310902155.3, filed on Jul. 21, 2023, the entire contents of which are incorporated herein by reference.

PCT Information
Filing Document Filing Date Country Kind
PCT/CN2023/114349 8/23/2023 WO