This application claims the benefit of Chinese Patent Application No. 202110364109.3, filed on Apr. 3, 2021, which is hereby incorporated by reference in its entirety.
The present disclosure pertains to the technical field of unmanned aerial vehicles (UAVs) and particularly relates to an air combat maneuvering method.
Autonomous air combat maneuvering decision-making is the process of simulating pilots' air combat decisions in various air combat situations and automatically generating maneuvering decisions for aerial vehicles (both manned and unmanned) based on mathematical optimization, artificial intelligence algorithms and the like.
Common maneuvering decision-making methods for UAVs fall into two types: traditional and intelligent. Traditional methods perform optimal decision-making by using expert knowledge, formula derivation, influence diagrams, etc. Such methods rely heavily on prior knowledge or mathematical operations and often lack a self-optimization process for decision-making. Intelligent methods are those capable of self-learning and self-optimization by using approaches such as genetic algorithms, Bayesian methods and artificial intelligence to realize maneuver control of UAVs. Such methods can usually achieve strategy optimization autonomously according to situation targets.
However, when solving the confrontation problem between two opposing parties, these methods have the following disadvantage: since training is performed in a single deterministic environment, the policy models learned by agents overfit to the current environment and situation and are therefore weak in robustness. When applied to a new environment or a changed initial situation, these policy models cannot accurately select rational decision actions. In addition, learning from scratch in a new environment requires a great deal of training time.
To overcome the shortcomings of the prior art, the present disclosure provides an air combat maneuvering method based on parallel self-play, including the steps of constructing a UAV maneuver model; constructing a red-and-blue motion situation acquiring model to describe the relative combat situation of the red and blue sides; constructing state spaces and action spaces of both red and blue sides and a reward function according to a Markov process; constructing a maneuvering decision-making model structure based on a soft actor-critic (SAC) algorithm; training the SAC algorithm by performing air combat confrontations to realize parallel self-play; and finally testing the trained network, displaying combat trajectories and calculating a combat success rate. According to the present disclosure, the level of confrontations can be effectively enhanced and the combat success rate of the decision-making model can be increased.
The technical solution for solving the technical problems in the present disclosure includes the following steps:
step S1: constructing a UAV maneuver model;
step S2: defining our UAV as the red side and the enemy UAV as the blue side; initializing both the red and blue sides, and constructing a red-and-blue motion situation acquiring model to describe the relative combat situation between the red and blue sides;
step S3: constructing state spaces Sr,Sb of both red and blue sides, action spaces Ar,Ab of both red and blue sides and a reward function R according to a Markov process;
step S4: constructing a maneuvering decision-making model structure based on a soft actor-critic (SAC) algorithm;
step S5: initializing a plurality of groups of UAVs on both sides, defining experimental parameters, and training the SAC algorithm by allowing the plurality of groups of UAVs on both sides to perform air combat confrontations using the same maneuvering decision-making model and the same replay buffer to realize parallel self-play; and
step S6: randomly initializing both sides to test a trained network, and displaying combat trajectories; randomly initializing the plurality of groups of UAVs on both sides to test the trained network, and calculating a combat success rate.
Further, the constructing a UAV maneuver model may specifically include the following steps:
supposing an OXYZ coordinate system to be a three-dimensional spatial coordinate system for UAVs, where origin O represents the center of a combat area for UAVs, with X axis pointing to the north, Z axis pointing to the east and Y axis pointing in a vertical upward direction;
regarding a UAV as a mass point and establishing equations of motion for the UAV as follows:
where t denotes current time; dT denotes an integration step size of the UAV; [Xt,Yt,Zt], [Xt+dT,Yt+dT,Zt+dT] denote coordinate position components of the UAV at time t and time t+dT, respectively; Vt, Vt+dT denote velocities of the UAV at time t and time t+dT, respectively; pitch angles θt, θt+dT are included angles between velocity vectors of the UAV at time t and time t+dT, and the XOZ plane; heading angles φt, φt+dT are included angles between projection vectors of the velocity vectors of the UAV at time t and time t+dT on the XOZ plane, and the positive X axis; dυ denotes an acceleration of the UAV; dθ denotes a pitch angle variation of the UAV; and dφ denotes a heading angle variation of the UAV.
Further, the step S2 may specifically include the following steps:
describing the relative situation of both sides acquired by the red-and-blue motion situation acquiring model with D, d and q, where D denotes a position vector between the red side and the blue side in a direction from the red side to the blue side; d denotes a distance between the red side and the blue side; q denotes a relative azimuth angle, namely an included angle between the velocity vector Vr and the distance vector D of the red side; and
denoting the combat situation of the blue side relative to the red side by Dr, d and qr and the combat situation of the red side relative to the blue side by Db, d and qb, where Dr denotes a position vector between the red side and the blue side in a direction from the red side to the blue side; Db denotes a position vector between the blue side and the red side in a direction from the blue side to the red side; qr denotes a relative azimuth angle of the blue side to the red side; and qb denotes a relative azimuth angle of the red side to the blue side; and Dr, Db, d, qr and qb are calculated as follows:
where Rr=(Xr,Yr,Zr), Vr=(υxr,υyr,υzr), υr, θr and φr are the position vector, velocity vector, velocity, pitch angle and heading angle of the red side, respectively; and Rb=(Xb,Yb,Zb), Vb=(υxb,υyb,υzb), υb, θb and φb are the position vector, velocity vector, velocity, pitch angle and heading angle of the blue side, respectively.
Further, the step S3 may specifically include the following steps:
defining the state space of the red UAV as Sr=[Xr,Yr,Zr,υr,θr,φr,d,qr] and the state space of the blue UAV as Sb=[Xb,Yb,Zb,υb,θb,φb,d,qb];
defining the action space of the red UAV as Ar=[dυr,dφr,dθr] and the action space of the blue UAV as Ab=[dυb,dφb,dθb]; and
forming the reward function R with a distance reward function Rd and an angle reward function Rq, R=w1*Rd+w2*Rq, where w1, w2 denote weights of the distance reward and the angle reward;
the distance reward function Rd is expressed as:
where Rd1 denotes a continuous distance reward, while Rd2 denotes a sparse distance reward; and Dmin denotes a minimum attack range of a missile carried by the red side, while Dmax denotes a maximum attack range of the missile carried by the red side; and
the angle reward function Rq is expressed as:
Rq1 = −q/180
Rq2 = 3, if q < qmax
Rq = Rq1 + Rq2
where Rq1 denotes a continuous angle reward, while Rq2 denotes a sparse angle reward; and qmax denotes a maximum off-boresight launch angle of the missile carried by the red side.
Further, the constructing a maneuvering decision-making model structure based on a SAC algorithm may specifically include the following steps:
generating maneuver control quantities for both red and blue sides by the maneuvering decision-making model based on the SAC algorithm, to allow the red and blue sides to maneuver; and
implementing the SAC algorithm by neural networks including a replay buffer M, one Actor neural network πθ, two Soft-Q neural networks Qφ1 and Qφ2, and two Target Soft-Q networks Qφ′1 and Qφ′2;
where the Actor neural network πθ receives an input of a state value str of the red side or a state value stb of the blue side and generates outputs of mean μ(μr,μb) and variance σ(σr,σb); noise τ is generated by sampling from a standard normal distribution; an action atr of the red side or an action atb of the blue side is generated from the mean μ, variance σ and noise τ; the action atr or atb is limited to a range of (−1, 1) by using a tanh function, and the process of generating the action is shown below:
μr,σr=πθ(str)
μb,σb=πθ(stb)
atr = N(μr, σr²) = μr + σr*τ
atb = N(μb, σb²) = μb + σb*τ
atr=tanh(atr)
atb=tanh(atb)
the Soft-Q neural networks Qφ1 and Qφ2 receive inputs of a state value and an action value and output Q values predicted by the neural networks; the Target Soft-Q neural networks Qφ′1 and Qφ′2 have the same structure as the Soft-Q networks and output target Q values; and
each of the Actor, Soft-Q and Target Soft-Q networks is a fully-connected neural network having l hidden layers, with n neurons in each hidden layer and a ReLU activation function.
Further, the step S5 may specifically include the following steps:
initializing a plurality of groups of UAVs on both sides with initial positions within the combat area, and setting an initial velocity range, an initial pitch angle range and an initial heading angle range; and
the steps of training the SAC algorithm by performing air combat confrontations to realize parallel self-play are as follows:
step S51: defining the number env_num of parallel self-play environments, defining the number batch_size of batch training sample groups, defining a maximum number of simulation steps N, initializing step=1, initializing env=1, initializing initial situations of both sides, and obtaining an initial state str of the red side and an initial state stb of the blue side;
step S52: randomly generating an Actor network weight θ and Soft-Q network weights φ1, φ2, initializing the policy network πθ and the two Soft-Q networks Qφ1, Qφ2, letting φ′1=φ1, φ′2=φ2, and initializing the Target Soft-Q networks Qφ′1, Qφ′2;
step S53: inputting a state str of the red side to the Actor network to output a mean μr and a variance σr, obtaining an action atr that fits the action space Ar in step S3 from the process of generating the action in step S4, obtaining a new state st+1r by the red side after performing the action, and obtaining a reward value rtr according to the reward function R in step S3; inputting a state stb of the blue side to the Actor network to output a mean μb and a variance σb, obtaining an action atb that fits the action space Ab in step S3 from the process of generating the action in step S4, obtaining a new state st+1b by the blue side after performing the action, and obtaining a reward value rtb according to the reward function R in step S3; and storing tuple <str,atr,st+1r,rtr> and tuple <stb,atb,st+1b,rtb> in the replay buffer M;
step S54: determining whether env is greater than env_num, and if yes, proceeding to step S55; otherwise, incrementing env by 1, and skipping to step S51;
step S55: when the number of experience groups in the replay buffer M is greater than batch_size, randomly sampling batch_size groups of experience to update parameters of the Actor and Soft-Q neural networks in the SAC algorithm, and updating a regularization coefficient α;
step S56: determining whether step is greater than N, and if yes, proceeding to step S57; otherwise, incrementing step by 1, letting str=st+1r, stb=st+1b, and skipping to step S53; and
step S57: determining whether the algorithm converges or whether the number of training episodes is reached, and if yes, ending the training and obtaining the trained SAC algorithm model; otherwise, skipping to step S51.
Further, the step S6 may specifically include the following steps:
step S61: initializing the initial situations of both sides, and obtaining the initial states str, stb of the red and blue sides;
step S62: separately recording the states str, stb, inputting the states str, stb to the Actor neural network of the trained SAC algorithm model to output actions atr, atb of the red and blue sides, and obtaining new states st+1r, st+1b after performing the actions by both sides;
step S63: determining whether either side succeeds in the engagement, and if yes, ending; otherwise, letting str=st+1r and stb=st+1b, and skipping to step S62;
step S64: plotting combat trajectories of both sides according to the recorded states str, stb;
step S65: initializing the initial situations of n groups of UAVs on both sides, performing steps S62 to S63 on each group of UAVs on both sides, and finally recording whether either side succeeds in the engagement, with the number of successful engagements denoted as num; and
step S66: calculating num/n, namely a final combat success rate, to indicate the generalization capability of the decision-making model.
Further, in the step S5, the initial velocity range may be set as [50 m/s, 400 m/s], the initial pitch angle range as [−90°, 90°] and the initial heading angle range as [−180°, 180°].
The present disclosure has the following beneficial effects:
1. According to the present disclosure, a plurality of battlefield environments are introduced during self-play, and samples and strategies can be shared among the battlefield environments. Thus, maneuvering strategies can be optimized as a whole.
2. The parallel self-play algorithm proposed in the present disclosure can effectively enhance the level of confrontations and increase the combat success rate of the decision-making model.
The present disclosure is further described below in conjunction with the accompanying drawings and embodiments.
As shown in the accompanying drawings, the air combat maneuvering method based on parallel self-play includes the following steps:
step S1: constructing a UAV maneuver model;
step S2: defining our UAV as the red side and the enemy UAV as the blue side; initializing both red and blue UAVs, and constructing a red-and-blue motion situation acquiring model to describe the relative combat situation of the red and blue sides;
step S3: constructing state spaces Sr,Sb of both red and blue sides, action spaces Ar,Ab of both red and blue sides and a reward function R according to a Markov process;
step S4: constructing a maneuvering decision-making model structure based on a SAC algorithm;
step S5: initializing a plurality of groups of UAVs on both sides, defining experimental parameters, and training the SAC algorithm by allowing the plurality of groups of UAVs on both sides to perform air combat confrontations using the same maneuvering decision-making model and the same replay buffer to realize parallel self-play; and
step S6: randomly initializing both sides to test a trained network, and displaying combat trajectories; randomly initializing a plurality of groups of UAVs on both sides to test the trained network, and calculating a combat success rate.
Further, the constructing a UAV maneuver model includes the following specific steps:
The position information of the UAVs of both sides is updated according to the equations of motion for UAVs, so that maneuvering can be realized. Furthermore, the state information of both sides is provided to the red-and-blue motion situation acquiring model to calculate the corresponding situations.
An OXYZ coordinate system is supposed to be a three-dimensional spatial coordinate system for UAVs, where origin O represents the center of a combat area for UAVs, with X axis pointing to the north, Z axis pointing to the east and Y axis pointing in a vertical upward direction.
A UAV is regarded as a mass point and equations of motion for the UAV are established as follows:
where t denotes current time; dT denotes an integration step size of the UAV; [Xt,Yt,Zt], [Xt+dT,Yt+dT,Zt+dT] denote coordinate position components of the UAV at time t and time t+dT, respectively; Vt, Vt+dT denote velocities of the UAV at time t and time t+dT, respectively; pitch angles θt, θt+dT are included angles between velocity vectors of the UAV at time t and time t+dT, and XOZ plane; heading angles φt, φt+dT are included angles between projection vectors of the velocity vectors of the UAV at time t and time t+dT on the XOZ plane, and the positive X axis; dυ denotes an acceleration of the UAV; dθ denotes a pitch angle variation of the UAV; and dφ denotes a heading angle variation of the UAV;
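A minimal sketch of these equations of motion, assuming the standard three-degree-of-freedom point-mass form with first-order Euler integration over the step dT and using only the symbols defined above, is:
Xt+dT = Xt + Vt·cosθt·cosφt·dT
Zt+dT = Zt + Vt·cosθt·sinφt·dT
Yt+dT = Yt + Vt·sinθt·dT
Vt+dT = Vt + dυ·dT
θt+dT = θt + dθ·dT
φt+dT = φt + dφ·dT
Under this form, the X and Z updates follow from projecting the velocity onto the XOZ plane (magnitude Vt·cosθt) and resolving it along the north (X) and east (Z) axes through the heading angle φt, while the Y update uses the vertical component Vt·sinθt.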
Further, the step S2 includes the following specific steps:
The red-and-blue motion situation acquiring model can calculate a relative situation according to red and blue state information and provide the relative situation to a maneuvering decision-making module based on a deep reinforcement learning method for decision-making.
The relative situation of both sides acquired by the red-and-blue motion situation acquiring model is described with D, d and q, where D denotes a position vector between the red side and the blue side in a direction from the red side to the blue side; d denotes a distance between the red side and the blue side; q denotes a relative azimuth angle, namely an included angle between the velocity vector Vr and the distance vector D of the red side.
The combat situation of the blue side relative to the red side is denoted by Dr, d and qr and the combat situation of the red side relative to the blue side is denoted by Db, d and qb, where Dr denotes a position vector between the red side and the blue side in a direction from the red side to the blue side; Db denotes a position vector between the blue side and the red side in a direction from the blue side to the red side; qr denotes a relative azimuth angle of the blue side to the red side; and qb denotes a relative azimuth angle of the red side to the blue side.
Dr, Db, d, qr and qb are calculated as follows:
where Rr=(Xr,Yr,Zr), Vr=(υxr, υyr, υzr), υr, θr and φr are the position vector, velocity vector, velocity, pitch angle and heading angle of the red side, respectively; and Rb=(Xb,Yb,Zb), Vb=(υxb, υyb, υzb), υb, θb and φb are the position vector, velocity vector, velocity, pitch angle and heading angle of the blue side, respectively.
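A worked form of these relations, assuming they follow directly from the definitions above (the relative position vectors are vector differences, and the azimuth angles are obtained from the dot product), may be written as:
Dr = Rb − Rr
Db = Rr − Rb
d = |Dr| = |Db|
qr = arccos((Vr·Dr)/(|Vr|·|Dr|))
qb = arccos((Vb·Db)/(|Vb|·|Db|))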
Further, the step S3 includes the following specific steps:
The state space of the red UAV is defined as Sr=[Xr,Yr,Zr,υr,θr,φr,d,qr] and the state space of the blue UAV is defined as Sb=[Xb,Yb,Zb,υb,θb,φb,d,qb].
The action space of the red UAV is defined as Ar=[dυr,dφr,dθr] and the action space of the blue UAV is defined as Ab=[dυb,dφb,dθb].
The reward function R is formed with a distance reward function Rd and an angle reward function Rq, R=w1*Rd+w2*Rq, where w1, w2 denote the weights of the distance reward and the angle reward.
The distance reward function Rd is expressed as:
where Rd1 denotes a continuous distance reward, while Rd2 denotes a sparse distance reward; and Dmin denotes a minimum attack range of a missile carried by the red side, while Dmax denotes a maximum attack range of the missile carried by the red side.
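One plausible form of this distance reward, assumed here by analogy with the angle reward given below rather than taken from the original expression, combines a continuous term that decreases with distance and a sparse bonus inside the missile attack envelope:
Rd1 = −d/Dmax
Rd2 = 3, if Dmin < d < Dmax
Rd = Rd1 + Rd2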
The angle reward function Rq is expressed as:
Rq1 = −q/180
Rq2 = 3, if q < qmax
Rq = Rq1 + Rq2
where Rq1 denotes a continuous angle reward, while Rq2 denotes a sparse angle reward; and qmax denotes a maximum off-boresight launch angle of the missile carried by the red side.
Further, as shown in the accompanying drawings, the constructing a maneuvering decision-making model structure based on a SAC algorithm includes the following specific steps:
Maneuver control quantities for both red and blue sides are generated by the maneuvering decision-making model based on the SAC algorithm, to allow the red and blue sides to maneuver.
The SAC algorithm is implemented by neural networks including a replay buffer M, one Actor neural network πθ, two Soft-Q neural networks Qφ1 and Qφ2, and two Target Soft-Q networks Qφ′1 and Qφ′2.
The replay buffer M is an experience replay buffer structure dedicated to storing the experience collected during reinforcement learning.
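A minimal Python sketch of such a shared replay buffer is given below; the class name, capacity value and method names are illustrative assumptions rather than part of the original disclosure.

import random
from collections import deque

class ReplayBuffer:
    """Shared experience replay buffer M storing tuples <s_t, a_t, s_{t+1}, r_t>."""

    def __init__(self, capacity=1000000):
        # Oldest experience is discarded automatically once the capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, next_state, reward):
        # Experience of both the red side and the blue side is stored in the same buffer.
        self.buffer.append((state, action, next_state, reward))

    def sample(self, batch_size):
        # Randomly sample batch_size groups of experience for one SAC update.
        batch = random.sample(self.buffer, batch_size)
        states, actions, next_states, rewards = zip(*batch)
        return states, actions, next_states, rewards

    def __len__(self):
        return len(self.buffer)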
The Actor neural network πθ receives an input of a state value str of the red side or a state value stb of the blue side and generates outputs of mean μ(μr,μb) and variance σ(σr,σb). Noise τ is generated by sampling from a standard normal distribution. An action atr of the red side or an action atb of the blue side is generated from the mean μ, variance σ and noise τ. The action atr or atb is limited to a range of (−1, 1) by using a tanh function, and the process of generating the action is shown below:
μr,σr=πθ(str)
μb,σb=πθ(stb)
atr = N(μr, σr²) = μr + σr*τ
atb = N(μb, σb²) = μb + σb*τ
atr=tanh(atr)
atb=tanh(atb)
The Soft-Q neural networks Qφ1 and Qφ2 receive inputs of a state value and an action value and output Q values predicted by the neural networks. The Target Soft-Q neural networks Qφ′1 and Qφ′2 have the same structure as the Soft-Q networks and output target Q values.
Each of the Actor, Soft-Q and Target Soft-Q networks is a fully-connected neural network having l hidden layers, with n neurons in each hidden layer and a ReLU activation function.
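A minimal PyTorch sketch of this Actor network and the action-generation process described above is given below; it assumes l=2 hidden layers with n=256 neurons as in the embodiment, an 8-dimensional state and a 3-dimensional action, and the class and variable names are illustrative.

import torch
import torch.nn as nn

class Actor(nn.Module):
    """Actor network: state -> (mean, standard deviation); action = tanh(mean + std * noise)."""

    def __init__(self, state_dim=8, action_dim=3, n=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, n), nn.ReLU(),
            nn.Linear(n, n), nn.ReLU(),
        )
        self.mean_head = nn.Linear(n, action_dim)
        self.log_std_head = nn.Linear(n, action_dim)

    def forward(self, state):
        h = self.body(state)
        mu = self.mean_head(h)
        log_std = self.log_std_head(h).clamp(-20, 2)  # keep the standard deviation in a stable range
        return mu, log_std.exp()

    def sample(self, state):
        mu, std = self.forward(state)
        tau = torch.randn_like(mu)           # noise sampled from a standard normal distribution
        return torch.tanh(mu + std * tau)    # squash the action into (-1, 1)

The Soft-Q networks can be built in the same way, with the state and action concatenated as the input and a single Q value as the output.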
Further, the step S5 includes the following specific steps:
When the plurality of groups of UAVs on both sides are initialized, their initial positions are set within the combat area, an initial velocity range is set as [50 m/s, 400 m/s], an initial pitch angle range as [−90°, 90°] and an initial heading angle range as [−180°, 180°].
The steps of training the SAC algorithm by performing air combat confrontations to realize parallel self-play are as follows:
step S51: defining the number env_num of parallel self-play environments, defining the number batch_size of batch training sample groups, defining a maximum number of simulation steps N, initializing step=1, initializing env=1, initializing initial situations of both sides, and obtaining an initial state str of the red side and an initial state stb of the blue side;
step S52: randomly generating an Actor network weight θ and Soft-Q network weights φ1, φ2, initializing the policy network πθ and the two Soft-Q networks Qφ1, Qφ2, letting φ′1=φ1, φ′2=φ2, and initializing the Target Soft-Q networks Qφ′1, Qφ′2;
step S53: inputting a state str of the red side to the Actor network to output a mean μr and a variance σr, obtaining an action atr that fits the action space Ar in step S3 from the process of generating the action in step S4, obtaining a new state st+1r by the red side after performing the action, and obtaining a reward value rtr according to the reward function R in step S3; inputting a state stb of the blue side to the Actor network to output a mean μb and a variance σb, obtaining an action atb that fits the action space Ab in step S3 from the process of generating the action in step S4, obtaining a new state st+1b by the blue side after performing the action, and obtaining a reward value rtb according to the reward function R in step S3; and storing tuple <str,atr,st+1r,rtr> and tuple <stb,atb,st+1b,rtb> in the replay buffer M;
step S54: determining whether env is greater than env_num, and if yes, proceeding to step S55; otherwise, incrementing env by 1, and skipping to step S51;
step S55: when the number of experience groups in the replay buffer M is greater than batch_size, randomly sampling batch_size groups of experience to update the parameters of the Actor and Soft-Q neural networks in the SAC algorithm, and updating the regularization coefficient α, where each group of data is redefined as <st,at,st+1,r>. Gradient descent is performed on the loss function Jπ(θ) of the Actor neural network and the loss function JQ(φi)i=1,2 of the Soft-Q neural networks with a learning rate lr to update the weights of the Actor neural network and the Soft-Q neural networks.
The Soft-Q value used in the updates is defined as the minimum of the output values of the Target Soft-Q networks Qφ′1 and Qφ′2:
Qφ′(st,at) = min(Qφ′1(st,at), Qφ′2(st,at))
where Qφ′1(st,at), Qφ′2(st,at) denote the target Q values output by the Target Soft-Q networks Qφ′1 and Qφ′2, respectively.
The loss function Jπ(θ) of the Actor neural network is defined as follows:
Jπ(θ) = Est∼M, at∼πθ[α·log πθ(at|st) − Qφ′(st,at)]
The loss function JQ(φi)i=1,2 of the Soft-Q neural networks is defined as follows:
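In the standard SAC formulation, which the description above follows, this loss may be written as the following sketch using the symbols already defined, with γ denoting the discount factor:
JQ(φi) = E<st,at,st+1,r>∼M[(1/2)·(Qφi(st,at) − y)²], i=1,2
y = r + γ·(Qφ′(st+1,at+1) − α·log πθ(at+1|st+1)), with at+1 sampled from πθ(·|st+1)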
The weights φ′1, φ′2 of the Target Soft-Q neural networks are updated as follows:
φ′1 ← τφ1 + (1−τ)φ′1
φ′2 ← τφ2 + (1−τ)φ′2
where τ here denotes the soft update coefficient of the target networks.
The regularization coefficient α is updated, and its loss function is as follows:
J(α) = E[−α·log πθ(at|st) − α·H0]
step S56: determining whether step is greater than N, and if yes, proceeding to step S57; otherwise, incrementing step by 1, letting str=st+1r, stb=st+1b, and skipping to step S53; and
step S57: determining whether the algorithm converges or whether the number of training episodes is reached, and if yes, ending the training and obtaining the trained SAC algorithm model; otherwise, skipping to step S51.
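A compact Python sketch of the parallel self-play loop in steps S51 to S57 is given below; it reuses the ReplayBuffer sketch above, and the names make_env, sac_update and the environment interface are illustrative assumptions rather than a fixed implementation.

def train(make_env, actor, sac_update, env_num=8, batch_size=128, N=800, episodes=1000):
    """Parallel self-play: env_num environments share one Actor policy and one replay buffer."""
    buffer = ReplayBuffer()
    envs = [make_env() for _ in range(env_num)]          # parallel self-play environments
    for episode in range(episodes):                      # outer loop, checked in step S57
        states = [env.reset() for env in envs]           # initial (s_r, s_b) per environment (step S51)
        for step in range(N):                            # simulation steps, checked in step S56
            next_states = []
            for env, (s_r, s_b) in zip(envs, states):    # loop over environments (steps S53-S54)
                a_r, a_b = actor.sample(s_r), actor.sample(s_b)   # both sides use the same Actor
                s_r2, r_r, s_b2, r_b = env.step(a_r, a_b)
                buffer.push(s_r, a_r, s_r2, r_r)         # red and blue experience share one buffer
                buffer.push(s_b, a_b, s_b2, r_b)
                next_states.append((s_r2, s_b2))
            states = next_states
            if len(buffer) > batch_size:                 # step S55: update once enough samples exist
                sac_update(buffer.sample(batch_size))    # updates Actor, Soft-Q networks and alpha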
Further, the step S6 includes the following specific steps:
step S61: initializing the initial situations of both sides, and obtaining the initial states str, stb of the red and blue sides;
step S62: separately recording the states str, stb, inputting the states str, stb to the Actor neural network of the trained SAC algorithm model to output actions atr, atb of the red and blue sides, and obtaining new states st+1r, st+1b after performing the actions by both sides;
step S63: determining whether either side succeeds in the engagement, and if yes, ending; otherwise, letting str=st+1r and stb=st+1b, and skipping to step S62;
step S64: plotting combat trajectories of both sides according to the recorded states str, stb;
step S65: initializing the initial situations of n groups of UAVs on both sides, performing steps S62 to S63 on each group of UAVs on both sides, and finally recording whether either side succeeds in the engagement, with the number of successful engagements denoted as num; and
step S66: calculating num/n, namely a final combat success rate, to indicate the generalization capability of the decision-making model.
In the embodiment, when initializing a plurality of groups of UAVs on both sides, the combat area is x∈[−6 km, 6 km], y∈[3 km, 4 km], z∈[−6 km, 6 km], and an initial velocity range is [50 m/s, 400 m/s], while an initial pitch angle range is [−90°, 90°] and an initial heading angle range is [−180°, 180°].
The maximum attack range Dmax of the missile is 6 km and the minimum attack range Dmin is 1 km. The maximum off-boresight launch angle qmax of the missile is 30°, and w1=w2=0.5.
The SAC algorithm model is constructed as follows: in the Actor neural network in the SAC algorithm, the number of hidden layers is l=2, and in each layer, the number of nodes is n=256. The optimization algorithm is the Adam algorithm, with discount factor γ=0.99, network learning rate lr=0.0003, entropy regularization coefficient α=1 and target entropy value H0=−3.
The number of parallel self-play environments is defined as env_num=[2,4,6,8,10,12,14,16,18,20]; the number of batch training sample groups is defined as batch_size=128; and the maximum number of simulation steps is defined as N=800.
After the training is finished, both sides are randomly initialized to test the trained algorithm, and the combat trajectories are displayed in the accompanying drawings.
200 groups of UAVs on both sides are randomly initialized to test the trained algorithm, and a combat success rate is calculated. The results of the combat success rate varying with the number of parallel self-play environments are shown in the accompanying drawings.
Therefore, the maneuvering decision-making of UAVs can be effectively realized, and the generalization capability of the model can be improved, so that the model is more practical.