ONLINE MODELING METHOD FOR DYNAMIC MUTUAL OBSERVATION OF DRONE SWARM COLLABORATIVE NAVIGATION

Information

  • Patent Application
  • Publication Number
    20210255645
  • Date Filed
    July 28, 2020
  • Date Published
    August 19, 2021
Abstract
Disclosed is an online dynamic mutual-observation modeling method for unmanned aerial vehicle (UAV) swarm collaborative navigation, which includes: first performing first-level screening of the members according to the number of usable satellites received by the satellite navigation receiver of each member, to determine the role of each member in collaborative navigation at the current time; then establishing a moving coordinate system with each object member to be assisted as the origin, and calculating the coordinates of each candidate reference node; on this basis, performing second-level screening of the candidate reference nodes according to whether mutual distance measurement can be performed with each object member, to obtain a usable reference member set, and preliminarily establishing a dynamic mutual-observation model; and finally, optimizing the model by iterative correction, and conducting a new round of dynamic mutual-observation modeling according to the observation relationships within the UAV swarm, the members' own positioning performance, and role changes in collaborative navigation, thus providing an accurate basis for effectively realizing UAV swarm collaborative navigation.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates to the field of unmanned aerial vehicle (UAV) swarm collaborative navigation technologies, and in particular, to an online dynamic mutual-observation modeling method for UAV swarm collaborative navigation.


Description of Related Art

A UAV swarm, a concept proposed in recent years both in China and abroad, is an organization mode in which multiple UAVs are arranged in three-dimensional space and assigned missions to suit mission requirements. It deals with the formation, maintenance, and reorganization of formation flying as well as the organization of flight missions, and can be dynamically adjusted according to external conditions and mission demands.


The conventional integrated navigation system model is mainly based on measurement information defined in a fixed reference coordinate system and on fixed sensor performance. However, the relative positions and positioning performance of the members in a UAV swarm change constantly during flight, and the role of each member in swarm collaborative navigation, as an assisted object node or an assisting reference node, also changes constantly. Therefore, the conventional integrated navigation model is unable to meet the requirements of UAV swarm collaborative navigation.


Therefore, research on a dynamic mutual-observation model and modeling method based on a moving reference coordinate system and in consideration of an observation relationship between members, their own positioning performance, and role change in collaborative navigation can efficiently realize adaptive model description of mutual-observation information during collaborative navigation, thus providing support for autonomous collaboration of the UAV swarm.


SUMMARY OF THE INVENTION
Technical Problem

The technical problem to be solved by the present invention is to provide an online dynamic mutual-observation modeling method for UAV swarm collaborative navigation, which considers an observation relationship between members, their own positioning performance, and role change in collaborative navigation in a moving reference coordinate system; and establishes and optimizes a dynamic mutual-observation model, thus providing an accurate basis for realizing collaborative navigation.


Technical Solution

The present invention adopts the following technical solution to solve the foregoing technical problem:


An online dynamic mutual-observation modeling method for UAV swarm collaborative navigation is provided, including the following steps:


step 1: numbering members in the UAV swarm as 1, 2, . . . , n; performing first-level screening for the members according to the number of usable satellites received by an airborne satellite navigation receiver of each member at the current time, to determine the role of each member in collaborative navigation: setting members which receive less than 4 usable satellites as object members and recording a number set of the object members as A; and setting members which receive not less than 4 usable satellites as candidate reference members and recording a number set of the candidate reference members as B, where A, B⊂{1, 2, . . . , n};


step 2: acquiring an airborne navigation system indication position of an object member i and establishing a local east-north-up geographic coordinate system regarding the object member with the indication position as the origin, where i denotes the member number and i∈A;


step 3: acquiring an airborne navigation system indication position of a candidate reference member j and its positioning error covariance; and putting, after transformation, the airborne navigation system indication position of the candidate reference member j and its positioning error covariance into the local east-north-up geographic coordinate system regarding the object member i and established in step 2, where j denotes the member number and j∈B;


step 4: performing second-level screening for the candidate reference members according to whether each object member and each candidate reference member can measure the distance for each other, to determine the role of each candidate reference member in collaborative navigation: setting a candidate reference member for which mutual distance measurement can be performed with the object member i as a usable reference member for the object member i, and recording a number set of the usable reference members for the object member i as Ci, where Ci⊂B;


step 5: calculating a mutual-observation vector between the object member and its usable reference member, and calculating a vector projection matrix regarding the object member and its usable reference member according to the mutual-observation vector;


step 6: calculating an object position projection matrix and a usable reference position projection matrix regarding the object member and its usable reference member;


step 7: calculating a status mutual-observation matrix between the object member and its usable reference member by using the vector projection matrix obtained in step 5 and the object position projection matrix obtained in step 6;


step 8: calculating a noise mutual-observation matrix between the object member and its usable reference member by using the vector projection matrix obtained in step 5 and the usable reference position projection matrix obtained in step 6; and calculating a mutual-observation noise covariance between the object member and its usable reference member by using the noise mutual-observation matrix;


step 9: establishing a mutual-observation set matrix regarding the object member for all of its usable reference members by using the status mutual-observation matrix obtained in step 7;


step 10: establishing a mutual-observation set covariance regarding the object member for all of its usable reference members by using the mutual-observation noise covariance obtained in step 8;


step 11: establishing a mutual-observation set observed quantity regarding the object member for all of its usable reference members by using the mutual-observation vector obtained in step 5;


step 12: establishing a dynamic mutual-observation model for UAV swarm collaborative navigation according to the mutual-observation set matrix obtained in step 9, the mutual-observation set covariance obtained in step 10, and the mutual-observation set observed quantity obtained in step 11; performing weighted least squares positioning for the object member by using the dynamic mutual-observation model, to obtain a longitude correction, a latitude correction, and a height correction of the position of the object member; and calculating a corrected longitude, latitude, and height;


step 13: calculating position estimation covariance of the object member by using the status mutual-observation matrix obtained in step 7 and the mutual-observation noise covariance obtained in step 8;


step 14: calculating an online modeling error amount by using the object position projection matrix obtained in step 6 and the longitude correction, the latitude correction, and the height correction of the object member obtained in step 12; when the online modeling error amount is less than a preset error control standard of online dynamic mutual-observation modeling, determining that iterative convergence occurs in online modeling, that is, ending online modeling and going to step 15; otherwise, returning to step 5 to make iterative correction on the mutual-observation model; and


step 15: determining whether navigation ends; if yes, ending the process; otherwise, returning to step 1 to conduct next-round modeling.


As a preferred solution of the present invention, the mutual-observation vector described in step 5 has the following expression:







$$r_{k}^{i} = \begin{bmatrix} x_{k}^{i} \\ y_{k}^{i} \\ z_{k}^{i} \end{bmatrix} = \begin{bmatrix} -\Delta\lambda_{ik}(R_N + h_i)\cos L_i \\ -\Delta L_{ik}(R_N + h_i) + \Delta L_{ik} f^{2} \cos^{2} L_i \\ -\Delta h_{ik} + \Delta L_{ik} f^{2} \sin L_i \cos L_i \end{bmatrix}$$

where r_k^i denotes a mutual-observation vector between the object member i and its usable reference member k; x_k^i, y_k^i, and z_k^i respectively denote east-direction, north-direction, and up-direction components of r_k^i in the local east-north-up geographic coordinate system regarding the object member i; Δλ_ik, ΔL_ik, and Δh_ik denote difference values respectively in longitude, latitude, and height output by the airborne navigation system between the object member i and its usable reference member k; R_N denotes the radius of curvature in prime vertical of the earth's reference ellipsoid; f denotes the oblateness of the earth's reference ellipsoid; and L_i and h_i respectively denote the latitude and the height of the object member i output by the airborne navigation system.


As a preferred solution of the present invention, the vector projection matrix described in step 5 has the following expression:







$$M_{k}^{i} = \begin{bmatrix} \dfrac{x_{k}^{i}}{d_{ik}} & \dfrac{y_{k}^{i}}{d_{ik}} & \dfrac{z_{k}^{i}}{d_{ik}} \end{bmatrix}$$

where M_k^i denotes a vector projection matrix regarding the object member i and its usable reference member k; x_k^i, y_k^i, and z_k^i respectively denote east-direction, north-direction, and up-direction components of r_k^i in the local east-north-up geographic coordinate system regarding the object member i; r_k^i denotes the mutual-observation vector between the object member i and its usable reference member k; and d_ik denotes a calculated value of a distance between the object member i and its usable reference member k, and has the following expression: d_ik = √((x_k^i)² + (y_k^i)² + (z_k^i)²).


As a preferred solution of the present invention, the object position projection matrix described in step 6 has the following expression:







$$N_{k}^{i} = \begin{bmatrix} -\Delta\lambda_{ik}(R_N + h_i)\sin L_i & (R_N + h_i)\cos L_i & \Delta\lambda_{ik}\cos L_i \\ R_N + h_i & 0 & \Delta L_{ik} \\ 0 & 0 & 1 \end{bmatrix}$$

where N_k^i denotes an object position projection matrix regarding the object member i and its usable reference member k; Δλ_ik and ΔL_ik denote difference values respectively in longitude and latitude output by the airborne navigation system between the object member i and its usable reference member k; L_i and h_i respectively denote the latitude and the height of the object member i output by the airborne navigation system; and R_N denotes the radius of curvature in prime vertical of the earth's reference ellipsoid.


As a preferred solution of the present invention, the usable reference position projection matrix described in step 6 has the following expression:







$$L_{k}^{i} = \begin{bmatrix} 0 & -(R_N + h_i)\cos L_i & 0 \\ -(R_N + h_i) & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$

where L_k^i denotes a usable reference position projection matrix regarding the object member i and its usable reference member k; L_i and h_i respectively denote the latitude and the height of the object member i output by the airborne navigation system; and R_N denotes the radius of curvature in prime vertical of the earth's reference ellipsoid.


As a preferred solution of the present invention, the status mutual-observation matrix described in step 7 has the following expression:






$$H_{k}^{i} = M_{k}^{i} N_{k}^{i}$$

where H_k^i denotes a status mutual-observation matrix between the object member i and its usable reference member k; M_k^i denotes a vector projection matrix regarding the object member i and its usable reference member k; and N_k^i denotes an object position projection matrix regarding the object member i and its usable reference member k.


As a preferred solution of the present invention, the noise mutual-observation matrix described in step 8 has the following expression:






$$D_{k}^{i} = M_{k}^{i} L_{k}^{i}$$

where D_k^i denotes a noise mutual-observation matrix between the object member i and its usable reference member k; M_k^i denotes a vector projection matrix regarding the object member i and its usable reference member k; and L_k^i denotes a usable reference position projection matrix regarding the object member i and its usable reference member k.


As a preferred solution of the present invention, the mutual-observation noise covariance described in step 8 has the following expression:






$$R_{k}^{i} = D_{k}^{i}\,\sigma_{pk}^{2}\,(D_{k}^{i})^{T} + \sigma_{RF}^{2}$$

where R_k^i denotes a mutual-observation noise covariance between the object member i and its usable reference member k; D_k^i denotes a noise mutual-observation matrix between the object member i and its usable reference member k; σ_RF^2 denotes an error covariance of a relative distance measuring sensor; and σ_pk^2 denotes a positioning error covariance of the usable reference member k.


As a preferred solution of the present invention, the online modeling error amount described in step 14 has the following expression:






$$u_{k}^{i} = \left| N_{k}^{i} \begin{bmatrix} \delta\hat{\lambda}_i & \delta\hat{L}_i & \delta\hat{h}_i \end{bmatrix}^{T} \right|$$

where u_k^i denotes an online modeling error amount regarding the object member i and its usable reference member k; N_k^i denotes an object position projection matrix regarding the object member i and its usable reference member k; and δλ̂_i, δL̂_i, and δĥ_i respectively denote a longitude correction, a latitude correction, and a height correction of the position of the object member i.


Advantageous Effect

By using the foregoing technical solution, the present invention achieves the following technical effects compared to the prior art:


1. The present invention considers dynamic changes of the navigation performance of members in the UAV swarm during flight, and determines the roles of the members in collaborative navigation by means of dynamic screening, so that members with high positioning performance preferentially assist those with low positioning performance, thus solving the problem of poor modeling adaptability in a role-fixed mode.


2. The present invention considers the difference in positioning performance between reference members, and improves the modeling precision by combining the positioning errors of the reference members with the measurement error of the distance measuring sensor and by further introducing iterative weighting.


3. The present invention has high flexibility, adapting to UAV swarms of different sizes and to mutual-observation conditions with different relative position relationships between the members.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flowchart of an online dynamic mutual-observation modeling method for UAV swarm collaborative navigation in the present invention;



FIG. 2 is a curve chart of iterative modeling in a moving coordinate system regarding an object member and established by the method of the present invention;



FIG. 3 is a curve chart showing a position error during iterative modeling by the method of the present invention; and



FIG. 4 is a curve chart showing longitude, latitude, and height errors during iterative modeling by the method of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

The embodiments of the present invention are described in detail below, and examples of the described embodiments are shown in the accompanying drawings. The following embodiments described with reference to the accompanying drawings are exemplary and are only used to explain the present invention, but cannot be construed as limiting the present invention.


The present invention provides an online dynamic mutual-observation modeling method for UAV swarm collaborative navigation, which provides effective support for UAV swarm collaborative navigation and improves flexibility and precision of collaborative navigation modeling. A solution is shown in FIG. 1, and includes the following steps:


(1) The members in the UAV swarm are sequentially numbered as 1, 2, . . . , n, where n is the total number of members. An error control standard ζ of online dynamic mutual-observation modeling is set.


(2) First-level screening is performed for the members according to the number of usable satellites received by an airborne satellite navigation receiver of each member in the UAV swarm at the current time, to determine the role of each member in collaborative navigation: setting members which receive less than 4 usable satellites as object members and recording a number set of the object members as A; and setting members which receive not less than 4 usable satellites as candidate reference members and recording a number set of the candidate reference members as B, where A,B⊂{1, 2, . . . , n}.


(3) An airborne navigation system indication position of each object member in the classification in step (2) is acquired, and a local east-north-up geographic coordinate system regarding the object member is established with the indication position as the origin. The airborne navigation system indication position of an object member i is recorded as (λi, Li, hi) and a correspondingly established local east-north-up coordinate system is expressed as OiXYZ, where λ denotes the longitude, L denotes the latitude, h denotes the height, and i denotes the member number and i∈A.


(4) An airborne navigation system indication position of each candidate reference member in the classification in step (2) and its positioning error covariance are acquired; and are put, after transformation, into the local east-north-up geographic coordinate system regarding the object member and established in step (3). The airborne navigation system indication position of a candidate reference member j is recorded as (λj, Lj, hj), where j denotes the member number and j∈B.


(5) Second-level screening is performed for the candidate reference members successively according to whether each object member and each candidate reference member can measure the distance to each other, to determine the role of each candidate reference member in collaborative navigation: setting a candidate reference member for which mutual distance measurement can be performed with the object member i as a usable reference member for the object member i, and recording a number set of the usable reference members for the object member i as Ci, where Ci⊂B.
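By way of illustration only (this sketch is not part of the patented method), the two-level screening of steps (2) and (5) amounts to partitioning the members by satellite count and then by ranging availability. The function name, the 0-based indexing, and the boolean availability matrix can_range below are assumptions introduced for the example:

import numpy as np

def two_level_screening(sat_counts, can_range):
    # sat_counts: usable-satellite count per member (0-based here; the patent numbers members 1..n)
    # can_range[i][k]: True if members i and k can measure their mutual distance
    n = len(sat_counts)
    A = [m for m in range(n) if sat_counts[m] < 4]         # object members (first-level screening)
    B = [m for m in range(n) if sat_counts[m] >= 4]        # candidate reference members
    C = {i: [k for k in B if can_range[i][k]] for i in A}  # usable reference sets C_i (second-level screening)
    return A, B, C

# toy example: members 0 and 3 receive too few satellites and become object members
sat_counts = [3, 6, 7, 2, 5]
can_range = np.ones((5, 5), dtype=bool)
print(two_level_screening(sat_counts, can_range))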


(6) A mutual-observation vector between the object member and its usable reference member is calculated. The mutual-observation vector between the object member i and its usable reference member k is recorded as rki which has the following expression:







$$r_{k}^{i} = \begin{bmatrix} x_{k}^{i} \\ y_{k}^{i} \\ z_{k}^{i} \end{bmatrix} = \begin{bmatrix} -\Delta\lambda_{ik}(R_N + h_i)\cos L_i \\ -\Delta L_{ik}(R_N + h_i) + \Delta L_{ik} f^{2} \cos^{2} L_i \\ -\Delta h_{ik} + \Delta L_{ik} f^{2} \sin L_i \cos L_i \end{bmatrix}$$






where i and k are member numbers and i∈A, k∈Ci; Δλik, ΔLik and Δhik denote difference values respectively in longitude, latitude, and height output by an airborne navigation system and between the object member i and its usable reference member k; RN denotes the radius of curvature in prime vertical of the earth's reference ellipsoid and is a constant; f denotes the oblateness of the earth's reference ellipsoid and is a constant; Li denotes the latitude of the object member i output by the airborne navigation system and hi denotes the height of the object member i output by the airborne navigation system.
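A minimal numerical sketch of this expression is given below, assuming angles in radians, heights in metres, the difference convention Δ(·)_ik = (·)_i − (·)_k, and illustrative values for the ellipsoid constants; none of these choices are fixed by the text above:

import numpy as np

def mutual_observation_vector(lam_i, L_i, h_i, lam_k, L_k, h_k,
                              R_N=6378137.0, f=1.0 / 298.257):
    # direct transcription of the expression for r_k^i above
    d_lam = lam_i - lam_k   # Δλ_ik (sign convention assumed)
    d_L = L_i - L_k         # ΔL_ik
    d_h = h_i - h_k         # Δh_ik
    x = -d_lam * (R_N + h_i) * np.cos(L_i)
    y = -d_L * (R_N + h_i) + d_L * f ** 2 * np.cos(L_i) ** 2
    z = -d_h + d_L * f ** 2 * np.sin(L_i) * np.cos(L_i)
    return np.array([x, y, z])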


(7) A vector projection matrix is calculated by using the mutual-observation vector between the object member and its usable reference member obtained in step (6). A vector projection matrix regarding the object member i and its usable reference member k is recorded as M_k^i, which has the following expression:







$$M_{k}^{i} = \begin{bmatrix} \dfrac{x_{k}^{i}}{d_{ik}} & \dfrac{y_{k}^{i}}{d_{ik}} & \dfrac{z_{k}^{i}}{d_{ik}} \end{bmatrix}$$





where d_ik denotes a calculated value of a distance between the object member i and its usable reference member k, and has the following expression: d_ik = √((x_k^i)² + (y_k^i)² + (z_k^i)²).
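Since the vector projection matrix of step (7) is just the mutual-observation vector normalized by d_ik, it can be sketched in a few lines (the helper name is an assumption introduced here):

import numpy as np

def vector_projection_matrix(r_ki):
    # M_k^i = [x/d  y/d  z/d], with d_ik the Euclidean norm of r_k^i
    d_ik = np.linalg.norm(r_ki)
    return (np.asarray(r_ki, dtype=float) / d_ik).reshape(1, 3), d_ik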


(8) An object position projection matrix is calculated. The object position projection matrix regarding the object member i and its usable reference member k is recorded as Nki which has the following expression:







$$N_{k}^{i} = \begin{bmatrix} -\Delta\lambda_{ik}(R_N + h_i)\sin L_i & (R_N + h_i)\cos L_i & \Delta\lambda_{ik}\cos L_i \\ R_N + h_i & 0 & \Delta L_{ik} \\ 0 & 0 & 1 \end{bmatrix}$$





(9) A usable reference position projection matrix is calculated. The usable reference position projection matrix regarding the object member i and its usable reference member k is recorded as Lki which has the following expression:







$$L_{k}^{i} = \begin{bmatrix} 0 & -(R_N + h_i)\cos L_i & 0 \\ -(R_N + h_i) & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$
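The two projection matrices of steps (8) and (9) can be filled in directly from the entries displayed above; the sketch below transcribes them, with the 3-by-3 layout and the argument names being assumptions of the example:

import numpy as np

def object_position_projection(d_lam, d_L, L_i, h_i, R_N=6378137.0):
    # N_k^i, entries as written in step (8); d_lam = Δλ_ik, d_L = ΔL_ik, angles in radians
    return np.array([
        [-d_lam * (R_N + h_i) * np.sin(L_i), (R_N + h_i) * np.cos(L_i), d_lam * np.cos(L_i)],
        [R_N + h_i,                          0.0,                       d_L],
        [0.0,                                0.0,                       1.0],
    ])

def reference_position_projection(L_i, h_i, R_N=6378137.0):
    # L_k^i, entries as written in step (9)
    return np.array([
        [0.0,          -(R_N + h_i) * np.cos(L_i), 0.0],
        [-(R_N + h_i),  0.0,                       0.0],
        [0.0,           0.0,                      -1.0],
    ])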





(10) A status mutual-observation matrix between the object member and its usable reference member is calculated by using the vector projection matrix obtained in step (7) and the object position projection matrix obtained in step (8). The status mutual-observation matrix between the object member i and its usable reference member k is recorded as Hki which has the following expression:






$$H_{k}^{i} = M_{k}^{i} N_{k}^{i}$$


(11) A noise mutual-observation matrix between the object member and its usable reference member is calculated by using the vector projection matrix obtained in step (7) and the usable reference position projection matrix obtained in step (9). The noise mutual-observation matrix between the object member i and its usable reference member k is recorded as D_k^i, which has the following expression:






$$D_{k}^{i} = M_{k}^{i} L_{k}^{i}$$


(12) A mutual-observation noise covariance between the object member and its usable reference member is calculated by using the noise mutual-observation matrix obtained in step (11), which has the following expression:






$$R_{k}^{i} = D_{k}^{i}\,\sigma_{pk}^{2}\,(D_{k}^{i})^{T} + \sigma_{RF}^{2}$$


where σ_RF^2 denotes an error covariance of the relative distance measuring sensor, and σ_pk^2 denotes a positioning error covariance of the usable reference member k.
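Steps (10) to (12) are plain matrix products; the sketch below assumes that σ_pk^2 is supplied as a 3-by-3 positioning error covariance of member k and σ_RF^2 as a scalar ranging-error covariance (the shapes are not specified above and are assumptions):

import numpy as np

def mutual_observation_terms(M_ki, N_ki, L_ki, sigma_pk2, sigma_RF2):
    H_ki = M_ki @ N_ki                                     # status mutual-observation matrix, 1x3
    D_ki = M_ki @ L_ki                                     # noise mutual-observation matrix, 1x3
    R_ki = (D_ki @ sigma_pk2 @ D_ki.T).item() + sigma_RF2  # mutual-observation noise covariance, scalar
    return H_ki, D_ki, R_ki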


(13) A mutual-observation set matrix regarding all the members in the UAV swarm is established by using the status mutual-observation matrix H_k^i between the object member i and its usable reference member k obtained in step (10). The mutual-observation set matrix regarding the object member i for all of its usable reference members is recorded as H_all^i, which has the following expression:








$$H_{all}^{i} = \begin{bmatrix} \vdots \\ H_{k}^{i} \\ \vdots \end{bmatrix}, \quad k \in C_i$$

where H_all^i denotes a matrix composed of all H_k^i serving as row vectors and meeting k ∈ C_i.


(14) A mutual-observation set covariance regarding all members in the UAV swarm is established by using the mutual-observation noise covariance between the object member and its usable reference member obtained in step (12). The mutual-observation set covariance regarding the object member i for all of its usable reference members is recorded as Ralli which has the following expression:








$$R_{all}^{i} = \begin{bmatrix} \ddots & & 0 \\ & R_{k}^{i} & \\ 0 & & \ddots \end{bmatrix}, \quad k \in C_i$$

where R_all^i denotes a matrix which is composed of all R_k^i serving as diagonal elements and meeting k ∈ C_i, with off-diagonal elements equal to 0.


(15) A mutual-observation set observed quantity regarding the members in the UAV swarm is established by using the mutual-observation vector between the object member and its usable reference member obtained in step (6). The mutual-observation set observed quantity regarding the object member i for all of its usable reference members is recorded as Yalli which has the following expression:








$$Y_{all}^{i} = \begin{bmatrix} \vdots \\ \tilde{d}_{ik} - d_{ik} \\ \vdots \end{bmatrix}, \quad k \in C_i$$

where d_ik denotes a calculated value of a distance between the object member i and its usable reference member k, and has the following expression: d_ik = √((x_k^i)² + (y_k^i)² + (z_k^i)²); and d̃_ik denotes a measured value of the distance between the object member i and its usable reference member k.
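Steps (13) to (15) only stack the per-reference quantities over k ∈ C_i; a sketch with assumed list-based inputs:

import numpy as np

def assemble_observation_set(H_list, R_list, d_measured, d_calculated):
    H_all = np.vstack(H_list)   # one 1x3 row per usable reference member
    R_all = np.diag(R_list)     # scalar R_k^i on the diagonal, zeros elsewhere
    Y_all = (np.asarray(d_measured) - np.asarray(d_calculated)).reshape(-1, 1)  # measured minus calculated distances
    return H_all, R_all, Y_all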


(16) A dynamic mutual-observation model for UAV swarm collaborative navigation is created by using the mutual-observation set matrix Halli regarding the object member i for all of its usable reference members obtained in step (13), the mutual-observation set covariance Ralli regarding the object member i for all of its usable reference members obtained in step (14), and the mutual-observation set observed quantity Yalli regarding the object member i for all of its usable reference members obtained in step (15); and weighted least squares positioning is performed for the object member, to obtain a longitude correction δ{circumflex over (λ)}i, a latitude correction δ{circumflex over (L)}i, and a height correction δĥi, of the position of the object member i.


(17) A corrected longitude, latitude, and height are calculated by using the longitude correction δ{circumflex over (λ)}i, the latitude correction δ{circumflex over (L)}i, and the height correction δĥi, of the object member i, which have the following expression:





$$(\hat{\lambda}_i, \hat{L}_i, \hat{h}_i) = (\lambda_i + \delta\hat{\lambda}_i,\ L_i + \delta\hat{L}_i,\ h_i + \delta\hat{h}_i)$$
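The weighted least squares estimator itself is not spelled out above; the conventional form below, which weights the residuals by the inverse of R_all^i, is shown purely as an assumed illustration of steps (16) and (17):

import numpy as np

def wls_correction(H_all, R_all, Y_all):
    # standard weighted least squares solution (the weighting choice is an assumption)
    W = np.linalg.inv(R_all)
    delta = np.linalg.solve(H_all.T @ W @ H_all, H_all.T @ W @ Y_all)
    return delta.ravel()   # corrections ordered as [δλ̂_i, δL̂_i, δĥ_i]

# step (17) then adds the corrections to the indicated position:
# lam_i, L_i, h_i = lam_i + delta[0], L_i + delta[1], h_i + delta[2]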


(18) A position estimation covariance of the object member is calculated by using the status mutual-observation matrix between the object member and its usable reference member obtained in step (10) and the mutual-observation noise covariance between the object member and its usable reference member obtained in step (12). The position estimation covariance of the object member i is recorded as σpi which has the following expression:







$$\sigma_{pi} = \sum_{k=1}^{n} (H_{k}^{i})^{T} R_{k}^{i} H_{k}^{i}$$
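A literal, term-by-term sketch of the summation above; the input lists are assumed to hold the per-reference quantities computed earlier, with no inversion or extra weighting added beyond what the expression states:

import numpy as np

def position_estimation_covariance(H_list, R_list):
    sigma_pi = np.zeros((3, 3))
    for H_ki, R_ki in zip(H_list, R_list):
        sigma_pi += R_ki * (H_ki.T @ H_ki)   # each term (H_k^i)^T R_k^i H_k^i is 3x3 for scalar R_k^i
    return sigma_pi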







(19) An online modeling error amount is calculated by using the object position projection matrix obtained in step (8) and the longitude correction δ{circumflex over (λ)}i, the latitude correction δ{circumflex over (L)}i, and the height correction δĥi of the object member i obtained in step (16), which has the following expression:






$$u_{k}^{i} = \left| N_{k}^{i} \begin{bmatrix} \delta\hat{\lambda}_i & \delta\hat{L}_i & \delta\hat{h}_i \end{bmatrix}^{T} \right|$$


(20) It is determined whether iterative convergence occurs in online modeling; and if uki<ζ, it is determined that convergence occurs, and online modeling ends and step (21) is performed; otherwise, step (6) is performed to make iterative correction on the mutual-observation model.


(21) It is determined whether navigation ends; and if yes, the process ends; otherwise, step (2) is performed to conduct next-round modeling.
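Putting the above together, steps (6) to (21) form an inner iterative correction loop inside an outer per-epoch re-modeling loop. The inner loop can be sketched as follows; all callables are hypothetical stand-ins for the preceding steps, and max_iter is only a safety cap that is not part of the method:

import numpy as np

def online_modeling_iteration(build_model, solve_wls, apply_correction, zeta, max_iter=50):
    # build_model()  -> (H_all, R_all, Y_all, N_list) for the current indicated position, steps (6)-(15)
    # solve_wls(...) -> corrections [δλ̂, δL̂, δĥ], step (16)
    # apply_correction(delta) updates λ_i, L_i, h_i as in step (17)
    for _ in range(max_iter):
        H_all, R_all, Y_all, N_list = build_model()
        delta = solve_wls(H_all, R_all, Y_all)
        apply_correction(delta)
        # step (19): u_k^i = |N_k^i [δλ̂ δL̂ δĥ]^T|, read here as a vector norm
        u = [np.linalg.norm(N_ki @ np.asarray(delta).reshape(3, 1)) for N_ki in N_list]
        if max(u) < zeta:   # step (20): iterative convergence, online modeling ends
            return True
    return False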


In order to verify the effectiveness of the UAV swarm collaborative navigation method under a dynamic observation condition proposed by the present invention, digital simulation and analysis are conducted. There are eight UAVs in the UAV swarm used in the simulation, and the relative distance measurement has a precision of 0.1 m. FIG. 1 is a flowchart of the dynamic mutual-observation modeling method for UAV swarm collaborative navigation in the present invention; FIG. 2 is a curve chart of iterative modeling in a moving coordinate system regarding an object member and established by the method of the present invention; FIG. 3 is a curve chart showing a position error during iterative modeling by the method of the present invention; and FIG. 4 is a curve chart showing longitude, latitude, and height errors during iterative modeling by the method of the present invention.


It can be learned from FIG. 2 that, after use of the mutual-observation model for UAV swarm collaborative navigation and the online modeling method provided by the present invention, the calculated position of an object member in the UAV swarm gradually converges from the initial position toward the real position. It can be learned from FIG. 3 that the position error of the object member gradually decreases, and the finally calculated position error is reduced by four orders of magnitude compared with the initial error. It can be learned from FIG. 4 that the errors in the longitude, latitude, and height directions gradually decrease. In addition, the method of the present invention can adapt to the mutual-observation relationships and the constantly changing member roles during flight of the UAV swarm, thus achieving a desired application value.


The foregoing embodiment merely describes the technical idea of the present invention, but is not intended to limit the protection scope of the present invention. Any modification made based on the technical solutions according to the technical idea provided by the present invention falls within the protection scope of the present invention.

Claims
  • 1. An online dynamic mutual-observation modeling method for unmanned aerial vehicle (UAV) swarm collaborative navigation, comprising the following steps: step 1: numbering members in the UAV swarm as 1, 2, . . . , n; performing first-level screening for the members according to the number of usable satellites received by an airborne satellite navigation receiver of each member at the current time, to determine the role of each member in collaborative navigation: setting members which receive less than 4 usable satellites as object members and recording a number set of the object members as A; and setting members which receive not less than 4 usable satellites as candidate reference members and recording a number set of the candidate reference members as B, wherein A,B⊂{1, 2, . . . , n};step 2: acquiring an airborne navigation system indication position of an object member i and establishing a local east-north-up geographic coordinate system regarding the object member with the indication position as the origin, wherein i denotes the member number and i∈A;step 3: acquiring an airborne navigation system indication position of a candidate reference member j and its positioning error covariance; and putting, after transformation, the airborne navigation system indication position of the candidate reference member j and its positioning error covariance into the local east-north-up geographic coordinate system regarding the object member i and established in step 2, wherein j denotes the member number and j∈B;step 4: performing second-level screening for the candidate reference members according to whether each object member and each candidate reference member are able to measure the distance for each other, to determine the role of each candidate reference member in collaborative navigation: setting a candidate reference member for which mutual distance measurement is able to be performed with the object member as a usable reference member for the object member i, and recording a number set of the usable reference members for the object member i as Ci, wherein Ci⊂B;step 5: calculating a mutual-observation vector between the object member and its usable reference member, and calculating a vector projection matrix regarding the object member and its usable reference member according to the mutual-observation vector;step 6: calculating an object position projection matrix and a usable reference position projection matrix regarding the object member and its usable reference member;step 7: calculating a status mutual-observation matrix between the object member and its usable reference member by using the vector projection matrix obtained in step 5 and the object position projection matrix obtained in step 6;step 8: calculating a noise mutual-observation matrix between the object member and its usable reference member by using the vector projection matrix obtained in step 5 and the usable reference position projection matrix obtained in step 6; and calculating a mutual-observation noise covariance between the object member and its usable reference member by using the noise mutual-observation matrix;step 9: establishing a mutual-observation set matrix regarding the object member for all of its usable reference members by using the status mutual-observation matrix obtained in step 7;step 10: establishing a mutual-observation set covariance regarding the object member for all of its usable reference members by using the mutual-observation noise covariance obtained in step 8;step 11: establishing a 
mutual-observation set observed quantity regarding the object member for all of its usable reference members by using the mutual-observation vector obtained in step 5;step 12: establishing a dynamic mutual-observation model for UAV swarm collaborative navigation according to the mutual-observation set matrix obtained in step 9, the mutual-observation set covariance obtained in step 10, and the mutual-observation set observed quantity obtained in step 11; performing weighted least squares positioning for the object member by using the dynamic mutual-observation model, to obtain a longitude correction, a latitude correction, and a height correction of the position of the object member; and calculating a corrected longitude, latitude, and height;step 13: calculating position estimation covariance of the object member by using the status mutual-observation matrix obtained in step 7 and the mutual-observation noise covariance obtained in step 8;step 14: calculating an online modeling error amount by using the object position projection matrix obtained in step 6 and the longitude correction, the latitude correction, and the height correction of the object member obtained in step 12; when the online modeling error amount is less than a preset error control standard of online dynamic mutual-observation modeling, determining that iterative convergence occurs in online modeling, that is, ending online modeling and going to step 15; otherwise, returning to step 5 to make iterative correction on the mutual-observation model; andstep 15: determining whether navigation ends; if yes, ending the process; otherwise, returning to step 1 to conduct next-round modeling.
  • 2. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the mutual-observation vector described in step 5 has the following expression:
  • 3. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the vector projection matrix described in step 5 has the following expression:
  • 4. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the object position projection matrix described in step 6 has the following expression:
  • 5. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the usable reference position projection matrix described in step 6 has the following expression:
  • 6. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the status mutual-observation matrix described in step 7 has the following expression: Hki=MkiNki wherein Hki denotes a status mutual-observation matrix between the object member i and its usable reference member k; Mki denotes a vector projection matrix regarding the object member i and its usable reference member k; and Nki denotes an object position projection matrix regarding the object member i and its usable reference member k.
  • 7. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the noise mutual-observation matrix described in step 8 has the following expression: Dki=MkiLki;wherein Dki denotes a noise mutual-observation matrix between the object member i and its usable reference member k; Mki denotes a vector projection matrix regarding the object member i and its usable reference member k; and Lki denotes a usable reference position projection matrix regarding the object member i and its usable reference member k.
  • 8. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the mutual-observation noise covariance described in step 8 has the following expression: Rki=Dkiσpk2DkiT+σRF2 wherein Rki denotes a mutual-observation noise covariance between the object member i and its usable reference member k; Dki denotes a noise mutual-observation matrix between the object member i and its usable reference member k; σRF2 denotes an error covariance of a relative distance measuring sensor, and σpk2: denotes a positioning error covariance of the usable reference member k.
  • 9. The online dynamic mutual-observation modeling method for UAV swarm collaborative navigation according to claim 1, wherein the online modeling error amount described in step 14 has the following expression: uki|Nki[δ{circumflex over (λ)}iδ{circumflex over (L)}iδĥi]T|wherein uki denotes an online modeling error amount regarding the object member i and its usable reference member k; Nki denotes an object position projection matrix regarding the object member i and its usable reference member k; and δ{circumflex over (λ)}i δ{circumflex over (L)}iδĥi respectively denote a longitude correction, a latitude correction, and a height correction of the position of the object member i.
Priority Claims (1)
Number Date Country Kind
201910699294.4 Jul 2019 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2020/105037 7/28/2020 WO 00