EVENT TRIGGERED INFORMATION EXCHANGE FOR DISTRIBUTED CONTROL OF NETWORKED MULTIAGENT SYSTEMS

Information

  • Patent Application
  • Publication Number
    20240127693
  • Date Filed
    June 29, 2023
  • Date Published
    April 18, 2024
Abstract
Methods and systems for information exchange of a vehicle in a networked multiagent system are disclosed. The methods and systems include: receiving a last neighbor dataset broadcasted by a neighbor vehicle; determining a current dataset based on the last neighbor dataset and a last vehicle dataset of the first vehicle; identifying a violation of an event-triggering condition by comparing a difference between the last vehicle dataset and the current dataset with a dynamic threshold; determining a transmission dataset being associated with the current dataset; and in response to the violation, broadcasting the transmission dataset to the neighbor vehicle. The dynamic threshold is defined by an exponentially decaying term and an error between a vehicle state of the first vehicle and a reference state. Other aspects, embodiments, and features are also claimed and described.
Description
SUMMARY

The following presents a simplified summary of one or more aspects of the present disclosure, to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the disclosure and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.


In some aspects of the present disclosure, methods, systems, and apparatus for information exchange of a vehicle in a multiagent system are disclosed. These methods, systems, and apparatus can include steps or components for managing and/or directing information exchange among members of a group of agents (e.g., devices, drones, vehicles, etc.) operating in a concerted fashion, in a more efficient and event-driven manner.


For example, in one aspect, a method is provided for information exchange of a first vehicle in a networked multiagent system, comprising: receiving a last neighbor dataset broadcasted by a neighbor vehicle; determining a current dataset based on the last neighbor dataset and a last vehicle dataset of the first vehicle; identifying a violation of an event-triggering condition by comparing a difference between the last vehicle dataset and the current dataset with a dynamic threshold, the dynamic threshold being defined by an exponentially decaying term and an error between a vehicle state of the first vehicle and a reference state; determining a transmission dataset being associated with the current dataset; and in response to the violation, broadcasting the transmission dataset to the neighbor vehicle. In some embodiments, such a method may be used to govern communication among multiple agents during a given operation, such as driving, flying, searching for objects, deliveries, etc., in a more efficient manner.


In another aspect, an apparatus is provided for providing information exchange among vehicles, the apparatus comprising: a processor; and a memory having stored thereon a set of instructions which, when executed by the processor, cause the processor to: receive a last neighbor dataset broadcasted by a neighbor vehicle; determine a current dataset based on the last neighbor dataset and a last vehicle dataset of the first vehicle; identify a violation of an event-triggering condition by comparing a difference between the last vehicle dataset and the current dataset with a dynamic threshold, the dynamic threshold being defined by an exponentially decaying term and an error between a vehicle state of the first vehicle and a reference state; determine a transmission dataset being associated with the current dataset; and in response to the violation, broadcast the transmission dataset to the neighbor vehicle.


These and other aspects of the disclosure will become more fully understood upon a review of the drawings and the detailed description, which follows. Other aspects, features, and embodiments of the present disclosure will become apparent to those skilled in the art upon reviewing the following description of specific, example embodiments of the present disclosure in conjunction with the accompanying figures. While features of the present disclosure may be discussed relative to certain embodiments and figures below, all embodiments of the present disclosure can include one or more of the advantageous features discussed herein. In other words, while one or more embodiments may be discussed as having certain advantageous features, one or more of such features may also be used in accordance with the various embodiments of the disclosure discussed herein. Similarly, while example embodiments may be discussed below as device, system, or method embodiments, it should be understood that such example embodiments can be implemented in various devices, systems, and methods.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram conceptually illustrating a multiagent system for information exchange according to some embodiments.



FIG. 2 is a flowchart illustrating an example process for information exchange of a vehicle in a multiagent system according to some embodiments.



FIG. 3 is an example multiagent system according to some embodiments.



FIG. 4 shows results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with ε=0.1.



FIG. 5 shows results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with ε=0.7.



FIG. 6 shows results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with ϕ0=1 and ϕf=0.



FIG. 7 shows results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with ϕ0=0.5 and ϕf=0.



FIG. 8 shows results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with ϕ0=0.5 and ϕf=0.2.



FIG. 9 shows results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with ϕ0=0.5, ϕf=0, γ1=0.7, γ2=2.5, and ε=0.7.



FIG. 10 shows results for time-varying desired command of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange.



FIG. 11A shows an experimental setup including four crazyflie nano-quadcopters, local positioning system, and a personal computer. FIG. 11B shows a crazyflie nano-quadcopter.



FIG. 12 shows an experimental setup according to some embodiments.



FIG. 13 shows experimental results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with γ1=0.7, γ2=2.5, ε=0.7, ϕ0=0.5, and ϕf=0.



FIG. 14 shows experimental results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with γ1=0.7, γ2=2.5, ε=0.7, ϕ0=0.5, and ϕf=0.05.



FIG. 15 shows experimental results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with γ1=0.7, γ2=2.5, ε=0.7, ϕ0=0.5, and ϕf=0.2.



FIG. 16 shows experimental results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with γ1=0.7, γ2=2.5, ε=0.7, ϕ0=0.5, and ϕf=0.



FIG. 17 shows experimental results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with γ1=0.7, γ2=2.5, ε=0.7, ϕ0=0.5, and ϕf=0.05.



FIG. 18 shows experimental results of an example event-triggering approach with sampled data exchange and solution-predictor curve exchange with γ1=0.7, γ2=2.5, ε=0.7, ϕ0=0.5, and ϕf=0.2.





DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the subject matter described herein may be practiced. The detailed description includes specific details to provide a thorough understanding of various embodiments of the present disclosure. However, it will be apparent to those skilled in the art that the various features, concepts and embodiments described herein may be implemented and practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form to avoid obscuring such concepts.


Technological advances in recent decades have enabled low-cost and small-in-size integrated systems with mobility, computing, communication, and sensing capabilities. Simultaneously, theoretical advances in these decades have established a strong foundation on how these integrated systems can work together through a network (hereinafter referred to as networked multiagent systems) to support a wide array of scientific, civilian, and military applications. Applications of networked multiagent systems include, for example, surveillance, reconnaissance, ground and air traffic management, payload and passenger transportation, customized task assignment, rapid internet delivery, and emergency response. One feature of networked multiagent systems is that they utilize inter-agent (i.e., local) information exchange to accomplish a given application with guaranteed closed-loop system stability. Yet, in the design and implementation of networked multiagent systems, it is desirable not only to guarantee closed-loop system stability but also to schedule inter-agent information exchange to prevent potential network overload and decrease wireless communication costs.


For stably scheduling information exchange in networked multiagent systems, in the example methods and systems disclosed herein, a new event-triggered distributed control architecture is disclosed. The new event-triggered distributed control architecture is predicated on a dynamic threshold, which involves an error signal between the state of an agent and the state of its reference model as well as an exponentially decaying term, to schedule inter-agent information exchange. The initial and final values of the exponentially decaying term help suppress network utilization at the beginning (when errors between the last shared and current data are high) and toward the end, when the final value is chosen not to be zero.


Additionally, the example methods and systems disclosed herein include a method referred to as the solution-predictor curve. In particular, this method approximates the solution trajectory related to information exchange, where every agent stores this curve and distributively exchanges its parameters when an event occurs. Its feature is that each agent utilizes the resulting solution trajectory over the time interval until the next event occurs, which has the capability to further reduce inter-agent information exchange as compared with the sampled data case.


Further, a system-theoretical stability analysis of the proposed event-triggered distributed control architecture is given, which captures both the sampled data and solution-predictor curve cases. In some examples, when the final value of the exponentially decaying term is chosen not to be zero, boundedness of the closed-loop system trajectories is shown. When its final value is chosen to be zero, asymptotic stability of the closed-loop system trajectories is shown. In addition, practical guidelines are disclosed on the tuning of parameters through illustrative numerical examples. In addition, the efficacy of the theoretical results is demonstrated in laboratory-level experiments. Both numerical examples and experiments show that the solution-predictor curve drastically decreases the number of events, and hence the network utilization, compared to what can be achieved by exchanging sampled data.



FIG. 1 is a block diagram conceptually illustrating a multiagent system for information exchange. For example, the multiagent system 100 can include one or more agents (i.e., follower agents) 110, 110n and one or more leader agents 130, 130n. In some examples, the one or more agents 110, 110n and the one or more leader agents 130, 130n can be connected via a communication network 150 with a connected and undirected graph.


In some examples, the agent 110, 110n can transmit or receive information (e.g., a current dataset of the agent 110, 110n) over a communication network 150. In some examples, the communication network 150 can be any suitable communication network or combination of communication networks. For example, the communication network 150 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, a 5G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, NR, etc.), a wired network, etc. In some embodiments, communication network 150 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks. Communications links between agents 110, 110n, between agents 110, 110n and leader agents 130, 130n, and/or between leader agents 130, 130n can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, etc.


In further examples, the agent 110, 110n and/or a leader agent 130, 130n can be a vehicle (e.g., a ground vehicle, an aerial vehicle, an underwater vehicle, a ship, a space vehicle, an autonomous vehicle, a motor vehicle, a car, a train, an unmanned aerial vehicle, a rocket, or a missile, etc.) or any other suitable apparatus or means (e.g., a computing/circuit/electrical node, a transceiver, etc.). In some examples the agents may be of a uniform type (e.g., a fleet of drones operating in a concerted fashion for searching a remote area; a group of autonomous vehicles driving in a coordinated way; etc.) or may be of multiple types. And, the agents may be performing a highly coordinated, single goal operation or may be more independently performing operations.


The agent 110, 110n can include any suitable computing device or combination of devices, such as a processor (including an ASIC, DSP, FPGA, or other processing component), a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a computing device integrated into the vehicle, a camera, a robot, a virtual machine being executed by a physical computing device, etc.


In further examples, the agent 110, 110n can include a processor 112, a display 114, one or more inputs 116, one or more communication systems 118, and/or memory 120. In some embodiments, the processor 112 can be any suitable hardware processor or combination of processors, such as a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a microcontroller (MCU), etc. In some embodiments, the display 114 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, an infotainment screen, etc. In some embodiments, the input(s) 116 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, etc.


In further examples, the communications system 118 can include any suitable hardware, firmware, and/or software for communicating information over communication network 150 and/or any other suitable communication networks. For example, the communications system 118 can include one or more transceivers, one or more communication chips and/or chip sets, etc. In a more particular example, the communications system 118 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, etc. to transmit the status of the agent 110, 110n and/or receive the status of the one or more neighboring agents 110, 110n, 130, 130n.


In further examples, the memory 120 can include any suitable storage device or devices that can be used to store status of the agent 110, 110n, status of the one or more neighboring agents 110, 110n, 130, 130n, a solution predictor curve of the agent 110, 110n, one or more solution predictor curves of the one or more neighboring agents 110, 110n, 130, 130n, data, instructions, values, etc., that can be used, for example, by the processor 112 to perform information exchange tasks via communications system 118, etc. The memory 120 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, the memory 120 can include random access memory (RAM), read-only memory (ROM), electronically-erasable programmable read-only memory (EEPROM), one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, etc. In some embodiments, the memory 120 can have encoded thereon a computer program for controlling operation of the computing device 110. For example, in such embodiments, the processor 112 can execute at least a portion of the computer program to perform one or more data processing tasks described herein, transmit/receive information via the communications system 118, etc. As another example, processor 112 can execute at least a portion of process 200 described below in connection with FIG. 2.


In even further examples, the multiagent system can include one or more leader agents 130, 130n. For example, a leader agent may be a lead drone or vehicle being followed by others, or may be an agent having different resources than the other agents (e.g., more durable, higher/longer power life, better communication with a remote station, etc.), or may simply be designated as the leader. A leader agent 130, 130n can include a processor 132, a display 134, input(s) 136, a communication system 138, and/or a memory 140. In some examples, the processor 132, the display 134, the input(s) 136, the communication system 138, and/or the memory 140 of the leader 130, 130n are substantially similar to those in the agent 110, 110n. In addition, the leader 130, 130n can receive a command (c(t)) which the one or more agents 110, 110n and the one or more leader agents 130, 130n are to follow. In some examples, the one or more agents 110, 110n (i.e., follower agents) may not access or know the command (c(t)). In other examples, the command (c(t)) can be available to the one or more agents 110, 110n.



FIG. 2 is a flowchart illustrating an example process for information exchange. In some examples, the process 200 for information exchange in a multiagent system may be carried out by the agent 110, 110n and/or the leader agent 130, 130n illustrated in FIG. 1. In some examples, the multiagent system can include multiple homogeneous or heterogeneous agents that communicate with each other. An agent 110, 110n, 130, 130n in the multiagent system can include a vehicle (e.g., a ground vehicle, an aerial vehicle, an underwater vehicle, a ship, a space vehicle, an autonomous vehicle, a motor vehicle, a car, a train, an unmanned aerial vehicle, a rocket, or a missile, etc.) operating in a group or team. However, it should be appreciated that the agent 110, 110n, 130, 130n can be any other suitable apparatus or means (e.g., a computing/circuit/electrical node, a robot, a power node in a power grid) for carrying out the functions or algorithm described below. Additionally, although the steps of the flowchart 200 are presented in a sequential manner, in some examples, one or more of the steps may be performed in a different order than presented, in parallel with another step, or bypassed.


Before describing the process 200, the notation and some basic definitions from graph theory used throughout the process 200 are explained. In particular, ℕ stands for the set of natural numbers, ℤ stands for the set of integers, ℝ stands for the set of real numbers, ℝn stands for the set of n×1 real column vectors, ℝn×m stands for the set of n×m real matrices, ℝ+ (respectively, ℝ̄+) stands for the set of positive (respectively, nonnegative) real numbers, 𝕊+n×n (respectively, 𝕊̄+n×n) stands for the set of n×n positive-definite (respectively, nonnegative-definite) real matrices, In stands for the n×n identity matrix, 0n stands for the n×1 zero vector, 0n×m stands for the n×m zero matrix, and "≜" stands for equality by definition. (·)T is used for the transpose, (·)−1 for the inverse, λ̄(A) (respectively, λ̲(A)) for the maximum (respectively, minimum) eigenvalue of the matrix A∈ℝn×n, ||·||2 for the Euclidean norm, and

$$\|A\|_2 \triangleq \big(\bar{\lambda}(A^{\mathrm{T}}A)\big)^{1/2}$$

for the induced 2-norm of the matrix A∈ℝn×m.


Graph theory can be adopted in the networked multiagent systems to encode inter-agent information exchanges. Specifically, an undirected graph 𝒢 is defined by a set 𝒱𝒢={1, . . . , N} of nodes and a set ℰ𝒢⊂𝒱𝒢×𝒱𝒢 of edges. If (i, j)∈ℰ𝒢, then the nodes i and j are neighbors and i∼j indicates the neighboring relation. The degree of a node is defined by the number of its neighbors. Letting di be the degree of node i, the degree matrix of a graph 𝒢, 𝒟(𝒢)∈ℝN×N, has the form 𝒟(𝒢)≜diag(d) with d=[d1, . . . , dN]. In some examples, a graph 𝒢 is connected when there exists a finite path i0i1 . . . iL with ik−1∼ik and k=1, . . . , L between any two distinct nodes. The adjacency matrix of a graph 𝒢, 𝒜(𝒢)∈ℝN×N, has the form [𝒜(𝒢)]ij=1 if (i,j)∈ℰ𝒢 and [𝒜(𝒢)]ij=0 otherwise. The Laplacian matrix of a graph, ℒ(𝒢)∈ℝN×N, then has the form ℒ(𝒢)≜𝒟(𝒢)−𝒜(𝒢). In some examples, a connected and undirected graph 𝒢 is considered, where nodes and edges respectively represent agents and inter-agent information exchange.


In some examples, the following lemmas respectively can be stated. Lemma 2.1: The spectrum of a Laplacian matrix of a connected and undirected graph is ordered as 0=λ1(ℒ(𝒢))≤λ2(ℒ(𝒢))≤ . . . ≤λN(ℒ(𝒢)), where 1N is the eigenvector of the zero eigenvalue. Lemma 2.2: Let 𝒦=diag([k1, . . . , kN]), ki∈{0,1}, i=1, . . . , N, and consider that at least one diagonal entry of 𝒦 is nonzero. For a connected and undirected graph 𝒢, ℱ(𝒢)≜ℒ(𝒢)+𝒦 is then a positive-definite matrix. In some examples, Lemma 2.1 can be used for the result in Lemma 2.2.
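As a purely illustrative sketch (not part of the original disclosure; the variable names and the use of NumPy are assumptions), the following Python code builds the degree, adjacency, and Laplacian matrices for the four-agent line graph of FIG. 3 and numerically checks the positive definiteness asserted by Lemma 2.2 when the first agent is chosen as the leader.

```python
import numpy as np

# Undirected line graph of FIG. 3 (agents 1-2, 2-3, 3-4), using 0-based indices.
N = 4
edges = [(0, 1), (1, 2), (2, 3)]

A = np.zeros((N, N))                      # adjacency matrix A(G)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

D = np.diag(A.sum(axis=1))                # degree matrix D(G) = diag(d)
L = D - A                                 # Laplacian matrix L(G) = D(G) - A(G)

K = np.diag([1.0, 0.0, 0.0, 0.0])         # agent 1 is the leader (k_1 = 1, others 0)
F = L + K                                 # F(G) = L(G) + K from Lemma 2.2

print("eigenvalues of L:", np.linalg.eigvalsh(L))  # one zero eigenvalue (Lemma 2.1)
print("eigenvalues of F:", np.linalg.eigvalsh(F))  # all positive (Lemma 2.2)
```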


At step 202, the process 200 for information exchange of an agent (e.g., a first vehicle) in a multiagent system can receive a last neighbor dataset broadcasted by a neighbor agent (e.g., a neighbor vehicle). In some examples, the last neighbor dataset is the dataset most recently received from the neighbor agent. In some examples, the networked multiagent system can include N agents, where the dynamics of agent i, i=1, . . . , N, satisfies:






$$\dot{x}_i(t)=u_i(t), \quad x_i(0)=x_{i0}, \tag{1}$$


with xi(t)∈ℝ being the state and ui(t)∈ℝ being the control signal. For the results of this disclosure, the control signal of agent i is proposed as:






$$u_i(t)=-\gamma_1\big(x_i(t)-\hat{\mu}_i(t)\big), \tag{2}$$


with γ1∈ℝ+ being a design parameter. In addition, μ̂i(t)∈ℝ is the latest broadcast of μi(t)∈ℝ of agent i to its neighbors through an event-triggering approach (described in connection with step 206 below). Thus, the last dataset (e.g., the last vehicle dataset) can include a dataset previously transmitted to the agent (e.g., the first vehicle) in response to the violation, which is described further in connection with step 206 below. Similarly, μ̂j(t)∈ℝ is the latest broadcast of μj(t)∈ℝ of neighbor agent j (e.g., the neighbor vehicle) to agent i (e.g., the first vehicle) through the event-triggering approach.
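For illustration only, a minimal Python sketch of one update of the single-integrator dynamics (1) under the control signal (2) follows, assuming a simple forward-Euler discretization; the function name, step size, and default parameter value are illustrative assumptions rather than part of the disclosure.

```python
def step_agent_state(x_i, mu_hat_i, gamma1=0.7, dt=0.1):
    """One forward-Euler step of the agent dynamics (1) with control signal (2).

    x_i      : current state x_i(t) of agent i
    mu_hat_i : latest broadcast value of mu_i(t), i.e., the last vehicle dataset
    gamma1   : design parameter gamma_1 from (2)
    dt       : integration step (a 10 Hz rate is used in the examples below)
    """
    u_i = -gamma1 * (x_i - mu_hat_i)   # control signal (2)
    return x_i + dt * u_i              # x_i_dot = u_i, per (1)
```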


At step 204, the process 200 determines a current dataset (μi(t)) based on the last neighbor dataset and a last dataset (e.g., a last vehicle dataset) of the agent (e.g., the first vehicle). In some examples, the last dataset (μ̂i(t)) is a dataset most recently broadcasted to the neighbor agent. In some examples, the current dataset (μi(t)) can be referred to as an auxiliary or virtual network signal, which is used in the event-triggering condition. In further examples, the sole purpose of the current dataset (μi(t)) can be its use in the event-triggering condition. The network can be constructed based on the real multi-agent system network, which represents the actual physical system. In some examples, the trajectory information (e.g., the command c(t)) is only available to the leader agent(s) in the network. For example, the current dataset can be defined as:





$$\dot{\mu}_i(t)=-\gamma_2\Big[\textstyle\sum_{i\sim j}\big(\hat{\mu}_i(t)-\hat{\mu}_j(t)\big)+k_i\big(\hat{\mu}_i(t)-c(t)\big)\Big], \quad \mu_i(0)=\mu_{i0}. \tag{3}$$


Here, γ2∈ℝ+ is also a design parameter, ki∈{0,1} denotes whether agent i is a leader or a follower (i.e., ki=0 for follower agents and ki=1 for leader agents), c(t)∈ℝ is a command, and μ̂j(t) for i∼j denotes the latest broadcasted value of μj(t) from agent j to its neighbors through an event-triggering approach (details below). In some examples, the dynamics of the current dataset (μi(t)) given by (3) can depend on the information received by agent i from its neighbors (i∼j).
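A minimal Python sketch of the right-hand side of (3) is given below for a scalar current dataset; it is illustrative only, and the function and variable names are assumptions. A leader agent uses ki=1 together with the command c(t), while a follower agent uses ki=0 and relies only on the last neighbor datasets.

```python
def mu_dot(mu_hat_i, mu_hat_neighbors, k_i, c_t=None, gamma2=2.5):
    """Right-hand side of (3) for the current dataset mu_i(t).

    mu_hat_i         : last vehicle dataset (latest broadcast of agent i)
    mu_hat_neighbors : last neighbor datasets mu_hat_j(t) for all j ~ i
    k_i              : 1 for a leader agent, 0 for a follower agent
    c_t              : command c(t); only used when k_i = 1
    gamma2           : design parameter gamma_2 from (3)
    """
    consensus = sum(mu_hat_i - mu_hat_j for mu_hat_j in mu_hat_neighbors)
    tracking = k_i * (mu_hat_i - c_t) if k_i else 0.0
    return -gamma2 * (consensus + tracking)

# The current dataset itself is obtained by integrating this rate, e.g.:
# mu_i = mu_i + dt * mu_dot(mu_hat_i, mu_hat_neighbors, k_i, c_t)
```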



FIG. 3 graphically illustrates an example problem formulation. Specifically, consider 4 agents with a leader agent 302, without loss of generality. The other agents are follower agents 304. Here, a distributed event-triggered control architecture can be established. The distributed event-triggered control architecture is predicated on a dynamic threshold and solution-predictor curve exchange, such that this architecture reduces inter-agent communication and also allows all agents to follow a desired command, c(t), available only to the leader agents (i.e., agent 1 in FIG. 3).


In some examples, the first vehicle can be a leader vehicle, and the current dataset can be determined further based on a command. For example, the current dataset can be defined through: μ̇i(t)=−γ2[Σi∼j(μ̂i(t)−μ̂j(t))+μ̂i(t)−c(t)], where μi(t) is the current dataset, γ2 is a design parameter, μ̂i(t) is the last vehicle dataset, μ̂j(t) is the last neighbor dataset, and c(t) is the command. In further examples, the first vehicle can be a follower vehicle. In these examples, the current dataset can be defined through: μ̇i(t)=−γ2[Σi∼j(μ̂i(t)−μ̂j(t))], where μi(t) is the current dataset, γ2 is a design parameter, μ̂i(t) is the last vehicle dataset, and μ̂j(t) is the last neighbor dataset. In some examples, using the control signal of agent i given by (2), (1) can be rewritten as:






$$\dot{x}_i(t)=-\gamma_1\big(x_i(t)-\hat{\mu}_i(t)\big). \tag{4}$$


In some examples, the agent i dynamics given by (4) use the latest value of μi(t) broadcasted to its neighbors (i.e., μ̂i(t)). Yet, agent i has continuous access to its own value of μi(t) through (3). Ideal continuous information exchange dynamics can be formalized and defined as:






$$\dot{x}_{mi}(t)=-\gamma_1\big(x_{mi}(t)-\mu_i(t)\big), \quad x_{mi}(0)=x_{mi0}, \tag{5}$$


where xmi(t)∈ℝ. Henceforth, (5) can be referred to as the ideal reference model of agent i.
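The ideal reference model (5) can be stepped in the same way as the agent dynamics; the short sketch below is a hypothetical forward-Euler discretization, shown only for illustration.

```python
def step_reference_model(x_mi, mu_i, gamma1=0.7, dt=0.1):
    """One forward-Euler step of the ideal reference model (5), which uses the
    continuously available mu_i(t) rather than its latest broadcast value."""
    return x_mi + dt * (-gamma1 * (x_mi - mu_i))
```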


Referring again to FIG. 2, at step 206, the process 200 identifies a violation of an event-triggering condition by comparing a difference between the last vehicle dataset and the current dataset with a dynamic threshold. In some examples, the dynamic threshold can be defined by an exponentially decaying term and an error between a vehicle state of the first vehicle and a reference state. For example, the event-triggering condition can be defined as:





$$\|\tilde{\mu}_i(t)\|_2 \le \varepsilon\|e_i(t)\|_2+\phi(t), \tag{6}$$


where ε∈ℝ+ is a design parameter, μ̃i(t)≜μ̂i(t)−μi(t)∈ℝ is the difference between the latest broadcasted value μ̂i(t) (i.e., the last vehicle dataset) and the current value of μi(t) (i.e., the current dataset) of agent i, and ei(t)≜xi(t)−xmi(t)∈ℝ is the error between the state xi(t) of agent i (e.g., the vehicle state of the first vehicle) and its reference model state xmi(t) (e.g., the reference state). Thus, the event-triggering condition can be rewritten as: ||μ̂i(t)−μi(t)||2≤ε||ei(t)||2+ϕ(t). In further examples, ϕ(t)∈ℝ+ is an exponentially decaying term satisfying:





$$\dot{\phi}(t)=-\kappa\big(\phi(t)-\phi_f\big), \quad \phi(0)=\phi_0. \tag{7}$$


In (7), κ∈ℝ+ denotes the time constant of ϕ(t) and ϕf denotes the final value of ϕ(t). In some examples, the final value to which ϕ(t) converges, ϕf, in (7) can be practically chosen such that ϕf<ϕ0. This condition is desirable to keep ϕ(t) exponentially decaying to ϕf from ϕ0. In addition, if the final value of ϕ(t), ϕf, is set to zero, then asymptotic stability is achieved. On the other hand, if ϕf∈(0,ϕ0), then boundedness is achieved.
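A minimal Python sketch of evaluating the event-triggering condition (6) with the dynamic threshold built from (7) is shown below; the closed-form expression for ϕ(t), the default κ value, and the function names are illustrative assumptions rather than values from the disclosure.

```python
import math

def phi(t, phi0=0.5, phi_f=0.0, kappa=0.5):
    """Closed-form solution of (7): phi(t) decays exponentially from phi0 to phi_f.
    kappa is an illustrative time constant, not a value taken from the disclosure."""
    return phi_f + (phi0 - phi_f) * math.exp(-kappa * t)

def event_violated(mu_hat_i, mu_i, x_i, x_mi, t, eps=0.7):
    """Returns True when the event-triggering condition (6) is violated, i.e., when
    |mu_hat_i - mu_i| exceeds the dynamic threshold eps*|e_i| + phi(t)."""
    mu_tilde = abs(mu_hat_i - mu_i)   # difference to the last vehicle dataset
    e_i = abs(x_i - x_mi)             # error between vehicle state and reference state
    return mu_tilde > eps * e_i + phi(t)
```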


At step 208, the process 200 determines a transmission dataset being associated with the current dataset. In some examples, two scenarios can be considered for the event-triggering condition given by (6). In some scenarios, the transmission dataset can include sampled data of the current dataset. When the event-triggering condition given by (6) is violated, agent i (e.g., the first vehicle) broadcasts sampled data (i.e., the transmission dataset μ̂i(t)) of its value μi(t) (i.e., μ̂i(t)=μi(tdi) for t∈[tdi,t(d+1)i)) through a zero-order-hold operator to its neighbors. In some examples, the sequences {tdi}d∈ℕ for i=1, . . . , N contain all time instants when (6) is violated.


In other scenarios, the transmission dataset can include a parameter for a solution-predictor curve to estimate the current dataset. When the event-triggering rule given by (6) is violated, agent i broadcasts a solution-predictor curve (i.e., 𝒮di(t)) of the current dataset (i.e., μi(t)) to its neighbors. In this scenario, the solution-predictor curve serves as the transmission dataset (i.e., μ̂i(t)) to be broadcasted. This can be expressed as: μ̂i(t)=𝒮di(t) for t∈[tdi,t(d+1)i), where the sequences {tdi}d∈ℕ for i=1, . . . , N contain all time instants when (6) is violated.


For example, in these scenarios, the dynamics of the current dataset of agent i (e.g., the first vehicle) are given by (3). When an event occurs (i.e., (6) is violated) at time t=tdi, the solution of (3) using the initial condition μi(tdi) and the inputs μ̂j(tdj) and c(tdi) can be estimated as:











$$\mathcal{S}_{di}(t)=\left(\hat{\mu}_{i0}-\frac{\sum_{i\sim j}\hat{\mu}_j(t_{dj})+k_i\,c(t_{di})}{d_i+k_i}\right)e^{-\gamma_2(d_i+k_i)(t-t_0)}+\frac{\sum_{i\sim j}\hat{\mu}_j(t_{dj})+k_i\,c(t_{di})}{d_i+k_i}. \tag{8}$$







In some examples, (8) predicts the evolution of μi(t) for agent i over time. The parameter (degree) di for agent i can be the number of neighbor agents of agent i. This depends on the network topology. Network topology and leader location have an effect on the total number of events. Every agent in the networked multiagent system stores a function in the form given by (8) for each of its neighbors. Thus, the parameter of the transmission dataset for the solution-predictor curve of agent i (e.g., the first vehicle) can include an initial condition μi(tdi) and a solution-predictor parameter defined by the last neighbor dataset μ̂j(tdj) of the agent i (e.g., the first vehicle). In further examples, the solution-predictor parameter can be defined further by a degree di and a command c(tdi) (when the agent i is a leader agent). When an event occurs, agent i broadcasts to its neighbors μ̂i0=μi(tdi), t0=tdi, and the solution-predictor parameter (Σi∼j μ̂j(tdj)+ki c(tdi))/(ki+di). In some examples, the event-triggering condition can be used for both sampled-data exchanges and solution-predictor curve exchanges. In further examples, the solution-predictor curve can estimate the value of μj(t) for agent i to some extent. Thus, the solution-predictor curve can reduce the number of events to be triggered and the frequency at which events occur.
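A minimal sketch of the solution-predictor curve (8) for a scalar current dataset is given below; it is illustrative only, and it assumes the receiving neighbor stores the broadcast values (μ̂i0, t0, and the solution-predictor parameter) and evaluates the curve locally until the next event occurs.

```python
import math

def solution_predictor(t, mu_i0, t0, mu_hat_neighbors, k_i, c_t0=0.0, gamma2=2.5):
    """Solution-predictor curve S_di(t) of (8), valid for t in [t_di, t_(d+1)i).

    mu_i0            : mu_i(t_di), value of the current dataset at the event time
    t0               : event time t_di
    mu_hat_neighbors : latest broadcast values mu_hat_j(t_dj) of the neighbors of agent i
    k_i              : 1 for a leader agent, 0 for a follower agent
    c_t0             : command c(t_di); ignored for a follower agent since k_i = 0
    gamma2           : design parameter gamma_2 from (3)
    """
    d_i = len(mu_hat_neighbors)                                # degree of agent i
    bias = (sum(mu_hat_neighbors) + k_i * c_t0) / (d_i + k_i)  # broadcast solution-predictor parameter
    return (mu_i0 - bias) * math.exp(-gamma2 * (d_i + k_i) * (t - t0)) + bias
```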


At step 210, the process 200 broadcasts the transmission dataset (i.e., μ̂i(t)) to the neighbor agent (e.g., the neighbor vehicle) in response to the violation. For example, when the agent is agent 1 (302) in FIG. 3, agent 1 transmits the transmission dataset to its neighbor agent (i.e., agent 2). Similarly, when the agent is agent 2 (304), agent 2 transmits the transmission dataset to its neighbor agents (i.e., agent 1 and agent 3). In some examples of the process 200, agents can be modeled with a single integrator, which simplifies the expression of the solution-predictor curve. In further examples of the process 200, agents can be modeled with a linear system, and hence the expression of the solution-predictor curve can change.


System-Theoretical Analysis: To establish the stability properties of the event-triggering approach for networked multiagent systems, the aggregated state vector can be defined as x(t)≜[x1(t), . . . , xN(t)]∈ℝN. In addition, the aggregated reference model state vector can be defined as xm(t)≜[xm1(t), . . . , xmN(t)]∈ℝN, the aggregated error vector as e(t)≜[e1(t), . . . , eN(t)]∈ℝN, the aggregated μ(t) vector as μ(t)≜[μ1(t), . . . , μN(t)]∈ℝN, and the aggregated μ̂(t) vector as μ̂(t)≜[μ̂1(t), . . . , μ̂N(t)]∈ℝN. Then, the overall system error dynamics can be written as:












$$\dot{e}(t)=\dot{x}(t)-\dot{x}_m(t)=-\gamma_1\big(x(t)-\hat{\mu}(t)\big)-\big(-\gamma_1\big(x_m(t)-\mu(t)\big)\big)=-\gamma_1 e(t)+\gamma_1\tilde{\mu}(t), \tag{9}$$







where μ̃(t)≜μ̂(t)−μ(t)∈ℝN.


Next, overall dynamics of μ(t) can be rewritten as:











$$\dot{\mu}(t)=-\gamma_2\big[\mathcal{L}\hat{\mu}(t)+\mathcal{K}\big(\hat{\mu}(t)-c(t)\big)\big]=-\gamma_2\big[(\mathcal{L}+\mathcal{K})\hat{\mu}(t)-\mathcal{K}c(t)\big]=-\gamma_2\big[\mathcal{F}\hat{\mu}(t)-\mathcal{K}c(t)\big]. \tag{10}$$







Here ℒ≜ℒ(𝒢)∈ℝN×N is the Laplacian matrix of the connected and undirected graph, and 𝒦∈ℝN×N and ℱ∈ℝN×N are defined in Lemma 2.2. Now, the auxiliary ideal dynamics considering continuous inter-agent information exchange can be defined as:





$$\dot{\mu}_m(t)=-\gamma_2\big[\mathcal{F}\mu_m(t)-\mathcal{K}c(t)\big], \tag{11}$$


where μm(t)∈ℝN. Let the error between the dynamics given by (10) and (11) be defined as μ̄(t)≜μ(t)−μm(t)∈ℝN; the error dynamics can then be written as:






$$\dot{\bar{\mu}}(t)=-\gamma_2\mathcal{F}\bar{\mu}(t)-\gamma_2\mathcal{F}\tilde{\mu}(t). \tag{12}$$


Now, the following lemma for the upcoming system-theoretical stability analysis of the event-triggered distributed control architecture can be presented. Lemma 4.1: Consider the agent-wise event-triggering rule given by (6). For the overall networked multiagent system, the following inequality then holds:





$$\|\tilde{\mu}(t)\|_2 \le \varepsilon\|e(t)\|_2+\sqrt{N}\,\phi(t). \tag{13}$$


Proof: Considering the agent-wise event-triggering rule given by (6), one can write:










$$\begin{bmatrix}\|\tilde{\mu}_1(t)\|_2\\ \vdots\\ \|\tilde{\mu}_N(t)\|_2\end{bmatrix}\le\varepsilon\begin{bmatrix}\|e_1(t)\|_2\\ \vdots\\ \|e_N(t)\|_2\end{bmatrix}+\mathbf{1}_N\,\phi(t). \tag{14}$$







In some examples, all elements of the vectors on both sides of the inequality in (14) are nonnegative. Thus, taking the norm of both sides of (14) yields:













$$\left\|\begin{bmatrix}\|\tilde{\mu}_1(t)\|_2\\ \vdots\\ \|\tilde{\mu}_N(t)\|_2\end{bmatrix}\right\|_2\le\left\|\varepsilon\begin{bmatrix}\|e_1(t)\|_2\\ \vdots\\ \|e_N(t)\|_2\end{bmatrix}+\mathbf{1}_N\,\phi(t)\right\|_2\le\varepsilon\left\|\begin{bmatrix}\|e_1(t)\|_2\\ \vdots\\ \|e_N(t)\|_2\end{bmatrix}\right\|_2+\sqrt{N}\,\phi(t). \tag{15}$$







When applying the fact that ||[||μ̃1(t)||2, . . . , ||μ̃N(t)||2]T||2=||[μ̃1(t), . . . , μ̃N(t)]T||2 and ||[||e1(t)||2, . . . , ||eN(t)||2]T||2=||[e1(t), . . . , eN(t)]T||2 in (15), the result in (13) becomes immediate.


Then, 𝒪 can be defined as:









$$\mathcal{O}=\begin{bmatrix}\gamma_1\left(1-\varepsilon-\dfrac{1}{\bar{\sigma}_1}\right) & 0 & 0\\ 0 & \gamma_2\rho_1\left(\underline{\lambda}(\mathcal{F})-\bar{\rho}_1\right) & 0\\ 0 & 0 & \bar{\rho}_2\end{bmatrix}, \tag{16}$$







where σ̄1, ρ̄1, and ρ̄2 are free positive parameters to ensure the positive definiteness of 𝒪, which is guaranteed when all its diagonal entries Dk(𝒪), k=1, 2, 3, given by:












$$D_1(\mathcal{O})=\gamma_1\left(1-\varepsilon-\frac{1}{\bar{\sigma}_1}\right), \tag{17}$$

$$D_2(\mathcal{O})=\gamma_2\rho_1\left(\underline{\lambda}(\mathcal{F})-\bar{\rho}_1\right), \tag{18}$$

$$D_3(\mathcal{O})=\bar{\rho}_2, \tag{19}$$
)







are positive. Specifically, from (17), D1(𝒪)>0 imposes the condition ε<1−1/σ̄1. Observe that σ̄1 can be chosen judiciously large such that 1−1/σ̄1≈1 holds. In addition, one can always choose ρ̄1 to be smaller than the minimum eigenvalue of ℱ such that D2(𝒪) in (18) is positive. Finally, the positiveness of D3(𝒪) in (19) automatically holds.


Now, the system-theoretical stability analysis of the architecture can be presented. Theorem 4.1: Consider a networked multiagent system consisting of N agents with the dynamics of agent i, i=1, . . . , N, given by (1), (2), (3), and the reference model given by (5). In addition, consider the event-triggering rule for scheduling inter-agent information exchange given by (6) and (7) subject to a) sampled data and b) solution-predictor curves. When ε<1/√2, the solution (e(t),μ̄(t),ϕ(t)) of the closed-loop dynamical system is stable and limt→∞(e(t),μ̄(t),ϕ(t))=(0,0,0) for ϕf=0, and it is bounded for ϕf∈(0,ϕ0).


Proof. Consider the Lyapunov-like function candidate given by:











$$\mathcal{V}(e,\bar{\mu},\phi)=\frac{1}{2}e^{\mathrm{T}}e+\frac{1}{2}\rho_1\bar{\mu}^{\mathrm{T}}\bar{\mu}+\frac{1}{2}\rho_2\phi^2, \tag{20}$$







where ρ1∈ℝ+ and ρ2∈ℝ+ are free parameters. Note that 𝒱(0,0,0)=0, 𝒱(e,μ̄,ϕ)>0 for (e,μ̄,ϕ)≠(0,0,0), and 𝒱(e,μ̄,ϕ) is radially unbounded. Taking the time derivative of (20) along the system trajectories yields:






$$\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))=e^{\mathrm{T}}(t)\dot{e}(t)+\rho_1\bar{\mu}^{\mathrm{T}}(t)\dot{\bar{\mu}}(t)+\rho_2\phi(t)\dot{\phi}(t). \tag{21}$$


Next, using (7), (9), and (12) in (21), one can write






$$\begin{aligned}\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))&=-\gamma_1 e^{\mathrm{T}}(t)e(t)+\gamma_1 e^{\mathrm{T}}(t)\tilde{\mu}(t)-\gamma_2\rho_1\bar{\mu}^{\mathrm{T}}(t)\mathcal{F}\bar{\mu}(t)-\gamma_2\rho_1\bar{\mu}^{\mathrm{T}}(t)\mathcal{F}\tilde{\mu}(t)-\rho_2\kappa\phi^2(t)+\rho_2\kappa\phi(t)\phi_f\\ &\le-\gamma_1\|e(t)\|_2^2+\gamma_1\|e(t)\|_2\|\tilde{\mu}(t)\|_2-\gamma_2\rho_1\underline{\lambda}(\mathcal{F})\|\bar{\mu}(t)\|_2^2+\gamma_2\rho_1\bar{\lambda}(\mathcal{F})\|\bar{\mu}(t)\|_2\|\tilde{\mu}(t)\|_2-\rho_2\kappa\phi^2(t)+\rho_2\kappa\phi(t)\phi_f.\end{aligned} \tag{22}$$


Next, note that the event-triggering rule given by (6), which yields (13) based on Lemma 4.1, holds for both a) sampled data and b) solution-predictor curves. Hence, using (13) in (22), one can write:






$$\begin{aligned}\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))\le&-\gamma_1\|e(t)\|_2^2-\gamma_2\rho_1\underline{\lambda}(\mathcal{F})\|\bar{\mu}(t)\|_2^2-\rho_2\kappa\phi^2(t)+\rho_2\kappa\phi(t)\phi_f\\ &+\gamma_1\varepsilon\|e(t)\|_2^2+\sqrt{N}\gamma_1\phi(t)\|e(t)\|_2+\gamma_2\rho_1\varepsilon\bar{\lambda}(\mathcal{F})\|\bar{\mu}(t)\|_2\|e(t)\|_2+\sqrt{N}\gamma_2\rho_1\bar{\lambda}(\mathcal{F})\phi(t)\|\bar{\mu}(t)\|_2.\end{aligned} \tag{23}$$


Applying Young's inequality, $ab\le\tfrac{1}{2}\left(\sigma^{-1}a^2+\sigma b^2\right)$, to the cross terms involving ϕ(t)||e(t)||2, ||μ̄(t)||2||e(t)||2, and ϕ(t)||μ̄(t)||2 in (23) yields:












$$\begin{aligned}\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))\le&-\gamma_1(1-\varepsilon)\|e(t)\|_2^2-\gamma_2\rho_1\underline{\lambda}(\mathcal{F})\|\bar{\mu}(t)\|_2^2-\rho_2\kappa\phi^2(t)+\rho_2\kappa\phi(t)\phi_f\\ &+\frac{1}{2\sigma_1}\|e(t)\|_2^2+\frac{\sigma_1}{2}\gamma_1^2 N\phi^2(t)+\frac{1}{2\sigma_2}\|e(t)\|_2^2+\frac{\sigma_2}{2}\gamma_2^2\rho_1^2\varepsilon^2\bar{\lambda}^2(\mathcal{F})\|\bar{\mu}(t)\|_2^2\\ &+\frac{\sigma_3}{2}\|\bar{\mu}(t)\|_2^2+\frac{1}{2\sigma_3}N\gamma_2^2\rho_1^2\bar{\lambda}^2(\mathcal{F})\phi^2(t),\end{aligned} \tag{24}$$







with σ1∈ℝ+, σ2∈ℝ+, and σ3∈ℝ+ being free design parameters. Then, it follows:











$$\begin{aligned}\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))\le&-\gamma_1\left(1-\varepsilon-\frac{1}{2\gamma_1\sigma_1}-\frac{1}{2\gamma_1\sigma_2}\right)\|e(t)\|_2^2+\rho_2\kappa\phi(t)\phi_f\\ &-\left(\gamma_2\rho_1\underline{\lambda}(\mathcal{F})-\frac{\sigma_2}{2}\gamma_2^2\rho_1^2\varepsilon^2\bar{\lambda}^2(\mathcal{F})-\frac{\sigma_3}{2}\right)\|\bar{\mu}(t)\|_2^2\\ &-\left(\rho_2\kappa-\frac{\sigma_1}{2}\gamma_1^2 N-\frac{1}{2\sigma_3}N\gamma_2^2\rho_1^2\bar{\lambda}^2(\mathcal{F})\right)\phi^2(t).\end{aligned} \tag{25}$$







Next, (25) can be identically rewritten as:












$$\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))\le-\gamma_1\left(1-\varepsilon-\frac{1}{\bar{\sigma}_1}\right)\|e(t)\|_2^2+\rho_2\kappa\phi(t)\phi_f-\gamma_2\rho_1\left(\underline{\lambda}(\mathcal{F})-\bar{\rho}_1\right)\|\bar{\mu}(t)\|_2^2-\bar{\rho}_2\,\phi^2(t), \tag{26}$$







where $\bar{\sigma}_1$ is defined through $1/\bar{\sigma}_1\triangleq 1/(2\gamma_1\sigma_1)+1/(2\gamma_1\sigma_2)$, $\sigma_3\triangleq\sigma_2\gamma_2^2\rho_1^2\varepsilon^2\bar{\lambda}^2(\mathcal{F})$, $\bar{\rho}_1\triangleq\sigma_2\gamma_2\rho_1\varepsilon^2\bar{\lambda}^2(\mathcal{F})$, and $\bar{\rho}_2\triangleq\rho_2\kappa-\tfrac{\sigma_1}{2}\gamma_1^2 N-\tfrac{1}{2\sigma_3}N\gamma_2^2\rho_1^2\bar{\lambda}^2(\mathcal{F})$.


By letting ξ(t)≜[||e(t)||2, ||μ̄(t)||2, ϕ(t)]T, (26) can now be compactly rewritten as:






$$\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))\le-\xi^{\mathrm{T}}(t)\,\mathcal{O}\,\xi(t)+\Gamma\,\xi(t), \tag{27}$$


where Γ≜[0, 0, ρ2κϕf]. It immediately follows from (27) that the closed-loop solution (e(t),μ̄(t),ϕ(t)) is Lyapunov stable and limt→∞(e(t),μ̄(t),ϕ(t))=(0,0,0) for ϕf=0, since 𝒪 is positive-definite and Γ in (27) is zero in this case.


It follows from (27) that there exists a compact set $\mathcal{C}\triangleq\{\xi(t):\|\xi(t)\|_2\le\|\Gamma\|_2/\underline{\lambda}(\mathcal{O})\}$ such that $\dot{\mathcal{V}}(e(t),\bar{\mu}(t),\phi(t))\le 0$ outside of this set. Boundedness of the closed-loop solution (e(t),μ̄(t),ϕ(t)) is now immediate.


Parameter Tuning Guidelines: Practical guidelines are provided to tune the parameters of the event-triggering architecture for scheduling inter-agent information exchange in networked multiagent systems. Here, the direct or indirect effect of each parameter on the total number of events is shown through illustrative numerical examples. Specifically, consider the following parameters: ε from (6), γ1 from (2), γ2 from (3), and ϕ0 and ϕf from (7). In addition, the position of the leader agent in the networked multiagent system can also affect the total number of events.


In some examples, a system including 4 agents is used such that the first agent can communicate with the second agent, the second agent can communicate with the first agent and the third agent, the third agent can communicate with the second agent and the fourth agent, and finally the fourth agent can communicate with the third agent, as illustrated in FIG. 3. In this figure, without loss of generality, the first agent is chosen as the leader. In some of the following numerical simulation and experimental results, the leader agent is altered to show the effect of leader agent location as well.


Each example is run with a sampling rate of 10 Hz in Python on a Windows 10 personal computer. Moreover, four different Python scripts are run in parallel, one for each agent. Time is synchronized through the system time, and each script shares its calculated values (e.g., the shared variables described above) when an event occurs, according to FIG. 3. Initial conditions are set as x10=3.2 m, x20=2.3 m, x30=1.4 m, and x40=0.5 m, and the command is given by:










$$c(t)=\begin{cases}A & \text{if } 0<t\le 25,\\ A+B & \text{if } 25<t\le 50,\\ A-B & \text{if } 50<t\le 75,\end{cases} \tag{29}$$







where A=(x10+x20+x30+x40)/4 and B=1.2.
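For reference, the piecewise-constant command (29) can be written as the following minimal Python helper; the behavior outside the 75-second window and the function name are illustrative assumptions.

```python
def command(t, A, B=1.2):
    """Piecewise-constant command c(t) of (29) supplied to the leader agent."""
    if 0 < t <= 25:
        return A
    if 25 < t <= 50:
        return A + B
    if 50 < t <= 75:
        return A - B
    return A  # outside the simulated window; this fallback is an assumption
```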


Tuning ε: The ε parameter scales the event-triggering threshold, as is immediately observed from (6). In particular, small values of ε lead to an increase in the total number of events. Note that the ε parameter is constrained to the interval ε∈(0, 1/√2) for system stability.


The numerical examples can be run using the event-triggering approach with both event-triggering scenarios: a) sampled data exchange and b) solution-predictor curve exchange. ε∈{0.1, 0.7} can be set, and the rest of the parameters can be chosen as γ1=0.7, γ2=2.5, ϕ0=0.5, and ϕf=0. Results are shown in FIGS. 4 and 5 for ε=0.1 and ε=0.7, respectively. When ε=0.1, a higher number of events is observed as compared to ε=0.7 for both event-triggering scenarios. On the other hand, performance-wise, both ε=0.1 and ε=0.7 produce almost similar results.


Tuning γ1 and γ2: The parameters γ1 and γ2 respectively regulate the rate of convergence of each agent's dynamics and the corresponding μ(t)-dynamics (see equations (1) and (3)). Both parameters have an indirect effect on the number of events. Based on our observation, γ2 can be selected larger than γ1 to maintain the desired performance, even though selecting γ1≥γ2 may decrease the number of events yet may lead to an undesirable system response. To give some intuition on how the variation of the γ1 and γ2 parameters affects the total number of events, simulation results are summarized for γ1∈{0.3, 0.7, 1.1, 1.5}, γ2∈{1.5, 2.5, 3.5, 4.5}, ε=0.7, ϕ0=0.5, and ϕf=0 in Table 1 and Table 2 for exchanging sampled data and solution-predictor curves, respectively. Note that some of the simulation results presented within these tables have undesirable performance despite their low number of events. The selection of the γ1 and γ2 parameters should therefore be done judiciously to reflect the desired level of network utilization and system performance.









TABLE 1
Summary of the effect of parameters γ1 and γ2 on the number of events for exchanging sampled data between neighboring agents.

              γ2 = 1.5   γ2 = 2.5   γ2 = 3.5   γ2 = 4.5
  γ1 = 0.3       484        658        484        677
  γ1 = 0.7       360        442        581        431
  γ1 = 1.1       282        422        619        619
  γ1 = 1.5       229        253        431        629


TABLE 2
Summary of the effect of parameters γ1 and γ2 on the number of events for exchanging solution-predictor curves between neighboring agents.

              γ2 = 1.5   γ2 = 2.5   γ2 = 3.5   γ2 = 4.5
  γ1 = 0.3       233        215        256        329
  γ1 = 0.7       211        187        254        284
  γ1 = 1.1       197        219        243        300
  γ1 = 1.5       199        200        232        284

Tuning ϕ0 and ϕf: The parameters ϕ0 and ϕf from (7) have a direct effect on the number of events. In particular, ϕ0 can be used to adjust the event-triggering threshold initially. Thus, higher values of ϕ0 result in a lower number of events and lower values result in a higher number of events. Note that the initial system response may be degraded when ϕ0 is set relatively high. To elucidate this point, consider FIGS. 6 and 7, where ϕ0=1 and ϕ0=0.5, respectively, with ϕf=0, γ1=0.7, γ2=2.5, and ε=0.7. An immediate observation from both figures is the lower number of events in the system response with ϕ0=1 as compared with the system response for ϕ0=0.5, for both event-triggering scenarios a) sampled data exchange and b) solution-predictor curve exchange.


The role of the parameter ϕf has been explained above (i.e., if ϕf=0 the closed-loop system is asymptotically stable, whereas if ϕf∈(0,ϕ0) boundedness is obtained). In particular, setting ϕf≠0 decreases the number of events. In FIGS. 7 and 8, numerical results are shown for ϕf=0 and ϕf=0.2, respectively, with ϕ0=0.5, γ1=0.7, γ2=2.5, and ε=0.7.


The Effect of Leader Agent Location: The location of the leader agent in the graph indirectly affects the number of events. For example, consider FIG. 7, where the first agent is the leader, and FIG. 9, where the second agent is the leader. Observe from these figures that the number of events has increased with changing the leader agent from one to two in the event-triggering scenario where sampled data is exchanged; whereas, it has decreased when exchanging solution-predictor curves. In general, an exact relationship between the leader agent location in a graph and the number of events is difficult to establish.


Here, the effectiveness of exchanging solution-predictor curves instead of sampled data is shown in that the shared information (i.e., number of events) is significantly reduced without sacrificing performance (see FIGS. 4-9). Moreover, the method of solution-predictor curve exchange can effectively decrease network utilization (number of events) when a time-varying command is applied; see, for example, FIG. 10, where c(t)=A sin(ωt) with A=2.5 m and ω=0.2 rad/s. Experimental studies predicated on the proposed architecture are presented next.


Experiments: The results of the experiments are presented to elucidate the efficacy of the event-triggering architecture.


Experimental Setup: For the experiments, the setup shown in FIG. 11A is used; it includes three main parts.


A) A personal computer (e.g., with Windows 10) running Python scripts supported by the crazyflie open-source libraries and equipped with a Crazyradio capable of 2.4 GHz radio communication.


B) A crazyflie 2.0 nano-quadcopter (FIG. 11B) that weighs 27 grams and is equipped with a 2.4 GHz ISM band radio for wireless communication, an STM32F405 main application MCU (Cortex-M4, 168 MHz, 192 KB SRAM, 1 MB flash), an nRF51822 radio and power management MCU (Cortex-M0, 32 MHz, 16 KB SRAM, 128 KB flash), a 3-axis gyroscope (MPU-9250), a 3-axis accelerometer (MPU-9250), a 3-axis magnetometer (MPU-9250), a high-precision pressure sensor (LPS25H), and a 240 mAh LiPo battery (allowing the crazyflie nano-quadcopter a continuous flight of 7 minutes).


C) A local positioning system (LOCO positioning system) having the ability to triangulate the 3D position of an object in space (resembling a miniature GPS system). This LOCO positioning system comprises two subsystems: a set of anchors positioned in the room (resembling satellites in GPS) acting as a reference, and one or more tags attached to the crazyflie nano-quadcopters (resembling GPS receivers). Note that the LOCO positioning system determines the position of the object onboard the tag (i.e., onboard the crazyflie nano-quadcopter).


The actual experimental setup is shown in FIG. 12. Four crazyflie nano-quadcopters were used, where the crazyflie nano-quadcopters communicate with each other according to the graph topology given in FIG. 3. The x-position of each crazyflie is fixed and remains constant throughout the experiment as:






$$x_1=0.5\ \mathrm{m},\quad x_2=1.2\ \mathrm{m},\quad x_3=1.9\ \mathrm{m},\quad x_4=2.6\ \mathrm{m}. \tag{30}$$


In addition, the z-position of each crazyflie is also kept constant throughout the experiment at z=0.4 m. Furthermore, the initial y-positions of the crazyflies are given by:





$$y_{10}=3.2\ \mathrm{m},\quad y_{20}=2.3\ \mathrm{m},\quad y_{30}=1.4\ \mathrm{m},\quad y_{40}=0.5\ \mathrm{m}. \tag{31}$$


The architecture was used for inter-agent information exchange predicated on event-triggering to synchronize the y-position of the agents (crazyflies) to follow a command supplied to the leader agent given by (29). The rest of the parameters were set as ε=0.7, γ1=0.7, γ2=2.5, and ϕ0=0.5. The experiments were run for ϕf∈{0, 0.05, 0.2}, where ϕf=0.05 corresponds to ϕ(T/4) and ϕf=0.2 corresponds to ϕ(T/2), with T∈ℝ+ being the total time of the experiments. Note that ϕf≠0 leads to bounded results. In addition, the sampling rate was set as 10 Hz, and four separate Python scripts were run in parallel, one for each agent (crazyflie).


Experimental Results: A total of twelve experiments were run. The first agent was set as the leader, and the experiments were run for ϕf=0 by first utilizing sampled data exchange over the network between neighboring agents, and then utilizing the exchange of solution-predictor curves between neighboring agents. The experiments were then repeated for ϕf=0.05 and ϕf=0.2. Next, the six aforementioned experiments were repeated by setting the second agent as the leader.


The experimental results, where the first agent is selected to be the leader, are summarized in FIGS. 13-15. In addition, the experimental results, where the second agent is selected to be the leader, are summarized in FIGS. 16-18.


Several observations can be made from the experimental results. First, utilizing the event-triggering approach significantly reduces network utilization (number of events). When solution-predictor curves were exchanged between neighboring agents when an event occurs (instead of just sharing sampled data), an even more drastic decrease occurs in network utilization (number of events), irrespective of the location of the leader agent in the graph. Note that the experimental results support the numerical examples. For example, FIGS. 7 and 8, which show numerical results where the first agent is the leader with ϕf=0 and ϕf=0.2, respectively, can be compared with FIGS. 13 and 15, which show the corresponding experimental results. In addition, FIG. 9, which shows numerical results where the second agent is the leader with ϕf=0, can be compared to FIG. 16, which shows the corresponding experimental results.


In the foregoing specification, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims
  • 1. A method for information exchange of a first vehicle in a networked multiagent system, comprising: receiving a last neighbor dataset broadcasted by a neighbor vehicle;determining a current dataset based on the last neighbor dataset and a last vehicle dataset of the first vehicle;identifying a violation of an event-triggering condition by comparing a difference between the last vehicle dataset and the current dataset with a dynamic threshold, the dynamic threshold being defined by an exponentially decaying term and an error between a vehicle state of the first vehicle and a reference state;determining a transmission dataset being associated with the current dataset; andin response to the violation, broadcasting the transmission dataset to the neighbor vehicle.
  • 2. The method of claim 1, wherein the last vehicle dataset comprises a dataset previously transmitted to the first vehicle in response to the violation.
  • 3. The method of claim 1, wherein the first vehicle is a leader vehicle, and wherein the current dataset is determined further based on a command.
  • 4. The method of claim 3, wherein the current dataset is defined as: μi(t)=−γ2[Σi˜j({circumflex over (μ)}i(t)−{circumflex over (μ)}j(t))+{circumflex over (μ)}i(t)−c(t)],where μi(t) is the current dataset, γ2 is a design parameter, {circumflex over (μ)}i(t) is the last vehicle dataset, {circumflex over (μ)}j(t) is the last neighbor dataset, and c(t) is the command.
  • 5. The method of claim 1, wherein the first vehicle is a follower vehicle, and wherein the current dataset is defined as: μi(t)=−γ2[Σi˜j({circumflex over (μ)}i(t)−{circumflex over (μ)}j(t))],where μi(t) is the current dataset, γ2 is a design parameter, {circumflex over (μ)}i(t) is the last vehicle dataset, and {circumflex over (μ)}j(t) is the last neighbor dataset.
  • 6. The method of claim 1, wherein the event-triggering condition is defined as: ||μ̂i(t)−μi(t)||2≤ε||ei(t)||2+ϕ(t), and ei(t)≜xi(t)−xmi(t), where μi(t) is the current dataset, μ̂i(t) is the last vehicle dataset, ε is a design parameter, ε||ei(t)||2+ϕ(t) is the dynamic threshold, ei(t) is the error, xi(t) is the vehicle state of the first vehicle, xmi(t) is the reference state, and ϕ(t) is the exponentially decaying term.
  • 7. The method of claim 1, wherein the transmission dataset comprises sampled data of the current dataset.
  • 8. The method of claim 1, wherein the transmission dataset comprises a parameter for a solution-predictor curve to estimate the current dataset.
  • 9. The method of claim 8, wherein the parameter comprises an initial condition and a solution-predictor parameter defined by the last neighbor dataset.
  • 10. The method of claim 9, wherein in response to the first vehicle being a leader vehicle, the solution-predictor parameter is defined as:
  • 11. An apparatus for providing information exchange among vehicles, the apparatus comprising: a processor; anda memory having stored thereon a set of instructions which, when executed by the processor, cause the processor to: receive a last neighbor dataset broadcasted by a neighbor vehicle;determine a current dataset based on the last neighbor dataset and a last vehicle dataset of the first vehicle;identify a violation of an event-triggering condition by comparing a difference between the last vehicle dataset and the current dataset with a dynamic threshold, the dynamic threshold being defined by an exponentially decaying term and an error between a vehicle state of the first vehicle and a reference state;determine a transmission dataset being associated with the current dataset; andin response to the violation, broadcast the transmission dataset to the neighbor vehicle.
  • 12. The agent of claim 11, wherein the last vehicle dataset comprises a dataset previously transmitted to the first vehicle in response to the violation.
  • 13. The agent of claim 11, wherein the first vehicle is a leader vehicle, and wherein the current dataset is determined further based on a command.
  • 14. The agent of claim 13, wherein the current dataset is defined as: μi(t)=−γ2[Σi˜j({circumflex over (μ)}i(t)−{circumflex over (μ)}j(t))+{circumflex over (μ)}i(t)−c(t)],where μi(t) is the current dataset, γ2 is a design parameter, {circumflex over (μ)}i(t) is the last vehicle dataset, {circumflex over (μ)}j(t) is the last neighbor dataset, and c(t) is the command.
  • 15. The agent of claim 11, wherein the first vehicle is a follower vehicle, and wherein the current dataset is defined as: μi(t)=−γ2[Σi˜j({circumflex over (μ)}i(t)−{circumflex over (μ)}j(t))],where μi(t) is the current dataset, γ2 is a design parameter, {circumflex over (μ)}i(t) is the last vehicle dataset, and {circumflex over (μ)}j(t) is the last neighbor dataset.
  • 16. The agent of claim 11, wherein the event-triggering condition is defined as: ||μ̂i(t)−μi(t)||2≤ε||ei(t)||2+ϕ(t), and ei(t)≜xi(t)−xmi(t), where μi(t) is the current dataset, μ̂i(t) is the last vehicle dataset, ε is a design parameter, ε||ei(t)||2+ϕ(t) is the dynamic threshold, ei(t) is the error, xi(t) is the vehicle state of the first vehicle, xmi(t) is the reference state, and ϕ(t) is the exponentially decaying term.
  • 17. The agent of claim 11, wherein the transmission dataset comprises sampled data of the current dataset.
  • 18. The agent of claim 11, wherein the transmission dataset comprises a parameter for a solution-predictor curve to estimate the current dataset.
  • 19. The agent of claim 18, wherein the parameter comprises an initial condition and a solution-predictor parameter defined by the last neighbor dataset.
  • 20. The agent of claim 19, wherein in response to the first vehicle being a leader vehicle, the solution-predictor parameter is defined as:
CROSS-REFERENCE TO RELATED APPLICATION(S)

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 63/356,870, filed Jun. 29, 2022, the disclosure of which is hereby incorporated by reference in its entirety, including all figures, tables, and drawings.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH

This invention was made with government support under the Universal Technology Corporation Grant 162642-20-25-C1 awarded by the Air Force Research Laboratory Aerospace Systems Directorate. The government has certain rights in the invention.

Provisional Applications (1)
Number Date Country
63356870 Jun 2022 US