GENERAL AND ROBUST DISTRIBUTED LINEAR FILTERING AND PREDICTION WITH OPTIMAL GAIN

Information

  • Patent Application
  • Publication Number
    20240302486
  • Date Filed
    March 03, 2023
  • Date Published
    September 12, 2024
Abstract
In one aspect of the invention, there is a computer-implemented method including: detecting, by a processor set of a first sensor agent, sensor data from one or more sensors comprised in the first sensor agent; determining, by the processor set, an own series of estimates, based on the sensor data; transmitting, by the processor set, the own series of estimates; receiving, by the processor set, at least one additional series of estimates from additional sensor agents; restoring, by the processor set, in response to detecting that a second sensor agent of the additional sensor agents has become disconnected and then re-connected, the transmitting of the series of estimates to the second sensor agent and the receiving of the series of estimates from the second sensor agent; and outputting, by the processor set, based on the own series of estimates and the additional series of estimates, a series of consensus estimates.
Description
BACKGROUND

Aspects of the present invention relate generally to linear filtering and prediction and, more particularly, to linear filtering and prediction of noisy, time-varying random fields with distributed processing.


The Kalman filter is a widely used solution for linear filtering and prediction of time-varying random fields observed with noisy measurements, in a centralized processing context. The growing need for data privacy and for robustness against centralized server failure has led to the development of filtering and prediction algorithms that rely on distributed rather than centralized processing, and thus fall outside the context of a centralized Kalman filter.


SUMMARY

In a first aspect of the invention, there is a computer-implemented method including: detecting, by a processor set of a first sensor agent, sensor data from one or more sensors comprised in the first sensor agent; determining, by the processor set, an own series of estimates, based on the sensor data; transmitting, by the processor set, the own series of estimates; receiving, by the processor set, at least one additional series of estimates from one or more additional sensor agents; restoring, by the processor set, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the series of estimates to the second sensor agent and the receiving of the series of estimates from the second sensor agent; and outputting, by the processor set, based on the own series of estimates and the additional series of estimates, a series of consensus estimates.


In another aspect of the invention, there is a computer program product including one or more computer readable storage media having program instructions collectively stored on the one or more computer readable storage media. The program instructions are executable to: detect sensor data from one or more sensors comprised in a first sensor agent; determine an own series of estimates, based on the sensor data; transmit the own series of estimates; receive at least one additional series of estimates from one or more additional sensor agents; restore, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the series of estimates to the second sensor agent and the receiving of the series of estimates from the second sensor agent; and output, based on the own series of estimates and the additional series of estimates, a series of consensus estimates.


In another aspect of the invention, there is a system including a processor set, one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media. The program instructions are executable to: detect sensor data from one or more sensors comprised in a first sensor agent; determine an own series of estimates, based on the sensor data; transmit the own series of estimates; receive at least one additional series of estimates from one or more additional sensor agents; restore, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the series of estimates to the second sensor agent and the receiving of the series of estimates from the second sensor agent; and output, based on the own series of estimates and the additional series of estimates, a series of consensus estimates.





BRIEF DESCRIPTION OF THE DRAWINGS

Aspects of the present invention are described in the detailed description which follows, in reference to the noted plurality of drawings by way of non-limiting examples of exemplary embodiments of the present invention.



FIG. 1 depicts a computing environment in accordance with aspects of the present invention.



FIG. 2 depicts a block diagram of an exemplary environment in accordance with aspects of the present invention.



FIG. 3 depicts a flowchart of an exemplary method in accordance with aspects of the present invention.



FIG. 4 depicts an illustrative portion of a multi-agent network of sensor agents detecting conditions of a random dynamical environment, and generating and sharing estimates and predictions regarding the environment in accordance with aspects of the present invention.





DETAILED DESCRIPTION

Aspects of this disclosure relate generally to a novel approach to distributed linear filtering and prediction and, more particularly, to observing and making consistently accurate predictions about random dynamical systems and environments observed by a multi-agent network of sensor agents. According to aspects of the invention, the sensor agents may be devices configured with one or more processing devices, one or more sensors, and one or more communication components. In various embodiments, the sensor agents may thus be configured to sense and detect at least one form of data from their environment and surroundings, perform processing for linear filtering and prediction to generate estimates of one or more states of interest based on their sensor data, transmit their estimates, and receive estimates transmitted by neighboring sensor agents of the multi-agent network within a transmission range of the communication components. In this manner, implementations of the present invention may provide a generalized and robust system for observing and making predictions about random dynamical systems and environments with consistent accuracy, in applications such as multi-agent control, positioning, navigation, state estimation in electrical power generating infrastructure (e.g., solar farms and wind farms) and electrical power grid distribution infrastructure (e.g., relay stations and power lines), spatio-temporal environment or field monitoring, connected vehicular networks for traffic balancing and collision avoidance (for automobiles, bicycles, alternative ground transportation vehicles, aircraft, aquatic vessels, or any type of vehicle in any environment), wildlife monitoring for conservation and scientific research, coordinated scientific exploration of underwater environments or the surfaces of other planets or moons by fleets of robots, and any kind of collaborative object tracking, among other examples.


The sensor agents (hereafter “sensor agents” or “agents”) may be stationary or mobile (e.g., robots) and may include a mix of different types of sensor agents. The sensor agents may be outdoors (e.g., as vehicular traffic), indoors (e.g., in a manufacturing facility) or both (e.g., in electrical power grid infrastructure).


As used herein, “estimates of one or more states of interest” (or “estimates”) may include processed determinations, based on sensor data, generated by sensor agents, of any condition of an environment or system, such as temperature, pressure, or speed and direction of the wind local to each sensor agent, or speeds and directions of one or more vehicles or manufacturing robots, or voltage, current, and temperature of electrical grid transmission lines local to each sensor agent in an electrical power infrastructure control context, or speeds and directions of one or more individual animals in a wildlife conservation science context. The estimates may include both filter estimates and prediction estimates. The filter estimates are estimates that a sensor agent generates of an observed state at a time of observation, based on the sensor data. The prediction estimates are estimates of a predicted future state, based on the filter estimates and predictive modeling. A sensor agent may generate prediction estimates for a future time corresponding to an intended time at which the sensor agent will take a subsequent set of sensor readings, with which to generate a subsequent set of filter estimates.
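The split between filter estimates (at the time of observation) and prediction estimates (for the next intended reading) can be sketched with a minimal scalar Kalman-style recursion. This is an illustrative sketch under an assumed scalar model; the function and parameter names are not taken from the specification.

```python
def filter_and_predict(x_pred, p_pred, y, a=1.0, q=0.01, r=0.1):
    """One filter step at observation time, then one prediction step.

    x_pred, p_pred : prior (predicted) state estimate and its variance
    y              : the new noisy sensor reading
    a, q, r        : state transition, process noise, and measurement
                     noise parameters (illustrative assumptions)
    """
    # Filter estimate: correct the prediction with the measurement innovation.
    k = p_pred / (p_pred + r)            # Kalman gain
    x_filt = x_pred + k * (y - x_pred)   # filter estimate at observation time
    p_filt = (1.0 - k) * p_pred
    # Prediction estimate: propagate to the time of the next sensor reading.
    x_next = a * x_filt
    p_next = a * a * p_filt + q
    return x_filt, x_next, p_next
```

Here `x_filt` plays the role of the filter estimate of the observed state, while `x_next` plays the role of the prediction estimate for the future time at which the agent intends to take its next sensor reading.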


The network of sensor agents or multi-agent network may be sparsely dispersed, such that individual sensor agents may tend to have a low density of neighboring sensor agents, in some examples. In some examples, each sensor agent may typically be in communication range of only one, two, three, or four other sensor agents; in other examples, each sensor agent may typically be in communication range of any other number of sensor agents. Various examples may include any arbitrary number of sensor agents, such as thousands of sensor agents, millions of sensor agents, or any other number of sensor agents. Various examples may use large numbers, such as in the millions, of relatively inexpensive sensor agents, which may have significant noise in their data and thus in the estimates they generate, but systems and methods of this disclosure may be well-suited for compensating for noisy individual estimates and arriving at overall accurate global consensus values. Various examples may also use expensive, sophisticated sensor agents, such as automobiles or aircraft that function as sensor agents of this disclosure, which may employ relatively sophisticated, high-signal-to-noise-ratio (SNR) sensors.


Aspects of this disclosure provide novel algorithms that fuse the concepts of consensus and innovations. Aspects of this disclosure introduce a definition of distributed observability, which enables novel algorithms, and which enables a more generalized assumption than the conventional combined assumptions of global observability and connected network. Aspects of this disclosure include directed optimal gain matrices, designed from first principles, such that the mean-squared error of estimation is minimized at each of an arbitrarily large number of sensor agents, and a distributed version of an algebraic Riccati equation is derived for computing the gains.


Aspects of this disclosure are directed to a novel general and robust distributed state estimation algorithm. Aspects of this disclosure may be considered to provide a counterpart to a Kalman filter in a distributed processing context, one that is as general and robust in a distributed processing context as a Kalman filter is in a centralized processing context. Aspects of this disclosure are directed to novel frameworks and algorithms that address gaps in conventional distributed filtering and prediction. In an example algorithm of this disclosure, the consensus on the state estimates is treated as innovations, and the gain matrices are designed accordingly. Such optimal gains may yield field estimates with minimum mean-squared error (MMSE) at each agent under an assumption of distributed observability at each agent. Aspects of this disclosure use a new definition of distributed observability that performs equivalently whether the communication graph is directed or undirected and does not require the graph to be connected. Aspects of this disclosure use a System-Observation-Communication Model framework, as described below.
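The consensus-plus-innovations idea can be illustrated with a simplified per-agent update that combines a local innovation term (measurement surprise) with a consensus term (disagreement with neighbors). This is a sketch only: the fixed gains `K` and `beta` below are illustrative stand-ins, whereas the disclosure derives per-agent optimal gains via a distributed algebraic Riccati equation.

```python
import numpy as np

def consensus_innovations_step(x, A, H, K, y, neighbors, beta=0.2):
    """One synchronous update at every agent of a consensus+innovations estimator.

    x         : (n_agents, d) array of current state estimates, one row per agent
    A, H      : shared state-transition and observation matrices
    K         : innovation gain (fixed here; an illustrative stand-in)
    y         : (n_agents, m) array of local noisy observations
    neighbors : dict mapping agent index -> list of neighbor indices
    beta      : consensus weight (assumed constant for this sketch)
    """
    x_new = np.empty_like(x)
    for i in range(x.shape[0]):
        consensus = sum(x[i] - x[j] for j in neighbors[i])  # disagreement with neighbors
        innovation = y[i] - H @ x[i]                        # local measurement surprise
        x_new[i] = A @ x[i] - beta * consensus + K @ innovation
    return x_new
```

With a static scalar model (A = I), noise-free observations, and the stable gain choice above, repeated application drives every agent's estimate to the observed value, so the agents reach consensus asymptotically even though each exchanges estimates only with its immediate neighbors.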


Aspects of this disclosure include a distributed framework for time-series forecasting by a dynamical system that includes a multi-agent network of local sensor agents which may function as filters and predictors. Each of the local sensor agents may have access to local observations only, and may be able to transmit and exchange its own state estimates with its neighboring sensor agents, in various examples. The sensor agents may transmit their estimates and refrain from transmitting the underlying data they collect and on which they base their estimates, in various examples. By doing so, the sensor agents may estimate, track, and predict a global time-varying state of the environment they cover.


The underlying data collected by the sensor agents may span a wide range of noise levels and SNRs. The noise in the data may be carried forward as noise in the estimates generated by the individual agents, and as noise in the global time-varying state. The overall network of sensor agents may compensate for this noise and may exhibit emergent behavior, using the overall collection and interchange of estimates of the many sensor agents to converge on consensus values more consistently accurate than would be possible with any one of the individual sensor agents, in various examples.
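The noise-suppression behavior described above can be illustrated with a toy computation. This is a sketch with assumed Gaussian noise, and a plain average stands in for the iterative consensus of the disclosure; the names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_state = 5.0
n_agents = 1000

# Each inexpensive agent forms a noisy local estimate of the same state.
local_estimates = true_state + rng.normal(0.0, 2.0, size=n_agents)

# A consensus value (here a simple average, standing in for the iterative
# consensus of the disclosure) suppresses the individual agents' noise.
consensus = local_estimates.mean()

individual_error = np.abs(local_estimates - true_state).mean()
consensus_error = abs(consensus - true_state)
```

For independent zero-mean noise, the error of the averaged value shrinks roughly as the inverse square root of the number of agents, which is why a network of many noisy, inexpensive agents can still converge on accurate global values.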


Aspects of this disclosure include inventive model design, communication protocols, and update rules, which may guarantee that the predictors come to a consensus asymptotically and perform better than any of the individual local predictors working on their own. The overall net performance of the network of sensor agents may asymptotically approach a theoretical ideal upper bound of consistently accurate performance (or a lower bound of mean squared error): that of a centralized predictor that has access to all the observations of the entire network of sensor agents at all times.


Aspects of this disclosure may use underlying distributed learning algorithms and custom-engineered network architectures with sensor agents functioning as smart nodes of a network, as opposed to merely scaling up or speeding up artificial intelligence (AI) processing tasks by parallel data processing. A distributed learning algorithm of this disclosure may comprise an algorithm for distributed detection and state estimation, or distributed detecting and generating of estimates of state, for a system or environment of interest, by a multi-agent network of distributed sensor agents of this disclosure, in various examples. A smart network of sensor agents of this disclosure enables knowledge integration without sharing data, by sensor agents instead sharing estimates and predictions they generate based on data, in various examples, thus naturally enabling compliance with any applicable laws, regulations, and requirements for protecting privacy. A smart network of sensor agents of this disclosure reduces the computation and communication bottlenecks and overload typical of a centralized processing system attempting to handle observations and predictions for a large, complex system or environment, in various examples. A smart network of sensor agents of this disclosure provides robustness to failures and attacks, is naturally scalable to larger sensor agent networks, and is naturally extendable to growing network size.


Various examples are directed to a computer-implemented method for providing a distributed model fusion for a network of multiple sensor agents that function as trusted network nodes without using a centralized information and prediction processing fusion center. Providing a distributed model fusion for a multi-agent network may include generating a distributed state estimation over a multi-agent network of sensor agents, wherein each sensor agent in the multi-agent network has access to its own data and can share its local estimates with neighboring sensor agents. Various examples further include sensor agents, in response to detecting an interruption (e.g., wherein one or more of the sensor agents of the multi-agent network are under attack, or experience a temporary connection failure), ensuring that the processing of state predictions by the remaining connected sensor agents continues without interruption. Various examples further include, in response to one or more of the sensor agents experiencing the interruption and disconnection from the network, the one or more disconnected sensor agents and the neighboring connected sensor agents proximate to them reconnecting and re-establishing communication with each other, thereby reconnecting the one or more disconnected sensor agents to the multi-agent network, and the one or more disconnected sensor agents and neighboring connected sensor agents resuming exchanging estimates and parameters with each other. Various examples further include avoiding the communication and computational processing bottlenecks and overhead of a centralized information and prediction processing fusion center, particularly when deployed over a large geographical area.
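The disconnect-and-restore behavior described above can be sketched as simple neighbor-link bookkeeping: an agent keeps exchanging estimates with whichever neighbors are currently connected, excludes a neighbor that drops out, and restores exchange as soon as it reconnects. The class and method names below are illustrative assumptions, not taken from the specification.

```python
class SensorAgent:
    """Minimal sketch of per-agent neighbor-link bookkeeping."""

    def __init__(self, agent_id, neighbor_ids):
        self.agent_id = agent_id
        self.connected = set(neighbor_ids)

    def on_disconnect(self, neighbor_id):
        # Estimation continues uninterrupted with the remaining neighbors.
        self.connected.discard(neighbor_id)

    def on_reconnect(self, neighbor_id):
        # Restore transmitting to, and receiving from, the returning agent.
        self.connected.add(neighbor_id)

    def exchange_targets(self):
        # Only currently connected neighbors receive and send estimates.
        return sorted(self.connected)
```

Because the consensus update at each agent sums over whatever neighbors are currently connected, a dropped link merely removes one term from the sum; no global coordination is needed to continue or to restore the exchange.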


Implementations of this disclosure are necessarily rooted in computer technology. For example, steps of detecting, by a processor set of a first sensor agent, sensor data from one or more sensors comprised in the first sensor agent; determining, by the processor set, an own series of estimates, based on the sensor data; transmitting, by the processor set, the own series of estimates; receiving, by the processor set, at least one additional series of estimates from one or more additional sensor agents; and restoring, by the processor set, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the series of estimates to the second sensor agent and the receiving of the series of estimates from the second sensor agent, are necessarily computer-based and cannot be performed in the human mind. Further aspects of the present disclosure are beyond the capability of mental effort not only in scale and consistency, such as by millions or an arbitrarily high number of individual sensor agents working collaboratively with each other, but also technically and categorically, and may enable real-time characterization and prediction of states of an arbitrarily large and complex system or environment, in ways definitively beyond the capability of human minds unaided by computers. Further, aspects of this disclosure provide technological improvements and technological solutions to persistent, complex problems and challenges in conventional distributed linear filtering and prediction. For example, aspects of this disclosure may ensure generalized and robust distributed linear filtering and prediction of arbitrarily high size and complexity.
Aspects of this disclosure may ensure achieving more accurate and more robust performance, higher security, and higher-performing avoidance of computing and communication bottlenecks and downtime, in a wide range of important and sophisticated applications. Such applications may include monitoring and controlling manufacturing facilities, monitoring and controlling the flow of vehicular traffic for collision avoidance, and monitoring and controlling electrical power grids to deliver electrical power reliably and efficiently, in ways that may be categorically beyond the capabilities of conventional systems.


As one example application of generalized and robust distributed linear filtering and prediction by multiple sensor agents of this disclosure, in a vehicle traffic load-balancing and collision avoidance application, each of at least some vehicles in a vehicle traffic system (e.g., automobiles on roads of a city) may be configured to function as sensor agents of this disclosure. In this traffic network, each of the car sensor agents may detect, estimate, and predict states for themselves (e.g., their own cars), each other (e.g., each car's proximate cars), and the environment (e.g., road boundaries, stop signs, traffic light states, proximate bicyclists and pedestrians); share those state estimates with each other; and each use their own and each other's state estimates to generate consensus estimates to balance traffic and to avoid collisions, even while some cars continuously leave the traffic system and other new cars continuously join it. These capabilities are beyond the capabilities of each car's own on-board computers acting alone to load-balance traffic or to avoid collisions, and beyond the accuracy, speed, and capability of even an unrealistically and arbitrarily large number of human personnel trying to monitor the same conditions throughout the vehicular traffic of the city, constantly sharing observations with each other, and trying to constantly update a common consensus. 
In another example application of generalized and robust distributed linear filtering and prediction by multiple sensor agents of this disclosure, an electrical power generating and distributing context may include or be attended by a network of sensor agents, potentially in a variety of formats, with some stationary, some implemented as ground-roving robots, some implemented as airborne robots, and some implemented as aquatic robots (e.g., in aquatic wind farms and aquatic solar farms), which regularly detect and generate state estimates to monitor relevant process conditions (e.g., wind speed at each of many locations, solar intensity at each of many locations, electrical power, current, voltage, and transmission efficiency at each of many locations, assurance of human operator safety conditions at each of many locations), regularly share their state estimates with each other to generate consensus estimates, and output their consensus estimates, which electrical infrastructure systems may use, such as to regularly tune operating conditions to efficiently meet system demand, and to provide safety alerts. 
Because of the distributed nature of the sensor agents in this electrical power infrastructure network, the overall functioning of the multi-agent network, and the robust and accurate generating of the consensus state estimates of the relevant states throughout the electrical power infrastructure, may continue unabated and with little or no loss of effectiveness and predictive power, keeping the electrical power infrastructure running smoothly and delivering power efficiently and reliably, even if or when a significant proportion of the individual sensor agents are taken offline, by any effect, whether for maintenance, due to a storm or an accident, or even due to attacks or malicious interference. This robustness is likewise categorically beyond the accuracy, speed, and capability of even an unrealistically and arbitrarily large number of human personnel trying to monitor the same conditions throughout the electrical power infrastructure, constantly sharing observations with each other, and trying to constantly update a common consensus.


To the extent that implementations of the invention might prospectively collect, store, or employ personal information provided by, or obtained from, individuals (for example, any personal information that may be incidentally apparent from observations and predictions by a distributed multi-agent network, such as routes driven in cars functioning as sensor agents), such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information may be subject to consent of the individual to such activity, for example, through “opt-in” or “opt-out” processes as may be appropriate for the situation and type of information. Storage and use of personal information may be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information.


Various aspects of the present disclosure are described by narrative text, flowcharts, block diagrams of computer systems and/or block diagrams of the machine logic included in computer program product (CPP) embodiments. With respect to any flowcharts, depending upon the technology involved, the operations can be performed in a different order than what is shown in a given flowchart. For example, again depending upon the technology involved, two operations shown in successive flowchart blocks may be performed in reverse order, as a single integrated step, concurrently, or in a manner at least partially overlapping in time.


A computer program product embodiment (“CPP embodiment” or “CPP”) is a term used in the present disclosure to describe any set of one, or more, storage media (also called “mediums”) collectively included in a set of one, or more, storage devices that collectively include machine readable code corresponding to instructions and/or data for performing computer operations specified in a given CPP claim. A “storage device” is any tangible device that can retain and store instructions for use by a computer processor. Without limitation, the computer readable storage medium may be an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or any suitable combination of the foregoing. Some known types of storage devices that include these mediums include: diskette, hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, mechanically encoded device (such as punch cards or pits/lands formed in a major surface of a disc) or any suitable combination of the foregoing. A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media. 
As will be understood by those of skill in the art, data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, de-fragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored.



FIG. 1 depicts a computing environment 100 in accordance with aspects of the present invention. Computing environment 100 contains an example of an environment for the execution of at least some of the computer code involved in performing the inventive methods, such as distributed estimate generating code 200. In addition to block 200, computing environment 100 includes, for example, computer 101, wide area network (WAN) 102, end user device (EUD) 103, remote server 104, public cloud 105, and private cloud 106. In this embodiment, computer 101 includes processor set 110 (including processing circuitry 120 and cache 121), communication fabric 111, volatile memory 112, persistent storage 113 (including operating system 122 and block 200, as identified above), peripheral device set 114 (including user interface (UI) device set 123, storage 124, and Internet of Things (IoT) sensor set 125), and network module 115. Remote server 104 includes remote database 130. Public cloud 105 includes gateway 140, cloud orchestration module 141, host physical machine set 142, virtual machine set 143, and container set 144.


COMPUTER 101 may take the form of a desktop computer, laptop computer, tablet computer, smart phone, smart watch or other wearable computer, mainframe computer, quantum computer or any other form of computer or mobile device now known or to be developed in the future that is capable of running a program, accessing a network or querying a database, such as remote database 130. As is well understood in the art of computer technology, and depending upon the technology, performance of a computer-implemented method may be distributed among multiple computers and/or between multiple locations. On the other hand, in this presentation of computing environment 100, detailed discussion is focused on a single computer, specifically computer 101, to keep the presentation as simple as possible. Computer 101 may be located in a cloud, even though it is not shown in a cloud in FIG. 1. On the other hand, computer 101 is not required to be in a cloud except to any extent as may be affirmatively indicated.


PROCESSOR SET 110 includes one, or more, computer processors of any type now known or to be developed in the future. Processing circuitry 120 may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry 120 may implement multiple processor threads and/or multiple processor cores. Cache 121 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 110. Cache memories are typically organized into multiple levels depending upon relative proximity to the processing circuitry. Alternatively, some, or all, of the cache for the processor set may be located “off chip.” In some computing environments, processor set 110 may be designed for working with qubits and performing quantum computing.


Computer readable program instructions are typically loaded onto computer 101 to cause a series of operational steps to be performed by processor set 110 of computer 101 and thereby effect a computer-implemented method, such that the instructions thus executed will instantiate the methods specified in flowcharts and/or narrative descriptions of computer-implemented methods included in this document (collectively referred to as “the inventive methods”). These computer readable program instructions are stored in various types of computer readable storage media, such as cache 121 and the other storage media discussed below. The program instructions, and associated data, are accessed by processor set 110 to control and direct performance of the inventive methods. In computing environment 100, at least some of the instructions for performing the inventive methods may be stored in block 200 in persistent storage 113.


COMMUNICATION FABRIC 111 is the signal conduction path that allows the various components of computer 101 to communicate with each other. Typically, this fabric is made of switches and electrically conductive paths, such as the switches and electrically conductive paths that make up busses, bridges, physical input/output ports and the like. Other types of signal communication paths may be used, such as fiber optic communication paths and/or wireless communication paths.


VOLATILE MEMORY 112 is any type of volatile memory now known or to be developed in the future. Examples include dynamic type random access memory (RAM) or static type RAM. Typically, volatile memory 112 is characterized by random access, but this is not required unless affirmatively indicated. In computer 101, the volatile memory 112 is located in a single package and is internal to computer 101, but, alternatively or additionally, the volatile memory may be distributed over multiple packages and/or located externally with respect to computer 101.


PERSISTENT STORAGE 113 is any form of non-volatile storage for computers that is now known or to be developed in the future. The non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer 101 and/or directly to persistent storage 113. Persistent storage 113 may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid state storage devices. Operating system 122 may take several forms, such as various known proprietary operating systems or open source Portable Operating System Interface type operating systems that employ a kernel. The code included in block 200 typically includes at least some of the computer code involved in performing the inventive methods.


PERIPHERAL DEVICE SET 114 includes the set of peripheral devices of computer 101. Data communication connections between the peripheral devices and the other components of computer 101 may be implemented in various ways, such as Bluetooth connections, Near-Field Communication (NFC) connections, connections made by cables (such as universal serial bus (USB) type cables), insertion type connections (for example, secure digital (SD) card), connections made through local area communication networks and even connections made through wide area networks such as the internet. In various embodiments, UI device set 123 may include components such as a display screen, speaker, microphone, wearable devices (such as goggles and smart watches), keyboard, mouse, printer, touchpad, game controllers, and haptic devices. Storage 124 is external storage, such as an external hard drive, or insertable storage, such as an SD card. Storage 124 may be persistent and/or volatile. In some embodiments, storage 124 may take the form of a quantum computing storage device for storing data in the form of qubits. In embodiments where computer 101 is required to have a large amount of storage (for example, where computer 101 locally stores and manages a large database) then this storage may be provided by peripheral storage devices designed for storing very large amounts of data, such as a storage area network (SAN) that is shared by multiple, geographically distributed computers. IoT sensor set 125 is made up of sensors that can be used in Internet of Things applications. For example, one sensor may be a thermometer and another sensor may be a motion detector.


NETWORK MODULE 115 is the collection of computer software, hardware, and firmware that allows computer 101 to communicate with other computers through WAN 102. Network module 115 may include hardware, such as modems or Wi-Fi signal transceivers, software for packetizing and/or de-packetizing data for communication network transmission, and/or web browser software for communicating data over the internet. In some embodiments, network control functions and network forwarding functions of network module 115 are performed on the same physical hardware device. In other embodiments (for example, embodiments that utilize software-defined networking (SDN)), the control functions and the forwarding functions of network module 115 are performed on physically separate devices, such that the control functions manage several different network hardware devices. Computer readable program instructions for performing the inventive methods can typically be downloaded to computer 101 from an external computer or external storage device through a network adapter card or network interface included in network module 115.


WAN 102 is any wide area network (for example, the internet) capable of communicating computer data over non-local distances by any technology for communicating computer data, now known or to be developed in the future. In some embodiments, the WAN 102 may be replaced and/or supplemented by local area networks (LANs) designed to communicate data between devices located in a local area, such as a Wi-Fi network. The WAN and/or LANs typically include computer hardware such as copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and edge servers.


END USER DEVICE (EUD) 103 is any computer system that is used and controlled by an end user (for example, a customer of an enterprise that operates computer 101), and may take any of the forms discussed above in connection with computer 101. EUD 103 typically receives helpful and useful data from the operations of computer 101. For example, in a hypothetical case where computer 101 is designed to provide a recommendation to an end user, this recommendation would typically be communicated from network module 115 of computer 101 through WAN 102 to EUD 103. In this way, EUD 103 can display, or otherwise present, the recommendation to an end user. In some embodiments, EUD 103 may be a client device, such as thin client, heavy client, mainframe computer, desktop computer and so on.


REMOTE SERVER 104 is any computer system that serves at least some data and/or functionality to computer 101. Remote server 104 may be controlled and used by the same entity that operates computer 101. Remote server 104 represents the machine(s) that collect and store helpful and useful data for use by other computers, such as computer 101. For example, in a hypothetical case where computer 101 is designed and programmed to provide a recommendation based on historical data, then this historical data may be provided to computer 101 from remote database 130 of remote server 104.


PUBLIC CLOUD 105 is any computer system available for use by multiple entities that provides on-demand availability of computer system resources and/or other computer capabilities, especially data storage (cloud storage) and computing power, without direct active management by the user. Cloud computing typically leverages sharing of resources to achieve coherence and economies of scale. The direct and active management of the computing resources of public cloud 105 is performed by the computer hardware and/or software of cloud orchestration module 141. The computing resources provided by public cloud 105 are typically implemented by virtual computing environments that run on various computers making up the computers of host physical machine set 142, which is the universe of physical computers in and/or available to public cloud 105. The virtual computing environments (VCEs) typically take the form of virtual machines from virtual machine set 143 and/or containers from container set 144. It is understood that these VCEs may be stored as images and may be transferred among and between the various physical machine hosts, either as images or after instantiation of the VCE. Cloud orchestration module 141 manages the transfer and storage of images, deploys new instantiations of VCEs and manages active instantiations of VCE deployments. Gateway 140 is the collection of computer software, hardware, and firmware that allows public cloud 105 to communicate through WAN 102.


Some further explanation of virtualized computing environments (VCEs) will now be provided. VCEs can be stored as “images.” A new active instance of the VCE can be instantiated from the image. Two familiar types of VCEs are virtual machines and containers. A container is a VCE that uses operating-system-level virtualization. This refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances, called containers. These isolated user-space instances typically behave as real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can utilize all resources of that computer, such as connected devices, files and folders, network shares, CPU power, and quantifiable hardware capabilities. However, programs running inside a container can only use the contents of the container and devices assigned to the container, a feature which is known as containerization.


PRIVATE CLOUD 106 is similar to public cloud 105, except that the computing resources are only available for use by a single enterprise. While private cloud 106 is depicted as being in communication with WAN 102, in other embodiments a private cloud may be disconnected from the internet entirely and only accessible through a local/private network. A hybrid cloud is a composition of multiple clouds of different types (for example, private, community or public cloud types), often respectively implemented by different vendors. Each of the multiple clouds remains a separate and discrete entity, but the larger hybrid cloud architecture is bound together by standardized or proprietary technology that enables orchestration, management, and/or data/application portability between the multiple constituent clouds. In this embodiment, public cloud 105 and private cloud 106 are both part of a larger hybrid cloud.



FIG. 2 depicts a block diagram of an exemplary environment 205 in accordance with aspects of the present invention. In embodiments, environment 205 includes sensor agents 201A, 201B, 201C, and 201D (“sensor agents 201”), which each implement a sensor agent of this disclosure, and implement example distributed estimate generating code 200 of this disclosure, as introduced above. Sensor agents 201 may be implemented in a variety of configurations for implementing, storing, running, and/or embodying distributed estimate generating code 200. Sensor agents 201 may each comprise one or more instances of computer 101 of FIG. 1, in various examples. Sensor agents 201 may each comprise a processor set 220, one or more sensors of any kind in a sensor set 225, and one or more data transmission and reception components in a network module 215. Sensor agents 201 may each be an implementation of client computer 101, end user device 103, or remote server 104 as described above with regard to FIG. 1. Processor set 220 may be an implementation of processor set 110 as described above with regard to FIG. 1. Network module 215 may be an implementation of network module 115 as described above with regard to FIG. 1. Sensor set 225 may be an implementation of peripheral device set 114 or IoT sensor set 125 as described above with regard to FIG. 1. Sensor set 225 may include any one or more of cameras, video cameras, microphones, motion sensors, infrared detectors, ultraviolet detectors, radars, lidars, thermometers, magnetic sensors, tactile sensors, pressure sensors, X-ray detectors, gamma ray detectors, mass spectrometers, or any other kind of sensors, in various examples.
Sensor agents 201 in various examples may be representative of thousands or millions of identical or heterogeneous sensor agents, and may comprise a cloud-deployed computing configuration, comprising processing devices, memory devices, and data storage devices dispersed across any kind of system or environment of arbitrarily large scale, and with various levels of networking connections, such that any or all of the data, code, and functions of distributed estimate generating code 200 may be distributed across this cloud computing environment. Distributed estimate generating code 200, sensor agents 201, and/or environment 205 may thus constitute, comprise, and/or be considered a distributed estimate generating system, and may comprise and/or be constituted of one or more software systems, a combined hardware and software system, one or more hardware systems, components, or devices, one or more methods or processes, or other forms or embodiments. Environment 205 may also comprise a system or environment to be monitored by sensor agents 201, such as a vehicular traffic system or an electrical power generating and/or distributing system as described above, or any other system or environment, in various examples.


In other examples, sensor agents 201 may comprise any of a wide variety of sensor, computing, and processing system configurations, any of which may implement, store, run, and/or embody distributed estimate generating code 200. Distributed estimate generating code 200 may interact via network system 219 with any other computing systems proximate or network-connected to distributed estimate generating code 200.


In embodiments, sensor agents 201 of FIG. 2, and any one or more computing devices or components thereof, comprise distributed estimate generating code 200. In various embodiments, distributed estimate generating code 200 comprises sensor data detecting module 202; estimate determining module 204; estimate transmitting module 206; estimate receiving module 208; and connection restoring module 210, each of which may comprise modules of the code of block 200 of FIG. 1. Such modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular data types that the code of block 200 uses to carry out the functions and/or methodologies of embodiments of the invention as described herein. These modules of the code of block 200 are executable by the processing circuitry 120 of FIG. 1 to perform the inventive methods as described herein. Distributed estimate generating code 200 and/or sensor agents 201 may include additional or fewer modules than those shown in FIG. 2. In embodiments, separate modules may be integrated into a single module. Additionally, or alternatively, a single module may be implemented as multiple modules. Moreover, the quantity of devices and/or networks in the environment is not limited to what is shown in FIG. 2. In practice, the environment may include additional devices and/or networks; fewer devices and/or networks; different devices and/or networks; or differently arranged devices and/or networks than illustrated in FIG. 2.



FIG. 3 depicts a flowchart of an exemplary method 300 in accordance with aspects of the present invention. Steps of the method may be carried out by any of sensor agents 201 in the environment of FIG. 2 and are described with reference to elements depicted in FIG. 2.


At step 310, distributed estimate generating code 200 detects sensor data from one or more sensors comprised in a first sensor agent (e.g., via sensor data detecting module 202 of FIG. 2), in various examples. In embodiments, and as described with respect to FIG. 2, at step 320, distributed estimate generating code 200 determines an own series of estimates, based on the sensor data (e.g., via estimate determining module 204 of FIG. 2). At step 330, distributed estimate generating code 200 transmits the own series of estimates (e.g., via estimate transmitting module 206 of FIG. 2). At step 340, distributed estimate generating code 200 receives at least one additional series of estimates from one or more additional sensor agents (e.g., via estimate receiving module 208 of FIG. 2). At step 350, distributed estimate generating code 200 restores, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the series of estimates to the second sensor agent and the receiving of the series of estimates from the second sensor agent (e.g., via connection restoring module 210 of FIG. 2). At step 360, distributed estimate generating code 200 outputs, based on the own series of estimates and the additional series of estimates, a series of consensus estimates, e.g., as an output of distributed estimate generating code 200 of FIG. 2, which may also be an output of estimate transmitting module 206 of FIG. 2, and may do so in parallel with, overlapping, prior to, or in any order with steps 330 and 350, in some examples.
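The per-agent flow of steps 310 through 360 can be sketched in Python as follows. This is a minimal, hypothetical illustration: the class name, its method names, and the simple averaging used for the consensus step are all assumptions for clarity, not the disclosure's actual design (the optimal-gain combination is developed later in the disclosure).

```python
# Hypothetical sketch of one sensor agent performing method 300.
# All names and the averaging consensus rule are illustrative only.

class SensorAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.own_estimates = []   # "own series of estimates" (step 320)
        self.received = {}        # peer id -> received series (step 340)
        self.connected = set()    # peers currently reachable

    def detect(self, sensor_reading):
        # Step 310: read local sensor data.
        return float(sensor_reading)

    def estimate(self, reading):
        # Step 320: append to the own series of estimates.
        self.own_estimates.append(reading)
        return reading

    def exchange(self, peers):
        # Steps 330/340: transmit own latest estimate to peers, who record it.
        for p in peers:
            self.connected.add(p.agent_id)
            p.received.setdefault(self.agent_id, []).append(self.own_estimates[-1])

    def restore(self, peer):
        # Step 350: resume exchange after a peer re-connects.
        self.connected.add(peer.agent_id)

    def consensus(self):
        # Step 360: combine own and received series (here, a plain average).
        latest = [self.own_estimates[-1]] + [v[-1] for v in self.received.values()]
        return sum(latest) / len(latest)
```

With two agents that each detect one reading and exchange estimates, both arrive at the same averaged consensus value.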



FIG. 4 depicts an illustrative portion of a multi-agent network 400 of sensor agents 401 detecting conditions of a random dynamical environment 405 and generating and sharing estimates and predictions regarding the environment in accordance with aspects of the present invention. Sensor agents 401 may be implementations of sensor agents 201 of FIG. 2 and may each perform method 300 of FIG. 3. Multi-agent network 400 provides a generalized and robust system for observing and making predictions about random dynamical systems and environments, such as a vehicular traffic system or an electrical power generating and distributing system as described above, with consistent accuracy.


Multi-agent network 400 transcends various shortcomings of conventional distributed estimation and prediction systems, including stricter-than-necessary conditions for distributed observability and a connected network, limited network tracking capacity for estimating unstable systems, and the absence of an optimal gain matrix design that considers cross error covariances. That is, typical conventional distributed estimation systems require stricter assumptions on the local observation model or the communication network among the agents, for example neighborhood observability or an undirected connected graph. In practical distributed settings, these assumptions become a bottleneck. In other conventional distributed estimation systems, there is an upper limit on the degree of instability in the system dynamics that a given observation-network model can handle with bounded mean-squared error (MSE). Such systems become unusable in distributed process control domains where the underlying systems are inherently unstable and need to be stabilized by appropriate control input. If the estimation algorithm fails to track the unstable system, then the design of a stabilizable control input becomes infeasible. Another substantial gap in conventional distributed estimation systems is in the optimality of the distributed state estimates, which should leverage the maximum information available from the local observations and the estimates obtained from neighbors. Conventional distributed estimation systems fail to provide an optimal gain matrix design such that the algorithm yields minimum-MSE estimates.


Multi-agent network 400 is generalized and robust in part by overcoming conventional shortcomings such as these. Multi-agent network 400 further provides generalized and robust inventive advantages in part by using a System-Observation-Communication Model Framework, in various examples, as described as follows.


Multi-agent network 400 follows a discrete-time, linear, and time-invariant state-space model:











x_k = F x_{k−1} + w_{k−1},   (Equation 1)

where x_k ∈ ℝ^n is the dynamic random state vector at each integer time index k, F ∈ ℝ^{n×n} is the state transition matrix, and w_k ∈ ℝ^n is the system noise at all times t = kT, where T is the discrete-time step size and k is the same integer time index. The system noise is white Gaussian noise with zero mean and covariance matrix Q_k, i.e., w_k ~ 𝒩(0, Q_k). The initial condition of the system at a selected time 0, x_0, is also Gaussian, and follows x_0 ~ 𝒩(x̄_0, P_0^+), where x̄_0 is the initial mean.
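A minimal simulation of the state-space model of Equation 1 may be sketched as follows; the particular F, Q, dimensions, and horizon are illustrative values only, not parameters from the specification.

```python
# Illustrative simulation of Equation 1: x_k = F x_{k-1} + w_{k-1}.
import numpy as np

rng = np.random.default_rng(0)
n = 2                                     # state dimension (example value)
F = np.array([[1.0, 0.1],                 # state transition matrix F
              [0.0, 1.0]])
Q = 0.01 * np.eye(n)                      # system-noise covariance Q_k

# Gaussian initial condition x_0 (identity covariance as a stand-in for P_0^+).
x = rng.multivariate_normal(np.zeros(n), np.eye(n))
trajectory = [x]
for k in range(1, 50):
    w = rng.multivariate_normal(np.zeros(n), Q)   # w_{k-1} ~ N(0, Q)
    x = F @ x + w                                 # Equation 1
    trajectory.append(x)

X = np.stack(trajectory)                  # all states, shape (50, n)
```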


Multi-agent network 400 comprises m sensor agents 401 (e.g., where m may be in the tens, hundreds, thousands, millions, or any number) that observe the dynamic random state of environment 405. Environment 405 may comprise any system or environment to be monitored by sensor agents 401, such as a vehicular traffic system or an electrical power generating and/or distributing system as described above, or any other system or environment, in various examples. Each sensor agent 401, of index i, may observe one or more state variables, as an example of detecting step 310 as described above, and make low-dimensional measurements z_{i,k} ∈ ℝ^{p_i}, as an example of detecting step 310 and/or of step 320 of determining an own series of estimates based on the sensor data as described above, and such that p_i << n, ∀ i = 1, …, m. Sensor agents 401 transmit their own series of estimates, in an example of step 330 as described above, and receive each other's estimates, as an example of step 340 as described above, and thereby share their observations with each other, and with one or more external output nodes, in an example of outputting step 360 as described above. Any of sensor agents 401 may also serve as an output node that can be read by an authorized user or any other authorized device or communication node, such that outputting in accordance with step 360 as described above is not necessarily at all separate from transmitting in accordance with step 330 as described above, in various examples. Sensor agents 401 also re-connect and restore connections with newly introduced and newly re-connecting sensor agents 401, in an example of step 350 as described above.
Sensor agents 401 perform these transmitting, receiving, connection restoring, and outputting steps (in examples of steps 330, 340, 350, and 360 as described above) in a communication layer, e.g., a network layer or cyberspace layer, which may use advanced network communication technologies, layers, and protocols, and which may be represented by a linear and time-invariant model:











z_{i,k} = H_i x_k + v_{i,k},   i = 1, …, m,   (Equation 2)

where H_i ∈ ℝ^{p_i×n} is the measurement matrix and v_{i,k} ∈ ℝ^{p_i} is the measurement noise. The measurement noise, at each sensor agent 401 of index i, is also white Gaussian noise with zero mean and covariance matrix R_{i,k}, i.e., v_{i,k} ~ 𝒩(0, R_{i,k}). The system noise, the measurement noise, and the initial condition {{w_k}, {v_{i,k}}, x_0} ∀ i, k ≥ 0 are uncorrelated random sequences.
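Measurement generation per Equation 2 for m hypothetical agents might be sketched as follows; the measurement matrices H_i, covariances R_i, and the state sample are made-up examples chosen so that each agent observes a low-dimensional slice of the state (p_i << n).

```python
# Illustrative generation of Equation 2 measurements: z_{i,k} = H_i x_k + v_{i,k}.
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 3                               # example state dimension and agent count
H = [np.array([[1.0, 0.0]]),              # agent 1 observes state variable 1 (p_1 = 1)
     np.array([[0.0, 1.0]]),              # agent 2 observes state variable 2
     np.array([[1.0, 1.0]])]              # agent 3 observes their sum
R = [0.05 * np.eye(1) for _ in range(m)]  # measurement-noise covariances R_{i,k}

x_k = np.array([0.7, -0.2])               # one example state sample
z = [H[i] @ x_k + rng.multivariate_normal(np.zeros(1), R[i])
     for i in range(m)]                   # Equation 2 per agent
```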


Sensor agents 401 (hereafter “sensor agents” or “agents” interchangeably) exchange their measurements and current estimates with their neighbors and with any separate output nodes via the network layer (in examples of any of steps 330, 340, 350, and 360 as described above). Formally, the sensor agent communication network is represented by a simple (no self-loops or multiple edges) and directed graph 𝒢 = (𝒱, ℰ), where 𝒱 = {i : i = 1, …, m} is the set of sensor agents and ℰ = {(i, j) : ∃ an edge j → i} is the set of local communication channels among the sensor agents 401. Multi-agent network 400 may use a directed graph (one-way communications, in examples of any of steps 330, 340, 350, and 360 as described above) in some examples, which means that its algorithm is easily extendable to undirected graphs (two-way communications, in examples of any of steps 330, 340, 350, and 360 as described above) (while the reverse is not always true). The adjacency matrix of 𝒢 is denoted by A = [a_{ij}] ∈ ℝ^{m×m}, where










a_{ij} = 1, if ∃ an edge j → i;
a_{ij} = 0, otherwise.   (Equation 3)
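Equation 3 amounts to populating an adjacency matrix from a directed edge set. A sketch with a hypothetical four-agent ring (the edge list is an invented example):

```python
# Building the adjacency matrix of Equation 3; an entry a_ij = 1 records
# a communication channel j -> i. The edge set here is illustrative.
import numpy as np

m = 4
edges = [(1, 0), (2, 1), (3, 2), (0, 3)]   # pairs (i, j): agent j transmits to agent i
A = np.zeros((m, m))
for i, j in edges:
    A[i, j] = 1.0                          # Equation 3
```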
The communication network within multi-agent network 400 may be treated as sparse and time-invariant, in various examples. For each agent 401 of index i, open and closed neighborhoods of proximate agents 401 may be defined as:










Ω_i = { j | (i, j) ∈ ℰ }   (Equation 4)

Ω̄_i = { i } ∪ { j | (i, j) ∈ ℰ }   (Equation 5)
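The neighborhoods of Equations 4 and 5 can be read directly off the adjacency matrix, since (i, j) ∈ ℰ exactly when a_ij = 1. A small sketch with an illustrative A:

```python
# Open and closed neighborhoods (Equations 4 and 5) from an adjacency
# matrix where A[i, j] = 1 records an edge j -> i. A is an example.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 0, 0]])

def open_neighborhood(A, i):
    # Equation 4: agents j with a channel j -> i.
    return {j for j in range(A.shape[0]) if A[i, j] == 1}

def closed_neighborhood(A, i):
    # Equation 5: the open neighborhood plus agent i itself.
    return {i} | open_neighborhood(A, i)
```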
As in the case of a classical optimal Kalman filter, each agent 401 of index i in the framework knows the system model, F and Q_k; the initial condition statistics, x̄_0 and P_0^+; and the parameters of its own and its neighbors' measurement models, {H_j, R_{j,k} : j ∈ Ω̄_i} (which sensor agents 401 may use in determining estimates in accordance with step 320 as described above). In the distributed setting, each agent 401 also knows the communication network model, 𝒢, along with the adjacency matrix A describing the proximity of nearby fellow sensor agents 401 (which sensor agents 401 may use in performing examples of any of steps 330, 340, 350, and 360 as described above). The time-invariant state-space model is chosen for notational simplicity. All the derivations, assumptions, and results in this disclosure also hold for a time-varying state-space model (F_k, H_{i,k}, R_{i,k}, Q_k), which sensor agents 401 may also use in performing any of the above steps, in various examples.


Further, the present analysis remains the same whether the input to the system uk, the control matrix Ck, and the system noise matrix Φk are excluded or included. A complete time-varying state-space equation may be set forth as:













x_k = F_{k−1} x_{k−1} + C_{k−1} u_{k−1} + Φ_{k−1} w_{k−1},

z_{i,k} = H_{i,k} x_k + v_{i,k},   i = 1, …, m.   (Equation 6)

Multi-agent network 400 enables distributed observability, in performing any or all of the steps of method 300 as described above. The concept of distributed observability is introduced as a measure of how well internal states of multi-agent network 400 can be inferred from knowledge of its local measurements and interactions among agents 401 in multi-agent network 400. The concept of distributed observability forms part of the foundation of example distributed estimation algorithms and example designs of optimal gain of this disclosure.


Multi-agent network 400 may model a state-space-network representation of the physical system and its observation and communication exchange. The local observability matrix G_i ∈ ℝ^{n p_i × n} and the global observability matrix G ∈ ℝ^{(n Σ_{i=1}^m p_i) × n} of multi-agent network 400 may be denoted by:

G_i = [ H_i ; H_i F ; H_i F^2 ; … ; H_i F^{n−1} ],   ∀ i ∈ 𝒱;   (Equation 7)

G = [ G_1 ; G_2 ; … ; G_m ],

where [ · ; · ] denotes vertical stacking of the blocks.
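The observability matrices of Equation 7 can be assembled directly from powers of F; the F and H_i below are illustrative two-agent values, not from the specification.

```python
# Constructing the local observability matrices G_i of Equation 7 and the
# stacked global matrix G, for illustrative F and H_i.
import numpy as np

n = 2
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = [np.array([[1.0, 0.0]]),               # agent 1: p_1 = 1
     np.array([[0.0, 1.0]])]               # agent 2: p_2 = 1

def local_obs_matrix(Hi, F, n):
    # G_i = [H_i; H_i F; ...; H_i F^(n-1)]  (Equation 7)
    return np.vstack([Hi @ np.linalg.matrix_power(F, q) for q in range(n)])

G_local = [local_obs_matrix(Hi, F, n) for Hi in H]
G = np.vstack(G_local)                     # global observability matrix
```

Here each G_i has shape (n p_i, n), and G stacks them to shape (n Σ p_i, n).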
A connectivity matrix Ã of multi-agent network 400 may be defined as:

Ã = I_m + A + A^2 + ⋯ + A^{m−1}.   (Equation 8)
The element [A^q]_{i,j} of the matrix A^q, ∀ q ∈ ℤ^+, gives the number of directed walks of length q from an agent j to another agent i of multi-agent network 400. The connectivity matrix is then a non-negative matrix, Ã ≥ 0, and its elements [Ã]_{i,j} = ã_{i,j} denote the total number of walks (of any length < m) from node j to node i, where each sensor agent 401 is a node in multi-agent network 400. That is, there is no limit to the information-sharing among sensor agents 401 of multi-agent network 400, and sensor agents 401 may iteratively relay estimates among each other, up to and including each sensor agent 401 receiving estimates from all other sensor agents 401 in multi-agent network 400, or with estimates ultimately shared among any lower proportion of sensor agents 401, in performing any or all of the steps of method 300 as described above, in various examples.
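Equation 8 and the connectedness test it supports might be sketched as follows, for a hypothetical three-agent directed ring (the adjacency matrix is an invented example):

```python
# Connectivity matrix of Equation 8: A_tilde = I + A + A^2 + ... + A^(m-1),
# for an illustrative three-agent directed ring.
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)     # A[i, j] = 1 records an edge j -> i
m = A.shape[0]

A_tilde = np.eye(m)
power = np.eye(m)
for _ in range(m - 1):
    power = power @ A                      # accumulate A^q walk counts
    A_tilde += power                       # Equation 8

# The graph is connected when every entry of A_tilde is positive,
# i.e., some walk of length < m exists between every ordered pair.
connected = np.all(A_tilde > 0)
```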


Thus, each sensor agent 401 may relay a transmission of each, or at least some, series of estimates it receives from its proximate or neighboring additional sensor agents within transmission connectivity range, as an example of performing step 340 as described above. Each sensor agent may determine, based on its own series of estimates and the additional series of estimates it receives from other sensor agents 401, including series of estimates it may receive in relay via its neighboring sensor agents from sensor agents distributed farther afield, a series of consensus estimates, as an example of performing step 360 as described above. Each sensor agent 401 may thus function as an independent node of multi-agent network 400 that performs generating of consensus estimates, as an example of performing step 360 as described above.


If there exist i, j such that ã_{i,j} = 0, there does not exist any path from j to i, which implies that the graph is not connected; in other words, one or more of sensor agents 401 have become disconnected from the remaining sensor agents 401 and from multi-agent network 400, and may form a separate multi-agent network, at least temporarily, as an example of becoming disconnected from the network as described above with reference to step 350. Under the definition of distributed observability as described below, which multi-agent network 400 uses, at least some of sensor agents 401 continue to function as multi-agent network 400, and any disconnected sets of sensor agents 401 may continue to function as independent multi-agent networks and continue to perform methods of this disclosure, including all steps of method 300, regardless of whether all sensor agents 401 are connected with each other. Disconnected sensor agents 401 or groups of sensor agents 401 may also subsequently re-connect and restore connections with a main body or any other portion of sensor agents 401, and restore and resume transmitting and receiving estimates among each other, in examples of step 350 as described above.


The agent communication network, i.e., the directed graph, is connected if the connectivity matrix is a positive matrix, i.e., Ã > 0. For a fully connected network, I_m + A > 0. Although conventional distributed estimation typically requires a connected graph of sensor agents 401, a fully connected graph of sensor agents is not a necessary condition for distributed observability as defined herein, and is also not required for example distributed estimation algorithms of this disclosure. In other words, any portion of sensor agents 401 may continue implementing distributed observability and performing any or all methods and method steps of this disclosure, including any or all of method 300 as described above, when or if portions of sensor agents 401 are disconnected from each other and are not in a fully connected graph.


In other words, any portion of sensor agents 401 may continue to function together as an implementation of multi-agent network 400 without regard for whether and when sensor agents 401 maintain full connectivity among each other. Sensor agents 401 may get disconnected from each other and then re-connect with each other, and adapt accordingly, such as by restoring sharing of estimates with each other in examples of step 350 as described above, and by re-positioning themselves accordingly. Sensor agents 401 may re-position themselves by flying, driving, walking, hopping, swimming, or otherwise propelling themselves to new positions, in various examples. A given sensor agent 401, in response to detecting that a second sensor agent has become disconnected, may re-position itself, such as by re-positioning itself toward a last detected position of the second sensor agent. A sensor agent 401 re-positioning itself may thus also be comprised as part of restoring connectivity in an example of step 350 as described above. A sensor agent 401 re-positioning itself may also include remaining within connectivity or transmission range of at least one other sensor agent of its neighboring sensor agents. A given sensor agent 401 may also, in response to detecting that the second sensor agent has become re-connected, re-position itself again, such as by re-positioning itself away from a detected present position of the second sensor agent, and/or by re-positioning itself to a position that increases a local homogeneity of distribution of the first sensor agent, the second sensor agent, and one or more additional proximate sensor agents, to adapt again toward more thorough coverage of the local area. A given sensor agent 401 performing such adaptive re-positioning of itself in response to restoring connectivity with re-connected fellow sensor agents 401 may thus also be comprised as part of performing part of step 350 as described above, in various examples.


For the definition of distributed observability, let the quantity Ã_i denote the ith row of matrix Ã and the symbol • denote the face-splitting product of matrices (e.g., a transposed Khatri-Rao product). Distributed observability may be defined as follows, which may be designated Definition 1. If the row rank of the distributed observability matrix O_i, defined as:










O_i = Ã_i • G = [ ã_{i,1}  ã_{i,2}  ⋯  ã_{i,m} ] • [ G_1 ; G_2 ; ⋯ ; G_m ] = [ ã_{i,1} G_1 ; ã_{i,2} G_2 ; ⋯ ; ã_{i,m} G_m ]   (Equation 9)

is equal to n, where O_i ∈ ℝ^{(n Σ_{i=1}^m p_i) × n} and [ · ; · ] again denotes vertical stacking, then the system of environment 405 is distributedly observable by multi-agent network 400, or multi-agent network 400 has distributed observability of environment 405, at or by agent 401 with index i. The requirement of invertibility (or full rank) of the distributed observability Gramian, O_i^T O_i, is an equivalent alternative of the distributed observability definition, i.e., rank(O_i^T O_i) = n.
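Because Ã_i is a row vector, its face-splitting product with the stacked G reduces to stacking the scaled blocks ã_{i,j} G_j, so Definition 1 becomes a rank check on that stack. A sketch with illustrative two-agent matrices (all values are invented for the example):

```python
# Checking Definition 1: rank(O_i) == n with O_i built per Equation 9.
# F, H_i, and A are illustrative; the helper names are hypothetical.
import numpy as np

n, m = 2, 2
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # two agents exchanging both ways
A_tilde = np.eye(m) + A                    # Equation 8 with m = 2

def local_obs(Hi):
    # G_i of Equation 7.
    return np.vstack([Hi @ np.linalg.matrix_power(F, q) for q in range(n)])

G_blocks = [local_obs(Hi) for Hi in H]

def distributed_obs_matrix(i):
    # O_i of Equation 9: stack the blocks a~_{i,j} G_j.
    return np.vstack([A_tilde[i, j] * G_blocks[j] for j in range(m)])

# Assumption 1 holds when every agent's O_i has full column rank n.
observable = all(
    np.linalg.matrix_rank(distributed_obs_matrix(i)) == n for i in range(m)
)
```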


Distributed observability as thus defined incorporates an assumption which may be designated Assumption 1: the state-space-network model of Equations 1-3 is distributedly observable at or by all agents 401 in multi-agent network 400. This assumption ensures that the distributed estimator of multi-agent network 400 converges, with bounded mean-squared error (MSE) and quickly, to consensus estimates of the states of interest. This enables multi-agent network 400 to quickly achieve consensus estimates of the states of interest that accumulate the signal from all of the agents 401 that provide estimates of a given state of interest, while averaging out the noise, thus determining estimates of the states of interest with substantially higher SNR than any one of agents 401 is capable of, all of which may be comprised as parts of processing, determining, generating, and outputting consensus estimates, in example implementations of step 360 as described above. There is no requirement for the system of environment 405 to be stable or for multi-agent network 400 to be connected. Each individual agent 401 may perform its own determining, based on its own time series of estimates and the additional time series of estimates, of a time series of consensus estimates. The definition of distributed observability and Assumption 1 as described above form foundations of an example distributed estimation algorithm and an example optimal gain design of this disclosure, described as follows.


Some examples may also use a slightly weaker criterion defined as distributed detectability. Distributed detectability suffices for convergence of example distributed estimation algorithms of this disclosure, in various examples. Distributed detectability only requires unstable states in environment 405 to be observable.


An example distributed estimation algorithm is described as follows, as parts of processing, determining, generating, and outputting consensus estimates, in example implementations of step 360 as described above. At time k and agent i, the filter and prediction estimates of the system of environment 405 may be denoted by x̂^+_{i,k} and x̂_{i,k}, respectively, and the filter and prediction error covariance matrices by P^+_{i,k} and P_{i,k}, respectively. The prediction and filtering updates of the example distributed estimation algorithm at agent i are:










$$\hat{x}_{i,k} = F\,\hat{x}^{+}_{i,k-1} \qquad \text{(Equation 10)}$$

$$P_{i,k} = F\,P^{+}_{i,k-1}\,F^{T} + Q_k \qquad \text{(Equation 11)}$$

$$\mathcal{K}_{i,k} = \Sigma^{i}_{x,y}\big(\Sigma^{i}_{y}\big)^{-1} \qquad \text{(Equation 12)}$$

$$\hat{x}^{+}_{i,k} = \hat{x}_{i,k} + \sum_{j\in\Omega_i} B_{i,j,k}\big(\hat{x}_{j,k} - \hat{x}_{i,k}\big) + \sum_{j\in\bar{\Omega}_i} M_{i,j,k}\big(z_{j,k} - H_j\,\hat{x}_{i,k}\big) \qquad \text{(Equation 13)}$$

$$P^{+}_{i,k} = P_{i,k} - \mathcal{K}_{i,k}\,\Sigma^{i\,T}_{x,y} \qquad \text{(Equation 14)}$$




where K_{i,k} ∈ ℝ^{n×(Σ_{j∈Ω̄_i} p_j + n|Ω_i|)} is the distributed gain matrix. The covariance matrices Σ_{x,y}^i and Σ_y^i are derived in the discussion of optimal gain design below. The local consensus weight matrices, B_{i,j,k} ∈ ℝ^{n×n}, and the local innovation weight matrices, M_{i,j,k} ∈ ℝ^{n×p_j}, are obtained from the distributed gain matrix K_{i,k}.
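One prediction-and-update iteration of Equations 10-14 at a single agent might be sketched as follows. The fixed consensus weight B and innovation weight M below are illustrative assumptions standing in for the weights that the disclosure derives from the distributed gain matrix K_{i,k}; all numerical values are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition matrix
Q = 0.01 * np.eye(n)                     # process-noise covariance
H_i = np.array([[1.0, 0.0]])             # agent i's observation matrix

# Illustrative weights; in the disclosure these come from K_{i,k}.
B = {1: 0.3 * np.eye(n)}                 # consensus weight for neighbor j = 1
M = {0: 0.5 * H_i.T}                     # innovation weight for own measurement

def predict(x_prev_plus, P_prev_plus):
    """Equations 10-11: time update at agent i."""
    x_pred = F @ x_prev_plus
    P_pred = F @ P_prev_plus @ F.T + Q
    return x_pred, P_pred

def filter_update(x_pred, z, neighbor_estimates):
    """Equation 13: fuse consensus-as-innovation terms with the local innovation."""
    x_plus = x_pred.copy()
    for j, x_j in neighbor_estimates.items():
        x_plus += B[j] @ (x_j - x_pred)          # consensus term
    x_plus += M[0] @ (z - H_i @ x_pred)          # local innovation term
    return x_plus

x_prev = np.array([1.0, 0.5])
x_pred, P_pred = predict(x_prev, 0.1 * np.eye(n))
z = H_i @ np.array([1.1, 0.55]) + 0.01 * rng.standard_normal(1)
x_plus = filter_update(x_pred, z, {1: np.array([1.06, 0.55])})
print(x_plus.shape)
```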


Equations 10-14 represent the present example distributed estimation algorithm of this disclosure, which any of sensor agents 401 may calculate and determine as parts of processing, determining, generating, and outputting consensus estimates, in example implementations of step 360 as described above, and where the minimized-MSE prediction and filter estimates x̂_{i,k} and x̂^+_{i,k} are the conditional means:










$$\hat{x}_{i,k} = \mathbb{E}\Big[x_k \,\Big|\, \{z_{j,s}\}_{j\in\bar{\Omega}_i,\,s\le k-1},\ \{\hat{x}_{j,s}\}_{j\in\Omega_i,\,s\le k-1}\Big] \qquad \text{(Equation 15)}$$

$$\hat{x}^{+}_{i,k} = \mathbb{E}\Big[x_k \,\Big|\, \{z_{j,s}\}_{j\in\bar{\Omega}_i,\,s\le k},\ \{\hat{x}_{j,s}\}_{j\in\Omega_i,\,s\le k}\Big] \qquad \text{(Equation 16)}$$




The filter update of Equation 13 functionally fuses the concepts of consensus and innovations by treating the consensus on the state estimates as innovations, along with the local innovations of a given agent and its neighbor agents. To represent this functionality in a different useful manner, the filter update of Equation 13 may be rewritten with the local innovation term y_{i,k} at agent i as:











$$\hat{x}^{+}_{i,k} = \hat{x}_{i,k} + \mathcal{K}_{i,k}\,\underbrace{\begin{bmatrix} z_{j_1,k} - H_{j_1}\hat{x}_{i,k} \\ \vdots \\ z_{j_{|\bar{\Omega}_i|},k} - H_{j_{|\bar{\Omega}_i|}}\hat{x}_{i,k} \\ \hat{x}_{j_1,k} - \hat{x}_{i,k} \\ \vdots \\ \hat{x}_{j_{|\Omega_i|},k} - \hat{x}_{i,k} \end{bmatrix}}_{y_{i,k}}, \qquad \text{(Equation 17)}$$

where

$$\mathcal{K}_{i,k} = \begin{bmatrix} M_{i,j_1,k} & \cdots & M_{i,j_{|\bar{\Omega}_i|},k} & B_{i,j_1,k} & \cdots & B_{i,j_{|\Omega_i|},k} \end{bmatrix}, \qquad \text{(Equation 18)}$$

with $\{j_1,\ldots,i,\ldots,j_{|\bar{\Omega}_i|}\} = \bar{\Omega}_i$ and $\{j_1,\ldots,j_{|\Omega_i|}\} = \Omega_i$.




The innovation sequences {y_{i,k}} ∀i, k≥0 are uncorrelated Gaussian random vectors with zero mean, 𝔼[y_{i,k}] = 0, ∀i, k≥0. These innovation terms are comprised in the optimal design of the gain matrices, in various examples, as further described below.
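The stacked innovation vector y_{i,k} of Equation 17 may be assembled as in the following sketch, with a hypothetical two-agent neighborhood; the neighborhoods, observation matrices, and numerical values are illustrative assumptions.

```python
import numpy as np

n = 2
# Hypothetical neighborhood of agent i = 0: inclusive neighborhood
# (Omega-bar) for measurements, exclusive neighborhood (Omega) for consensus.
omega_bar = [0, 1]          # agents whose measurements z_j reach agent i
omega = [1]                 # neighbors whose estimates reach agent i
H = {0: np.array([[1.0, 0.0]]), 1: np.array([[0.0, 1.0]])}

def stacked_innovation(x_pred_i, z, x_hat):
    """Equation 17 (as reconstructed): stack measurement innovations
    z_j - H_j x_hat_i over Omega-bar, then consensus innovations
    x_hat_j - x_hat_i over Omega, into one vector y_{i,k}."""
    parts = [z[j] - H[j] @ x_pred_i for j in omega_bar]
    parts += [x_hat[j] - x_pred_i for j in omega]
    return np.concatenate(parts)

x_pred_i = np.array([1.0, 0.5])
z = {0: np.array([1.02]), 1: np.array([0.51])}
x_hat = {1: np.array([1.05, 0.48])}
y = stacked_innovation(x_pred_i, z, x_hat)
# Length: sum of p_j over Omega-bar plus n * |Omega| = 1 + 1 + 2 = 4.
print(y.shape)
```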


Multi-agent network 400 also performs error analysis, which any of sensor agents 401 may also calculate and determine as parts of processing, determining, generating, and outputting own estimates and/or consensus estimates, in example implementations of steps 320 and/or 360 as described above. The predictor and filter error terms, ϵ_{i,k} ∈ ℝ^n and ϵ^+_{i,k} ∈ ℝ^n, respectively, may be derived for each agent i as:










$$\epsilon_{i,k} = x_k - \hat{x}_{i,k} \qquad \text{(Equation 19)}$$

$$\epsilon^{+}_{i,k} = x_k - \hat{x}^{+}_{i,k} \qquad \text{(Equation 20)}$$




The error processes ϵ_{i,k} and ϵ^+_{i,k} are unbiased, i.e., they are zero-mean at all agents 401 and for all time indices: 𝔼[ϵ_{i,k}] = 0 and 𝔼[ϵ^+_{i,k}] = 0, ∀i, k≥0. The error processes follow ϵ_{i,k} ~ 𝒩(0_n, P_{i,k}) and ϵ^+_{i,k} ~ 𝒩(0_n, P^+_{i,k}). This shows that the distributed prediction estimates x̂_{i,k} and filtering estimates x̂^+_{i,k} provided by this example algorithm are unbiased.


Combining Equations 2 and 17, the innovations may be expanded as:










$$y_{i,k} = \begin{bmatrix} H_{j_1} x_k + v_{j_1,k} - H_{j_1}\hat{x}_{i,k} \\ \vdots \\ H_i x_k + v_{i,k} - H_i\hat{x}_{i,k} \\ \vdots \\ H_{j_{|\bar{\Omega}_i|}} x_k + v_{j_{|\bar{\Omega}_i|},k} - H_{j_{|\bar{\Omega}_i|}}\hat{x}_{i,k} \\ \hat{x}_{j_1,k} - x_k + x_k - \hat{x}_{i,k} \\ \vdots \\ \hat{x}_{j_{|\Omega_i|},k} - x_k + x_k - \hat{x}_{i,k} \end{bmatrix} = \underbrace{\begin{bmatrix} H_{j_1} \\ \vdots \\ H_i \\ \vdots \\ H_{j_{|\bar{\Omega}_i|}} \\ 0_{n\times n} \\ \vdots \\ 0_{n\times n} \end{bmatrix}}_{\tilde{H}_i}\,\epsilon_{i,k} + \underbrace{\begin{bmatrix} v_{j_1,k} \\ \vdots \\ v_{i,k} \\ \vdots \\ v_{j_{|\bar{\Omega}_i|},k} \\ \epsilon_{i,k} - \epsilon_{j_1,k} \\ \vdots \\ \epsilon_{i,k} - \epsilon_{j_{|\Omega_i|},k} \end{bmatrix}}_{\delta_{i,k}} \qquad \text{(Equation 21)}$$




where H̃_i ∈ ℝ^{(Σ_{j∈Ω̄_i} p_j + n|Ω_i|)×n} are the local innovation matrices and δ_{i,k} ∈ ℝ^{Σ_{j∈Ω̄_i} p_j + n|Ω_i|} are the local innovation noise vectors for each agent i. In compact notation, the dynamics of the local innovations are represented by:














$$y_{i,k} = \tilde{H}_i\,\epsilon_{i,k} + \delta_{i,k}, \qquad \forall i,\ k \ge 0 \qquad \text{(Equation 22)}$$
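The local innovation matrix H̃_i and noise vector δ_{i,k} of Equations 21 and 22 may be assembled as in this sketch; the toy neighborhood, matrices, and values are assumptions for illustration.

```python
import numpy as np

n = 2
omega_bar = [0, 1]                  # inclusive neighborhood of agent i = 0
omega = [1]                         # exclusive neighborhood
H = {0: np.array([[1.0, 0.0]]), 1: np.array([[0.0, 1.0]])}

# Local innovation matrix H-tilde_i (Equation 21, as reconstructed):
# measurement rows stack H_j over Omega-bar; the consensus rows are zero
# blocks, since x_hat_j - x_hat_i = eps_i - eps_j enters only through the
# noise term delta_{i,k}.
H_tilde = np.vstack([H[j] for j in omega_bar] + [np.zeros((n, n)) for _ in omega])

eps_i = np.array([0.1, -0.2])       # prediction error at agent i
eps_j = {1: np.array([0.05, -0.1])}
v = {0: np.array([0.01]), 1: np.array([-0.02])}

# delta_{i,k}: measurement noises, then eps_i - eps_j blocks (Equation 21).
delta = np.concatenate([v[j] for j in omega_bar] + [eps_i - eps_j[j] for j in omega])

# Equation 22: y_{i,k} = H-tilde_i eps_{i,k} + delta_{i,k}
y = H_tilde @ eps_i + delta
print(H_tilde.shape, y.shape)
```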




The local innovation noise terms δ_{i,k} are Gaussian random vectors with zero mean, whose variance may be denoted by Δ_{i,k}, i.e., δ_{i,k} ~ 𝒩(0, Δ_{i,k}). Using Equations 1, 10, and 19 for the predictor error, and Equations 17, 20, and 22 for the filter error, their dynamics take the form:













$$\begin{aligned} \epsilon_{i,k} &= F x_{k-1} + w_k - F\,\hat{x}^{+}_{i,k-1} \\ &= F\,\epsilon^{+}_{i,k-1} + w_k \end{aligned} \qquad \text{(Equation 23)}$$

$$\begin{aligned} \epsilon^{+}_{i,k} &= x_k - \hat{x}_{i,k} - \mathcal{K}_{i,k}\,y_{i,k} \\ &= \epsilon_{i,k} - \mathcal{K}_{i,k}\tilde{H}_i\,\epsilon_{i,k} - \mathcal{K}_{i,k}\,\delta_{i,k} \\ &= \big(I_n - \mathcal{K}_{i,k}\tilde{H}_i\big)\,\epsilon_{i,k} - \mathcal{K}_{i,k}\,\delta_{i,k} \\ &= \big(F - \mathcal{K}_{i,k}\tilde{H}_i F\big)\,\epsilon^{+}_{i,k-1} + \big(I_n - \mathcal{K}_{i,k}\tilde{H}_i\big)\,w_k - \mathcal{K}_{i,k}\,\delta_{i,k} \end{aligned} \qquad \text{(Equation 24)}$$




Given that the predictor and filter errors are zero-mean, the recursive updates of the evolution of their covariances (Equations 11 and 14) are derived as follows:













$$P_{i,k} = \mathbb{E}\big[\epsilon_{i,k}\,\epsilon^{T}_{i,k}\big] = \mathbb{E}\Big[\big(F\epsilon^{+}_{i,k-1} + w_k\big)\big(F\epsilon^{+}_{i,k-1} + w_k\big)^{T}\Big] = F\,P^{+}_{i,k-1}\,F^{T} + Q_k \qquad \text{(Equation 25)}$$

$$\begin{aligned} P^{+}_{i,k} &= \mathbb{E}\big[\epsilon^{+}_{i,k}\,\epsilon^{+T}_{i,k}\big] = \mathbb{E}\Big[\big(\epsilon_{i,k} - \mathcal{K}_{i,k}y_{i,k}\big)\big(\epsilon_{i,k} - \mathcal{K}_{i,k}y_{i,k}\big)^{T}\Big] \\ &= P_{i,k} - \mathcal{K}_{i,k}\,\Sigma^{i\,T}_{x,y} - \Sigma^{i}_{x,y}\,\mathcal{K}^{T}_{i,k} + \mathcal{K}_{i,k}\,\Sigma^{i}_{y}\,\mathcal{K}^{T}_{i,k} \\ &= P_{i,k} - \mathcal{K}_{i,k}\,\Sigma^{i\,T}_{x,y} \end{aligned} \qquad \text{(Equation 26)}$$




The expectations of the cross-terms in Equation 25 are zero. The term 𝔼[ϵ_{i,k} y_{i,k}^T] = Σ_{x,y}^i is shown below in Equation 28. The last line of Equation 26 is obtained by substituting K_{i,k} with Σ_{x,y}^i (Σ_y^i)^{-1}.


The convergence properties of the distributed estimator of Equations 10-14 may be determined by the dynamics of the filter and prediction error processes of Equations 24 and 23. If the error dynamics are asymptotically stable, then the error processes have asymptotically bounded error covariances, which in turn guarantee the convergence of the distributed algorithm. If the dynamics of the filter error processes, ϵ^+_{i,k} ∀i, are asymptotically stable, then the dynamics of the prediction error processes, ϵ_{i,k} ∀i, are also asymptotically stable.


For the distributed estimator implemented by sensor agents 401, or any subset thereof, to converge to consensus estimates with bounded mean-squared error (MSE) as part of outputting consensus estimates in accordance with step 360, the filter error of Equation 24 may be asymptotically stable in various examples, i.e., the spectral radius of the error dynamics matrix is less than 1: ρ(F − K_{i,k}H̃_iF) < 1. Given that the state-space-network model of Equations 1-3 satisfies the distributed observability criterion of Equation 9, there are guaranteed to exist gain matrices K_{i,k} at each agent i such that ρ(F − K_{i,k}H̃_iF) < 1. This forms a foundation for a design of optimal gain matrices of various examples of this disclosure, as described as follows.
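The stability condition ρ(F − K_{i,k}H̃_iF) < 1 may be checked directly, as in the following sketch; the matrices below are toy assumptions, not taken from the disclosure.

```python
import numpy as np

n = 2
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H_tilde = np.eye(n)         # toy local innovation matrix
K = 0.5 * np.eye(n)         # toy gain matrix

def spectral_radius(A):
    """Largest eigenvalue magnitude of a square matrix."""
    return np.max(np.abs(np.linalg.eigvals(A)))

# Filter error dynamics matrix (Equation 24): F - K H-tilde F.
# Asymptotic stability requires its spectral radius to be below 1.
rho = spectral_radius(F - K @ H_tilde @ F)
print(rho < 1.0)
```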


The optimal gain design may be considered in terms of the stability of the error dynamics. The asymptotic stability of the error dynamics guarantees convergence of the example distributed estimation algorithm of Equations 10-14 and bounded MSE. Aspects of this disclosure further include an aim to design the gain matrices K_{i,k} such that the MSE is not only bounded but also minimal, in various examples.


Since the zero-mean innovation sequences {y_{i,k}} ∀i, k≥0 are Gaussian and uncorrelated, they are independent random vectors. By applying the Gauss-Markov theorem to Equation 17, gain matrices of this disclosure that minimize the MSE of the filter and prediction estimates may be given by:










$$\mathcal{K}_{i,k} = \Sigma^{i}_{x,y}\big(\Sigma^{i}_{y}\big)^{-1}, \qquad \text{(Equation 27)}$$

where

$$\begin{aligned} \Sigma^{i}_{x,y} &= \mathbb{E}\big[\big(x_k - \hat{x}_{i,k}\big)y^{T}_{i,k}\big] = \mathbb{E}\big[\big(x_k - \hat{x}^{+}_{i,k} + \hat{x}^{+}_{i,k} - \hat{x}_{i,k}\big)y^{T}_{i,k}\big] \\ &= \mathbb{E}\big[\epsilon_{i,k}\,y^{T}_{i,k}\big] = \mathbb{E}\Big[\epsilon_{i,k}\big(\tilde{H}_i\epsilon_{i,k} + \delta_{i,k}\big)^{T}\Big] \\ &= P_{i,k}\,\tilde{H}^{T}_i + \Sigma_{\epsilon_i\delta_i} \end{aligned} \qquad \text{(Equation 28)}$$

and

$$\begin{aligned} \Sigma^{i}_{y} &= \mathbb{E}\big[y_{i,k}\,y^{T}_{i,k}\big] = \mathbb{E}\Big[\big(\tilde{H}_i\epsilon_{i,k} + \delta_{i,k}\big)\big(\tilde{H}_i\epsilon_{i,k} + \delta_{i,k}\big)^{T}\Big] \\ &= \tilde{H}_i\,P_{i,k}\,\tilde{H}^{T}_i + \Delta_{i,k} + \tilde{H}_i\,\Sigma_{\epsilon_i\delta_i} + \Sigma^{T}_{\epsilon_i\delta_i}\,\tilde{H}^{T}_i. \end{aligned} \qquad \text{(Equation 29)}$$




Obtaining Equation 28 uses the fact that 𝔼[(x̂^+_{i,k} − x_k) y_{i,k}^T] = 0. The identity 𝔼[ϵ_{i,k} v_{j,k}^T] = 0, ∀j∈Ω̄_i, is used to derive the two covariance quantities Σ_{ϵ_iδ_i} and Δ_{i,k} as follows:



















$$\begin{aligned} \Sigma_{\epsilon_i\delta_i} &= \mathbb{E}\big[\epsilon_{i,k}\,\delta^{T}_{i,k}\big] \\ &= \Big[\,0_{n\times p_{j_1}}\ \cdots\ 0_{n\times p_{j_{|\bar{\Omega}_i|}}}\ \big(P_{i,k} - P_{ij_1,k}\big)\ \cdots\ \big(P_{i,k} - P_{ij_{|\Omega_i|},k}\big)\Big] \end{aligned} \qquad \text{(Equation 30)}$$

$$\begin{aligned} \Delta_{i,k} &= \mathbb{E}\big[\delta_{i,k}\,\delta^{T}_{i,k}\big] \\ &= \mathrm{blkdiag}\Big\{\mathrm{blkdiag}\big\{R_{j,k}\big\}_{j\in\bar{\Omega}_i},\ \big[P_{i,k} - P_{ij,k} - P_{ji,k} + P_{j,k}\big]_{j\in\Omega_i}\Big\} \end{aligned} \qquad \text{(Equation 31)}$$




where "blkdiag" indicates a block-diagonal matrix. With the two expressions for Σ_{ϵ_iδ_i} and Δ_{i,k} in Equations 30 and 31, the optimal gain matrices of this disclosure for the distributed estimator at each agent may be reformulated as:










$$\mathcal{K}_{i,k} = \big(P_{i,k}\,\tilde{H}^{T}_i + \Sigma_{\epsilon_i\delta_i}\big)\big(\tilde{H}_i\,P_{i,k}\,\tilde{H}^{T}_i + \Delta_{i,k} + \tilde{H}_i\,\Sigma_{\epsilon_i\delta_i} + \Sigma^{T}_{\epsilon_i\delta_i}\,\tilde{H}^{T}_i\big)^{-1} \qquad \text{(Equation 32)}$$




The gain matrices are likely to be very sparse at each agent, in some examples. To alleviate challenges in tracking the complete network error covariances of multi-agent network 400, distributed estimate generating code 200 may use a certifiably optimal distributed filter that performs optimal fusion of estimates under unknown correlations via a particular tight semidefinite programming (SDP) relaxation. Further, given that the gain matrices do not depend on the measurements, they can all be pre-computed and stored on each agent 401.
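Equations 30-32 may be assembled numerically as in the following sketch for a single hypothetical neighbor; all matrices below (covariances, cross-covariance, and the toy H̃_i) are illustrative assumptions.

```python
import numpy as np

n = 2
P = 0.1 * np.eye(n)                      # prediction error covariance P_{i,k}
H_tilde = np.vstack([np.array([[1.0, 0.0]]),   # own measurement row
                     np.zeros((n, n))])        # zero block for one neighbor
R = np.array([[0.01]])                   # measurement-noise covariance R_{i,k}
P_ij = 0.05 * np.eye(n)                  # toy cross-covariance P_{ij,k}

# Equation 30: zero block for the measurement part, P - P_ij for consensus.
Sigma_eps_delta = np.hstack([np.zeros((n, 1)), P - P_ij])

# Equation 31: block-diagonal of measurement-noise and consensus-error blocks.
Delta = np.block([
    [R, np.zeros((1, n))],
    [np.zeros((n, 1)), P - P_ij - P_ij.T + P],
])

# Equation 32: optimal distributed gain.
Sigma_xy = P @ H_tilde.T + Sigma_eps_delta
Sigma_y = (H_tilde @ P @ H_tilde.T + Delta
           + H_tilde @ Sigma_eps_delta + Sigma_eps_delta.T @ H_tilde.T)
K = Sigma_xy @ np.linalg.inv(Sigma_y)
print(K.shape)
```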


Equations 25 and 26 may be combined to get:










$$P^{+}_{i,k} = F\,P^{+}_{i,k-1}\,F^{T} + Q_k - \mathcal{K}_{i,k}\,\Sigma^{i\,T}_{x,y} \qquad \text{(Equation 33)}$$




Equation 33, once Equations 28-32 are substituted in to express it in terms of system parameters, yields a recursive iteration of the filter error covariance matrix, which is the distributed version of the discrete algebraic Riccati equation for the example distributed estimation algorithm of this disclosure. The initial conditions of the covariances are P^+_{i,0} = P^+_0 ∀i and P^+_{ij,0} = P^+_0 ∀i, j∈Ω_i.


Under Assumption 1 of distributed observability, the Riccati equation has an asymptotic solution at each agent that is positive definite when started from a symmetric positive semi-definite matrix. This solution, which may be designated P^+_{i,∞}, is the fixed point of Equation 33. For linear time-invariant problems (assuming distributed observability), the steady-state filter is asymptotically stable: ρ(F − K_{i,∞}H̃_iF) < 1, i.e., the closed-loop filter matrix F − K_{i,∞}H̃_iF has all poles inside the unit circle, regardless of whether or not F is asymptotically stable. To save on the storage burden, each sensor agent 401 of index i may use a steady-state gain matrix K_{i,∞} for some or all of the iterations, in various examples. This may not yield distributed estimates with minimum MSE, but does provide estimates with bounded MSE.
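Iterating the covariance recursion to its fixed point and storing the resulting steady-state gain may be sketched as follows. For simplicity this sketch takes the degenerate case of an agent with no neighbors (Ω_i empty), in which the distributed recursion reduces to the classical discrete Riccati iteration; the system matrices are toy assumptions.

```python
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])

# Iterate the filter-error covariance recursion (Equation 33) to its fixed
# point. With no neighbors, H-tilde_i = H_i, Delta = R, and the consensus
# cross-covariance term vanishes.
P_plus = np.eye(2)                           # symmetric PSD initial condition
for _ in range(500):
    P_pred = F @ P_plus @ F.T + Q            # Equation 11
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Equation 27, degenerate case
    P_plus = P_pred - K @ (P_pred @ H.T).T   # Equation 14

K_inf = K                                    # steady-state gain, stored on-agent
rho = np.max(np.abs(np.linalg.eigvals(F - K_inf @ H @ F)))
print(rho < 1.0)
```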


Thus, multi-agent network 400 uses an inventive definition for distributed observability, introduces a new class of distributed state estimation algorithms that treat consensus on neighbors' estimates as innovations, and designs the gain matrices for the distributed estimator such that the algorithm is optimal, i.e., it yields minimum MSE estimates at all agents 401, which any of sensor agents 401 may calculate and determine as parts of processing, determining, generating, and outputting consensus estimates in example implementations of step 360 as described above. Example algorithms, derivations and error analyses of this disclosure resolve challenges related to convergence and optimality of distributed state estimation.


Systems, devices, and methods of this disclosure may further enable sensor agents 401 and multi-agent network 400 to determine and implement advantageous or optimal placement of sensor agents 401 in multi-agent network 400. Systems, devices, and methods of this disclosure may further enable sensor agents 401 and multi-agent network 400 to adapt to failures of one or more of sensor agents 401, or of their communication capabilities, by advantageously or optimally rearranging sensor agents 401 to compensate for and adapt to the loss of one or more of sensor agents 401, such as by optimally dispersing some of sensor agents 401 to cover the territory of environment 405 previously covered by the disconnected sensor agents 401, while remaining within connectivity range of multi-agent network 400, that is, of at least one other of the one or more additional sensor agents 401. Systems, devices, and methods of this disclosure may further enable sensor agents 401 and multi-agent network 400 to adapt to one or more disconnected sensor agents 401 coming back online and re-connecting to multi-agent network 400, by restoring the sharing of estimates with the re-connected sensor agents 401 and by rearranging sensor agents 401 to adapt to the return of the one or more previously disconnected sensor agents 401, such as by returning to a homogeneous density or distribution that counts the re-connected sensor agents 401, or by otherwise re-deploying sensor agents 401 to take advantage of the re-connected sensor agents 401, all of which sensor agents 401 may perform as parts of adapting to and restoring connections among each other as part of implementing step 350 as described above, in various examples.


In embodiments, a service provider could offer to perform the processes described herein. In this case, the service provider can create, maintain, deploy, support, etc., the computer infrastructure that performs the process steps of the invention for one or more customers. These customers may be, for example, any business that uses technology. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties.


In still additional embodiments, the invention provides a computer-implemented method, via a network. In this case, a computer infrastructure, such as computer 101 of FIG. 1, can be provided and one or more systems for performing the processes of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computer infrastructure. To this extent, the deployment of a system can comprise one or more of: (1) installing program code on a computing device, such as computer 101 of FIG. 1, from a computer readable medium; (2) adding one or more computing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computer infrastructure to perform the processes of the invention.


The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims
  • 1. A method, comprising: detecting, by a processor set of a first sensor agent, sensor data from one or more sensors comprised in the first sensor agent;determining, by the processor set, an own series of estimates, based on the sensor data;transmitting, by the processor set, the own series of estimates;receiving, by the processor set, at least one additional series of estimates from one or more additional sensor agents;restoring, by the processor set, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the own series of estimates to the second sensor agent and the receiving of the at least one additional series of estimates from the second sensor agent; andoutputting, by the processor set, based on the own series of estimates and the additional series of estimates, a series of consensus estimates.
  • 2. The method of claim 1, further comprising relaying a transmission of the at least one series of estimates from the one or more additional sensor agents.
  • 3. The method of claim 1, further comprising, in response to the detecting that a second sensor agent of the one or more additional sensor agents has become disconnected, re-positioning the first sensor agent.
  • 4. The method of claim 3, wherein the re-positioning the first sensor agent comprises re-positioning the first sensor agent toward a last detected position of the second sensor agent.
  • 5. The method of claim 4, wherein the re-positioning the first sensor agent further comprises re-positioning the first sensor agent while remaining within connectivity of at least one other sensor agent of the one or more additional sensor agents.
  • 6. The method of claim 1, further comprising, in response to the detecting that the second sensor agent of the one or more additional sensor agents has become re-connected, re-positioning the first sensor agent.
  • 7. The method of claim 6, wherein the re-positioning the first sensor agent comprises re-positioning the first sensor agent away from a detected present position of the second sensor agent.
  • 8. The method of claim 6, wherein the re-positioning the first sensor agent comprises re-positioning the first sensor agent to a position that increases a local homogeneity of distribution of the first sensor agent, the second sensor agent, and the one or more additional sensor agents.
  • 9. The method of claim 1, wherein the own series of estimates comprises a time series of filter estimates of observed states, based on the sensor data, and a time series of prediction estimates of predicted future states, based on the sensor data and on predictive modeling.
  • 10. The method of claim 9, wherein the determining the own series of estimates comprises performing a distributed estimation algorithm to generate prediction updates and filtering updates.
  • 11. The method of claim 10, wherein the determining the own series of estimates further comprises performing error analysis, wherein performing error analysis comprises: determining a predictor error and a filter error for the first sensor agent;determining local innovations and local innovation noise for the first sensor agent; andperforming recursive updates of the evolution of covariances, wherein a spectral radius of a dynamics matrix of error is less than 1,wherein the first sensor agent and the one or more additional sensor agents form a distributed estimator that converges to consensus estimates,wherein the consensus estimates have bounded mean-squared error (MSE) and the filter error is asymptotically stable.
  • 12. The method of claim 1, wherein determining the own series of estimates further comprises performing an optimal gain design, wherein performing the optimal gain design comprises applying Gauss-Markov theorem, thereby generating gain matrices.
  • 13. The method of claim 12, wherein the performing an optimal gain design further comprises: using an optimal distributed filter that performs optimal fusion of estimates under unknown correlations by a Semidefinite Programming (SDP) relaxation, which is pre-computed and stored on the first sensor agent; anddetermining a recursive iteration of a filter error covariance matrix which comprises a distributed version of a discrete algebraic Riccati equation for the distributed estimation algorithm.
  • 14. The method of claim 1, further comprising using the outputted series of consensus estimates for at least one application selected from among the group of: multi-agent control, positioning, navigation, state estimation in electrical power grid infrastructure, spatio-temporal environment monitoring, spatio-temporal field monitoring, connected vehicular network for traffic balancing and collision avoidance, wildlife monitoring, and collaborative object tracking.
  • 15. A computer program product comprising one or more computer readable storage media having program instructions collectively stored on the one or more computer readable storage media, the program instructions executable to: detect sensor data from one or more sensors comprised in a first sensor agent;determine an own series of estimates, based on the sensor data;transmit the own series of estimates;receive at least one additional series of estimates from one or more additional sensor agents;restore, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the own series of estimates to the second sensor agent and the receiving of the at least one additional series of estimates from the second sensor agent; andoutput, based on the own series of estimates and the additional series of estimates, a series of consensus estimates.
  • 16. The computer program product of claim 15, wherein the program instructions are further executable to: relay a transmission of the at least one series of estimates from the one or more additional sensor agents;determine, based on the own series of estimates and the additional time series of estimates, a series of consensus estimates.
  • 17. The computer program product of claim 15, wherein the program instructions are further executable to: in response to the detecting that a second sensor agent of the one or more additional sensor agents has become disconnected, re-position the first sensor agent, wherein the re-positioning the first sensor agent comprises re-positioning the first sensor agent toward a last detected position of the second sensor agent, while remaining within connectivity of at least one other sensor agent of the one or more additional sensor agents; andin response to the detecting that the second sensor agent of the one or more additional sensor agents has become re-connected, re-positioning the first sensor agent, wherein the re-positioning the first sensor agent comprises re-positioning the first sensor agent away from a detected present position of the second sensor agent, to a position that increases a local homogeneity of distribution of the first sensor agent, the second sensor agent, and the one or more additional sensor agents.
  • 18. A system comprising: a processor set, one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable to:detect sensor data from one or more sensors comprised in a first sensor agent;determine an own series of estimates, based on the sensor data;transmit the own series of estimates;receive at least one additional series of estimates from one or more additional sensor agents;restore, in response to detecting that a second sensor agent of the one or more additional sensor agents has become disconnected and then re-connected, the transmitting of the series of own estimates to the second sensor agent and the receiving of the at least one additional series of estimates from the second sensor agent; andoutput, based on the own series of estimates and the additional series of estimates, a series of consensus estimates.
  • 19. The system of claim 18, wherein the program instructions are further executable to: relay a transmission of the at least one series of estimates from the one or more additional sensor agents;determine, based on the own series of estimates and the additional time series of estimates, a series of consensus estimates.
  • 20. The system of claim 18, wherein the program instructions are further executable to: in response to the detecting that a second sensor agent of the one or more additional sensor agents has become disconnected, re-position the first sensor agent, wherein the re-positioning the first sensor agent comprises re-positioning the first sensor agent toward a last detected position of the second sensor agent, while remaining within connectivity of at least one other sensor agent of the one or more additional sensor agents; andin response to the detecting that the second sensor agent of the one or more additional sensor agents has become re-connected, re-positioning the first sensor agent, wherein the re-positioning the first sensor agent comprises re-positioning the first sensor agent away from a detected present position of the second sensor agent, to a position that increases a local homogeneity of distribution of the first sensor agent, the second sensor agent, and the one or more additional sensor agents.
STATEMENT REGARDING PRIOR DISCLOSURES BY THE INVENTOR OR A JOINT INVENTOR

The following disclosure(s) are submitted under 35 U.S.C. 102(b)(1)(A): DISCLOSURES: “On observability and optimal gain design for distributed linear filtering and prediction,” Das (i.e. the identical sole inventor of the present disclosure), submitted to the arxiv.org pre-print server, Mar. 7, 2022, 8 pages; listed in and provided with IDS.