Advances in machine learning offer a tremendous opportunity to deploy learning-enabled multi-agent systems (LEMAS) in uncertain and high-stakes environments. Examples of LEMAS include a team of self-organizing drones for wildfire prevention or robots in semi-automated warehouses. Guaranteeing the safety and robustness of LEMAS is challenging, as agents form complex interacting networks in which individual decisions are made with only local information. A major challenge is the effect of imperfect learning-enabled components (LECs) on the network structure. For example, a poorly calibrated perception system can result in loss of information between agents, which compromises the network's safety and robustness. Additionally, LEMAS are high-dimensional, which makes it difficult to foresee all possible failure scenarios and creates a combinatorial and computational bottleneck. Addressing these challenges requires a fundamental rethinking, which this project undertakes with a novel statistical neurosymbolic approach to the design of safe learning-enabled multi-agent systems. The approach is neurosymbolic in that it uses symbolic reasoning over a novel multi-agent logic, and statistical in that it provides probabilistic end-to-end safety guarantees. The project will deliver new theories and algorithms relevant to any safety-critical LEMAS, with significant societal benefit for civilian and commercial applications. The project will also have significant impact on wildfire prevention by teams of drones, its primary application domain. Broader impacts further include an educational agenda spanning undergraduate and graduate education at the University of Southern California.

Each agent within a modern LEMAS uses state-of-the-art learning-enabled components for perception, trajectory prediction, and control. While learning-enabled single-agent systems are fairly well understood, it is unclear how to design safe LEMAS due to their size, complex network structure, and data dependencies between agents. To formalize safety for LEMAS, the project proposes the formal specification language “multi-agent spatio-temporal logic” (MASTL). MASTL enables joint reasoning about the network structure of LEMAS and the reliability of learning-enabled components. The project proposes a combination of statistical and formal verification techniques for MASTL to obtain provable safety guarantees for LEMAS, utilizing surrogate models, reachability analysis, and statistical tests. In parallel, the project proposes reinforcement learning techniques under MASTL specifications that result in self-organizing swarm behavior. The proposed techniques use photorealistic simulators that provide inexpensive yet realistic datasets. Because deployed LEMAS may behave differently in the real world and be subject to distribution shifts, the project proposes training algorithms that are robust to data corruption and distribution shift; these algorithms enjoy formal safety guarantees under quantifiable assumptions on the shift. To detect extreme events and unknown unknowns in the environment, the project proposes adaptive and robust predictive runtime monitors.
Finally, the project will provide real-time algorithms for self-organizing drone swarms for wildfire prevention and for robots in semi-automated warehouses.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.