Graph neural networks (GNNs) are a family of deep learning methods designed for interrelated data. Such data are called graph data because a graph is used to represent the data points as nodes and their interconnections as edges. GNNs produce representations of the nodes that capture these interrelationships and have provided solutions for a wide range of scientific applications, including social media analysis, neuroscience, healthcare, climate science, finance, aviation, e-commerce, and biology. Existing research on GNNs has generated rich theories and systems for designing GNN architectures and training GNNs with strong empirical performance. As the landscape of GNNs continues to broaden and deepen, the fundamental safety issues of GNNs remain less well studied, and several important questions largely remain open. To name a few: How can one rigorously and quantitatively measure the safety of GNN models? How safe are existing GNN models in the presence of hazards? How can one make GNNs safer during the training, adaptation, and testing stages? What are the fundamental limits and costs of enforcing the safety of GNNs?

This project investigates the end-to-end safety of graph neural networks through a systematic effort to examine safety issues across the entire life cycle of GNNs, taking into consideration various types of dataset shift and realistic external perturbations. The project consists of three integral research thrusts: 1) safe graph neural network training; 2) safe graph neural network adaptation; and 3) safe graph neural network testing. The project seeks to establish new theoretical results on the sensitivity, NP-hardness, confidence, and generalization error bounds of safe graph neural networks. It learns realistic perturbations and introduces new discrepancy measures for graphs, which in turn lead to the development of new algorithms for safe GNN training, adaptation, and testing with better efficacy and robustness. The developed theories and algorithms are evaluated on both synthetic and real-world datasets, are integrated into the courses that the investigators teach, and are further disseminated via publications, tutorials, and open-source software. The project team actively seeks to engage under-represented students during the course of the project.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.