When machine learning (ML) and artificial intelligence (AI) techniques are used in applications involving humans (e.g., recommending personalized items to users; ranking candidates for admission, hiring, and lending), it is critical to ensure safety for both the learning system and the humans it affects. From the learner's perspective, the ML system should prevent manipulated information from disrupting the training procedure (safe training) and remain robust against rare and unexpected events during deployment (safe deployment). From the human standpoint, it is crucial that ML decisions align with social values (safety perception) and that the system be prevented from evolving toward unsafe states (safe downstream effects). However, achieving such safety assurance is often challenging due to the complex interactions and feedback dynamics between humans and the learning system. For instance, individuals who use ML systems to obtain loans or search for jobs may change their behavior, such as altering their profiles, to achieve favorable outcomes, while digital platforms offering on-demand services may steer consumer preferences to benefit their own services. Meanwhile, as users evolve, the learning system must update accordingly. Under such intricate human-AI interactions, creating a safe learning environment that supports long-term human well-being remains a significant challenge. This project aims to develop theoretical and algorithmic foundations for building a human-AI ecosystem with long-term safety assurance. The outcomes have the potential to benefit diverse domains, including lending, recruitment, healthcare, admissions, and recommendation systems.

To achieve long-term safety in the human-AI ecosystem, the project explicitly considers the complex interactions between humans and the learning system, with a research agenda comprising the following objectives: 1) develop an analytical framework to characterize human-AI interactions, one that embeds all safety components for both the learner and the human agents; 2) examine the feedback effects between agents and the ML system, developing methods to ensure the long-term safety of both under their dynamic interactions; 3) establish a causal understanding of human-AI dynamics and design transparent, interpretable interventions to achieve long-term safety. This agenda entails developing new theories and algorithms at the intersection of control theory, reinforcement learning, dynamical systems, and optimization. Beyond theoretical and algorithmic contributions, the project will be validated through use cases in recommendation systems, lending, and healthcare.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
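As a purely illustrative aside (not part of the award text), the feedback loop described in the first paragraph can be made concrete with a short simulation. The sketch below assumes a stylized strategic-classification setting: agents inflate an observed feature, without changing their true qualification, whenever the decision threshold is within a manipulation budget, and the learner naively raises the bar when too many unqualified agents are admitted. All names and parameter values (budget, lr, the 0.2 error tolerance) are hypothetical choices for illustration, not methods proposed by the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: latent qualification q ~ N(0, 1); an agent is truly
# qualified if q > 0. The learner accepts anyone whose *observed* feature x
# exceeds a threshold theta. Agents can inflate x by at most `budget`
# without changing q (gaming).
budget = 0.5   # manipulation budget (assumed, for illustration)
theta = 0.0    # initial decision threshold
lr = 0.5       # learner's step size (arbitrary choice)
n = 10_000

for t in range(10):
    q = rng.normal(size=n)     # true qualification
    x = q.copy()               # honest feature report
    # Strategic best response: agents just below the bar inflate their
    # feature enough to be accepted.
    gaming = (x < theta) & (theta - x <= budget)
    x[gaming] = theta
    accepted = x >= theta
    # The learner observes outcomes of accepted agents and measures the
    # fraction who turn out to be unqualified (false positives).
    fpr = float(np.mean(q[accepted] < 0)) if accepted.any() else 0.0
    # Naive update: raise the bar whenever the error exceeds a tolerated
    # level (0.2 here, an arbitrary target).
    theta += lr * (fpr - 0.2)
    print(f"round {t}: theta={theta:.3f}, "
          f"accept_rate={accepted.mean():.3f}, fpr={fpr:.3f}")
```

Even this toy loop exhibits the dynamic the project targets: each threshold update changes which agents game the system, which in turn shifts the data the learner sees in the next round.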