Rapid advances in artificial intelligence (AI) and cyber-physical systems (CPS) are transforming critical engineered infrastructure systems, such as transportation, promising connected and autonomous transportation services, networks, and systems. Despite the potential economic and societal benefits of addressing persistent traffic safety, congestion, and accessibility issues, AI-powered transportation CPS can also be a double-edged sword: they can cause intentional and/or unintentional harm to transportation system users, ultimately eroding public trust in transportation CPS and hindering their mass adoption and the benefits they promise. Alarming examples include the unintentionally unreliable decisions of AI under the complex, uncertain situations arising in safety-critical autonomous driving; AI vulnerability to intentional adversarial attacks against transportation CPS elements; and unintentional discrimination by AI decisions against certain transportation CPS user groups. This project aims to develop novel, comprehensive trustworthy-AI tools that provide an umbrella addressing the technical (specifically, AI safety and AI security) and social (specifically, AI fairness) trust issues arising in AI systems embedded in transportation CPS. The research activities are closely integrated with education and outreach objectives, including 1) training an underrepresented workforce equipped with the knowledge and skill sets to address AI trustworthiness in transportation CPS; 2) teaching undergraduate and graduate students about trustworthy AI in transportation systems; 3) motivating K-12 and minority students to pursue AI and engineering careers; and 4) building a broader research-education community.<br/><br/>This project consists of three multidisciplinary research thrusts at the nexus of AI safety, AI security, and AI fairness in the context of transportation CPS.
Collectively, this three-pronged framework aims to fill knowledge gaps by 1) addressing the trustworthiness of a wide spectrum of learning-based AI methods, including graph neural networks, large language models, reinforcement learning, federated learning, and tensor completion; 2) integrating multiple, non-mutually-exclusive aspects of AI trustworthiness, such as the AI safety-fairness nexus, which is crucial because these and other aspects of AI trustworthiness must co-exist to ensure the integrity of transportation CPS; 3) adopting a holistic, systematic approach to tackle trustworthiness issues arising in the data, modeling, and deployment stages throughout the AI system lifecycle; and 4) addressing AI trustworthiness in transportation CPS at the microscopic (e.g., connected autonomous vehicle control), mesoscopic (e.g., adaptive autonomous traffic signal control), and macroscopic (e.g., autonomous traffic network flow analytics) scales.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.