The growing dependence of organizations on cloud cyberinfrastructure (CI), coupled with the cloud's intrinsic on-demand and elastic nature, has widened the attack surface and made cloud CI an attractive target for rapidly evolving cyber threats. The development of fairness-aware Artificial Intelligence (AI) and machine learning (ML) based security solutions can make cloud CI more resilient and trustworthy. However, a key pillar of successful secure cloud adoption is training the scientific research workforce. This project aims to train the future research workforce to develop and use AI-based cloud CI cybersecurity solutions that are fair, ethical, and unbiased. In addition, the project aims to instill in the workforce the ability to adapt and evolve these AI-based cybersecurity solutions for cloud CI, improving their trustworthiness and resiliency as new adversary models are discovered.

The technical innovations of this project address the growing need for a fairness-aware, AI-skilled secure cloud CI research workforce in two ways. First, the project will develop seven advanced experiential learning modules for secure cloud CI, referred to as AI4SecureCI, built on fair and explainable AI concepts, and integrate them into undergraduate and graduate curricula, directly training around 500 diverse participants, including faculty and students. The AI4SecureCI modules will cover concepts relevant to cloud CI: network security, authorization and automated access control, online malware detection, malware threat classification, adversarial attacks and defenses, bias and fairness, and explainable AI. Each module will include (1) lecture materials providing conceptual knowledge and (2) hands-on lab exercises providing practical experience.
To support the hands-on labs and enable wider adoption of the modules, the team will use ready-to-use datasets created from their own cloud CI security research, public security datasets, and free-tier cloud services such as AWS Educate. Second, the project will ensure broader adoption of the advanced AI4SecureCI modules and computational data-driven methods among underrepresented groups of CI users and contributors, via student boot camps and a series of faculty workshops, to foster research advances against evolving cloud CI security threat vectors. The advances made under this project, including the research, the developed modules, and the training material, will be made publicly available on a project website. The team will collaborate closely with the NSF ACCESS program to enhance the dissemination of knowledge and expertise within the CI community by incorporating the AI4SecureCI modules into the ACCESS Knowledge Base.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.