Artificial Intelligence (AI) and AI-powered tools have gained momentum in both development and usage and are becoming increasingly prevalent in the workplace. However, many AI practitioners are not aware of the cybersecurity and privacy risks associated with building AI-based systems, such as adversarial attacks on machine learning models, or the privacy and ethics risks associated with using AI-based systems for decision-making around social issues. This project's goal is to raise AI workers' awareness of security and privacy risks by developing and evaluating a comprehensive 12-workshop experiential training program. The workshop series will provide the knowledge and skills needed to build AI systems that are not only technically sound from an AI perspective but also secure, ethical, and privacy-preserving. Versions of the materials that have been improved after the evaluation will be made available to the wider community, giving the project the potential to broadly strengthen the AI workforce's technical knowledge and cybersecurity awareness.<br/><br/>The workshops will follow an experiential learning model and will be designed, organized, and delivered by experts to achieve the learning objectives. These objectives are grouped into five main modules designed to cover a wide space of security and privacy concerns around AI models: Fundamentals and Threats; Adversarial Attacks and Robustness; Privacy, Ethics, and Trust; Secure Development and Data Governance; and Case Studies. Each module will be covered in two to three two-hour workshop sessions. Each workshop starts with a one-hour webinar or expert panel discussing the workshop's key topics, followed by a one-hour experiential learning component. That second component will consist of either a demonstration using real-world examples or a hands-on lab activity that builds on the material covered in the first hour.
Participants will then present a demonstration of their lab work to other participants or write a reflection on what they learned. The project team will run the workshop series three times, with an evaluation and iteration cycle after each series to improve the materials. Together, this work will lead to a better understanding both of how to build more trustworthy AI-based systems and of how to incorporate security, privacy, and ethics training into technical curricula.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.