The increasing use of AI-enabled technologies in social domains poses safety and security risks that future cybersecurity professionals will need to address. Importantly, many of these risks go beyond narrow notions of robustness against attack to include concerns about misuse, unintended consequences, fairness across users, and other possible harms that arise when technologies are broadly used in society. This raises the need for future cybersecurity professionals to be aware of these harms and to consider the ethics and responsibility of the systems they secure. This project's goal is to help students develop an ethical and responsible mindset toward cybersecurity topics, focusing on technologies that use artificial intelligence (AI) in security and privacy use cases. To do this, the project team will develop a series of case studies of AI-enabled technologies designed for social problems, helping students reason about aspects of the design that can raise or mitigate security and privacy risks. The cases will be designed to appear in a series of undergraduate and graduate cybersecurity-related courses, with the idea that repeated exposure to these topics will make ethical considerations an integral part of the security curriculum, and that seeing them in different contexts will reinforce student learning and prepare students to transfer these mindsets and skills across contexts as they enter the security workforce.<br/><br/>Drawing on prior research on situated learning and perspectival thinking, the project team will create a series of four role-play-based case studies that highlight different social and ethical risks that often arise around AI technologies designed for security-related problems.
Role-play-based methods are an effective pedagogical technique for learning and perspective-taking; the team plans to combine them with a concept map-based assessment approach that is well-suited to the kind of longitudinal assessment needed for a multi-course curriculum effort. Further, the concept map elicitation will be grounded in common security standards and rubrics, tightening the connection between the role-playing exercises and the security workforce development goals of the project. Topics will be chosen to highlight different areas of cybersecurity, different social contexts, and different risks; the current plan includes cases involving automated video monitoring for surveillance, privacy of smart home devices, and algorithmic evaluation of job candidates. Overall, the project will develop a validated curriculum for increasing ethical responsibility in the future cybersecurity workforce; the curriculum development and assessment methods are also intended to be easily adaptable to other educational domains and problems.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.