Generative large language models (LLMs) are transforming the field of security with their ability to provide fast and comprehensive insights and information. LLMs are now being applied to several important security tasks, including platform-specific incident response, secure programming, binary analysis, and penetration testing. With automation changing the practice of security as we know it, this project seeks to refresh security education in order to better align what we teach students with how security is now practiced. The project's novelties include creating a new curriculum that embraces the use of LLMs throughout a student's cybersecurity education in a way that prepares them to meet the demands of the future security workforce. In doing so, the project's broader significance and importance will be to enable a wider pipeline of students who can leverage modern tools such as LLMs to solve cybersecurity problems, including students with less technical backgrounds and those from underrepresented groups.

The project's approach centers on restructuring security curricula around LLMs. The new curriculum addresses the different aspects of applying LLMs to security problems: what to ask for, which model to ask, how to ask for it, whether to ask for it at all, and whether the result is trustworthy. Weaving these questions into traditional security courses in computer, network, web, and cloud security will give students the insight and practical skills they need to contribute to the modern security workforce. To ensure the results of this project are broadly accessible, all curricula and lab exercises will be made available via public repositories.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.