Education DCL: EAGER: Developing Experiential Cybersecurity and Privacy Training for AI Practitioners

Information

  • Award Id
    2335700
  • Award Effective Date
    11/1/2023
  • Award Expiration Date
    10/31/2025
  • Award Amount
    $299,905.00
  • Award Instrument
    Standard Grant

Abstract

Artificial Intelligence (AI) and AI-powered tools have gained momentum in both development and usage and are becoming increasingly prevalent in the workplace. However, many AI practitioners are not aware of the cybersecurity and privacy risks associated with building AI-based systems, such as adversarial attacks on machine learning models, or of the privacy and ethics risks of using AI-based systems for decision-making on social issues. This project's goal is to raise AI workers' awareness of security and privacy risks by developing and evaluating a comprehensive 12-workshop experiential training program. The workshop series will provide the knowledge and skills needed to build AI systems that are not only technically sound from an AI perspective but also secure, ethical, and privacy-preserving. Versions of the materials improved after each evaluation will be made available to the wider community, giving the project the potential to broadly increase the AI workforce's technical knowledge and cybersecurity awareness.

The workshops will follow an experiential learning model and will be designed, organized, and delivered by experts to achieve the learning objectives. These objectives are grouped into five main modules designed to cover a wide space of security and privacy concerns around AI models: Fundamentals and Threats; Adversarial Attacks and Robustness; Privacy, Ethics, and Trust; Secure Development and Data Governance; and Case Studies. Each module will be covered in two to three two-hour workshop sessions. Each workshop starts with a one-hour webinar or expert panel discussing the workshop's key topics, followed by a one-hour experiential learning component. That second component will consist of either a demo using real-world examples or a hands-on lab activity that builds on the material covered in the first hour. Participants will then demonstrate their lab work to other participants or write a reflection on what they learned. The project team will run the workshop series three times, with an evaluation and iteration cycle after each series to improve the materials. Together, the work will lead to a better understanding of how to build more trustworthy AI-based systems and how to incorporate security, privacy, and ethics training into technical curricula.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
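To give a concrete sense of the "adversarial attacks on machine learning models" that the Adversarial Attacks and Robustness module addresses, the sketch below illustrates the fast gradient sign method (FGSM), a standard attack of this kind. It is a minimal illustration only, assuming a PyTorch image classifier with inputs scaled to [0, 1]; the model, data batch, and epsilon value are hypothetical placeholders, not materials from the project.

```python
# Minimal FGSM sketch: perturb inputs along the sign of the loss gradient
# so a trained classifier becomes more likely to mislabel them.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return adversarially perturbed copies of x (assumes x is in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage with a trained model and a labeled batch:
# x_adv = fgsm_attack(trained_model, images, labels, epsilon=0.03)
# error_rate = (trained_model(x_adv).argmax(1) != labels).float().mean()
```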

  • Program Officer
    Jeremy Epstein, jepstein@nsf.gov, (703) 292-8338
  • Min Amd Letter Date
    8/23/2023
  • Max Amd Letter Date
    8/23/2023

Institutions

  • Name
    Loyola University of Chicago
  • City
    CHICAGO
  • State
    IL
  • Country
    United States
  • Address
    820 N MICHIGAN AVE
  • Postal Code
    60611-2147
  • Phone Number
    (773) 508-2471

Investigators

  • First Name
    David
  • Last Name
    Chan-Tin
  • Email Address
    chantin@cs.luc.edu
  • Start Date
    8/23/2023
  • First Name
    Mohammed
  • Last Name
    Abuhamad
  • Email Address
    mabuhamad@luc.edu
  • Start Date
    8/23/2023

Program Element

  • Text
    Secure & Trustworthy Cyberspace
  • Code
    8060

Program Reference

  • Text
    SaTC: Secure and Trustworthy Cyberspace
  • Text
    EAGER
  • Code
    7916