Collaborative Research: SaTC: EDU: Education on Securing AI System under Adversarial Machine Learning Attacks

Information

  • Award Id
    2414365
  • Award Effective Date
    8/15/2024
  • Award Expiration Date
    7/31/2027
  • Award Amount
    $146,000.00
  • Award Instrument
    Standard Grant

Abstract

Artificial Intelligence (AI) has reached groundbreaking milestones in recent years, and its use now spans critical application domains such as computer vision, audio perception, and natural language processing. However, these breakthroughs come with substantial security challenges. The machine learning (ML) models serving as the computational cores of AI systems are inherently vulnerable to attacks. By exploiting these vulnerabilities, adversaries can make the models produce incorrect predictions, leading to serious consequences such as misinterpreted traffic signs in autonomous vehicles or incorrect responses in speech recognition systems. Current AI-related educational efforts pay limited attention to the security perspective of ML. To bridge this gap, this project aims to develop comprehensive educational modules that prepare students and future engineers to address these ML security vulnerabilities and achieve trustworthy AI. By creating a practice-in-the-loop learning experience, students gain hands-on experience with the security vulnerabilities of ML models and the corresponding solutions.

This project will develop a comprehensive educational program focused on three key perspectives of AI security. First, it will create a practice-in-the-loop learning experience for students to understand the security of ML in computer vision, such as image recognition and object detection; educational modules will cover various ML models for vision sensing along with their security vulnerabilities and solutions. Second, it will extend the interactive learning experience to the security problems of ML in voice assistant systems, such as speech recognition and speaker identification; these modules will introduce ML models for audio data processing and the security vulnerabilities of voice assistant AI systems. Third, it will develop software-based labs and training projects to deepen students' understanding. The project's outcomes, including teaching slides, software labs, and training projects, will enable a range of undergraduate training and outreach activities. They will also be disseminated online and through academic publications, ensuring that diverse communities can readily access and employ the educational resources.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
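The adversarial attacks described in the abstract can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), one widely taught attack in ML-security curricula. The toy logistic-regression classifier, its weights, and the perturbation budget `eps` below are illustrative assumptions, not materials from this project's modules.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Craft an FGSM adversarial example for a logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (p - y) * w, where p is the predicted probability.
    FGSM steps each input feature by eps in the sign of that gradient,
    i.e. in the direction that increases the loss.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # dL/dx
    return x + eps * np.sign(grad_x)

# Toy classifier: predicts class 1 when w @ x + b > 0 (assumed weights).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])               # clean input, correctly classified as 1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.3)
print(sigmoid(w @ x + b) > 0.5)        # → True  (clean input: class 1)
print(sigmoid(w @ x_adv + b) > 0.5)    # → False (perturbed input flips the decision)
```

A small, bounded perturbation (at most 0.3 per feature here) is enough to flip the classifier's decision, which is the core vulnerability the project's vision and audio modules would have students explore hands-on.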

  • Program Officer
    ChunSheng Xin (cxin@nsf.gov, 703-292-7353)
  • Min Amd Letter Date
    8/5/2024
  • Max Amd Letter Date
    8/5/2024
  • ARRA Amount

Institutions

  • Name
    Temple University
  • City
    PHILADELPHIA
  • State
    PA
  • Country
    United States
  • Address
    1805 N BROAD ST
  • Postal Code
    19122-6104
  • Phone Number
    (215) 707-7547

Investigators

  • First Name
    Yan
  • Last Name
    Wang
  • Email Address
    y.wang@temple.edu
  • Start Date
    8/5/2024

Program Element

  • Text
    CyberCorps: Scholarship for Service
  • Code
    166800
  • Text
    Secure & Trustworthy Cyberspace
  • Code
    806000

Program Reference

  • Text
    SaTC: Secure and Trustworthy Cyberspace
  • Text
    AI Education/Workforce Develop
  • Text
    UNDERGRADUATE EDUCATION
  • Code
    9178
  • Text
    GRADUATE INVOLVEMENT
  • Code
    9179
  • Text
    SCIENCE, MATH, ENG & TECH EDUCATION