CRII: RI: Secure Multi-Agent Reinforcement Learning Algorithms

Information

  • Award Id
    2105007
  • Award Effective Date
    5/15/2021
  • Award Expiration Date
    4/30/2023
  • Award Amount
    $174,903.00
  • Award Instrument
    Standard Grant

CRII: RI: Secure Multi-Agent Reinforcement Learning Algorithms

Recent years have witnessed significant advances in reinforcement learning (RL), an area of machine learning that has achieved great success in solving various sequential decision-making problems. Advances in single-agent RL algorithms have sparked new interest in multi-agent RL (MARL). The goal of this project is to build robust, secure algorithms for autonomous systems built with MARL. The project team will investigate a novel threat that can be exploited simply by designing an adversarial plan for an agent acting in a cooperative multi-agent environment so as to create natural observations that are adversarial to one or more of its allies. For example, in connected autonomous vehicles, one compromised vehicle can drastically disrupt security, causing confusion and mistakes that result in poor performance and even harm to humans who rely on these systems. The project team will build a MARL algorithm that is robust to such adversarial manipulations. The educational plan for this project includes developing a suite of tutorials on analyzing the security and robustness of MARL algorithms, designed for use in a graduate course or as a tool for MARL researchers. The project team will also contribute to educational outreach by involving graduate and undergraduate students from underrepresented groups.

The project is built upon three overarching objectives: (1) study how attackers can exploit MARL vulnerabilities, (2) develop a more robust MARL algorithm by training each agent using counterfactual reasoning about other agents' behaviors, and (3) create a novel online formal verification method to satisfy security and safety requirements during the execution of the proposed MARL algorithm. More specifically, for the first objective, the project team will prove the feasibility of using a compromised agent to attack its allies in MARL systems through its actions.
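One way to picture the first objective's threat model is a compromised agent that deliberately picks team-reward-minimizing actions in an otherwise cooperative task. The toy below is an illustrative sketch of that idea only, not the project's actual attack; all names and the reward function are hypothetical:

```python
def team_reward(positions):
    """Toy cooperative objective: the team is rewarded for staying clustered
    (reward decreases as the spread of agent positions grows)."""
    return -(max(positions) - min(positions))

def compromised_agent_action(my_pos, ally_positions, moves=(-1, 0, 1)):
    """A compromised agent picks the move that minimizes the team's reward.
    Its actions look like ordinary environment behavior, yet the observations
    they create steer the cooperative team toward poor outcomes."""
    return min(moves, key=lambda m: team_reward([my_pos + m] + list(ally_positions)))

# Allies are clustered at position 0; the compromised agent moves away on purpose.
worst_move = compromised_agent_action(0, [0, 0])
```

Because the attack works purely through legal actions in the environment, defenses that only sanitize observations or inputs would not detect it.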
The second objective will reverse-engineer the attack strategies to develop a robust MARL algorithm that models the agents' behaviors during training and correlates their actions using counterfactual reasoning. In the third objective, an online formal verification model will be developed to detect deviations in agents' behaviors using a predefined set of security and safety specifications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
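The online verification idea in the third objective can be sketched as a runtime monitor that checks each agent's proposed action against predefined safety predicates before it executes. The specification format, class names, and connected-vehicle predicates below are illustrative assumptions, not the project's actual method:

```python
from typing import Callable, Dict, List

# A safety specification is a named predicate over (state, action):
# it returns True when the proposed action satisfies the specification.
Spec = Callable[[dict, int], bool]

class RuntimeMonitor:
    """Checks agents' actions against predefined safety specs at execution time."""

    def __init__(self, specs: Dict[str, Spec]):
        self.specs = specs

    def check(self, state: dict, action: int) -> List[str]:
        """Return the names of all specifications the action would violate."""
        return [name for name, ok in self.specs.items() if not ok(state, action)]

# Hypothetical connected-vehicle specs: action 1 means "accelerate".
specs = {
    "safe_gap": lambda s, a: not (a == 1 and s["gap_m"] < 10.0),
    "speed_limit": lambda s, a: s["speed_mps"] <= 30.0,
}
monitor = RuntimeMonitor(specs)

# Accelerating with only a 5 m gap violates the "safe_gap" specification.
violations = monitor.check({"gap_m": 5.0, "speed_mps": 20.0}, action=1)
```

A monitor of this shape runs alongside the learned policies, so deviations induced by a compromised ally can be flagged at execution time even when the policies themselves were trained without the attack in mind.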

  • Program Officer
    Rebecca Hwa, rhwa@nsf.gov, (703) 292-7148
  • Min Amd Letter Date
    5/12/2021
  • Max Amd Letter Date
    5/12/2021
  • ARRA Amount

Institutions

  • Name
    Wake Forest University
  • City
    Winston Salem
  • State
    NC
  • Country
    United States
  • Address
    1834 Wake Forest Road
  • Postal Code
    27109-8758
  • Phone Number
    (336) 758-5888

Investigators

  • First Name
    Sarra
  • Last Name
    Alqahtani
  • Email Address
    sarra-alqahtani@wfu.edu
  • Start Date
    5/12/2021 12:00:00 AM

Program Element

  • Text
    Robust Intelligence
  • Code
    7495

Program Reference

  • Text
    ROBUST INTELLIGENCE
  • Code
    7495
  • Text
    CISE Resrch Initiatn Initiatve
  • Code
    8228