Collaborative Research: CIF: Medium: Understanding Robustness via Parsimonious Structures.

Information

  • Award Id
    2212457
  • Award Effective Date
    10/1/2022
  • Award Expiration Date
    9/30/2025
  • Award Amount
    $900,000.00
  • Award Instrument
    Standard Grant


Modern machine learning methods, and in particular deep networks, have led to significant advances in several areas of science and engineering, including computer vision, speech and language processing, robotics, and beyond. At the same time, deep networks have been shown to be extremely sensitive to small adversarial perturbations of their inputs or training set. Because of this, models based on deep networks can exhibit significant vulnerabilities to imperceptible attacks. Recent work has proposed many ad hoc methods for defending deep networks against such adversarial attacks, which have subsequently been broken by stronger attacks. While stronger and provably correct defenses continue to be developed, a mathematical framework for understanding why deep networks can be fooled into making wrong predictions, and how to design and train networks with guarantees of robustness, remains elusive. This project aims to answer the following questions: Is it possible to detect when a network has been attacked or when a dataset has been poisoned, and to reconstruct the original uncorrupted data? If yes, under what conditions on the distribution of the data and the network architecture? If not, how can network architectures and learning algorithms be designed so that they yield provably robust networks?

This project has the following research goals: (1) derive conditions on the input data and the attack type under which one can determine the attack type and reconstruct the original signal; (2) study the fundamental limits of robustness guarantees against poisoning attacks, especially in the asymptotic regime where the adversary can poison a constant fraction of the training samples; (3) study the robustness of non-linear predictors that exploit sparsity and local stability of the computed representations, allowing for provable robustness guarantees; (4) study the role of symmetry as a form of parsimony and show that it increases adversarial robustness.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
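
The abstract refers to small input perturbations that change a network's predictions. As a concrete illustration only (not a method proposed by this project), the minimal PyTorch sketch below applies one fast-gradient-sign (FGSM) step to a toy, randomly initialized classifier; the model, the random inputs, and the perturbation budget of 0.03 are hypothetical placeholders chosen to show how an imperceptibly small, bounded perturbation can flip predictions.

    # Minimal sketch of an L-infinity-bounded adversarial perturbation (FGSM).
    # The model, data, and epsilon are toy placeholders, not part of this award.
    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return x + epsilon * sign(grad_x loss): one signed gradient step."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # The perturbed input stays within an epsilon-ball of x in the L-inf norm.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
        x = torch.rand(8, 1, 28, 28)        # a batch of 8 fake 28x28 "images"
        y = model(x).argmax(dim=1)          # treat current predictions as labels
        x_adv = fgsm_perturb(model, x, y)
        flipped = (model(x_adv).argmax(dim=1) != y).sum().item()
        print(f"{flipped}/8 predictions changed under a perturbation of size 0.03")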

  • Program Officer
    James Fowler, jafowler@nsf.gov, (703) 292-8910
  • Min Amd Letter Date
    7/21/2022
  • Max Amd Letter Date
    7/21/2022

Institutions

  • Name
    Johns Hopkins University
  • City
    BALTIMORE
  • State
    MD
  • Country
    United States
  • Address
    3400 N CHARLES ST
  • Postal Code
    21218-2608
  • Phone Number
    (443) 997-1898

Investigators

  • First Name
    Rene
  • Last Name
    Vidal
  • Email Address
    rvidal@jhu.edu
  • Start Date
    7/21/2022
  • First Name
    Soledad
  • Last Name
    Villar
  • Email Address
    svillar3@jhu.edu
  • Start Date
    7/21/2022
  • First Name
    Jeremias
  • Last Name
    Sulam
  • Email Address
    jsulam1@jhu.edu
  • Start Date
    7/21/2022

Program Element

  • Text
    Comm & Information Foundations
  • Code
    7797

Program Reference

  • Text
    Machine Learning Theory
  • Text
    MEDIUM PROJECT
  • Code
    7924
  • Text
    SIGNAL PROCESSING
  • Code
    7936
  • Text
    WOMEN, MINORITY, DISABLED, NEC
  • Code
    9102