Collaborative Research: SaTC: CORE: Small: Towards Secure and Trustworthy Tree Models

Information

  • Award Id
    2413046
  • Award Effective Date
1/1/2024
  • Award Expiration Date
5/31/2026
  • Award Amount
$279,999.00
  • Award Instrument
    Standard Grant

Abstract

Tree models are an important class of machine learning algorithms used in applications such as finance, healthcare, and traffic management. Their simplicity and interpretability make them well suited for decision-making tasks, in contrast to complex neural networks that can be difficult to understand. Despite these benefits, tree models are not immune to security and privacy concerns: malicious actors can tamper with tree models or steal intellectual property, threatening the integrity and confidentiality of machine learning systems. Moreover, although similar attacks have been studied on neural networks, differences in how neural networks and tree models work limit how well those findings transfer to tree models. Together, these issues leave a number of open questions around enhancing the security and trustworthiness of tree models. This project aims to develop novel strategies that address these questions, build more robust and trustworthy AI-based systems, and produce tools and educational opportunities that make the findings widely available and impactful.

Specifically, this project addresses the need for robust model authentication, watermarking for intellectual-property tracing, machine unlearning for data privacy, and defense against backdoor attacks in tree models. The technical aims are organized around four tasks: a) pursuing model identification by embedding unique signatures to generate distinct, identifiable model instances; b) developing novel methodologies for robust watermarking of tree models in order to trace intellectual property; c) designing novel machine-unlearning algorithms for tree models that exploit tree reconstruction, residual-stable splits, and tree combination techniques; and d) investigating the implications of backdoor attacks against tree models by leveraging insights from the above tasks on modifying tree models without significantly impacting accuracy. These research efforts will advance the security and trustworthiness of tree models, ensuring that they can be reliably deployed in real-world applications while mitigating the risk of malicious attacks, unauthorized access, and privacy breaches.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
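To make the signature/watermarking idea concrete, below is a minimal, self-contained Python sketch of one possible way to embed identifying bits in a decision tree: each split threshold is nudged onto a quantization grid whose parity encodes one watermark bit. This is an illustrative assumption only; the `Node` class, the parity encoding, and the `step` size are invented for demonstration and are not the project's actual methodology.

```python
# Illustrative sketch only: a toy quantization-parity watermark for decision-tree
# thresholds. The Node class, encoding scheme, and step size are assumptions
# made for demonstration, not the award's actual methods.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    feature: int = -1                # -1 marks a leaf node
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: int = 0


def internal_nodes(root: Node) -> List[Node]:
    """Collect split (non-leaf) nodes in a fixed preorder traversal."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        if node is None or node.feature == -1:
            continue
        out.append(node)
        stack.append(node.right)
        stack.append(node.left)
    return out


def embed_watermark(root: Node, bits: List[int], step: float = 0.01) -> None:
    """Snap each threshold to a multiple of `step` whose parity encodes one bit."""
    for node, bit in zip(internal_nodes(root), bits):
        q = round(node.threshold / step)
        if q % 2 != bit:
            q += 1                   # move to an adjacent grid point with the right parity
        node.threshold = q * step


def extract_watermark(root: Node, n_bits: int, step: float = 0.01) -> List[int]:
    """Read the parity of each quantized threshold back as a watermark bit."""
    return [round(n.threshold / step) % 2 for n in internal_nodes(root)[:n_bits]]


# Toy tree: x[0] <= 0.437 -> leaf 0; else x[1] <= 1.293 -> leaf 0 / leaf 1
tree = Node(feature=0, threshold=0.437,
            left=Node(label=0),
            right=Node(feature=1, threshold=1.293,
                       left=Node(label=0), right=Node(label=1)))

embed_watermark(tree, bits=[1, 0])
print(extract_watermark(tree, n_bits=2))   # -> [1, 0]
```

Because each threshold moves by at most one grid step, the perturbation can be kept small relative to the feature scale, which mirrors the abstract's goal of tweaking tree models without significantly impacting accuracy; how to do this robustly and verifiably is precisely what the project investigates.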

  • Program Officer
Nan Zhang, nanzhang@nsf.gov, (703) 292-0000
  • Min Amd Letter Date
3/19/2024
  • Max Amd Letter Date
5/29/2024
  • ARRA Amount

Institutions

  • Name
    Tufts University
  • City
    SOMERVILLE
  • State
    MA
  • Country
    United States
  • Address
    169 HOLLAND ST
  • Postal Code
02144-2401
  • Phone Number
(617) 627-3696

Investigators

  • First Name
    Yingjie
  • Last Name
    Lao
  • Email Address
    yingjie.lao@tufts.edu
  • Start Date
3/19/2024

Program Element

  • Text
Secure & Trustworthy Cyberspace
  • Code
    806000

Program Reference

  • Text
    SaTC: Secure and Trustworthy Cyberspace
  • Text
    SMALL PROJECT
  • Code
    7923
  • Text
    UNDERGRADUATE EDUCATION
  • Code
    9178
  • Text
    REU SUPP-Res Exp for Ugrd Supp
  • Code
    9251