Collaborative Research: SHF: Small: Efficient and Scalable Privacy-Preserving Neural Network Inference based on Ciphertext-Ciphertext Fully Homomorphic Encryption

Information

  • Award Id
    2412357
  • Award Effective Date
    1/1/2024
  • Award Expiration Date
    3/31/2026
  • Award Amount
    $240,062.00
  • Award Instrument
    Standard Grant

Abstract

With the evolution of artificial intelligence, privacy-preserving machine learning has emerged as an important and promising technique for protecting user data privacy in cloud applications. Among existing approaches, methods based on fully homomorphic encryption (FHE) allow machine learning algorithms to be computed on encrypted data without leaking any information about the original data. This project addresses ciphertext-ciphertext FHE, which preserves the privacy of both the user and the model provider. The project aims to improve the hardware efficiency of ciphertext-ciphertext FHE-based neural network inference by orders of magnitude through algorithm-hardware co-optimization, yielding a novel framework for ensuring the root of trust in cloud computing and cryptosystems that meets the future needs of both commercial products and national defense.

This project develops efficient and scalable hardware architectures for privacy-preserving neural network inference based on ciphertext-ciphertext FHE. It leverages scheme switching - using arithmetic-based schemes for linear functions and Boolean logic-based schemes for non-linear functions - to accelerate neural network computations. The research thrusts are: a) designing efficient fundamental hardware building blocks for privacy-preserving neural networks, i.e., polynomial multipliers, with high scalability over the word length of the modulus and the degree of the polynomial, by employing a novel reconfigurable pipelining framework and exploiting special primes to perform fast modular reduction; b) further improving the efficiency of the polynomial multiplier designs through a divide-and-conquer strategy based on a novel parallel filter technique; c) developing a reconfigurable, neural-network-friendly FHE architecture using scheme switching; and d) designing an efficient accelerator for privacy-preserving neural network inference with ciphertext-ciphertext operations via scheme switching that protects the privacy of both the user and the model.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
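The scheme-switching idea described above can be illustrated with a plaintext sketch. The code below is a toy model only: linear_layer stands in for a layer evaluated under an arithmetic scheme (e.g., CKKS or BGV), relu stands in for an activation evaluated under a Boolean logic-based scheme (e.g., FHEW or TFHE), and the comments mark where switching keys would convert between ciphertext formats. No actual encryption is performed, and all names and parameters here are illustrative assumptions, not the project's design.

import numpy as np

def linear_layer(x, W, b):
    # Matrix-vector product plus bias: additions and multiplications only,
    # which arithmetic FHE schemes evaluate natively on packed ciphertexts.
    return W @ x + b

def relu(x):
    # A sign test: non-linear, so it is cheap as a Boolean circuit or a
    # programmable bootstrap but expensive as a high-degree polynomial
    # under an arithmetic scheme -- hence the switch.
    return np.maximum(x, 0.0)

def infer(x, layers):
    """Toy two-scheme pipeline: switch to the Boolean side for each
    activation, then back to the arithmetic side for the next layer."""
    for W, b in layers:
        x = linear_layer(x, W, b)   # arithmetic-scheme domain
        # --- switch: arithmetic -> Boolean ---
        x = relu(x)                 # Boolean-scheme domain
        # --- switch: Boolean -> arithmetic ---
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 16)), rng.standard_normal(8)),
          (rng.standard_normal((4, 8)), rng.standard_normal(4))]
print(infer(rng.standard_normal(16), layers))

The sketch makes the cost structure visible: every activation forces a round trip between the two schemes, which is why an accelerator built around scheme switching must make the conversion itself cheap.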
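Thrust a)'s use of special primes for fast modular reduction can be sketched as follows. The prime q = 2^13 - 2^9 + 1 = 7681 is assumed purely for illustration (a small Solinas-style, NTT-friendly prime; the abstract does not specify the project's parameters). Because 2^13 ≡ 2^9 - 1 (mod q), a general division can be replaced by shifts, additions, and subtractions, which is what makes such primes attractive for a hardware reduction unit.

Q = 7681  # assumed: 2**13 - 2**9 + 1, so 2**13 ≡ 2**9 - 1 (mod Q)

def mod_q(x: int) -> int:
    """Reduce a non-negative x (e.g., a product of two values < Q) modulo Q
    using only shifts, adds, and subtracts -- no general division."""
    assert x >= 0
    while x >= 1 << 13:
        hi, lo = x >> 13, x & ((1 << 13) - 1)
        x = lo + (hi << 9) - hi   # fold hi * 2**13 down as hi * (2**9 - 1)
    return x - Q if x >= Q else x

# Sanity check against Python's built-in reduction.
for x in [0, Q - 1, Q, (Q - 1) ** 2, 123456789]:
    assert mod_q(x) == x % Q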
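The polynomial multiplier itself, the fundamental building block named in thrust a), is typically realized with the number-theoretic transform (NTT). Below is a minimal software sketch of negacyclic multiplication in Z_q[x]/(x^n + 1), again under assumed toy parameters (n = 256, q = 7681). A hardware design of the kind the abstract describes would pipeline the butterfly loop and make it reconfigurable across moduli and polynomial degrees.

import random

Q, N = 7681, 256   # assumed toy parameters: 2*N divides Q - 1, so a
                   # primitive 2N-th root of unity exists mod Q

def primitive_2n_root(q: int, n: int) -> int:
    """Brute-force search for psi with psi^n ≡ -1 (mod q)."""
    for g in range(2, q):
        psi = pow(g, (q - 1) // (2 * n), q)
        if pow(psi, n, q) == q - 1:
            return psi
    raise ValueError("no primitive 2n-th root of unity")

PSI = primitive_2n_root(Q, N)

def ntt(a, w, q):
    """Recursive radix-2 NTT; w is a primitive len(a)-th root of unity.
    Each inner-loop iteration is one butterfly -- the unit a pipelined
    hardware multiplier replicates and schedules."""
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = ntt(a[0::2], w * w % q, q), ntt(a[1::2], w * w % q, q)
    out, wk = [0] * n, 1
    for k in range(n // 2):
        t = wk * odd[k] % q
        out[k], out[k + n // 2] = (even[k] + t) % q, (even[k] - t) % q
        wk = wk * w % q
    return out

def poly_mul(a, b):
    """Negacyclic product a*b mod (x^N + 1, Q) via twisting plus cyclic NTT."""
    tw = [pow(PSI, i, Q) for i in range(N)]       # twist factors psi^i
    A = ntt([x * t % Q for x, t in zip(a, tw)], PSI * PSI % Q, Q)
    B = ntt([x * t % Q for x, t in zip(b, tw)], PSI * PSI % Q, Q)
    C = ntt([x * y % Q for x, y in zip(A, B)], pow(PSI * PSI, Q - 2, Q), Q)
    n_inv = pow(N, Q - 2, Q)                      # scale and untwist
    return [c * n_inv % Q * pow(t, Q - 2, Q) % Q for c, t in zip(C, tw)]

# Cross-check against O(N^2) schoolbook negacyclic multiplication.
a = [random.randrange(Q) for _ in range(N)]
b = [random.randrange(Q) for _ in range(N)]
ref = [0] * N
for i in range(N):
    for j in range(N):
        k, s = (i + j) % N, 1 if i + j < N else -1
        ref[k] = (ref[k] + s * a[i] * b[j]) % Q
assert poly_mul(a, b) == ref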

  • Program Officer
    Almadena Chtchelkanova (achtchel@nsf.gov, 703-292-7498)
  • Min Amd Letter Date
    2/21/2024
  • Max Amd Letter Date
    2/21/2024
  • ARRA Amount

Institutions

  • Name
    Tufts University
  • City
    Somerville
  • State
    MA
  • Country
    United States
  • Address
    169 Holland St
  • Postal Code
    02144-2401
  • Phone Number
    617-627-3696

Investigators

  • First Name
    Yingjie
  • Last Name
    Lao
  • Email Address
    yingjie.lao@tufts.edu
  • Start Date
    2/21/2024

Program Element

  • Text
    Software & Hardware Foundations
  • Code
    779800

Program Reference

  • Text
    SMALL PROJECT
  • Code
    7923
  • Text
    COMPUTER ARCHITECTURE
  • Code
    7941