SHF: Small: Design Methodologies for Interpretable Differentiable Logic Networks

Information

  • NSF Award
  • 2416541
  • Award Id
    2416541
  • Award Effective Date
    7/1/2024
  • Award Expiration Date
    6/30/2027
  • Award Amount
    $600,000.00
  • Award Instrument
    Standard Grant

Artificial Intelligence (AI) is driving advancements across a multitude of fields. An important vehicle for implementing AI frameworks is machine learning, whose most popular model, the artificial neural network, takes inspiration from the human brain. Despite the key role neural networks have played in the rapid AI advancements of the last decade, how they make decisions is not easily interpretable by humans. This shortcoming prevents their wider use in application domains, such as medicine and law, where understanding the reasoning behind decisions is very important. This project offers a novel approach, called differentiable logic networks, to tackle this problem. In addition to being interpretable, these networks are highly energy- and memory-efficient, so they can be deployed on energy-constrained devices, such as smartwatches and smartphones, bringing AI closer to us. The project results will be transferred to industry through various active engagements. The project will train a new generation of graduate and undergraduate students in this emerging field, and the research outcomes will be included in an undergraduate course on machine learning. It will also provide education in microelectronics to high school students through the Princeton Laboratory Learning Program. Broad dissemination of the research to the academic and industrial communities will be achieved through published papers, posters, and seminars. In addition, various tools and models will be distributed online for the benefit of other researchers.

Differentiable logic networks consist of layers of logic operators trained through gradient-based optimization. Their decisions are easily interpretable because they rely on logic rules. They primarily consist of a network of two-input neurons that perform binary logic operations; all connections among neurons are single-bit with unit weight.
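As a concrete illustration of such neurons (a minimal sketch; the operator subset and names are illustrative, not taken from the award), each two-input neuron implements one of the 16 possible Boolean functions of its two single-bit inputs:

```python
from itertools import product

# A few of the 16 two-input Boolean functions a neuron may implement
# (illustrative subset; a full differentiable logic network considers all 16).
OPS = {
    "AND":   lambda a, b: a & b,
    "OR":    lambda a, b: a | b,
    "XOR":   lambda a, b: a ^ b,
    "NAND":  lambda a, b: 1 - (a & b),
    "IMPLY": lambda a, b: (1 - a) | b,
}

def logic_neuron(op_name, a, b):
    """Evaluate one two-input logic neuron on binary inputs a, b in {0, 1}."""
    return OPS[op_name](a, b)

# Truth table of a neuron that was assigned XOR.
xor_table = [(a, b, logic_neuron("XOR", a, b)) for a, b in product((0, 1), repeat=2)]
```

Interpretability follows directly from this structure: the trained network is a sparse circuit of named Boolean operations that can be read off as logic rules.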
These networks achieve accuracies competitive with those of traditional neural networks, yet differ from them in three major ways: (i) they transform inputs into binary values, (ii) they perform logic operations instead of matrix multiplications, and (iii) connections in network layers are very sparse, primarily because each neuron accepts only two inputs. Training them has a two-fold objective: (i) determine which Boolean function each neuron should implement and (ii) establish how the neurons should be connected. Both problems are discrete, making common neural network training optimizers, such as gradient descent, unsuitable for direct use. This project will develop synthesis methodologies that address this problem along several axes: (i) relaxation of the discrete search space to make the synthesis approach continuous and differentiable, (ii) progressive freezing, discretization, and pruning of network layers from inputs to outputs to make the network very compact, (iii) tackling of the vanishing-gradient problem, (iv) development of a new normalization method for such networks, and (v) efficient exploration of the design space.

This project is co-funded by the Software and Hardware Foundations (SHF) and Discovery Research PreK-12 (DRK-12) programs. DRK-12 is an applied research program that supports PreK-12 STEM education.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
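The relaxation of the discrete search space can be sketched as follows (a hypothetical minimal example under assumed surrogates, not the project's actual formulation): each Boolean operator is replaced by a real-valued surrogate on [0, 1] that agrees with it at the corners, and each neuron holds learnable logits whose softmax mixes the surrogates, so gradients can flow to the choice of operator; after training, the argmax operator is kept, discretizing the neuron back to a single gate.

```python
import numpy as np

# Real-valued surrogates for two-input Boolean ops on inputs in [0, 1]
# (illustrative subset of the 16; each matches its Boolean op at {0, 1}).
RELAXED_OPS = [
    lambda a, b: a * b,              # AND
    lambda a, b: a + b - a * b,      # OR
    lambda a, b: a + b - 2 * a * b,  # XOR
    lambda a, b: 1 - a * b,          # NAND
]

def soft_neuron(logits, a, b):
    """Differentiable neuron: a softmax-weighted mixture of relaxed ops.

    The logits are learned by gradient descent; at inference time the
    argmax operator is retained, turning the neuron into a single gate.
    """
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w /= w.sum()
    return float(sum(wi * op(a, b) for wi, op in zip(w, RELAXED_OPS)))

logits = np.zeros(4)                 # start from a uniform mixture of ops
y = soft_neuron(logits, 1.0, 0.0)    # smooth output in [0, 1]; here 0.75
```

As the logits concentrate on one operator during training, the mixture output converges to that operator's truth table, which is what makes the subsequent discretization step lossless in the limit.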

  • Program Officer
    Hu, X. Sharon, xhu@nsf.gov, (703) 292-8910
  • Min Amd Letter Date
    6/6/2024
  • Max Amd Letter Date
    6/6/2024
  • ARRA Amount

Institutions

  • Name
    Princeton University
  • City
    PRINCETON
  • State
    NJ
  • Country
    United States
  • Address
    1 NASSAU HALL
  • Postal Code
    08544-2001
  • Phone Number
    (609) 258-3090

Investigators

  • First Name
    Niraj
  • Last Name
    Jha
  • Email Address
    jha@princeton.edu
  • Start Date
    6/6/2024 12:00:00 AM

Program Element

  • Text
    Discovery Research K-12
  • Code
    764500
  • Text
    Software & Hardware Foundation
  • Code
    779800

Program Reference

  • Text
    Microelectronics and Semiconductors
  • Text
    SMALL PROJECT
  • Code
    7923
  • Text
    DES AUTO FOR MICRO & NANO SYST
  • Code
    7945