Optimization-based Implicit Deep Learning, Theory and Applications

Information

  • NSF Award
    2309810
  • Award Id
    2309810
  • Award Effective Date
    7/15/2023
  • Award Expiration Date
    6/30/2026
  • Award Amount
    $95,575.00
  • Award Instrument
    Continuing Grant

Optimization-based Implicit Deep Learning, Theory and Applications

The past decade has seen remarkable success in deep learning. However, a significant challenge today is ensuring the interpretability and reliability of these models. In various applications, deep neural networks (DNNs) need to provide guarantees on their outputs, such as keeping a self-driving car within its lane. On the other hand, many of these tasks can be formulated as optimization problems, where optimization algorithms offer interpretable and reliable solutions. Unfortunately, such models do not leverage data and thus fall short of state-of-the-art deep learning models. This research will enhance the interpretability and reliability of deep learning methods, improving public safety where such methods are applied. In addition, the project will provide valuable educational opportunities for the students involved. Participants will gain knowledge in inverse problems, optimization, and machine learning, which are transferable skills applicable in academia, government, and industry.

The project aims to develop a framework that combines the interpretability and reliability of optimization algorithms with the design and training of DNNs. The primary focus is on implicit networks, a type of DNN whose outputs are determined implicitly through fixed-point or optimality conditions, rather than by a fixed number of computations as in traditional DNNs with a set number of layers. This integration of optimization algorithms into implicit networks is referred to as implicit learning-to-optimize (L2O). Implicit L2O networks have the potential to overcome the limitations of traditional DNNs, including their lack of reliability and interpretability. However, training and designing implicit L2O models presents additional challenges that hinder their widespread adoption. To address these challenges, the research aims to develop a universal implicit L2O framework.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
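To make the idea of an implicit layer concrete, the following is a minimal sketch (not taken from the award itself) of a generic fixed-point layer in the style of deep equilibrium models: the output `z` is defined by the condition `z = tanh(W @ z + x)` and is computed by iterating until convergence, rather than by a fixed stack of layers. The weight matrix `W`, the map `tanh`, and the rescaling to make the map a contraction are all illustrative assumptions.

```python
import numpy as np

def implicit_layer(x, W, tol=1e-8, max_iter=500):
    """Evaluate an implicit layer: find z satisfying z = tanh(W @ z + x).

    Unlike an explicit network with a set number of layers, the output
    is defined by a fixed-point condition and computed by iterating
    the map until (approximate) convergence.
    """
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Illustrative example: rescaling W to spectral norm 0.5 makes the map
# a contraction, so a unique fixed point exists and iteration converges.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W = 0.5 * W / np.linalg.norm(W, 2)
x = rng.standard_normal(4)
z = implicit_layer(x, W)
print(np.allclose(z, np.tanh(W @ z + x), atol=1e-6))  # True: z is a fixed point
```

In this toy setting the fixed point can be found by simple iteration; in practice, implicit networks are evaluated with more sophisticated root-finding or optimization solvers, and gradients are obtained through the implicit function theorem rather than by backpropagating through the iterations.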

  • Program Officer
    Yuliya Gorb, ygorb@nsf.gov, (703) 292-2113
  • Min Amd Letter Date
    7/14/2023
  • Max Amd Letter Date
    7/14/2023
  • ARRA Amount

Institutions

  • Name
    Colorado School of Mines
  • City
    GOLDEN
  • State
    CO
  • Country
    United States
  • Address
    1500 ILLINOIS ST
  • Postal Code
    80401-1887
  • Phone Number
    (303) 273-3000

Investigators

  • First Name
    Samy
  • Last Name
    Wu Fung
  • Email Address
    swufung@mines.edu
  • Start Date
    7/14/2023

Program Element

  • Text
    COMPUTATIONAL MATHEMATICS
  • Code
    1271

Program Reference

  • Text
    Machine Learning Theory
  • Text
    COMPUTATIONAL SCIENCE & ENGING
  • Code
    9263