Stochastic Nash Evolution

Information

  • Award Id
    2407055
  • Award Effective Date
    8/15/2024
  • Award Expiration Date
    7/31/2027
  • Award Amount
    $300,000.00
  • Award Instrument
    Standard Grant

Stochastic Nash Evolution

This project develops a new framework for the Nash embedding theorems in order to align the foundations of mathematics with cutting-edge scientific applications, especially in AI. In the 1950s, Nash amazed the mathematical world by unifying two distinct ways of thinking about space. In two papers, he established that an abstractly defined space with a notion of length (an intrinsic Riemannian manifold) can be realized as the solution of a system of nonlinear differential equations (an extrinsic embedded manifold). These theorems are strikingly original. For example, a counterintuitive consequence is that the surface of the globe can be crumpled into an arbitrarily small region without any change in length. In a remarkable development of the past decade, these theorems are now known to lie at the foundation of outstanding scientific challenges, especially the description of turbulence in fluids and the description of big data with deep learning. This project tackles both theory and practice. On one hand, a rigorous mathematical framework for the Nash embedding theorems is developed using probability theory, shedding new light on the underlying concepts and techniques. On the other hand, algorithms and models are developed that align the theory with scientific applications. The project contributes to the training of personnel in STEM fields through the mentoring of Ph.D. students.

The technical core of this project is the rigorous analysis of Riemannian Langevin equations (RLE). The RLE provides a unified model in geometric deep learning, random matrix theory, and the isometric embedding problem (and related nonlinear PDE). In each setting, the goal of this project is to rigorously construct Gibbs measures in tandem with the development of fast optimization and sampling algorithms. Regarding mathematical foundations, the primary focus is on new intrinsic constructions of Brownian motion on Riemannian manifolds and the construction of stochastic flows with critical regularity. This framework is then extended to turbulence and other h-principles in PDE, replacing Nash's iterative scheme with the RLE in each case. Matrix models, especially the deep linear network (DLN), provide the bridge between geometry and algorithms. On one hand, the Riemannian geometry of the DLN is used to guide the analysis of (nonlinear) deep learning. On the other hand, stochastic gradient descent is used to develop numerical schemes for sampling Gibbs measures for nonlinear PDE.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
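To make the connection between Langevin equations and Gibbs measures concrete, the following is a minimal sketch, not taken from the award, of the flat Euclidean special case: an Euler-Maruyama discretization of the overdamped Langevin equation dX = -∇V(X) dt + √(2/β) dB, whose invariant measure is the Gibbs measure proportional to exp(-βV(x)). The function name `langevin_sample` and the quadratic potential are illustrative choices; the project's RLE lives on Riemannian manifolds and is substantially more delicate.

```python
import numpy as np

def langevin_sample(grad_v, x0, beta=1.0, dt=1e-2, n_steps=50_000, seed=0):
    """Euler-Maruyama discretization of overdamped Langevin dynamics
    dX = -grad V(X) dt + sqrt(2/beta) dB.  For long times the iterates
    sample (approximately) the Gibbs measure ~ exp(-beta * V(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - grad_v(x) * dt + np.sqrt(2.0 * dt / beta) * noise
        traj[k] = x
    return traj

# Quadratic potential V(x) = |x|^2 / 2: the Gibbs measure is Gaussian
# with variance 1/beta, so the long-run empirical variance should be
# close to 1/beta (up to discretization and Monte Carlo error).
beta = 2.0
traj = langevin_sample(lambda x: x, x0=np.zeros(1), beta=beta)
print(traj[10_000:].var())  # expect a value near 1/beta = 0.5
```

The quadratic case is exactly solvable (an Ornstein-Uhlenbeck process), which makes it a convenient sanity check before replacing `grad_v` with a nonconvex potential.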

  • Program Officer
    Dmitry Golovaty
    dgolovat@nsf.gov
    (703) 292-2117
  • Min Amd Letter Date
    8/2/2024
  • Max Amd Letter Date
    8/2/2024

Institutions

  • Name
    Brown University
  • City
    Providence
  • State
    RI
  • Country
    United States
  • Address
    1 Prospect St
  • Postal Code
    02912-9100
  • Phone Number
    (401) 863-2777

Investigators

  • First Name
    Govind
  • Last Name
    Menon
  • Email Address
    menon@dam.brown.edu
  • Start Date
    8/2/2024

Program Element

  • Text
    OFFICE OF MULTIDISCIPLINARY AC
  • Code
    125300
  • Text
    APPLIED MATHEMATICS
  • Code
    126600

Program Reference

  • Text
    EXP PROG TO STIM COMP RES
  • Code
    9150