This project develops a new framework for the Nash embedding theorems in order to align the foundations of mathematics with cutting-edge scientific applications, especially in AI. In the 1950s, Nash amazed the mathematical world by unifying two distinct ways of thinking about space. In two papers, he established that an abstractly defined space with a notion of length (an intrinsic Riemannian manifold) can be realized as the solution of a system of nonlinear differential equations (an extrinsic embedded manifold). These theorems are strikingly original. For example, a counterintuitive consequence is that it is possible to crumple the surface of the globe into an arbitrarily small region without any change in length. In a remarkable development of the past decade, these theorems are now known to lie at the foundation of outstanding scientific challenges, especially the description of turbulence in fluids and of big data with deep learning. This project tackles both theory and practice. On one hand, a rigorous mathematical framework for the Nash embedding theorems is developed using probability theory, shedding new light on the underlying concepts and techniques. On the other hand, algorithms and models are developed that align the theory with scientific applications. The project contributes to the training of personnel in STEM fields through the mentoring of Ph.D. students.

The technical core of this project is the rigorous analysis of Riemannian Langevin equations (RLE). The RLE provides a unified model in geometric deep learning, random matrix theory, and the isometric embedding problem (and related nonlinear PDE). In each setting, the goal of this project is to rigorously construct Gibbs measures in tandem with the development of fast optimization and sampling algorithms. Regarding mathematical foundations, the primary focus is on new intrinsic constructions of Brownian motion on Riemannian manifolds and the construction of stochastic flows with critical regularity. This framework is then extended to turbulence and other h-principles in PDE, replacing Nash's iterative scheme with the RLE in each case. Matrix models, especially the deep linear network (DLN), provide the bridge between geometry and algorithms. On one hand, the Riemannian geometry of the DLN is used to guide the analysis of (nonlinear) deep learning. On the other hand, stochastic gradient descent is used to develop numerical schemes for sampling Gibbs measures for nonlinear PDE.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
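For concreteness, a minimal sketch of the central object follows, assuming the standard form of a Riemannian Langevin equation; the potential V and inverse temperature β below are illustrative placeholders, as the abstract itself does not write the equation out.

```latex
% A minimal sketch, assuming the standard form of the Riemannian Langevin
% equation on a Riemannian manifold (M, g); the potential V and the inverse
% temperature \beta are illustrative and are not specified in the abstract.
\[
  dX_t = -\operatorname{grad}_g V(X_t)\, dt + \sqrt{2\beta^{-1}}\, dB_t^{g},
\]
% where \operatorname{grad}_g is the Riemannian gradient and B_t^{g} denotes
% Brownian motion on (M, g). Under suitable conditions on V and (M, g), the
% invariant (Gibbs) measure of this dynamics is
\[
  \mu(dx) \propto e^{-\beta V(x)}\, \mathrm{vol}_g(dx),
\]
% the kind of measure whose rigorous construction the project pursues in
% each of the settings named above.
```

In this reading, the drift term plays the role of Riemannian gradient descent (optimization) while the noise term drives sampling, which is one way to see why the RLE can tie the construction of Gibbs measures to fast optimization and sampling algorithms.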