Partial differential equations (PDEs) arise naturally in the modeling and study of many natural, industrial, and financial phenomena. While numerous techniques exist for approximating solutions to many types of PDEs, certain problems of practical interest exhibit pathologies that render many standard techniques either highly inefficient or unusable. Such situations often arise in nonlinear problems that are posed in high dimensions or exhibit singularities, as in the study of solid-fuel combustion optimization, oil pipeline corrosion prediction, and high-frequency financial trading. This project aims to circumvent these issues through the rigorous study and development of so-called structure-preserving deep neural networks (DNNs). While it has been observed experimentally that DNNs provide highly capable methods for approximating solutions to a large class of problems, the theoretical justification of such observations is still in its early stages. To that end, this project will provide much-needed theory for certain classes of high-dimensional nonlinear PDEs via a two-pronged approach: explicit randomized methods will be constructed that demonstrate the desired properties, while theoretical tools are simultaneously developed for representing and studying a large class of objects via DNNs. This approach requires numerous tools from applied mathematics, functional analysis, stochastic analysis, and novel DNN computations. This unique intersection of techniques will serve as the basis for the project's educational and training components, which aim to increase the presence of women, minorities, and other underrepresented groups in mathematical research.
This goal will be accomplished through the training and mentoring of first-generation and underrepresented students at both the graduate and undergraduate levels.

This project aims to address the question of whether it can be rigorously proven that there exist DNNs that approximate solutions to a large class of high-dimensional PDEs without suffering from the curse of dimensionality (CoD). Demonstrating that DNNs can represent solutions to certain classes of high-dimensional nonlinear PDEs to a prescribed accuracy without suffering from the CoD will fill a gap in the existing theory of machine learning algorithms. While the current focus is on the expressibility of DNNs with regard to solutions of PDEs, this work will also serve as a foundation for extending such studies to other types of problems. The project will extend the existing theory of multilevel Picard (MLP) approximation methods to more general high-dimensional nonlinear PDEs, with a focus on preserving inherent qualitative structures. The study of MLP approximations will also yield novel theoretical results on various stochastic fixed-point equations. Finally, the proposed work will provide explicit details on how to construct CoD-free DNN representations of various mathematical objects while also exploring theoretical issues related to popular activation functions. These ideas will be used to prove DNN-representation results and will provide a deeper understanding of how activation functions affect optimality.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
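To make the MLP idea concrete, the following is a minimal illustrative sketch, following the general form of multilevel Picard schemes in the literature rather than this project's specific construction, for a semilinear heat equation of the form du/dt + (1/2)·Laplacian(u) + f(u) = 0 with terminal condition u(T, ·) = g. The estimator combines a Monte Carlo approximation of the terminal condition with telescoping nonlinear corrections over coarser recursion levels. All function names, parameter choices, and test values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(n, M, t, x, T, f, g):
    """One realization of a level-n multilevel Picard estimate of u(t, x)
    for  du/dt + (1/2) Laplacian(u) + f(u) = 0,  u(T, .) = g.
    (Illustrative sketch only; parameters n and M control accuracy.)"""
    if n == 0:
        return 0.0  # the zeroth Picard iterate is taken to be zero
    d = x.shape[0]
    # Monte Carlo estimate of the terminal-condition term E[g(x + W_{T-t})]
    dW = rng.normal(0.0, np.sqrt(T - t), size=(M**n, d))
    est = np.mean([g(x + w) for w in dW])
    # Telescoping nonlinear corrections over coarser levels l = 0, ..., n-1,
    # each using M**(n-l) samples at a uniformly random intermediate time
    for l in range(n):
        m = M ** (n - l)
        acc = 0.0
        for _ in range(m):
            R = t + (T - t) * rng.uniform()            # random time in [t, T]
            y = x + rng.normal(0.0, np.sqrt(R - t), size=d)
            acc += f(mlp(l, M, R, y, T, f, g))
            if l > 0:
                acc -= f(mlp(l - 1, M, R, y, T, f, g))
        est += (T - t) * acc / m
    return est
```

As a sanity check under these assumptions: with f = 0 the scheme reduces to plain Monte Carlo for the heat equation, and with f(u) = u and g(x) = sum(x) the exact solution is u(t, x) = exp(T - t) · sum(x), so the level-4 estimate at (0, [1.0]) should land near e. The key point of the construction is that the number of samples decreases geometrically as the recursion deepens, which is what keeps the overall cost from growing exponentially in the dimension.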