Modern machine learning methods, and in particular deep networks, have led to significant advances in several areas of science and engineering, including computer vision, speech and language processing, and robotics. At the same time, deep networks have been shown to be extremely sensitive to small adversarial perturbations of their inputs or training set; as a result, models based on deep networks can exhibit significant vulnerabilities to imperceptible attacks. Recent work has proposed many ad hoc methods for defending deep networks against such adversarial attacks, which have subsequently been broken by stronger attacks. While stronger and provably correct defenses continue to be developed, a mathematical framework for understanding why deep networks can be fooled into making wrong predictions, and for designing and training networks with guarantees of robustness, remains elusive. This project aims to answer the following questions: Is it possible to detect when a network has been attacked or a dataset has been poisoned, and to reconstruct the original uncorrupted data? If so, under what conditions on the data distribution and the network architecture? If not, how can network architectures and learning algorithms be designed to yield provably robust networks?
<br/><br/>This project has the following research goals: (1) derive conditions on the input data and the attack type under which one can determine the attack type and reconstruct the original signal; (2) study the fundamental limits of robustness guarantees against poisoning attacks, especially in the asymptotic regime where the adversary can poison a constant fraction of the training samples; (3) study the robustness of non-linear predictors that exploit sparsity and local stability of the computed representations, allowing for provable robustness guarantees; (4) study the role of symmetry as a form of parsimony and show that it increases adversarial robustness.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.