Machine learning has become a highly successful and practical tool for understanding data, enabling new technologies, and aiding human decision-making. However, its increased use in applications that impact people has also raised a number of concerns. These include concerns about the fairness of the decisions made, about the incentives these systems generate and the effect of strategic behavior on their accuracy, and about the impact of classification decisions on societal welfare. This project aims to develop theoretical frameworks that advance the foundations of machine learning systems that address these concerns. In particular, the high-level goal of this work is to provide clean guarantees both to those using these systems and to those affected by the decisions they make.

Specifically, this project centers on three main research directions. The first is to advance the understanding of fairness in machine-learning and algorithmic contexts, with emphasis on the interaction between fairness conditions and biased training data, and on implementing fairness conditions in multi-stage decision systems. The second direction involves strategic classification: the problem of making classification decisions about agents who can modify their observable features to a limited extent, and who may do so if it leads to a decision they prefer. This work will tackle a number of fundamental problems in the design of algorithms with provable accuracy guarantees in such settings, especially for the challenging case of online sequential decision-making. The third direction involves impacts on societal welfare and the development of learning algorithms that combine classic accuracy goals with the goal of incentivizing societally beneficial behavior.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.