Retrieval-Enhanced Machine Learning (REML) refers to a class of machine learning models that make predictions by utilizing results retrieved by one or more retrieval models from collections of documents. REML has recently attracted considerable attention due to its wide range of applications, including knowledge grounding for question answering and improving generalization in large language models. However, REML has mainly been studied from a machine learning perspective, without focusing on its retrieval aspects. Preliminary explorations have demonstrated the impact of retrieval quality on downstream REML performance. This observation motivates this project, which provides an alternative view of REML and studies it from an information retrieval (IR) perspective. In this view, the retrieval component in REML is framed as a search engine capable of supporting multiple, independent predictive models, as opposed to a single predictive model, as is the case in the majority of existing work. <br/><br/>This project consists of three major research thrusts. First, the project will develop novel architectures and optimization solutions that provide information access to multiple machine learning models conducting a wide variety of tasks. Second, the project will study training and inference efficiency in the context of REML, focusing on how downstream machine learning models utilize retrieval results and on the feedback those models provide. Third, the project will study approaches to responsible REML, examining data control for content providers as well as fairness and robustness across multiple downstream models. 
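The retrieval-as-a-service view described above, in which one search component serves several independent predictive models, could be sketched in highly simplified form as follows. All class and function names, and the term-overlap scoring scheme, are illustrative assumptions for this sketch, not the project's actual design.

```python
# Minimal sketch: one shared retriever serves multiple independent
# downstream "models" (here, toy QA and fact-verification stubs).
from collections import Counter

class Retriever:
    """Shared search component: ranks documents by term overlap."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query, k=2):
        q = Counter(query.lower().split())
        scored = [(sum((Counter(d.lower().split()) & q).values()), d)
                  for d in self.docs]
        scored.sort(key=lambda x: -x[0])
        return [d for s, d in scored[:k] if s > 0]

def answer_question(retriever, question):
    """Toy QA model: grounds its 'answer' in retrieved evidence."""
    evidence = retriever.search(question)
    return evidence[0] if evidence else "no answer found"

def verify_fact(retriever, claim):
    """Toy fact-verification model sharing the same retriever."""
    return "supported" if retriever.search(claim) else "not supported"

docs = [
    "Paris is the capital of France",
    "The Nile is a river in Africa",
]
shared = Retriever(docs)
print(answer_question(shared, "What is the capital of France?"))
print(verify_fact(shared, "Paris is the capital of France"))
```

In this sketch both downstream models issue queries against the same `Retriever` instance, which is the multi-model setting the project studies, in contrast to a retriever optimized for a single predictive model.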
Without loss of generality, the project will primarily focus on a number of real-world language tasks, such as open-domain question answering, fact verification, and open-domain dialogue systems.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.