This project will develop improved methods and concepts to guide the development and application of new digital technologies that could be used in peer review processes for evaluating scientific publishing and funding outcomes. The research team will compare perceptions of algorithm-assisted peer review decisions with perceptions of decisions made entirely by humans. The focus of the research is on the ethics and value of using algorithms in peer review. On the one hand, algorithmic peer review serves an instrumental purpose, purportedly offering the ability to make more efficient decisions. On the other hand, algorithms can produce biased and discriminatory decisions, which can raise ethical concerns about their use. This study will expand knowledge of how algorithms relate to the norms, values, and institutional imperatives that dictate how science as a human and machine endeavor should be conducted.<br/><br/>The research team will carry out a factorial survey experiment in which research participants are presented with vignettes regarding human and algorithmic peer review decision making. The participants will be asked to assess the legitimacy of each scenario in light of bias and efficiency. The team will employ various techniques, principally multi-level econometric methods, to analyze data drawn from the survey and other sources, including sociology publications. The team will use an institutionalist approach to frame and delineate key concepts and relationships and to formulate research hypotheses that are empirically meaningful and theoretically appealing. The project will offer insights into measuring both pragmatic and moral legitimacy as they pertain to peer review. The project will also provide an adaptable survey tool and approach for gauging perceptions about algorithmic versus human peer review decisions and other scholarly communication activities.
Accordingly, it addresses foundational issues in the philosophy of science, sociology of science and technology, and science communication.<br/><br/>This project is jointly funded through the ER2 program by the Directorate for Social, Behavioral and Economic Sciences and the Directorate for Computer and Information Science and Engineering.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.