Computers can now generate text that closely resembles human writing in a wide variety of domains, from essays to poetry to computer code and even movie scripts. However, there is no way to know today exactly how these technologies will change the ways we teach, work, and learn. People therefore tend to imagine different possible futures, and these imagined futures can shape their thinking about generative AI. This project studies the futures that college educators imagine around generative AI by examining discussions of classroom policies. For both introductory writing courses and introductory computer programming courses, the project team will analyze discussions among educators about how and why to form policies around using or prohibiting generative AI tools. Analyzing these discussions can help reveal the futures being imagined around generative AI and how those imagined futures are shaping actions in the present. The project will also share the results of the researchers' analysis with instructors who participate in the research, helping them build a better sense of the space of possible policies they might adopt in their own classrooms.

The project includes two main lines of research activity. First, discussions of educational policies around generative AI will be collected from a range of online sources, including opinion pieces (e.g., in the Chronicle of Higher Education), social media discussions (e.g., in academic Reddit groups), and other venues. The researchers will analyze these discussion data using computational topic modeling to identify textual patterns indicative of latent suppositions and beliefs about possible sociotechnical futures. Second, the researchers will conduct a series of qualitative interviews with instructors of two types of introductory college courses: writing and composition, and computer programming. These interviews will ask instructors directly about the futures they imagine around generative AI, as well as how those imagined futures relate to their own course policies. The interviews will also include a reflexive component in which preliminary results from the computational analysis are shared with participants. Doing so serves both as a member check on the results, comparing the research team's interpretations to those of the instructors themselves, and as an opportunity for instructors to reflect on and contrast their own policies and imagined futures with those of other instructors. The results of this project will help lay a foundation for future research examining beliefs and policies about generative AI in a variety of application domains.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.