This award supports research to develop a computational architecture (called PACE) for modeling co-creative behavior between humans and embodied intelligent machines. Humans collaborate creatively (co-create) in many settings—for example, dancing with a partner, playing pretend with a child, or brainstorming in an engineering design meeting. As common as this experience is, it rarely defines our interactions with intelligent machines. This project explores improvisational, collaborative, and co-creative dance as a form of non-contact physical interaction between co-creative partners. The project is based on creative sensemaking theory, which casts creative practice as a dynamic social process in which individuals alternate between different cognitive states as they make sense of and respond to the actions of others within new environments. This project will promote the progress of science by studying co-creative practice between advanced human dancers to better understand human co-creativity in action and to inform the design and development of co-creative AI technology. The project team will use that understanding to develop AI agents that can co-create with humans. Project outcomes will include novel machine algorithms capable of replicating aspects of human-human co-creativity, as well as public performances of human dancers engaging in co-creative dance with embodied machine intelligence. This work will inform the long-term development of more effective interfaces in a wide range of applications that involve integrated mind, machine, and motor function—such as physical therapy, design brainstorming, or future experiences with robots in the home. This project also supports K-12, undergraduate, graduate, and public education through outreach, mentorship, and public human-AI performances.<br/><br/>The goal of this project is to develop a modular, reusable system for building embodied co-creative AI. 
The project team will use contemporary dance as an application domain, as its practitioners are formally trained in exploring, expressing, and collaborating through embodied, non-contact physical interactions. The project team will first conduct a qualitative analysis of video and gesture data from human dancer dyads in improvisation sessions. This analysis will focus on how dancers build understanding through interaction with each other and the environment (i.e., through a process called participatory sensemaking). The team will then formalize their findings within an architecture for creating virtual, embodied agents that can sense, learn, and generate movement during improvisation. To demonstrate and refine this architecture, the team will develop a co-creative AI that approaches expert-level participatory sensemaking in contemporary dance, and train this agent to serve as a curated improvisational partner. The agent will be evaluated in rehearsal and in performance. The main contributions of this project will be a) the first open-access annotated dyadic movement dataset; b) a better understanding of how human dancers co-create; c) an interface for dancers to train and improvise with AI; and d) an architecture for making co-creative AI in motor-related domains based on empirical studies of co-creativity.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.