A major challenge in deaf science education is the lack of standard signs in American Sign Language (ASL) for many scientific concepts. For example, one student might fingerspell a term, another might use a sign they created, and a third might use a different sign altogether. This variation can make it difficult for students to engage effectively in class without a shared understanding of scientific terminology. Collaborative problem-solving activities are known to improve the understanding of complex concepts, but traditional support methods mainly benefit hearing students. This makes it more challenging for deaf students who use sign language to participate fully. Additionally, deaf individuals are significantly underrepresented in scientific fields, which adds to their marginalization. To address these issues, the project will develop a new artificial intelligence tool designed to revolutionize collaborative learning for deaf students in science, helping them to better understand and communicate in university biology classes. The tool will use augmented reality, signed animations, and sign recognition to provide real-time information about the signs used in classroom conversations. <br/><br/>The primary hypothesis of the research is that artificial intelligence-driven technology can significantly improve the collaborative experience and learning outcomes for deaf students. The project focuses on establishing common ground, which is particularly challenging in science courses where standard ASL signs are lacking. The team uses augmented reality to visualize scientific lexicon representations, including signing avatars and English captions. These aids complement existing learning strategies, such as parallel visual processing and the creation of new terms. This project will assist students in learning new terminology introduced by teachers or emerging from classroom conversations. 
By providing flexible, non-invasive learning supports, the tool caters to the diverse needs of the deaf community in terms of language fluency, hearing ability, and use of assistive technologies. In support of the project goals, the team will convene co-design sessions, conduct prototype testing, and implement an experimental study to assess the impact of the tool. The project team includes experts in ASL scientific lexicons, learning sciences, human-computer interaction, and artificial intelligence. The goal is to improve inclusive education strategies, focusing on collaborative learning in science. The project contributes to human-computer interaction by identifying design principles for intelligent support to signing learners. It advances artificial intelligence through state-of-the-art sign recognition and generation systems that adapt to learner variability and incorporate facial expressions and prosodic features. In the learning sciences, the project explores the relationship between adaptive scaffolds for lexical alignment, collaborative processes, and learning outcomes. In deaf education, the project develops interventions supporting collaborative learning among deaf students. Acknowledging the diverse experiences within North American deaf communities, the project works to understand and accommodate these differences. If successful, the technology could generalize to other learning scenarios involving collaborating deaf students. This work will also support professional and scientific opportunities for deaf scientists, students, and trainees.<br/><br/>This project is funded by the Research on Innovative Technologies for Enhanced Learning (RITEL) program, which supports early-stage exploratory research in emerging technologies for teaching and learning.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.