The broader impact of this I-Corps project is the development of a speech learning technology using visual-acoustic biofeedback for individuals interested in altering their speech patterns. When individuals are unable to pronounce certain sounds in a normal way or to communicate successfully, their participation in social and professional settings suffers. Speech-language pathologists who treat speech disorders report that evidence-based treatment and resources for some sound pronunciations are scarce. Very few empirically tested interventions allow visualization of the acoustic speech signal, an evidence-based method for enhancing learning. This software can also be delivered remotely via videoconferencing or telepractice, with the potential for significant impact on service providers, clinicians, and individuals. Ultimately, providing access to more efficient technology can improve communication and quality of life for many individuals. While some speech sounds have high salience in clinical and language learning contexts, a wide range of additional sounds could broaden the impact of this technology in future stages of development. An additional application is gender-affirming voice training.<br/><br/>This I-Corps project utilizes experiential learning coupled with a first-hand investigation of the industry ecosystem to assess the translation potential of the technology. The solution is based on a technology that provides visual-acoustic biofeedback to enhance learning and ultimately help individuals change their speech patterns. The technology is integrated into software that allows learners to view the resonant frequencies of the vocal tract and compare them with a visual target representing the desired pronunciation. It takes the form of a real-time linear predictive coding (LPC) spectrum of the speech signal.
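As a minimal sketch (not the project's actual implementation), an LPC spectrum of the kind described above can be computed per audio frame with the autocorrelation method and Levinson-Durbin recursion; the model order, frame length, and synthetic test signal below are illustrative assumptions, not project parameters.

```python
# Hypothetical sketch of an LPC spectrum display's core computation.
# Assumes NumPy only; all parameter values are illustrative.
import numpy as np

def lpc_coefficients(frame, order):
    """Estimate all-pole (LPC) coefficients for one speech frame using
    the autocorrelation method and the Levinson-Durbin recursion."""
    n = len(frame)
    # Autocorrelation at lags 0..order
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err                  # reflection coefficient
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k              # prediction-error update
    return a

def lpc_spectrum(a, n_freq=512, fs=16000):
    """Evaluate the smooth LPC envelope 1/|A(e^{jw})| in dB from 0 to
    fs/2; its peaks mark the vocal-tract resonant frequencies."""
    w = np.linspace(0.0, np.pi, n_freq)
    A = np.exp(-1j * np.outer(w, np.arange(len(a)))) @ a
    freqs = w * fs / (2.0 * np.pi)
    return freqs, -20.0 * np.log10(np.abs(A) + 1e-12)
```

In a live display, successive frames of microphone audio would be windowed and passed through these two steps, with the resulting envelope drawn in real time alongside the visual target representing the desired pronunciation.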
Manipulating certain characteristics of speech, such as the resonant frequencies of the vocal tract, is a complex concept for learners to grasp. This software can be used across a wide range of devices and via videoconferencing. The technology includes features designed to enhance an individual's speech learning process, including adaptive difficulty in stimulus prompts, progress tracking, and gamification. Lastly, this speech visualization software will be integrated with a machine learning classifier developed to extend the utility of the technology.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.