Successful oral communication in a second language depends largely on listening skills in that language. Recognizing words in conversation is particularly challenging for second language learners because of the wide range of pronunciation variation they may encounter: in everyday conversation, words are often pronounced less clearly than their "standard" forms. This common occurrence of "reduced speech" poses significant challenges for nonnative listeners. Yet little is known about nonnative speech comprehension in real-life situations, even though this question is relevant both for theories of nonnative speech processing and for language teaching. This doctoral dissertation research project investigates how native and nonnative listeners comprehend reduced speech in real time. The project also benefits society by providing education in language sciences research and by making the collected data accessible to the global research community, supporting further studies into how nonnative listeners' listening skills develop with increasing experience with the language.<br/><br/>Whereas native listeners can often quickly reconstruct the full forms of reduced words in their minds as they process them, nonnative listeners are thought to rely heavily on the segments they actually hear (the acoustic signal) and to reconstruct the full form more slowly, or not at all, from the reduced segments and syllables. This doctoral dissertation project uses eye-tracking technology to examine the real-time processing mechanisms at play. Tracking participants' eye movements across a screen displaying competing options while they listen to reduced words makes it possible to examine lexical competition (which word candidates are activated) and thus to indicate whether, and when, reduced words are reconstructed.
This reveals how native and nonnative listeners reconstruct reduced words in real time, and which alternative candidate words learners activate. The research also analyzes the influence of additional background factors, including perceptual processing, vocabulary size, familiarity with and exposure to casual speech, and time spent in countries where the target language is spoken. The results will be used to refine cognitive models of both native and nonnative speech processing and have practical implications for language instruction.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.