The goal of this EArly-concept Grant for Exploratory Research (EAGER) award is to enhance pilot training by developing methods to optimize sensory-motor interactions, thereby shortening training duration through accelerated learning. Traditional pilot training relies mainly on dominant sensory cues such as vision and equilibrium. This research explores how secondary sensory cues, such as touch and audition, can be used to improve training efficiency. Additionally, the project seeks to leverage insights into pilot neurophysiology to integrate these sensory cues in real time. By applying principles from flight dynamics, control theory, human-machine interaction, and neurophysiology, the project aims to create a comprehensive framework for optimizing sensory-motor interactions. This research supports the national interest by advancing science and promoting public welfare through improved training methods that enhance safety and efficiency in aviation. The study is particularly relevant because it supports the strategic transition toward Single Pilot Operations, which are projected for future passenger airplanes and for next-generation rotorcraft intended for Urban Air Mobility, commonly referred to as air taxis. Broader impacts include developing digital assistant systems that adapt to the cognitive workload of operators, reducing training costs, and improving safety in complex environments.

The specific objective of this research is to develop methods for optimizing sensory-motor interaction strategies so as to minimize pilot training duration. This involves creating a pilot training platform that uses multiple synthetic actors for neuroadaptive multimodal cueing. In this setup, a human pilot collaborates with an intelligent agent to control a simulated vehicle, with pilot and agent functioning together as a symbiotic organism. The vehicle can be any machine capable of moving through physical space, such as an airplane, a helicopter, or a drone. The pilot receives multimodal cues through five synthetic actors: virtual and augmented reality goggles for visual cues, a motion-base platform for proprioceptive cues, spatial audio headphones for auditory cues, a full-body haptic suit for haptic feedback, and active control inceptors for additional haptic cues. Real-time neuroadaptation enables the intelligent agent to adjust these cues and its own control authority based on the pilot's cognitive and physiological states and on the performance of the pilot-vehicle system. This approach integrates tools from flight dynamics, control theory, human-machine interaction, and neurophysiology to develop a framework for optimal sensory-motor interaction. The project introduces several novel aspects, including the simultaneous use of multiple synthetic actors in sensory-motor interaction, the application of secondary sensory cues, and the adaptive modification of multimodal cues based on real-time data. Anticipated outcomes include advances in digital assistant systems, improved training methods, and enhanced operational safety.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
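
As a concrete illustration of the neuroadaptive cueing loop described above, the following is a minimal sketch and is not part of the award: all names (CueChannel, NeuroadaptiveAgent, workload_estimate, authority, and so on) are hypothetical placeholders, and the adaptation rule is an assumed example of adjusting secondary-cue gains and shared control authority from a real-time workload estimate and pilot-vehicle tracking error.

```python
# Illustrative sketch only, not the project's actual software. All names
# (CueChannel, NeuroadaptiveAgent, workload_estimate, authority, ...) are
# hypothetical placeholders, and the adaptation rule is an assumed example.
from dataclasses import dataclass


@dataclass
class CueChannel:
    """One synthetic actor, e.g. visual goggles, motion base, spatial audio,
    full-body haptic suit, or active control inceptor."""
    name: str
    gain: float = 1.0  # intensity of the cue presented to the pilot


@dataclass
class NeuroadaptiveAgent:
    """Intelligent agent that shares control authority and adapts cueing."""
    channels: list[CueChannel]
    authority: float = 0.5  # fraction of the control effort taken by the agent

    def adapt(self, workload_estimate: float, tracking_error: float) -> None:
        """Adjust cue gains and control authority from real-time estimates of
        pilot cognitive workload and pilot-vehicle tracking performance."""
        # High estimated workload: shift authority toward the agent and
        # emphasize secondary (auditory/haptic) cues; relax as workload drops.
        target_authority = min(1.0, max(0.0, 0.3 + 0.6 * workload_estimate))
        self.authority += 0.2 * (target_authority - self.authority)
        # Poor tracking performance also nudges authority toward the agent.
        self.authority = min(1.0, self.authority + 0.1 * tracking_error)
        for channel in self.channels:
            if channel.name in ("audio", "haptic_suit", "inceptor"):
                channel.gain = 1.0 + workload_estimate  # strengthen secondary cues
            else:
                channel.gain = 1.0


def simulation_step(agent: NeuroadaptiveAgent, pilot_command: float,
                    agent_command: float, workload_estimate: float,
                    tracking_error: float) -> float:
    """Blend pilot and agent commands, then adapt cueing for the next frame."""
    blended = (1.0 - agent.authority) * pilot_command + agent.authority * agent_command
    agent.adapt(workload_estimate, tracking_error)
    return blended


if __name__ == "__main__":
    actors = [CueChannel(n) for n in
              ("visual", "motion", "audio", "haptic_suit", "inceptor")]
    agent = NeuroadaptiveAgent(channels=actors)
    # Toy trace: workload rises over the run, so agent authority and
    # secondary-cue gains increase while the blended command shifts.
    for workload, error in [(0.2, 0.1), (0.5, 0.3), (0.9, 0.6)]:
        command = simulation_step(agent, pilot_command=1.0, agent_command=0.4,
                                  workload_estimate=workload, tracking_error=error)
        print(f"workload={workload:.1f}  authority={agent.authority:.2f}  "
              f"blended_command={command:.2f}")
```

In an actual implementation the workload estimate would come from neurophysiological sensing rather than being supplied directly, and the adaptation law would be tuned experimentally; the sketch only shows the shape of the loop in which cue gains and control authority are updated each frame.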