This Early-concept Grant for Exploratory Research (EAGER) project aims to revolutionize telemanipulation, a vital component of human-robot systems requiring precise, contact-intensive, and safety-critical manipulation. Traditional methods involve a human operator controlling a dexterous robot hand to interact with external objects, often by mimicking human hand movements. These teleoperation approaches, however, fail to adequately address the challenges arising from the robotic hand's physical interactions with objects, leaving human operators with indirect and counter-intuitive control. This research instead centers control on the outcomes, or end effects, of robot-object physical interactions, potentially transforming human telemanipulation technology in essential societal sectors such as healthcare, disaster response, exploration, and manufacturing. Moreover, the project will incorporate advanced machine learning, sensor fusion, and human-computer interaction topics into curricula, promoting youth participation in STEM fields. Successful outcomes promise significant societal benefits, including enhanced efficiency and safety in critical operations, new standards for teleoperated systems, and innovation in human-robot collaboration.

The research project aims to establish a framework for end-effect-oriented dexterous telemanipulation centered on three primary research objectives. First, a learning-based robot control policy will be developed to interpret human commands through end-effect-based task features, improving task learning. Second, a safety-aware, multi-modal perception system will be created to optimize sensory inputs, ensuring intuitive and safe operation. Third, control and feedback mechanisms will be integrated within telemanipulation to provide precise and intuitive bi-directional interaction between the operator and the robot, reducing latency and enhancing task performance. The technical approach includes advanced machine learning techniques, such as Markov game modeling and learning-based control policies, to enable robots to learn and execute end-effect commands. Additionally, the project will deepen the understanding of manipulation interactions by developing a multi-modal perception system that fuses human sensory inputs into an intuitive and cohesive operator experience. By integrating principles from robotics, cognitive science, and artificial intelligence, the project aims to minimize latency and errors in human-robot collaboration. These results will significantly advance dexterous telemanipulation, human-robot systems, and safety-aware multi-modal sensing.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.