This project will study how making common telepresence robots more expressive and interactive affects people's willingness to use them and their opinions of remote collaborators. Many telepresence robots take the form of a screen on a mobile platform, giving remote attendees a physical body that can increase feelings of presence and interaction compared to a standard videoconference. However, the limited non-verbal expressiveness of these platforms is a major barrier to their use. The research goal is therefore to increase the embodiment and social interaction capabilities of these platforms by adding a hand and arm that support common non-verbal interactions such as pointing, gesturing, and touch. To do this, the researchers will first build a simple hand and arm that support these non-verbal interactions and add them to an existing telepresence robot. They will then develop software that runs on the robot to execute the gestures and ensure the safety of people nearby, as well as a user interface that maps gestures by the remote user onto gestures the hand and arm can execute. They will then test the usability of the system and its effects on social presence through studies that include both one-on-one and small-group icebreaking conversations. This work will lead toward more natural interfaces for telepresence robots and a better understanding of how people accept and interact with them. This, in turn, should yield social benefits by making remote interaction more effective, saving the time, effort, and fuel costs of travel while supporting not just remote meetings but other remote services such as medical diagnosis and caregiving. The team will also use the research both in their own classes and for outreach at events designed to encourage children to explore science careers.

The work sits at the intersection of telerobotics, haptics, and social psychology. Because gestures, pointing, handshakes, and other non-verbal communication are an important part of human interaction that current telepresence platforms do not support, the work focuses on developing a lightweight arm that can implement those gestures without the complexity, fragility, and expense of arms that fully mimic human motion. To strike this balance, the research team will develop a 3D-printed, five-fingered hand with three degrees of freedom and simple connections that allow the fingers to bend naturally enough to recreate the intended gestures. The control software will model hand configurations with forward and inverse kinematics and use an open-loop controller that works together with the human operator to implement the gestures; bump, force, and optical sensors will address safety concerns. For the human remote operator, the team will develop interfaces that (a) add cameras to the robot to provide the fuller view of the remote environment needed for effective gestures, and (b) use motion-tracking hardware to detect the remote user's arm motion and translate it into the space of motions the robot arm can perform, focusing on the specific targeted social gestures. They will evaluate the system through a series of between-subjects user studies in which participants act as either the remote or the local user and interact with a version of the robot with or without the arm. Participants will interact with trained experimental confederates, both to reduce variability and to ensure that the target non-verbal gestures are experienced in both the with-arm and without-arm conditions.
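To make the gesture pipeline above concrete, the following is a minimal sketch of the kind of open-loop motion mapping described: tracked operator wrist positions are clamped to the reachable workspace of a simple two-link arm and converted to joint angles with closed-form inverse kinematics. The link lengths, joint limits, and tracker samples are illustrative assumptions, not the project's actual hardware parameters.

```python
# Minimal sketch, assuming a planar two-link arm: tracked operator wrist
# positions are clamped to the arm's reachable workspace and converted to
# joint angles with closed-form inverse kinematics. All dimensions below
# are hypothetical.
import math

L_UPPER, L_FORE = 0.30, 0.25   # assumed link lengths (m)
ELBOW_LIMITS = (0.1, 2.8)      # assumed elbow joint limits (rad)

def clamp_to_workspace(x, y):
    """Project a target point onto the annulus the two-link arm can reach."""
    r = math.hypot(x, y)
    r_min = abs(L_UPPER - L_FORE) + 1e-3
    r_max = (L_UPPER + L_FORE) - 1e-3
    r_clamped = min(max(r, r_min), r_max)
    scale = r_clamped / r if r > 0 else 0.0
    return x * scale, y * scale

def inverse_kinematics(x, y):
    """Closed-form elbow-down IK for a planar two-link arm."""
    x, y = clamp_to_workspace(x, y)
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (x * x + y * y - L_UPPER**2 - L_FORE**2) / (2 * L_UPPER * L_FORE)
    elbow = math.acos(max(-1.0, min(1.0, c2)))
    elbow = min(max(elbow, ELBOW_LIMITS[0]), ELBOW_LIMITS[1])
    # Shoulder angle: target direction minus the forearm's contribution.
    shoulder = math.atan2(y, x) - math.atan2(L_FORE * math.sin(elbow),
                                             L_UPPER + L_FORE * math.cos(elbow))
    return shoulder, elbow

# Open-loop use: each tracked wrist sample becomes a joint-angle command.
# A real controller would also gate these commands on the bump, force, and
# optical sensors mentioned above before moving the arm.
for wx, wy in [(0.50, 0.10), (0.20, 0.40), (0.90, 0.00)]:  # fake tracker data
    print(inverse_kinematics(wx, wy))
```

A closed-form solution suits an open-loop controller of this kind: it is fast and deterministic, leaving the human operator, rather than a feedback loop, to correct gesture placement.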
In these studies, the team will measure perceived social connectedness with conversation partners and the acceptability of the robot using standard scales, and will also ask questions about the particulars of the experience, both to gain deeper insight into why the arm is effective (if it is) and to guide the design of future systems.
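As an illustration of the planned between-subjects comparison, using fabricated numbers rather than study data, the sketch below scores a hypothetical multi-item social presence scale per participant and compares the with-arm and without-arm conditions with Welch's t-test, which does not assume equal variances across the two independent groups.

```python
# Illustrative analysis sketch (fabricated numbers, not study data): score a
# hypothetical multi-item, 7-point social presence scale per participant and
# compare the independent with-arm and without-arm groups.
import numpy as np
from scipy import stats

def scale_score(item_responses):
    """Average each participant's item responses into one scale score."""
    return np.mean(item_responses, axis=1)

# Rows are participants; columns are the four items of the assumed scale.
with_arm = scale_score(np.array([[6, 5, 6, 7],
                                 [5, 5, 6, 6],
                                 [7, 6, 6, 5]]))
without_arm = scale_score(np.array([[4, 3, 5, 4],
                                    [3, 4, 4, 5],
                                    [5, 4, 3, 4]]))

# Welch's t-test (equal_var=False) avoids assuming equal variances across
# the two between-subjects conditions.
t, p = stats.ttest_ind(with_arm, without_arm, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```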