More Americans are acquiring disabilities earlier in life and will require assistance to age gracefully. Given unbalanced demographics and the high cost of institutional care, learning-enabled robotic systems are an inevitable choice for helping individuals with disabilities age in their own homes. It is only a matter of time before learning-enabled assistive robots become mainstream, given their ability to serve a diverse spectrum of society's needs. In preparation for this future, it is important to ensure that the complex algorithms enabling these robots make safe decisions and are perceived as safe by their users. This project designs safe Learning from Demonstrations (LfD) algorithms, a class of robot learning algorithms that enable robots to learn tasks from demonstrations by end-users who have no knowledge of robotics or programming. The project uses aging-in-place as the use case for safe-LfD. Prior work indicates a general openness among stakeholders, namely elderly individuals wanting to age in place and their caregivers, to embrace robots and similar technologies as home aids. Since each elderly individual has distinct care needs and each home is different, LfD fits naturally into the long-term vision of robots as home-based support: elderly individuals or their caregivers could train robots on new care tasks, or modify existing ones, without the need for experts. However, LfD algorithms are not yet ready for prime time; despite intense interest in the field, the impressive results reported in hundreds of research papers in simulation and controlled lab settings do not scale to learning real robotic tasks in the wild. State-of-the-art LfD algorithms do not learn safe task policies. This project is a step toward removing the algorithmic roadblocks to realizing LfD-enabled assistive robots for high-impact applications.

The project designs two safety attributes for LfD algorithms: (a) cognizance, the ability to filter out human demonstrations that may lead to learning an unsafe policy, and (b) informedness, sample-efficient learning of closed-loop policies that guarantee stability at the goal amid runtime changes in the environment. Cognizance is achieved by simultaneously extracting safety attributes of various tasks from human demonstrations and identifying demonstrations that deviate from those attributes. Informedness is realized by extending optimal control theory to LfD settings. All demonstrations are visual and therefore involve a high-dimensional state space. In addition to these two quantitative metrics, the project will produce a validated questionnaire as a qualitative metric of safety, gauging end-users' perceived safety after using a safe-LfD-enabled robot. Rigorous validation studies will be conducted in both a simulated home and community settings. The project will generate one-of-a-kind data from community-based usage of LfD-enabled robots, mobilizing LfD research toward challenging, realistic problems that do not appear in simulation or controlled lab settings.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
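As a rough, hypothetical illustration of the cognizance attribute described above, the Python sketch below infers a crude per-dimension safety envelope from a pool of demonstrations and flags demonstrations that stray outside it. The envelope construction (a median/MAD bound), the tolerance threshold, and the data layout are all assumptions made for illustration; they are not the project's actual method, which extracts richer, task-specific safety attributes.

```python
import numpy as np

def infer_safety_envelope(demos, k=3.0):
    """Infer a robust per-dimension state envelope (median +/- k * scaled MAD)
    from the pooled demonstrations; a crude stand-in for extracted safety attributes."""
    pooled = np.vstack(demos)                        # (sum of T_i, D) pooled states
    center = np.median(pooled, axis=0)
    mad = np.median(np.abs(pooled - center), axis=0)
    scale = 1.4826 * mad                             # MAD -> std under Gaussian data
    return center - k * scale, center + k * scale

def cognizant_filter(demos, tolerance=0.05):
    """Keep demonstrations that stay inside the envelope on all but a
    `tolerance` fraction of timesteps; flag the rest as potentially unsafe."""
    low, high = infer_safety_envelope(demos)
    kept, flagged = [], []
    for demo in demos:
        outside = np.mean(np.any((demo < low) | (demo > high), axis=1))
        (kept if outside <= tolerance else flagged).append(demo)
    return kept, flagged

# Toy usage: five nominal demonstrations plus one that strays far from the rest.
rng = np.random.default_rng(0)
demos = [rng.normal(0.0, 1.0, size=(100, 3)) for _ in range(5)]
demos.append(rng.normal(5.0, 1.0, size=(100, 3)))    # anomalous demonstration
kept, flagged = cognizant_filter(demos)
print(len(kept), len(flagged))                       # expected output: 5 1
```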
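Similarly, here is a minimal sketch of the informedness idea under strong simplifying assumptions: fit a linear closed-loop policy x_dot = A(x - goal) to demonstrated states and velocities, then project A so that its symmetric part is negative definite, which makes the goal globally asymptotically stable (Lyapunov function V(x) = ||x - goal||^2) even under runtime perturbations. The linear policy class and the eigenvalue projection are illustrative stand-ins, not the project's optimal-control-based approach.

```python
import numpy as np

def fit_stable_policy(states, velocities, goal):
    """Fit x_dot = A (x - goal) by least squares, then replace the symmetric
    part of A with a negative-definite projection. With V(x) = ||x - goal||^2,
    V_dot = 2 (x - goal)^T sym(A) (x - goal) < 0, so the goal is globally
    asymptotically stable regardless of noise in the demonstrations."""
    X = states - goal                                   # (N, D) centered states
    B, *_ = np.linalg.lstsq(X, velocities, rcond=None)  # X @ B ~ velocities
    A = B.T                                             # so velocity = A @ (x - goal)
    sym = 0.5 * (A + A.T)
    w, V = np.linalg.eigh(sym)
    w = np.minimum(w, -0.1)                             # force a strictly negative spectrum
    return (A - sym) + V @ np.diag(w) @ V.T             # keep skew part, fix symmetric part

def rollout(A, x0, goal, dt=0.01, steps=2000):
    """Integrate the closed-loop policy; inject a mid-rollout disturbance to
    mimic a runtime change that the stable policy must absorb."""
    x = np.array(x0, dtype=float)
    for t in range(steps):
        x = x + dt * (A @ (x - goal))
        if t == steps // 2:
            x = x + np.array([1.0, -1.0])               # runtime perturbation
    return x

# Toy usage: noisy demonstrations of a converging 2-D motion toward the goal.
rng = np.random.default_rng(1)
goal = np.array([1.0, 2.0])
states = goal + rng.normal(0.0, 1.0, size=(500, 2))
true_A = np.array([[-1.0, 0.5], [-0.5, -1.0]])
velocities = (states - goal) @ true_A.T + rng.normal(0.0, 0.01, size=(500, 2))
A = fit_stable_policy(states, velocities, goal)
print(np.round(rollout(A, [4.0, -3.0], goal), 3))       # converges (back) to the goal
```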