The World Health Organization defines self-care as "the ability of individuals, families, and communities to promote health, prevent disease, maintain health, and to cope with illness and disability with or without the support of a healthcare provider." Individuals recovering from medical treatments tend to perform self-care alone, and post-operative patients are often expected to maintain a balance between rest and physical activity. While doctors generally want patients to start moving as soon as possible, moving too much or too intensely can lengthen recovery time and risk injury. The goal of this project is to guide patients through the recovery process by combining lightweight sensing and behavior modeling with situated, context-aware voice-based guidance. The team will develop machine learning methods to optimize outcomes and help patients manage their condition. The project will unify multimodal activity and behavioral sensing on a watch with an artificial intelligence (AI)-driven intervention agent. This should advance aspects of at-home healthcare for an underserved portion of the population and, more generally, contribute to basic science in adaptive, real-time, in situ multimodal interactions. The team will apply these advances in basic science to post-operative care procedures for surgical excision of skin cancers on the head and neck.

In this project, the team proposes to address these challenges using situated and context-aware voice-based guidance. The system will use watch-based multimodal sensing to guide the patient through three care procedures of increasing technical complexity: maintaining appropriate activity levels, managing pain, and caring for the wound. The agent will recognize the patient's actions and behaviors and proactively intervene only as needed. It will augment doctors' understanding of patients' situations with a meaningful and appropriate level of explanation when a problem occurs. For repeated procedures, it will adapt to users' changing needs as they develop familiarity with the associated procedure. The proposed work will contribute to science through (i) the development of new machine learning models that use sparse, multimodal, in situ training data to identify user actions, motion, and body pose; (ii) the development of generalizable physical behavior models; (iii) advances in knowledge regarding how state-of-the-art sensing and natural language processing can be combined to better support at-home patient recovery; and (iv) the identification of effective strategies for determining the level of support and intervention the user needs during post-operative recovery and adapting system performance accordingly.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.