This project is an ExpandAI Capacity Building Pilot (CAP) that focuses on establishing and growing AI-related activities at Meharry Medical College (MMC) by conducting use-inspired research in trustworthy, ethical, explainable, and responsible artificial intelligence (AI) to mitigate algorithmic bias in AI-powered medical systems. Existing clinical AI methods are limited by algorithmic and societal biases introduced during system design and development, resulting in misdiagnosis and, ultimately, poor treatment and health disparities. Through an enhanced understanding of the nature of “black box” AI algorithms, researchers at MMC will develop interpretable methods and tools for secure, private, and reliable bias-free clinical decisions. The project is a collaborative effort among Meharry’s School of Dentistry, School of Graduate Studies, and School of Applied Computational Sciences (SACS) to provide American graduate students and medical professionals with advanced training in AI and machine learning (ML) techniques, preparing them with skills for future AI-powered career pathways. Research capacity building plans include training diverse faculty in the use of cutting-edge AI/ML techniques. On the educational side, short courses, a summer school, and a tutorial series will be established to cover important topics such as ethics in AI/ML, explainable AI methods, algorithmic bias and mitigation strategies, and data privacy.

Several research themes were identified as potential growth areas for AI technologies across the Meharry campus, including (i) explainable AI in medical systems and (ii) ethical and responsible AI for medical systems. Explainable AI systems are ones in which humans can understand the reasoning behind decisions made by AI systems, beyond predictive accuracy and statistical performance alone. Many practitioners, clinicians, researchers, and patients are reluctant to use AI unless it is explainable, verifiable, and trustworthy. Ethical and responsible AI deals with the establishment of well-defined guidelines and legal regulations for the ethical use of AI tools and technologies; such ethical use is critical for the preservation of fundamental human rights, health, and public safety. Educational capacity building is planned through several initiatives, including (i) training activities to raise awareness among medical professionals, graduate students, staff, and faculty; (ii) a short course and tutorial series covering topics such as ethics in AI/ML, explainable AI methods, and algorithmic bias and mitigation; (iii) a new course focusing on trustworthy AI in medical systems for the graduate programs; and (iv) outreach to high school educators and summer academies for K-12 students. By providing a broad swath of diverse faculty, professionals, educators, and students with AI knowledge and immersive learning experiences, AI literacy and skill development will be significantly enhanced. The ExpandAI Program supports AI-powered education and workforce development, infrastructure, and research at Minority-Serving Institutions to strengthen and diversify U.S. research and education pathways and provide historically marginalized communities with new opportunities in STEM careers.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.