The broader impact of this I-Corps project is the development of an artificial intelligence (AI) technology to enhance the efficiency and effectiveness of online security measures. The technology analyzes emotional weighting in natural language to detect violent motivations within social media content in real time. By identifying violent intentions early, the goal is to prevent harm and protect individuals and communities. Real-time analysis also has the potential to enhance safety and security, enabling law enforcement agencies and security personnel to respond swiftly to threats. Social media platforms can use this technology to automatically flag and remove harmful content, maintaining a safer online environment. Lastly, identifying violent language can help direct users to mental health resources or crisis intervention services. This solution could improve how security threats are identified and managed, providing a scalable approach to the pressing need for improved social media security while contributing to a safer digital space by proactively addressing violent motivations.

This I-Corps project utilizes experiential learning coupled with a first-hand investigation of the industry ecosystem to assess the translation potential of the technology. The solution is based on research that identifies the moral and emotional motivations that drive violent behavior. Previous research demonstrates accurate detection of users with strong moral motivations and their intended targets, and thus the ability to identify violent actors via these specific motivators. The technology is an AI-driven solution that detects nuanced indicators of violent motivation online. By analyzing text, images, and videos, the technology goes beyond traditional sentiment analysis, creating a more proactive approach to detecting the propensity for violent behavior online and deterring actual violent behavior in the real world. By developing an application that analyzes social media content based on these research findings, this solution could address a critically important gap in current social media security measures.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.