The broader impact/commercial potential of this Partnerships for Innovation - Technology Translation (PFI-TT) project is in defending those falsely accused with fake multimedia evidence and in preventing individuals from deflecting legitimate accusations supported by genuine evidence. The system developed will counter cyber threats such as impersonation, fraud, social engineering attacks, and the submission of falsified evidence, thus safeguarding individuals, organizations, and the legal system. The proposed project has the potential to protect against cyber harassment and privacy invasion and to prevent the dissemination of false information by identifying harmful or misleading audio and video deepfake content. It will have a significant impact in a number of areas, including the media, politics, business, and social networks, by maintaining the integrity of digital multimedia. If digital forgeries are left unchecked, trust in the courts, the media, and the government will inevitably erode; as deepfakes and other forged multimedia become more prevalent, a trustworthy source of truth will become imperative to assist these entities in identifying fakes. Finally, the project has the potential to drive innovation in artificial intelligence (AI) and computer vision, leading to the development of more robust deepfake detection methods.<br/><br/>The proposed project attacks the rising problem of forged multimedia in the legal system by developing a trustworthy, AI-based Deep Forgery Detector (DFD). It has a sound foundation in neuro-symbolic AI, combining deep models with a symbolic approach to enable abstraction, reasoning, and explainability. The DFD will also enable single- and multimodal data authenticity analysis to identify any tampering or manipulation, such as fully or partially AI-generated content.
Additionally, the DFD’s hybrid nature, combining anomaly-detection and signature-based approaches, will help detect both known and unknown forgeries, further improving its generalizability. The report generated by the DFD will outline the underlying facts used in its determination, supported by textual and visual evidence, and will provide an authoritative answer by analyzing both the visual and audio content at the file and frame levels. The project will leverage attack-resistant algorithms, improving their capabilities through iterative development and enabling the DFD to capture traces of anti-forensic processing, making it an attack-aware detector driven by game-theoretic and decoy mechanisms. Lastly, the DFD will also offer users context-aware interaction through a dialog/chatbot feature to explain its decision-making process and personalize the contents of the generated report.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.