Researchers and the public have been alarmed by the fact that the privacy of training data in machine learning (ML) models can be exploited in many ways, leading to the rapidly expanding field of federated learning (FL). In FL, the learning of ML models is performed directly on user devices, while the aggregated model is composed with the help of a central server. Because data never leave user devices, this new paradigm offers a key promise of protecting data privacy. Unfortunately, it also poses new challenges in both security and privacy. On one hand, malicious users can compromise security by injecting backdoors into their model updates, thus poisoning the aggregated model. On the other hand, there is a risk of privacy leakage, as an untrusted server can invert the model updates to expose private data. This project develops a principled and systematic FL framework that simultaneously offers both privacy and security protection against threats from malicious users and servers. As part of this project, novel protocols will be developed to ensure verifiability, execution integrity, model confidentiality, and protection against adversarial attacks. The success of the project holds significant potential for expanding machine learning to new application scenarios, especially when no trust is assumed among the stakeholders. The findings may also benefit other fields, such as zero-knowledge proofs, distributed machine learning, and distributed ledger technology. 
The project involves students at all levels, with an emphasis on attracting students from underrepresented groups and K-12 students.<br/><br/>The focus of the project is to develop a principled and systematic FL framework with three jointly designed key components: 1) lightweight secure aggregation and backdoor-inspection mechanisms in which each user is responsible for both securely aggregating their values and producing an attestation of an attack-free model, 2) a succinct non-interactive argument of knowledge (SNARK) attestation that minimizes non-arithmetic operations to maintain both high accuracy and communication efficiency, and 3) a blockchain-based FL architecture that ties together security measures at various stages of training, offering privacy and security protection for the entire training process. By shifting the task of proving that the model is attack-free to the users, coupled with a blockchain for transparency, this project provides a first step toward secure and privacy-preserving distributed learning systems. The success of this novel approach will significantly impact the design of FL for many real-life applications.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
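To make the secure-aggregation idea in component 1 concrete, the sketch below illustrates additive pairwise masking, the standard building block behind secure aggregation in FL (in the style of Bonawitz et al.). It is a toy illustration, not the project's actual protocol: the `hash`-based seed stands in for a cryptographically agreed pairwise secret, and scalar updates stand in for model-weight vectors. Each pair of users shares a seed; the lower-id user adds the derived mask and the higher-id user subtracts it, so all masks cancel in the server's sum and only the aggregate is revealed.

```python
import random

MODULUS = 2**32  # arithmetic over a fixed modulus, as in secure aggregation


def masked_update(uid, update, peers):
    """Return this user's update masked with pairwise-cancelling noise.

    The seed for each pair is symmetric (same for both members), so the
    mask one user adds is exactly the mask the other subtracts.
    NOTE: `hash` is a stand-in for a real shared secret (e.g. from a
    Diffie-Hellman key agreement); it is NOT cryptographically secure.
    """
    masked = update % MODULUS
    for peer in peers:
        seed = hash((min(uid, peer), max(uid, peer)))  # symmetric pair seed
        mask = random.Random(seed).randrange(MODULUS)  # deterministic per pair
        if uid < peer:
            masked = (masked + mask) % MODULUS
        else:
            masked = (masked - mask) % MODULUS
    return masked


# Toy example: three users with scalar "model updates" 10, 20, 30.
users = {1: 10, 2: 20, 3: 30}
ids = list(users)
total = sum(
    masked_update(u, users[u], [p for p in ids if p != u]) for u in ids
) % MODULUS
# The pairwise masks cancel: the server learns only the sum of updates.
print(total)
```

Each individual masked value looks uniformly random to the server, yet the sum equals the true aggregate; real protocols add key agreement and dropout recovery on top of this core cancellation trick.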