Tree models are an important class of machine learning algorithm used in applications such as finance, healthcare, and traffic management. They are particularly attractive for their simplicity and interpretability, which make them well suited to decision-making tasks, unlike complex neural networks, which can be difficult to understand. Despite these benefits, however, tree models are not immune to security and privacy concerns: malicious actors can tamper with tree models or steal intellectual property, threatening the integrity and confidentiality of machine learning systems. Further, although similar attacks have been studied on neural networks, differences in how neural networks and tree models work may limit how well those existing findings transfer to tree models. Together, these issues leave a number of open questions around enhancing the security and trustworthiness of tree models. This project aims to develop novel strategies that address these questions, building more robust and trustworthy AI-based systems, along with tools and educational opportunities that make the findings widely available and impactful. <br/><br/>Specifically, this project addresses the need for robust model authentication, watermarking for intellectual-property tracing, machine unlearning for data privacy, and defense against backdoor attacks for tree models.
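Of these needs, machine unlearning has a simple exact baseline against which any efficient method can be judged: retraining without the deleted records. Below is a minimal sketch of that baseline (illustrative only, not the project's method; the dataset and the deletion indices are arbitrary choices for this example):

```python
# Naive exact-unlearning baseline (illustrative only): retrain the tree
# without the deleted records. Reconstruction-based unlearning methods aim
# to avoid this full-retraining cost while matching its correctness target.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
forget = np.arange(5)                       # hypothetical deletion request
keep = np.setdiff1d(np.arange(len(X)), forget)

# Retraining from scratch on the retained data is exact unlearning: the
# resulting model provably contains no influence from the deleted rows.
unlearned = DecisionTreeClassifier(random_state=0).fit(X[keep], y[keep])
acc = unlearned.score(X[keep], y[keep])
```

Efficient unlearning algorithms, such as those this project pursues via tree reconstruction, must produce models indistinguishable from (or provably close to) this retrained baseline at a fraction of the cost.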
The technical aims are organized around four tasks: a) pursuing model identification by embedding unique signatures that yield distinguishably marked model instances; b) developing novel, robust watermarking methodologies for tree models, for the purpose of tracing intellectual property; c) designing novel algorithms for machine unlearning in tree models by exploiting tree reconstruction, residual-stable splits, and tree-combination techniques; and d) investigating the implications of backdoor attacks against tree models by leveraging insights from the above tasks on modifying tree models without significantly impacting accuracy. These research efforts will advance tree model security and trustworthiness, ensuring that these models can be reliably deployed in real-world applications while mitigating the risks of malicious attacks, unauthorized access, and privacy breaches.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
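As a concrete illustration of the kind of model tweak that tasks a), b), and d) build on, a trained tree's split thresholds can be nudged by a tiny amount to encode a signature with negligible accuracy impact. The sketch below uses scikit-learn; the bit-string, the perturbation size `eps`, and the sign-comparison recovery step are hypothetical choices for this example, not the project's actual embedding scheme:

```python
# Minimal signature-embedding sketch (illustrative only): encode bits by
# perturbing split thresholds of a trained decision tree.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
base_acc = clf.score(X, y)

# Encode a short bit-string into the first few internal-node thresholds;
# eps is far smaller than the gaps between feature values, so predictions
# are essentially unchanged.
bits = [1, 0, 1]
eps = 1e-4
tree = clf.tree_
internal = np.where(tree.children_left != -1)[0][: len(bits)]  # non-leaf nodes
for node, bit in zip(internal, bits):
    tree.threshold[node] += eps if bit else -eps

marked_acc = clf.score(X, y)

# Recover the signature by comparing against a clean reference model
# trained identically (same data, same random_state).
ref = DecisionTreeClassifier(random_state=0).fit(X, y)
recovered = [int(tree.threshold[n] > ref.tree_.threshold[n]) for n in internal]
```

The same observation, that small structural tweaks leave accuracy nearly intact, is what makes watermarking possible for a defender and backdoor insertion possible for an attacker, which is why the project studies both together.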