In conventional models of computation, algorithms have access to the complete data set throughout the computation. In many modern real-world scenarios, however, data arrives as a continuous, high-volume stream, and the processing algorithms lack the memory to store the entire data set. The data stream model is a well-studied abstract computational model for handling computations over such continuous, high-volume data. This model has become pivotal in algorithm development for large data sets and has significant applications in fields such as data mining, network monitoring, and security. The goal of this project is to study several new and underexplored directions in data stream computing. By involving graduate and undergraduate students in research and mentoring them, the project will contribute to training the next generation of scientists and engineers.<br/> <br/>This project concentrates on three major research themes: (1) initiate a study of a new data stream model known as the `right to forget' model, motivated by modern considerations arising from the explosive growth of data generation as well as from privacy concerns; (2) explore a new and emerging notion of randomized computation known as pseudodeterministic computation in the context of streaming algorithms; and (3) investigate the Delphic set streaming model, in which each item in the stream is succinctly represented as a set, motivated by the recent discovery connecting data streaming algorithms to model counting algorithms, two seemingly disparate research topics. Each of these directions represents a strategic step toward advancing the field of data stream computations, addressing contemporary challenges, and unlocking new possibilities.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.