The computer industry is in the midst of a revolution in data-storage technology that is forcing a major reevaluation of the algorithms used for moving data through the computer. Every new generation of storage hardware has required a new theoretical understanding of these algorithmic building blocks. Such theoretical improvements have had a profound impact on other fields of computer science, including databases and file systems, and even networks, operating systems, and machine learning. This project aims to develop the algorithmic solutions needed to exploit this seismic shift in storage technology.

The team considers three impacts on algorithm performance that arise from new hardware technology such as nonvolatile memories and increased parallelism: (1) the gaps in latency and in bandwidth between levels of the hierarchy are smaller; (2) many-core technologies introduce sharing effects on caches; and (3) memory hierarchies no longer adhere to a standard strictly nested model. The team is investigating: (a) algorithmic problems in parallel-cache allocation and in high-bandwidth-memory scheduling and allocation; (b) data-structural problems that arise from different I/O cost models, including those that factor in computational cost and/or the cost of durability; (c) extensions to the streaming and semi-streaming models, in which algorithms have some amount of sequentially accessible working memory in addition to the traditional small pool of randomly accessible memory; and (d) new I/O-efficient algorithms for directed graphs. The team is also continuing community-building efforts that span systems and algorithms, including founding, steering, and/or running two new conferences and organizing workshops on the theory of non-volatile memory and storage.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
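To make the extended streaming model of item (c) concrete, the following is a minimal illustrative sketch, not an algorithm from the project: a two-pass exact heavy-hitters computation that uses a small pool of random-access memory (at most k counters, via Misra-Gries) together with a sequentially written and sequentially re-read tape standing in for the cheap sequential working memory. The function name and the choice of heavy hitters as the example problem are assumptions made here for illustration.

```python
def exact_heavy_hitters(stream, k):
    """Items occurring more than len(stream)/(k+1) times, in two passes.

    Pass 1 (Misra-Gries): keep at most k candidate counters in the small
    random-access memory, while appending each item to a sequential tape.
    Pass 2: rewind the tape and count the surviving candidates exactly.
    (Illustrative model sketch only; not the project's algorithm.)
    """
    tape = []          # sequential working memory: append-only during pass 1
    counters = {}      # small random-access memory: at most k entries
    for x in stream:   # pass 1: one sequential read of the input stream
        tape.append(x)                 # tape is written strictly in order
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:                          # decrement all; drop zeroed counters
            counters = {y: c - 1 for y, c in counters.items() if c > 1}
    exact = {y: 0 for y in counters}
    for x in tape:     # pass 2: one sequential re-read of the tape
        if x in exact:
            exact[x] += 1
    threshold = len(tape) / (k + 1)
    return sorted(y for y, c in exact.items() if c > threshold)
```

The point of the sketch is the access discipline: the tape is only ever appended to and then rescanned front to back, so it can live in fast sequential storage, while only the k candidate counters require random access.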