Classically, an algorithm is called "efficient" if its running time is polynomial in the input size (i.e., as the input size doubles, the runtime is multiplied by some constant factor). As inputs become large, however, many such algorithms are no longer efficient in practice. For example, a quadratic-time algorithm (whose runtime grows fourfold when the input size doubles) can easily take hundreds of CPU-years on gigabyte-sized inputs. For even larger inputs, the running time of a practically efficient algorithm must be effectively linear in the input size. For many problems such algorithms exist; for others, despite decades of effort, no such algorithms have been discovered. The recently developed theory of "fine-grained complexity" attempts to explain this phenomenon by identifying natural assumptions which imply that some of the existing algorithms cannot be improved. The goal of this project is to make progress on key directions in this area, by developing new algorithms where possible, and by proving limitations otherwise.

The project will focus on approximate algorithms, because such algorithms are often very useful in practice, and their existence is often not precluded by existing hardness results. At a high level, the project will investigate the following topics: (1) approximate algorithms, their limitations, and limitations-inspired algorithms, and (2) new hardness assumptions for strengthening existing hardness results. The specific goals involve key computational problems over graphs, sequences, and kernels.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
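As an illustration of the scale behind the quadratic-time example above, here is a minimal back-of-envelope sketch. All constants are illustrative assumptions, not figures from the award: a 10^9-element (gigabyte-scale) input, roughly 10 machine operations per pair of elements, and a core executing 10^9 operations per second.

```python
# Back-of-envelope check of the "hundreds of CPU-years" claim.
# All constants below are illustrative assumptions, not figures from the award.

N = 10**9                 # assumed input size: ~1 billion elements (gigabyte scale)
OPS_PER_PAIR = 10         # assumed constant factor of the quadratic algorithm
OPS_PER_SECOND = 10**9    # assumed single-core throughput

total_ops = OPS_PER_PAIR * N * N          # quadratic work: ~10^19 operations
seconds = total_ops / OPS_PER_SECOND      # ~10^10 seconds on one core
cpu_years = seconds / (365 * 24 * 3600)   # convert seconds to CPU-years

print(f"{cpu_years:,.0f} CPU-years")      # roughly 317 CPU-years
```

Even under these optimistic assumptions, a single core would need centuries of computation, which is why effectively linear running time is the practical requirement at this scale.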