This project will develop a framework for rapid and accurate design-space exploration of application-level workloads on technology-enabled in-memory computing (IMC), which is currently being investigated for a range of application spaces (AI/machine learning, bioinformatics, graph processing, etc.). IMC is of great interest as more and more compute workloads must process ever-growing amounts of data. Frequently, the energy and latency associated with data transfer from a computer’s memory to a processor can overwhelm the cost of the processing itself; it is therefore highly desirable to co-locate processing and memory. Work in the project will result in a publicly available, curated framework that leverages both existing device models and design tools, and that incorporates new device models and design tools to properly evaluate the IMC design space with at-scale, application-level workloads. Because the design space to be explored is vast, a modeling and evaluation infrastructure will be developed to address these design/evaluation challenges. Investigators in this project will also work with K-8 teachers to augment existing STEM curricula with material that exposes students to fundamental concepts and skills in computer science. This is especially relevant as computer science concepts are now assessed on state-wide standardized tests. Students from under-represented groups will be recruited and mentored via REU experiences.<br/><br/>To explore the IMC design space, device-level modeling, circuit/architectural-level modeling, device non-ideality (e.g., variation) analysis, and ways to integrate heterogeneous architectural solutions that target specific application-level workloads must all be studied.
In the IMC space, (i) the number of candidate technologies is large and ever-changing, (ii) multiple candidate IMC circuits and architectures – e.g., computing at the array periphery (CAP), content addressable memories (CAMs), and crossbars – exist, (iii) IMC solutions may be more susceptible to device variations/non-idealities, and this impact must be captured at the application level, (iv) emerging technology-enabled IMC solutions may be used with existing architectural solutions and/or in a variety of heterogeneous designs, and (v) there are effectively an infinite number of application-level mappings/potential algorithmic changes that one might consider. With respect to device models, there is a deliberate focus on ferroelectric devices – i.e., front-end-of-line silicon ferroelectric field effect transistors, back-end-of-line metal-oxide ferroelectric field effect transistors, and multi-gate ferroelectric field effect transistors – owing to ever-growing interest in this technology as well as the need to consider monolithic 3D processing/memory systems. For IMC circuits/architectures, this project will expand and develop modeling/evaluation tools for two different “flavors” of computing in memory – (i) CAMs (which can report the memory entries that best match a given query) and (ii) CAP. For CAMs, representative efforts include projecting figures of merit (read/write energy and latency, etc.) for binary, ternary, multi-level, and analog CAM array designs implemented with different non-volatile memories (NVMs), for different matching functions. Determining optimal CAM array sizes and other design parameters will also be considered. Evaluation of CAP designs for different NVMs will also be developed. For applications, solutions based on IMC fabrics for a subset of applications from MLPerf will be evaluated.
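To illustrate the CAM matching functions described above, the following hypothetical Python sketch (not part of the project's tools) models a ternary CAM lookup: stored entries may contain don't-care bits ('X'), and a query returns the stored entries with the fewest mismatched bit positions. A hardware CAM performs this comparison against all rows in parallel; the sketch only models the functional behavior.

```python
# Hypothetical sketch of ternary CAM (TCAM) best-match lookup.
# Entries are strings over {'0', '1', 'X'}; 'X' matches either bit value.
# A real CAM compares the query against all rows in parallel in hardware;
# here the search is modeled in software for illustration only.

def mismatches(entry: str, query: str) -> int:
    """Count bit positions where the stored entry conflicts with the query."""
    return sum(e != 'X' and e != q for e, q in zip(entry, query))

def best_match(entries: list[str], query: str) -> list[str]:
    """Return all stored entries with the fewest mismatches vs. the query."""
    scores = [mismatches(e, query) for e in entries]
    best = min(scores)
    return [e for e, s in zip(entries, scores) if s == best]

table = ["10X1", "0X10", "1111"]
print(best_match(table, "1011"))  # -> ['10X1'] (exact match: 0 mismatches)
```

An exact-match (binary/ternary) CAM corresponds to accepting only entries with zero mismatches, while analog and multi-level CAMs generalize the per-cell comparison to distance-based matching.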
MLPerf is a benchmark suite curated by a consortium of AI leaders that includes representative workloads for vision, language, and other domains.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.