SBIR Phase I: Highly power efficient and scalable hardware accelerator for AI applications

Information

  • Award Id
    1938256
  • Award Effective Date
    10/15/2019
  • Award Expiration Date
    3/31/2020
  • Award Amount
    $224,996.00
  • Award Instrument
    Standard Grant

Abstract

The broader impact of this Small Business Innovation Research (SBIR) Phase I project is to provide faster, cheaper, and lower-power alternatives to central processing units (CPUs) and graphics processing units (GPUs), making machine learning more accessible to students, engineers, and scientists. In general, this will lead to faster product development and shorter time-to-market in the artificial intelligence market. Highly power-efficient machine learning accelerators make training and complex inference possible on so-called "edge" devices and can revolutionize the way machine learning tasks are performed for end users. By enabling fast and power-efficient edge computing, this innovation benefits society by reducing data traffic while preserving privacy and data security, since data never leave the device. The Total Addressable Market for hardware accelerators for machine learning applications was estimated at around $1B in 2017 but will likely grow at a 50% Compound Annual Growth Rate (CAGR) to $66B by 2025. The high power efficiency and scalability of this innovation give it a strong competitive advantage in penetrating different segments of this market.

The proposed project aims to develop a fast, scalable, area- and power-efficient matrix multiplier for machine learning applications. Matrix multiplication is at the heart of all machine learning algorithms and is the most computationally expensive task in these applications. Most hardware accelerator solutions store inputs, weights, and partial sums in memory and retrieve them sequentially in order to perform matrix multiplication. The data movements between memory and computational units dominate the overall power consumption and latency of the system. Performing computations in memory instead yields significant power and area savings. This SBIR project seeks to develop a technology that performs mixed-signal matrix multiplication in memory to significantly improve the speed, power efficiency, and area efficiency of machine learning accelerators. Phase I will involve the design and verification of a matrix multiplier that can perform machine learning tasks more efficiently.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
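The abstract's central claim, that machine learning workloads reduce to matrix multiplication whose cost scales with data movement, can be illustrated with a minimal sketch. The layer sizes below are arbitrary examples, not parameters of the accelerator described in this award:

```python
import numpy as np

# Toy fully connected layer: the forward pass reduces to a matrix multiply.
batch, n_in, n_out = 32, 784, 128

x = np.random.rand(batch, n_in)   # input activations
w = np.random.rand(n_in, n_out)   # weights
b = np.random.rand(n_out)         # biases

y = x @ w + b                     # the dominant computation: a matmul

# Each output element requires n_in multiply-accumulate (MAC) operations,
# so the layer costs batch * n_out * n_in MACs. In a conventional
# accelerator every operand is fetched from memory, and those transfers,
# not the arithmetic itself, dominate power and latency; computing in
# memory targets exactly this cost.
macs = batch * n_out * n_in
print(y.shape, macs)  # (32, 128) 3211264
```

Even this small layer performs over three million MACs per batch, which is why the abstract singles out matrix multiplication and its associated memory traffic as the bottleneck to attack.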

  • Program Officer
    Rick Schwerdtfeger
  • Min Amd Letter Date
    8/28/2019
  • Max Amd Letter Date
    9/11/2019

Institutions

  • Name
    AREANNA, INC.
  • City
    BERKELEY
  • State
    CA
  • Country
    United States
  • Address
    1224 ROSE ST
  • Postal Code
    94702-1139
  • Phone Number
    (510) 590-7305

Investigators

  • First Name
    Seyed Behdad
  • Last Name
    Youssefi Azarbayjani
  • Email Address
    behdadyoussefi@yahoo.com
  • Start Date
    8/28/2019

Program Element

  • Text
    SBIR Phase I
  • Code
    5371

Program Reference

  • Text
    Hardware Software Integration
  • Code
    8033