ENG-AI: EPCN: Small: Computationally Efficient Learning using Graph Neural Networks with Theoretical Guarantees

Information

  • Award Id
    2430223
  • Award Effective Date
    1/1/2025
  • Award Expiration Date
    12/31/2027
  • Award Amount
    $439,539.00
  • Award Instrument
    Standard Grant

Abstract

Abstract for NSF proposal #2430223, entitled "ENG-AI: EPCN: Small: Computationally Efficient Learning using Graph Neural Networks with Theoretical Guarantees."

PI: Meng Wang, Associate Professor, Rensselaer Polytechnic Institute

Graph neural networks (GNNs) have emerged as a powerful tool for analyzing and processing graph-structured data. They have found applications in diverse fields such as robotics, power systems, recommendation engines, and social network analysis. Despite these successes, their widespread application faces significant challenges, including high computational requirements and a lack of interpretability and performance guarantees.

This proposal aims to lay the groundwork for overcoming these challenges by establishing theoretical foundations and developing practical algorithms that enhance the efficiency and reliability of GNNs across engineering applications. Key objectives include systematically analyzing how graph topology and network architecture influence performance by examining the dynamics of learning and generalization in GNNs. Most existing theoretical work on GNNs focuses on analyzing their expressive power, bounding the generalization gap between training and testing, or characterizing training convergence, while disregarding the joint problem of learning dynamics and generalization. The study encompasses a range of GNN architectures, from established models such as graph convolutional networks (GCNs) to emerging structures such as graph transformers (GTs) and graph mixtures of experts (GMoEs).

A crucial aspect of this proposal is the optimization of computational and memory resources. Techniques such as graph data aggregation reduction, network pruning, attention sparsification, and dynamic joint sparsification will be explored to streamline GNN operations. These efforts are complemented by the introduction of a novel GMoE architecture to further enhance efficiency.

This proposal will advance the development of trustworthy AI systems applicable across societal infrastructures such as social networks and power grids. Moreover, by focusing on computational efficiency, the proposal contributes to the advancement of green AI, aiming to reduce the economic costs and environmental impact associated with large-scale AI models. Collaboration with IBM through the RPI-IBM AI Research Collaboration expands the project's reach and ensures real-world applicability. Additionally, an integral education and outreach plan is included, spanning K-12 education to professional training, with a particular emphasis on engaging women and minority students in AI research and application.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
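
To make the architectures named in the abstract concrete, the following is a minimal, illustrative sketch in PyTorch (not code from the award): a GCN-style layer and a sparsely gated graph mixture-of-experts (GMoE-style) layer that routes each node to its top-k experts. All class names, expert counts, and the top-k routing scheme are assumptions made here for illustration only.

    # Illustrative sketch only -- not from the award. Shows (1) a GCN-style layer
    # and (2) a sparsely gated graph mixture-of-experts (GMoE-style) layer that
    # routes each node to its top-k experts. Names and sizes are assumed.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNLayer(nn.Module):
        """One GCN-style layer: aggregate neighbor features, then transform."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj_norm):
            # x: (num_nodes, in_dim); adj_norm: normalized adjacency, (num_nodes, num_nodes)
            return F.relu(self.linear(adj_norm @ x))

    class GraphMoELayer(nn.Module):
        """Mixture of GCN experts with sparse top-k gating per node."""
        def __init__(self, in_dim, out_dim, num_experts=4, k=2):
            super().__init__()
            self.experts = nn.ModuleList([GCNLayer(in_dim, out_dim) for _ in range(num_experts)])
            self.gate = nn.Linear(in_dim, num_experts)
            self.k = k

        def forward(self, x, adj_norm):
            scores = self.gate(x)                              # (num_nodes, num_experts)
            topk_vals, topk_idx = scores.topk(self.k, dim=-1)  # keep k experts per node
            weights = F.softmax(topk_vals, dim=-1)             # renormalize the kept scores
            expert_out = torch.stack([e(x, adj_norm) for e in self.experts], dim=1)
            # Select each node's top-k expert outputs and combine them with the gate weights.
            gathered = expert_out.gather(
                1, topk_idx.unsqueeze(-1).expand(-1, -1, expert_out.size(-1)))
            return (weights.unsqueeze(-1) * gathered).sum(dim=1)

    if __name__ == "__main__":
        num_nodes, in_dim, out_dim = 5, 8, 16
        x = torch.randn(num_nodes, in_dim)
        adj_norm = torch.eye(num_nodes)   # toy graph: self-loops only
        print(GraphMoELayer(in_dim, out_dim)(x, adj_norm).shape)  # torch.Size([5, 16])

For clarity this sketch evaluates every expert for every node; an efficiency-oriented implementation, in the spirit of the abstract's sparsification goals, would dispatch each node only to its routed experts, and an analogous top-k idea underlies attention sparsification in graph transformers.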

  • Program Officer
    Yih-Fang Huang (yhuang@nsf.gov, 703-292-8126)
  • Min Amd Letter Date
    8/1/2024
  • Max Amd Letter Date
    9/19/2024
  • ARRA Amount

Institutions

  • Name
    Rensselaer Polytechnic Institute
  • City
    TROY
  • State
    NY
  • Country
    United States
  • Address
    110 8TH ST
  • Postal Code
    12180-3590
  • Phone Number
    (518) 276-6000

Investigators

  • First Name
    Meng
  • Last Name
    Wang
  • Email Address
    wangm7@rpi.edu
  • Start Date
    8/1/2024

Program Element

  • Text
    EPCN-Energy-Power-Ctrl-Netwrks
  • Code
    760700

Program Reference

  • Text
    LEARNING & INTELLIGENT SYSTEMS
  • Code
    8888