EAGER: TaskDCL: Collaborative Research: IMPACT: Interactive Mixed-Reality-Based Platform for AI-Driven Adaptive and Collaborative Task Training Environments

Information

  • NSF Award
  • 2420351
Owner
  • Award Id
    2420351
  • Award Effective Date
    9/15/2024
  • Award Expiration Date
    8/31/2026
  • Award Amount
    $ 150,000.00
  • Award Instrument
    Standard Grant

Advanced AI-driven training platforms can revolutionize education, workforce development, and specialized training, such as emergency response preparation, by providing realistic, cost-effective, and interactive environments. Progress in this area has been hindered by the lack of affordable, adaptable platforms capable of creating realistic, real-time, closed-loop environments. This EArly-concept Grants for Exploratory Research (EAGER) project aims to develop research infrastructure that enhances human training through AI-driven task environments integrating humans with virtual scene simulations and multi-modal sensorimotor interactions, which holds significant societal benefits. By overcoming current technological limitations, AI-driven task environments can improve the quality and accessibility of training across a variety of applications. The technology this award aims to develop has the potential to significantly reduce training costs, enhance learning experiences, and better prepare individuals for real-world challenges, ultimately benefiting society as a whole. Additionally, the project will introduce K-12 students to cutting-edge mixed reality and AI technologies, sparking their interest in STEM fields.

This research will first develop the platform for multi-modal sensorimotor interactions to ensure immersive, smooth interactions between humans and virtual scene simulations. It will then establish theoretical foundations and develop efficient algorithms for a closed-loop, AI-driven scene task environment. In particular, the research encompasses three interdependent thrusts: 1) establishing an edge-assisted mixed-reality infrastructure that provides immersive environments and enables smooth sensorimotor interactions between humans and virtual scene simulations; 2) creating a wide range of training tasks and realistic sensorimotor interactions using flexible, composable modules; and 3) developing a multi-agent reinforcement learning engine that enables dynamic virtual scene generation and adaptation based on those interactions. Collectively, this project will attempt to produce an immersive mixed-reality infrastructure that ensures smooth sensorimotor interactions, supports real-time task execution, and allows for the exploration of innovative multi-agent learning algorithms.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
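The closed-loop adaptation described in the third thrust can be illustrated with a minimal, hypothetical sketch: a single Q-learning agent adjusts one scene parameter (here, a toy "difficulty level") in response to a trainee-performance reward signal. All class names, the reward model, and the environment below are assumptions for illustration only and do not reflect the project's actual design or APIs.

```python
import random


class SceneAgent:
    """Toy Q-learning agent that adapts a scene difficulty level
    (illustrative only; not the project's actual RL engine)."""

    def __init__(self, n_levels=5, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.n = n_levels
        # One row per difficulty level; actions: 0=lower, 1=keep, 2=raise.
        self.q = [[0.0] * 3 for _ in range(n_levels)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, level):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(3)
        row = self.q[level]
        return row.index(max(row))

    def update(self, level, action, reward, next_level):
        # Standard tabular Q-learning temporal-difference update.
        best_next = max(self.q[next_level])
        td = reward + self.gamma * best_next - self.q[level][action]
        self.q[level][action] += self.alpha * td


def trainee_reward(level, skill=2):
    # Toy performance model: reward peaks when scene difficulty
    # matches the trainee's skill level.
    return 1.0 - abs(level - skill) / 4.0


def run_closed_loop(steps=2000, seed=0):
    """Closed loop: agent adapts the scene, trainee performance feeds back."""
    random.seed(seed)
    agent = SceneAgent()
    level = 0
    for _ in range(steps):
        action = agent.act(level)
        next_level = min(agent.n - 1, max(0, level + action - 1))
        reward = trainee_reward(next_level)
        agent.update(level, action, reward, next_level)
        level = next_level
    return level


final_level = run_closed_loop()
```

In the full system this loop would be multi-agent (several agents shaping different scene elements) and driven by real sensorimotor interaction data rather than a scalar toy reward, but the feedback structure — observe the trainee, adapt the scene, observe again — is the same.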

  • Program Officer
    Alexandra Medina-Borja
    amedinab@nsf.gov
    (703) 292-7557
  • Min Amd Letter Date
    8/7/2024
  • Max Amd Letter Date
    8/7/2024
  • ARRA Amount

Institutions

  • Name
    Pennsylvania State Univ University Park
  • City
    UNIVERSITY PARK
  • State
    PA
  • Country
    United States
  • Address
    201 OLD MAIN
  • Postal Code
    16802-1503
  • Phone Number
    (814) 865-1372

Investigators

  • First Name
    Bin
  • Last Name
    Li
  • Email Address
    binli@psu.edu
  • Start Date
    8/7/2024

Program Element

  • Text
    M3X - Mind, Machine, and Motor
  • Text
    Special Initiatives
  • Code
    164200

Program Reference

  • Text
    HUMAN-ROBOT INTERACTION
  • Code
    7632
  • Text
    EAGER
  • Code
    7916