EAGER: TaskDCL: Collaborative Research: IMPACT: Interactive Mixed-Reality-Based Platform for AI-Driven Adaptive and Collaborative Task Training Environments

Information

  • Award Id
    2420352
  • Award Effective Date
    9/15/2024
  • Award Expiration Date
    8/31/2026
  • Award Amount
    $149,999.00
  • Award Instrument
    Standard Grant

Abstract

Advanced AI-driven training platforms can revolutionize education, workforce development, and specialized training, such as emergency response preparation, by providing realistic, cost-effective, and interactive environments. Progress in this area has been hindered by the lack of affordable, adaptable platforms capable of creating realistic, real-time, closed-loop environments. This EArly-concept Grants for Exploratory Research (EAGER) project aims to develop research infrastructure that enhances human training through AI-driven task environments, integrating humans with virtual scene simulations via multi-modal sensorimotor interactions. By overcoming current technological limitations, AI-driven task environments can improve the quality and accessibility of training across a wide range of applications. The technology this award aims to develop has the potential to significantly reduce training costs, enhance learning experiences, and better prepare individuals for real-world challenges, ultimately benefiting society as a whole. Additionally, the project will introduce K-12 students to cutting-edge mixed-reality and AI technologies, sparking their interest in STEM fields.

This research will first develop a platform for multi-modal sensorimotor interactions that ensures immersive, smooth interaction between humans and virtual scene simulations. It will then establish theoretical foundations and develop efficient algorithms for a closed-loop, AI-driven task environment. The research encompasses three interdependent thrusts: 1) establishing an edge-assisted mixed-reality infrastructure that provides immersive environments and enables smooth sensorimotor interactions between humans and virtual scene simulations; 2) creating a wide range of training tasks and realistic sensorimotor interactions from flexible, composable modules; and 3) developing a multi-agent reinforcement learning engine that dynamically generates and adapts virtual scenes based on user interactions. Collectively, the project will produce an immersive mixed-reality infrastructure that ensures smooth sensorimotor interactions, supports real-time task execution, and allows for the exploration of innovative multi-agent learning algorithms.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
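The abstract's third thrust, a multi-agent reinforcement learning engine for dynamic scene adaptation, does not specify an algorithm. Purely as an illustrative sketch of that closed loop, the Python example below uses independent Q-learning for two hypothetical scene-controller agents; the knob names (difficulty, clutter), the simulated trainee, and the shared reward targeting a roughly 70% success rate are all assumptions made for illustration, not the project's actual design.

import math
import random
from collections import defaultdict

# Hypothetical illustration only: the award abstract names a multi-agent RL
# engine for dynamic scene adaptation but does not specify its design. Here,
# two independent Q-learning agents each control one scene knob (difficulty,
# clutter) and share a reward for keeping a simulated trainee's success rate
# near a target band -- a common "adaptive difficulty" framing.

LEVELS = [0, 1, 2, 3, 4]      # discrete settings for each scene knob
ACTIONS = [-1, 0, 1]          # lower, hold, or raise the knob
TARGET_SUCCESS = 0.7          # keep the trainee succeeding ~70% of the time

def simulate_trainee(difficulty, clutter, skill=2.5):
    """Stand-in for the human in the loop: success is likelier when the
    combined challenge sits below the trainee's (assumed) skill level."""
    challenge = 0.5 * (difficulty + clutter)
    p_success = 1.0 / (1.0 + math.exp(challenge - skill))
    return 1.0 if random.random() < p_success else 0.0

class QAgent:
    def __init__(self, alpha=0.2, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy exploration over the three knob adjustments.
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s2):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(s2, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def run(episodes=2000):
    diff_agent, clutter_agent = QAgent(), QAgent()
    difficulty, clutter = 2, 2
    window = []                       # recent trainee outcomes
    for _ in range(episodes):
        state = (difficulty, clutter)
        a_d, a_c = diff_agent.act(state), clutter_agent.act(state)
        difficulty = min(max(difficulty + a_d, LEVELS[0]), LEVELS[-1])
        clutter = min(max(clutter + a_c, LEVELS[0]), LEVELS[-1])
        window = (window + [simulate_trainee(difficulty, clutter)])[-20:]
        success = sum(window) / len(window)
        # Shared reward: both agents want success near the target band.
        reward = -abs(success - TARGET_SUCCESS)
        s2 = (difficulty, clutter)
        diff_agent.learn(state, a_d, reward, s2)
        clutter_agent.learn(state, a_c, reward, s2)
    return difficulty, clutter, success

if __name__ == "__main__":
    random.seed(0)
    d, c, s = run()
    print(f"settled at difficulty={d}, clutter={c}, recent success={s:.2f}")

In the actual platform, the "trainee" would be the human in the mixed-reality loop and the agents would presumably act on far richer scene state; this sketch only shows the adapt-observe-reward cycle that closes the loop.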

  • Program Officer
    Alexandra Medina-Borja (amedinab@nsf.gov, 703-292-7557)
  • Min Amd Letter Date
    8/7/2024
  • Max Amd Letter Date
    8/7/2024
  • ARRA Amount

Institutions

  • Name
    George Washington University
  • City
    WASHINGTON
  • State
    DC
  • Country
    United States
  • Address
    1918 F ST NW
  • Postal Code
    20052-0042
  • Phone Number
    202-994-0728

Investigators

  • First Name
    Tian
  • Last Name
    Lan
  • Email Address
    tlan@gwu.edu
  • Start Date
    8/7/2024

Program Element

  • Text
    M3X - Mind, Machine, and Motor
  • Text
    Special Initiatives
  • Code
    164200

Program Reference

  • Text
    HUMAN-ROBOT INTERACTION
  • Code
    7632
  • Text
    EAGER
  • Code
    7916