NRI: Collaborative Research: Learning Adaptive Representations for Robust Mobile Robot Navigation from Multi-Modal Interactions

Information

  • NSF Award
  • 1638072
Owner
  • Award Id
    1638072
  • Award Effective Date
    10/1/2016
  • Award Expiration Date
    9/30/2019
  • Award Amount
    $332,728.00
  • Award Instrument
    Standard Grant

Abstract

Most existing autonomous systems reason over flat, task-dependent models of the world that do not scale to large, complex environments. This lack of scalability and generalizability is a significant barrier to the widespread adoption of robots for common tasks. This research will advance the state-of-the-art in robot perception, natural language understanding, and learning to develop new models and algorithms that significantly improve the scalability and efficiency of mapping and motion planning in large, complex environments. These contributions will impact the next generation of autonomous systems that interact with humans in many domains, including manufacturing, healthcare, and exploration. Outcomes will include the release of open-source software and data, workshops, K-12 STEM outreach efforts, and undergraduate and graduate education in the unique, multidisciplinary fields of perception, natural language understanding, and motion planning.

As robots perform a wider variety of tasks within increasingly complex environments, their ability to learn and reason over expressive models of their environment becomes critical. The goal of this research is to develop models and algorithms for learning adaptive, hierarchical environment representations that afford efficient planning for mobility tasks. These representations will take the form of probabilistic models that capture the rich spatial-semantic properties of the robot's environment and are factorable to enable scalable inference. This research will develop algorithms that learn and adapt these representations by fusing knowledge conveyed through human-provided natural language utterances with information extracted from the robot's multimodal sensor streams. This research will develop algorithms that then reason over the complexity of these models in the context of the inferred task, thereby identifying simplifications that enable more efficient robot motion planning.
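The fusion of language and sensing the abstract describes can be illustrated with a toy example: treating a place's semantic label as a single discrete variable, multiplying a prior by independent observation factors (one from a perception module, one from a natural-language utterance), and renormalizing. This is a minimal sketch of that kind of factored Bayesian update, not the project's actual system; the labels, likelihood values, and function names here are all hypothetical.

```python
# Toy illustration of fusing a language cue with a sensor observation to
# update a distribution over a place's semantic label. All labels and
# likelihood numbers are made up for the example.

LABELS = ["kitchen", "office", "hallway"]

def normalize(weights):
    """Scale non-negative weights so they sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

def fuse(prior, sensor_likelihood, language_likelihood):
    """Pointwise product of the prior with each observation factor,
    then renormalization -- Bayesian fusion for one discrete variable."""
    joint = [p * s * l for p, s, l in
             zip(prior, sensor_likelihood, language_likelihood)]
    return normalize(joint)

# Uniform prior over the place's label.
prior = [1 / 3, 1 / 3, 1 / 3]
# Hypothetical camera-classifier output: weak evidence for "kitchen".
sensor = [0.5, 0.3, 0.2]
# Hypothetical likelihood of the utterance "the kitchen is ahead"
# under each candidate label.
language = [0.7, 0.15, 0.15]

posterior = fuse(prior, sensor, language)
best = LABELS[posterior.index(max(posterior))]
```

Because each observation enters as a separate factor, additional modalities (or additional map variables) multiply in the same way, which is what makes a factored representation amenable to scalable inference.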

  • Program Officer
    Reid Simmons
  • Min Amd Letter Date
    8/10/2016
  • Max Amd Letter Date
    8/10/2016
  • ARRA Amount

Institutions

  • Name
    Toyota Technological Institute at Chicago
  • City
    Chicago
  • State
    IL
  • Country
    United States
  • Address
    6045 S. Kenwood Avenue
  • Postal Code
    60637-2902
  • Phone Number
    (773) 834-0409

Investigators

  • First Name
    Matthew
  • Last Name
    Walter
  • Email Address
    mwalter@ttic.edu
  • Start Date
    8/10/2016 12:00:00 AM

Program Element

  • Text
    National Robotics Initiative
  • Code
    8013

Program Reference

  • Text
    Natl Robotics Initiative (NRI)
  • Code
    8086