RI: Small: Collaborative Research: Structured Inference for Low-Level Vision

Information

  • NSF Award
  • 1618021
Owner
  • Award Id
    1618021
  • Award Effective Date
    7/1/2016
  • Award Expiration Date
    6/30/2019
  • Award Amount
    $194,612.00
  • Award Instrument
    Standard Grant

Vision is a valuable sensing modality because it is versatile. It lets humans navigate through unfamiliar environments, discover assets, grasp and manipulate tools, react to projectiles, track targets through clutter, interpret body language, and recognize familiar objects and people. This versatility stems from low-level visual processes that somehow produce, from ambiguous retinal measurements, useful intermediate representations of depth, surface orientation, motion, and other intrinsic scene properties. This project establishes a mathematical and computational foundation for similar low-level processing in machines. The key challenge it addresses is how to usefully encode and exploit the fact that, visually, the world exhibits substantial intrinsic structure. By advancing understanding of low-level vision in machines, this project makes progress toward computer vision systems that rival human vision in accuracy, reliability, speed, and power-efficiency.

This research revisits low-level vision and develops a comprehensive framework that possesses a common abstraction for information from different optical cues; the ability to encode scene structure across large regions and at multiple scales; implementation as parallel and distributed processing; and large-scale end-to-end learnability. The project approaches low-level vision as a structured prediction task, with ambiguous local predictions from many overlapping receptive fields being combined to produce a consistent global scene map that spans the visual field. The structured prediction models differ from those used for categorical tasks such as semantic segmentation, because they are specifically designed to accommodate the distinctive requirements and properties of low-level vision: continuous-valued output spaces; ambiguities that may form equiprobable manifolds; extreme scale variations; and global scene maps with higher-order piecewise smoothness. By strengthening the computational foundations of low-level vision, this project strives to enable many kinds of vision systems that are more efficient and more versatile, and to have impacts across the breadth of computer vision.
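The core idea above, fusing ambiguous local predictions from many overlapping receptive fields into one consistent global scene map, can be illustrated with a minimal sketch. This is not the project's actual model: it assumes simple per-pixel averaging as the fusion rule, and the function name and interface are hypothetical, chosen only to make the idea concrete.

```python
import numpy as np

def combine_patch_predictions(patches, positions, shape):
    """Fuse overlapping local predictions (e.g. per-patch depth estimates)
    into one global scene map by averaging, at each pixel, all patches
    that cover it. A hypothetical illustration, not the award's method.

    patches   : list of 2-D arrays, each a local prediction
    positions : list of (row, col) top-left corners, one per patch
    shape     : (H, W) of the global scene map
    """
    acc = np.zeros(shape)  # running sum of predictions at each pixel
    cnt = np.zeros(shape)  # number of patches covering each pixel
    for patch, (r, c) in zip(patches, positions):
        h, w = patch.shape
        acc[r:r + h, c:c + w] += patch
        cnt[r:r + h, c:c + w] += 1
    cnt[cnt == 0] = 1      # leave uncovered pixels at zero, avoid /0
    return acc / cnt
```

Where two receptive fields disagree, this averaging yields an intermediate value; the project's structured prediction models instead resolve such ambiguities jointly, enforcing consistency (e.g. piecewise smoothness) across the whole visual field.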

  • Program Officer
    Jie Yang
  • Min Amd Letter Date
    6/7/2016
  • Max Amd Letter Date
    6/7/2016
  • ARRA Amount

Institutions

  • Name
    Toyota Technological Institute at Chicago
  • City
    Chicago
  • State
    IL
  • Country
    United States
  • Address
    6045 S. Kenwood Avenue
  • Postal Code
    60637-2902
  • Phone Number
    (773) 834-0409

Investigators

  • First Name
    Ayan
  • Last Name
    Chakrabarti
  • Email Address
    ayanc@ttic.edu
  • Start Date
    6/7/2016

Program Element

  • Text
    ROBUST INTELLIGENCE
  • Code
    7495

Program Reference

  • Text
    ROBUST INTELLIGENCE
  • Code
    7495
  • Text
    SMALL PROJECT
  • Code
    7923