Collaborative Research: HCC: Medium: Untethered3D: In-Air 3D Modeling Using Non-Visual Feedback

Information

  • Award Id
    2402894
  • Award Effective Date
    10/1/2024
  • Award Expiration Date
    9/30/2028
  • Award Amount
    $479,999.00
  • Award Instrument
    Standard Grant

In the context of virtual reality, creating, perceiving, and editing three-dimensional (3D) shapes are at the core of activities such as product design (creating or evaluating objects for manufacturing or personal fabrication), online shopping (experiencing furniture in a room or trying on clothing), and specialized training (gaining familiarity with a remote tool). Yet today's approaches for interacting with virtual 3D shapes are strictly visual, requiring precise manipulation and interpretation of digital designs on a screen. This project's goal is to create algorithms and interfaces that make 3D modeling easier and more effective, even in the absence of visual cues: auto-correct for 3D drawing, the ability to hear shapes, and the ability to edit 3D shapes verbally. By using senses that do not require a screen, namely body awareness and sound, this project aims to untether people from their screens, enabling virtual 3D perception from anywhere. The outcomes of this project are expected to have far-reaching impacts, including increased accessibility for people with visual impairments, enhanced interface techniques for low-visibility scenarios, and new opportunities for underrepresented groups in research and do-it-yourself fabrication.

The research focuses on three main objectives: developing accurate “in-air” 3D drawing tools, designing sonification (conveying information through sound) techniques for non-visual shape perception and editing, and creating verbal 3D shape editing tools and interactions. These aims will be pursued through auto-correct algorithms that account for the limits of proprioceptive accuracy (a person’s sense of their body pose and movement), techniques to sonify shapes based on hand pose, and methods for verbal shape modification. This research sets the stage for future studies on incorporating sound and speech into 3D modeling, as well as on non-visual user interfaces.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
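To make the sonification idea concrete, here is a minimal illustrative sketch (not the project's actual method): one simple way to convey shape proximity through sound is to map the distance between the hand and the nearest surface point to the pitch of a tone. All names and the specific mapping below are hypothetical assumptions chosen for illustration.

```python
def sonify_distance(distance_m, d_max=0.5, f_min=220.0, f_max=880.0):
    """Hypothetical mapping from hand-to-surface distance to tone frequency.

    Closer to the surface -> higher pitch, so a user can "hear" how
    near their hand is to a virtual shape without looking at a screen.
    distance_m: distance in meters; d_max: range beyond which pitch bottoms out.
    """
    # Clamp the distance into [0, d_max] and normalize to [0, 1].
    t = min(max(distance_m, 0.0), d_max) / d_max
    # Exponential interpolation: equal distance steps become equal
    # musical intervals, which is closer to how pitch is perceived.
    return f_min * (f_max / f_min) ** (1.0 - t)
```

In a real system this frequency would drive a synthesizer in a tight loop with hand tracking; the exponential (rather than linear) mapping is a common choice because pitch perception is roughly logarithmic in frequency.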

  • Program Officer
    Thomas Martin, tmartin@nsf.gov, (703) 292-2170
  • Min Amd Letter Date
    8/18/2024
  • Max Amd Letter Date
    8/18/2024
  • ARRA Amount

Institutions

  • Name
    University of Chicago
  • City
    CHICAGO
  • State
    IL
  • Country
    United States
  • Address
    5801 S ELLIS AVE
  • Postal Code
    60637-5418
  • Phone Number
    (773) 702-8669

Investigators

  • First Name
    Rana
  • Last Name
    Hanocka
  • Email Address
    ranahanocka@uchicago.edu
  • Start Date
    8/18/2024

Program Element

  • Text
    HCC-Human-Centered Computing
  • Code
    736700

Program Reference

  • Text
    Cyber-Human Systems
  • Code
    7367
  • Text
    MEDIUM PROJECT
  • Code
    7924