How do we segregate the distinct objects in a complex visual scene? Because most real-world objects are opaque and therefore partly occlude one another, the eye receives a patchwork of overlapping surfaces, and the brain is left with the task of grouping these surface patches into unified objects. Although we do this effortlessly every day, we still do not understand the underlying neural computations that accomplish this scene analysis. Perceptual grouping cues (e.g., two surface fragments that move together or share the same texture) help organize the visual scene into complete objects, but the specific computations performed, and the brain regions involved, remain largely unknown. This project employs brain imaging to quantify the relative strength (and “pecking order”) of the many possible perceptual grouping cues used in constructing perceived objects from their component structures. This is achieved through a novel method for visual stimulation and analysis that uses noise-based image classification (i.e., reverse correlation) during functional brain imaging. Reverse correlation has been used extensively in behavioral laboratory testing but, until recently, has not been practical for brain imaging because it typically requires a very large number of trials. By optimizing the technique to require far fewer trials, reverse correlation becomes feasible during brain imaging, making it possible to uncover the brain regions driving the perception of objects in our environment. A more complete understanding of the brain mechanisms underlying perceptual grouping will inform the design of visual displays throughout our environment, including street signs, occupational safety warnings, medical equipment instructions, and virtually all dynamic displays of visual information. It will also improve artificial intelligence and robotic visual scene analysis, crucial for new technologies such as driverless cars.

Neuroscientific studies of object perception have previously focused primarily on the specificity of object representations in the brain. In contrast, this research studies the psychological and neural underpinnings of how these object percepts are formed. A key innovation is a novel metric that reliably quantifies perceptual grouping and is flexible enough to be used both behaviorally and during functional magnetic resonance imaging (fMRI). Using this approach, the critical grouping cues for object perception, and their dominance relations, are first determined in careful behavioral testing; the reverse correlation method is then adapted to brain imaging data, with the algorithm optimized to reduce the number of trials (and total brain scan time) required. Finally, this new technique, which compares the internal templates of neural structures to behavioral templates, can be used to specify the network computations driving brain-behavior relations during perceptual grouping. The results of this research will advance our understanding of visual cognition and resolve where in the brain, and specifically at which level of the cortical visual processing hierarchy, grouping cues operate.
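To make the core analysis concrete, the sketch below illustrates the logic of noise-based image classification (reverse correlation): noise fields from trials sorted by a binary response are averaged and differenced, and the resulting classification image estimates the internal template. The simulated observer, trial count, and image dimensions are illustrative assumptions, not the project's actual stimuli or pipeline.

```python
# Minimal sketch of a reverse-correlation (classification-image) analysis.
# The simulated observer and all parameters below are illustrative
# assumptions, not this project's actual experimental pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_trials, size = 2000, 32                        # hypothetical trial count and image size
noise = rng.normal(size=(n_trials, size, size))  # per-trial Gaussian noise fields

# Stand-in observer: decisions driven by an internal template plus internal noise.
# In the real experiment, `responses` would be behavioral (or voxel-wise) data.
template = np.zeros((size, size))
template[12:20, 12:20] = 1.0                     # toy "grouped object" template
decision = (noise * template).sum(axis=(1, 2)) + rng.normal(scale=5.0, size=n_trials)
responses = decision > 0

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials.
classification_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# Correlating the recovered image with the true template gauges recovery quality,
# analogous to comparing neural templates against behavioral templates.
r = np.corrcoef(classification_image.ravel(), template.ravel())[0, 1]
print(f"template recovery (Pearson r) = {r:.2f}")
```

With enough trials the classification image converges on the internal template; reducing the trial count while preserving that convergence is precisely the optimization needed to bring the method within the time limits of a brain scan.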
This research will reveal computational algorithms used by the human brain for perceptual grouping and scene segregation that can also be used to enhance artificial intelligence (AI) visual scene analysis.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.