A brief glimpse at a face quickly reveals rich multi-dimensional information about the person in front of us. How is this impressive computational feat accomplished? A recently revised neural framework for face processing suggests that face form information, i.e., invariant face features such as gender, age, and identity, is processed through the ventral visual pathway, comprising the occipital face area, fusiform face area, and anterior temporal lobe face area. However, evidence from fMRI remains equivocal about when, where, and how the specific face dimensions of age, gender, and identity are extracted. A key property of a complex computation is that it proceeds via stages and hence unfolds over time. We recently investigated the computational stages of face perception in a MEG study (Dobs et al., Nature Comms, 2019) and found that gender and age are extracted before identity information. However, this temporal information has yet to be linked to the spatial information available from fMRI because of limitations in current methods for spatial localization of MEG sources. Here, we propose to overcome these limitations and provide the full picture of how face computations unfold over both time and space in the brain by developing novel methods for localizing MEG sources, leveraging our team's expertise in MEG and machine learning. In Aim 1, we will develop a new analytical MEG localization method called Alternating Projections that iteratively fits focal sources to the MEG data. In Aim 2, we will develop a novel data-driven MEG localization method based on geometric deep learning that reconstructs distributed cortical maps by learning statistical relationships in the non-Euclidean space of the cortical manifold. In Aim 3, we will first identify which method is most suitable to model human MEG face responses, using fMRI face localizers as ground truth.
We will then extract spatially and temporally accurate face processing maps to characterize the computational steps entailed in extracting age, gender, and identity information along the ventral visual pathway. A computationally precise characterization of the neural basis of face processing would be a landmark achievement for basic research in vision and social perception in humans. Insights into how face perception is accomplished in humans may also yield clues about how to improve AI systems that perform similar tasks. Further, the methods developed here may increase the power of MEG data to answer questions about the spatiotemporal trajectory of neural computation in the human brain.
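The alternating-projections idea named in Aim 1 can be illustrated with a toy coordinate-descent sketch: given a forward (lead field) matrix mapping candidate source locations to sensors, each source location is re-fit in turn while the others are held fixed, until the residual stops improving. Everything below (the random lead field, dimensions, and two simulated sources) is a hypothetical illustration of that general scheme, not the proposed method or any real MEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_candidates, n_sources = 32, 50, 2

# Hypothetical lead field: column c is the sensor pattern of a unit source
# at candidate cortical location c (random stand-in for a real forward model).
G = rng.standard_normal((n_sensors, n_candidates))

# Simulated noiseless measurement from two focal sources.
true_locs = [7, 33]
b = G[:, true_locs] @ np.array([1.5, -2.0])

locs = [0, 1]  # arbitrary initial location guesses
for _ in range(10):  # alternate over sources until convergence
    for i in range(n_sources):
        others = [locs[j] for j in range(n_sources) if j != i]
        best_loc, best_err = locs[i], np.inf
        for c in range(n_candidates):
            A = G[:, others + [c]]
            # Least-squares amplitudes for this candidate configuration.
            amps, *_ = np.linalg.lstsq(A, b, rcond=None)
            err = np.linalg.norm(b - A @ amps)
            if err < best_err:
                best_loc, best_err = c, err
        locs[i] = best_loc  # re-fit source i with the others held fixed

print(sorted(locs))  # with noiseless data, the simulated locations are recovered
```

In this noiseless toy setting the scan over candidate locations drives the residual to zero at the simulated source configuration; a realistic variant would use a subject-specific forward model, noise regularization, and a stopping criterion on the residual.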