Humans and other animals effortlessly and subconsciously reconstruct the 3D world around them from the video imagery streaming to their eyes, and successfully use it for navigation, food finding, predator avoidance, and more. Computer vision 3D technology has been evolving rapidly to reconstruct the world from a set of cameras and to locate those cameras in the environment. This technology underpins navigation, as in automated driving, robot navigation, and drone flight; manipulation, as in robotic manufacturing and robotic medical interventions; measurement in metrology; modeling for the entertainment industry; and a host of other applications. As a result, 3D vision has experienced exponential growth in capability, efficiency, and robustness. Despite this phenomenal growth, which arises from exploiting what is currently achievable, fundamental shortcomings must be addressed to enlarge the scope of applications and to increase robustness in existing ones. First, images from rapidly moving cameras (e.g., those carried by drones or pedestrians) are often blurry and lack features; indoor scenes and other scenes with textureless or repetitively textured surfaces lack features or yield indistinguishable ones; such cases often lie beyond the capabilities of current technologies. Second, image sensing typically enjoys a high degree of redundancy, which current algorithms often discard, forfeiting the opportunity to use the high information content inherent in that redundancy. Third, there is often a large gap between the internal representations used in current technology, which are typically point-based, and a semantic representation of the scene, which is more resonant with an understanding of the underlying curves (e.g., ridges) and surface patches (faces) of an object. This project aims to remedy these shortcomings.<br/><br/>Several technical challenges need to be addressed to achieve these goals.
First, the project identifies that the notion of numerical stability, currently confounded with degeneracy, should be thoroughly studied and analyzed for key multiview geometry (MVG) tasks. The stability requirement leads to a new class of techniques that will be implemented and made readily available to the community to help avoid failure modes in a broad selection of MVG problems. Second, the development of tools to solve very large polynomial systems is an enabling technology that will transform not just multiview geometry problems but also a broad range of problems from other scientific areas. Third, these developments will enable a novel MVG approach based on curves, surfaces, and their differential geometry for relative pose estimation, absolute pose estimation, and 3D reconstruction. This will serve to bridge the semantic-metric gap between geometrically accurate 3D point clouds/meshes and semantically meaningful organizations in terms of objects, object parts, spatial layout, mapping, etc. In conjunction, these three streams of research will allow direct, efficient, and reliable integration of information across a large number of views in multinocular vision systems.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.