PROJECT SUMMARY/ABSTRACT Modern experimental approaches allow researchers to collect a variety of whole-brain data from the same animal using different anatomical labels, including tracers, genetic markers, and fiducial marks from recording electrodes. Unfortunately, viewing and analysis methods have not kept pace with the complexity of these datasets, which can be as large as several terabytes. This limitation makes it time- and resource-intensive to view and manipulate light-microscopy data or to share these datasets with distant laboratories. Currently available software solves some aspects of this problem, but no existing program provides a user-friendly way to visualize, annotate, and compare large neuroanatomical datasets across research sites with minimal investment of computational resources. We propose to develop a web-based tool, named BrainSharer, to allow researchers to access, visualize, align, share, and semi-automatically annotate brain-wide data within a common framework. The foundation for this tool will be provided by Neuroglancer, a generic web-based volumetric viewer first developed at Google and then adapted for use in electron microscopy laboratories. While some of its current features are useful across applications, existing versions of Neuroglancer are not optimized for light-microscopy data. In particular, they do not realize the potential for sharing, viewing, and editing data across multi-laboratory collaborations, such as U19 projects. To enable BrainSharer to serve data rapidly and to save and restore sessions, we will add a modular distributed database to synchronize metadata across laboratories.
In addition, we will tailor BrainSharer for light microscopy by displaying data in formats independent of the imaging modality, adding semi-automatic means to segment cell bodies and processes, adding tools for annotation (with special attention to defining cytological boundaries in three dimensions and tracing projection pathways), and adding ways to incorporate auxiliary data such as electrode tracks. We will also integrate alignment tools into BrainSharer, so that separate datasets can be co-registered, visualized, and annotated in the same framework, along with established and emerging atlases. As test beds for development of BrainSharer, we will use three types of datasets from our U19 projects: whole-brain disynaptic and polysynaptic tracing, activity-based staining with c-fos, and neurovascular data. All software, training datasets, and video tutorials for BrainSharer will be made freely available to the community, hosted on our website, along with a slice histology dataset and an electrophysiology dataset with probes implanted throughout the brain. To orient new users, we will also provide a Jupyter notebook for converting raw, intermediate, and registered light-sheet data, along with detected cells and brain atlases, to precomputed format, so they can be loaded into BrainSharer. When complete, BrainSharer will make it straightforward for researchers to use their laptops to combine and compare large datasets from different anatomical labels for viewing and analysis relative to reference atlases, and to share this information across performance sites, thus increasing the ease of use and interoperability of big data in neuroscience.
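To illustrate the conversion step mentioned above, the sketch below builds the JSON "info" header that Neuroglancer's precomputed format expects at the root of a volume directory. This is a minimal sketch, not the proposed notebook: the function name, the example volume dimensions, and the voxel resolution are all illustrative, and real pipelines typically delegate this step (and the chunked data writing) to libraries such as cloud-volume or tensorstore.

```python
import json

def make_precomputed_info(shape_xyz, resolution_nm, data_type="uint16",
                          chunk_size=(64, 64, 64)):
    """Build the 'info' metadata dict for a single-scale precomputed volume.

    shape_xyz     -- volume dimensions in voxels, ordered (x, y, z)
    resolution_nm -- voxel size in nanometers, ordered (x, y, z)
    """
    return {
        "type": "image",                  # an intensity volume, not a segmentation
        "data_type": data_type,           # e.g. uint8/uint16 for light-microscopy stacks
        "num_channels": 1,
        "scales": [{
            "key": "full",                # subdirectory holding this scale's chunks
            "size": list(shape_xyz),
            "resolution": list(resolution_nm),
            "chunk_sizes": [list(chunk_size)],
            "encoding": "raw",            # uncompressed little-endian chunks
        }],
    }

# Hypothetical light-sheet stack: 2000 x 2000 x 1000 voxels at 1.8 um isotropic.
info = make_precomputed_info((2000, 2000, 1000), (1800, 1800, 1800))
print(json.dumps(info, indent=2))
```

A multi-resolution pyramid would add one entry per downsampling level to the "scales" list; the viewer then streams only the chunks visible at the current zoom, which is what keeps terabyte-scale volumes browsable from a laptop.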