CRCNS: Multimodal Dynamic Causal Learning for Neuroimaging

Information

  • Research Project
  • 10396137
  • ApplicationId
    10396137
  • Core Project Number
    R01MH129047
  • Full Project Number
    1R01MH129047-01
  • Serial Number
    129047
  • FOA Number
    PAR-21-005
  • Sub Project Id
  • Project Start Date
    8/5/2021
  • Project End Date
    5/31/2025
  • Program Officer Name
    FERRANTE, MICHELE
  • Budget Start Date
    8/5/2021
  • Budget End Date
    5/31/2022
  • Fiscal Year
    2021
  • Support Year
    01
  • Suffix
  • Award Notice Date
    8/5/2021
Organizations

CRCNS Research Proposal: Collaborative Research: Multimodal Dynamic Causal Learning for Neuroimaging

A Project Description

A.1 Introduction

Many analyses of fMRI and other neuroimaging data aim to discover the underlying causal or communication structures that generated that activity.1,2 An accurate characterization of these brain structures is important for understanding neural circuits, systems-level neuroscience, and the neural bases of various cognitive phenomena and mental disorders. Brain structures learned from neuroimaging data also provide a powerful diagnostic tool for predicting everything from the concepts currently in one's mind3–5 to whether one suffers from a mental disorder.6–10 Given the importance of such brain connectivity networks, it is unsurprising that a wide variety of learning algorithms have been developed for different neuroimaging modalities. In particular, many of these methods aim to infer the underlying causal or connectivity networks from data (in contrast with model comparison methods such as dynamic causal modeling (DCM)11). These network discovery methods have achieved some notable successes,2,6,12–21 but have largely failed to address two issues that impede our ability to understand the full, working brain. Our project will develop, validate, and apply methods that solve both of these challenges.

First, existing brain connectivity inference methods can be roughly divided into two groups: static methods that do not treat the brain as a dynamic system (e.g., IMaGES22 and most of the approaches tested by Smith et al.23), and dynamic methods that explicitly measure and model the dynamics of the brain. Static methods obviously fail to use all of the available information. In contrast, dynamic methods use the full structure of the measurements, but essentially all such methods24–28 infer causal and connectivity structures at the timescale of the measurement modality, rather than at the brain's causal timescale. However, the networks learned from data at the measurement and brain timescales can be quite different, even given solutions to all of the other statistical and measurement problems facing neuroimaging analysis.29 Moreover, the important facts about causal or connectivity structure frequently concern which brain regions communicate directly with which other brain regions, which requires a focus on the brain timescale, not the measurement-modality timescale. It is thus scientifically critical that we have methods that can determine the causal connections that exist at the timescale of the underlying neural systems, not just those found at the timescale of our particular neuroimaging methods.

Second, there are multiple neuroimaging measurement modalities, each with its own strengths and weaknesses. There are obvious and widely recognized advantages of multimodal information fusion: 1) access to multiple, richer datasets, larger sample sizes, and improved estimation quality; 2) improved spatial coverage of the brain compared to fast dynamic modalities alone; 3) improved dynamic coverage of the range of signals that are informative about interactions of brain networks; and 4) enhanced estimation quality and reduction of modality-specific deficiencies due to the complementary aspects of different modalities. These advantages are heavily exploited for feature and representation learning; our group has been highly active in this field.10,30–37 However, to our knowledge, no methods have been developed and validated that can learn causal information (effective connectivity) using data from multiple modalities. There exist machine learning methods (developed by members of our group) that can combine causal information from disparate datasets,38–41 but these methods have never been applied to multimodal neuroimaging data. Moreover, using these methods to combine spatially precise (e.g., fMRI) and dynamically precise (e.g., EEG, MEG) modalities requires a theory of the differences in …
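The timescale mismatch described above can be made concrete with a small synthetic sketch (illustrative only; the region names, coefficients, and undersampling factor are arbitrary assumptions, not part of the proposal's methods). At the fast "brain" timescale, a lag-1 chain A → B → C has no direct A → C dependence, but undersampling the series by a factor of two makes a direct-looking lag-1 A → C dependence appear at the measurement timescale:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000

# Chain A -> B -> C at the fast ("brain") timescale: one step of delay per link.
A = rng.standard_normal(T)
B = np.zeros(T)
C = np.zeros(T)
for t in range(1, T):
    B[t] = 0.9 * A[t - 1] + rng.standard_normal()
    C[t] = 0.9 * B[t - 1] + rng.standard_normal()

def lag1_corr(x, y):
    """Correlation between x[t-1] and y[t]."""
    return np.corrcoef(x[:-1], y[1:])[0, 1]

# At the fast timescale, A influences C only through B (a lag-2 path),
# so the direct lag-1 A -> C correlation is near zero.
fast_AC = lag1_corr(A, C)

# "Measure" every 2nd sample, as a slower modality would: one lag in the
# subsampled series now spans two lags of the underlying dynamics.
A2, C2 = A[::2], C[::2]
slow_AC = lag1_corr(A2, C2)

print(f"lag-1 A->C at fast timescale:  {fast_AC:.3f}")  # near zero: no direct edge
print(f"lag-1 A->C when undersampled: {slow_AC:.3f}")   # clearly nonzero: spurious direct edge
```

A naive lag-1 network estimate on the undersampled series would thus report a direct A → C connection that does not exist in the generating system, which is exactly why inference at the brain's causal timescale, rather than the measurement timescale, matters.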

  • IC Name
    NATIONAL INSTITUTE OF MENTAL HEALTH
  • Activity
    R01
  • Administering IC
    MH
  • Application Type
    1
  • Direct Cost Amount
    260665
  • Indirect Cost Amount
    93077
  • Total Cost
    353742
  • Sub Project Total Cost
  • ARRA Funded
    False
  • CFDA Code
    242
  • Ed Inst. Type
    SCHOOLS OF ARTS AND SCIENCES
  • Funding ICs
    NIMH:353742
  • Funding Mechanism
    Non-SBIR/STTR RPGs
  • Study Section
    ZRG1
  • Study Section Name
    Special Emphasis Panel
  • Organization Name
    GEORGIA STATE UNIVERSITY
  • Organization Department
    BIOSTATISTICS & OTHER MATH SCI
  • Organization DUNS
    837322494
  • Organization City
    ATLANTA
  • Organization State
    GA
  • Organization Country
    UNITED STATES
  • Organization Zip Code
    303023999
  • Organization District
    UNITED STATES