New Directions in Bayesian Model Criticism

Information

  • Award Id
    2311108
  • Award Effective Date
    9/1/2023
  • Award Expiration Date
    8/31/2026
  • Award Amount
    $225,000.00
  • Award Instrument
    Standard Grant

Abstract

This project will address the problem of Bayesian model criticism, which is crucial for the effective use of Bayesian statistics and probabilistic machine learning. Currently, the process of designing Bayesian models relies heavily on creativity and experience. This research will develop new statistical tools to evaluate the adequacy of Bayesian models, providing guidance for model design and revision. The project will focus on two innovative approaches: population predictive checks (population PCs) and the posterior predictive null (PPN). These methods combine Bayesian and frequentist ideas to enhance the robustness and rigor of Bayesian model checking. The research will contribute to the foundations of Bayesian statistics, foster connections between different statistical approaches, and advance the field of deep probabilistic models. It will also contribute to the research training of a graduate student involved in the project.

Specifically, the research will develop two innovative approaches to Bayesian model criticism that will contribute to the field's technical advancement. The first approach focuses on population predictive checks (population PCs), which combine Bayesian and frequentist principles to provide population-based evaluation of Bayesian models. By leveraging the strengths of both paradigms, this research will develop novel methods that effectively assess the adequacy of Bayesian models, enabling researchers to gain insight into model behavior and performance and to make informed decisions about model design and revision. The second technical thread centers on the posterior predictive null (PPN), a novel type of model criticism that asks whether data generated from one proposed model can "fool" the model check of another model. By developing statistical tools to address this question, this research will assess the distinctiveness of Bayesian models and suggest new directions for finding parsimonious solutions to data modeling.
Through theoretical investigations, empirical evaluations, and real-world applications, including medical informatics and computational astrophysics, this research will demonstrate the efficacy of these innovations. The ultimate goal is to provide a comprehensive and practical workflow for building, evaluating, revising, and selecting modern Bayesian models. To ensure widespread access, the algorithms will be disseminated as open-source software, empowering statisticians, scientists, and probabilistic modelers to effectively employ these tools and advance the adoption of Bayesian statistics and probabilistic machine learning methodologies.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
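To ground the ideas above, the following is a minimal sketch of a posterior predictive check alongside a holdout ("population-flavored") variant, in the spirit of population PCs. The conjugate normal model, the check statistic, and the train/holdout split are illustrative assumptions chosen for brevity; they are not the award's actual methods or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (an assumption for illustration): a normal model with known
# noise scale sigma and a conjugate normal prior on the latent mean.
sigma = 1.0                      # model's assumed observation noise
mu0, tau0 = 0.0, 10.0            # prior mean and std for the latent mean

# Simulated data whose spread (1.5) is larger than the model's sigma,
# so the model is deliberately misspecified.
data = rng.normal(2.0, 1.5, size=200)
train, holdout = data[:100], data[100:]

# Exact conjugate posterior over the mean, given the training split.
n = len(train)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (train.sum() / sigma**2 + mu0 / tau0**2)

def statistic(x):
    """Check statistic: the sample standard deviation."""
    return x.std()

# Draw replicated datasets from the posterior predictive and record the
# statistic on each replication.
reps = np.array([
    statistic(rng.normal(rng.normal(post_mean, np.sqrt(post_var)),
                         sigma, size=n))
    for _ in range(2000)
])

# Classical posterior predictive check: the training data is used both
# to fit the posterior and to evaluate the check.
ppc_pval = np.mean(reps >= statistic(train))

# Holdout variant, in the spirit of population PCs: the statistic is
# evaluated on held-out data, so the check targets the data population
# rather than the double-used training set.
pop_pval = np.mean(reps >= statistic(holdout))

print(f"PPC p-value (train):   {ppc_pval:.3f}")
print(f"PPC p-value (holdout): {pop_pval:.3f}")
```

Because the true spread (1.5) exceeds the model's assumed noise (1.0), both p-values come out near zero, flagging the misfit; with well-specified data the p-values would be moderate.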

  • Program Officer
    Yong Zeng
    yzeng@nsf.gov
    (703) 292-7299
  • Min Amd Letter Date
    7/17/2023
  • Max Amd Letter Date
    7/17/2023
  • ARRA Amount

Institutions

  • Name
    Columbia University
  • City
    NEW YORK
  • State
    NY
  • Country
    United States
  • Address
    202 LOW LIBRARY 535 W 116 ST MC
  • Postal Code
    10027
  • Phone Number
    2128546851

Investigators

  • First Name
    David
  • Last Name
    Blei
  • Email Address
    david.blei@columbia.edu
  • Start Date
    7/17/2023

Program Element

  • Text
    STATISTICS
  • Code
    1269

Program Reference

  • Text
    Machine Learning Theory