Bayesian Model Comparison
Marginal Likelihood
The marginal likelihood (also called the model evidence) measures how well a model fits the data by integrating over possible parameter values under the prior:

$$p(\mathcal{D} \mid m) = \int p(\mathcal{D} \mid \theta, m)\, p(\theta \mid m)\, d\theta.$$
If the prior concentrates mass on parameters that assign high likelihood to the data, the marginal likelihood is large. If the prior is diffuse over many parameter values, the marginal likelihood is lower — even if the maximum likelihood is high.
This gives the marginal likelihood a built-in Occam’s razor property: more flexible models can assign probability to many datasets, but since each model’s distribution must normalize, no single dataset receives very high probability. Simpler models that focus probability on fewer datasets can assign higher marginal likelihood to those datasets — provided the data actually came from such a model.
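To make the effect of prior concentration concrete, here is a minimal sketch (not from the text) that evaluates the closed-form Beta-Bernoulli evidence $p(\mathcal{D}) = B(\alpha + k, \beta + n - k)/B(\alpha, \beta)$ for a few illustrative priors; the hyperparameter values are made up for the example.

```python
import numpy as np
from scipy.special import betaln

def log_marginal_likelihood(k, n, alpha, beta):
    """Log evidence of a Beta(alpha, beta)-Bernoulli model for a sequence
    with k successes in n trials: p(D) = B(alpha + k, beta + n - k) / B(alpha, beta)."""
    return betaln(alpha + k, beta + n - k) - betaln(alpha, beta)

k, n = 7, 10  # observed: 7 successes in 10 trials

# A diffuse prior spreads mass over all rates; a prior concentrated near 0.7
# places mass where the likelihood is high and earns a larger evidence, while
# a prior concentrated in the wrong place is penalized.
for name, (a, b) in {"diffuse Beta(1, 1)": (1, 1),
                     "concentrated Beta(14, 6)": (14, 6),
                     "misplaced Beta(2, 18)": (2, 18)}.items():
    print(f"{name:>26}: log p(D) = {log_marginal_likelihood(k, n, a, b):.3f}")
```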
Bayesian Model Averaging and Selection
Given a collection of models $\mathcal{M}$ with prior probabilities $p(m)$, a fully Bayesian prediction integrates over all models:

$$p(x_{\text{new}} \mid \mathcal{D}) = \sum_{m \in \mathcal{M}} p(x_{\text{new}} \mid \mathcal{D}, m)\, p(m \mid \mathcal{D}), \qquad p(m \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid m)\, p(m)}{\sum_{m' \in \mathcal{M}} p(\mathcal{D} \mid m')\, p(m')}.$$
This is Bayesian model averaging. A simpler approximation — model selection — picks the single model with the highest posterior probability $p(m \mid \mathcal{D})$ and uses only it for prediction.
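The sketch below (illustrative, with two made-up Beta-Bernoulli models) turns per-model evidences and model priors into posterior model probabilities, then contrasts the model-averaged predictive probability with the prediction of the single best model.

```python
import numpy as np
from scipy.special import betaln

def log_evidence(k, n, a, b):
    # Beta-Bernoulli marginal likelihood for a particular binary sequence
    return betaln(a + k, b + n - k) - betaln(a, b)

k, n = 7, 10                           # observed data: 7 successes in 10 trials
models = {"m1: Beta(1, 1)": (1, 1),    # vague model
          "m2: Beta(30, 10)": (30, 10)}  # model committed to rates near 0.75
log_prior_m = np.log([0.5, 0.5])       # p(m): uniform prior over models

# Posterior over models: p(m | D) proportional to p(D | m) p(m)
log_joint = np.array([log_evidence(k, n, a, b) for a, b in models.values()]) + log_prior_m
post_m = np.exp(log_joint - np.logaddexp.reduce(log_joint))

# Each model's posterior predictive p(next = 1 | D, m) and the model average
pred = np.array([(a + k) / (a + b + n) for a, b in models.values()])
print("p(m | D)           :", dict(zip(models, post_m.round(3))))
print("BMA  p(next=1 | D) :", float(post_m @ pred))
print("Selection (best m) :", float(pred[np.argmax(post_m)]))
```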
Marginal Likelihood in Exponential Families
For exponential family models with conjugate priors, the marginal likelihood has a closed form as a ratio of normalizing constants:

$$p(\mathcal{D} \mid \alpha, \beta) = \left(\prod_{i=1}^{n} h(x_i)\right) \frac{Z(\alpha', \beta')}{Z(\alpha, \beta)},$$

where $\alpha'$ and $\beta'$ are the posterior hyperparameters and $Z$ is the normalizer of the conjugate prior. This is the same structure used to derive collapsed Gibbs samplers.
Example — Bayesian linear regression. Under a normal-inverse-chi-squared conjugate prior with hyperparameters $(\mathbf{w}_0, \mathbf{V}_0, \nu_0, \sigma_0^2)$, the marginal likelihood is:

$$p(\mathbf{y} \mid \mathbf{X}) = \frac{1}{\pi^{n/2}} \sqrt{\frac{|\mathbf{V}_n|}{|\mathbf{V}_0|}}\; \frac{\Gamma(\nu_n/2)}{\Gamma(\nu_0/2)}\; \frac{(\nu_0 \sigma_0^2)^{\nu_0/2}}{(\nu_n \sigma_n^2)^{\nu_n/2}},$$

where $(\mathbf{w}_n, \mathbf{V}_n, \nu_n, \sigma_n^2)$ are the corresponding posterior hyperparameters.
This can be used to compare models defined by different choices of basis functions (i.e., different design matrices $\mathbf{X}$) or different prior hyperparameters.
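As a concrete illustration, the following sketch evaluates the closed-form log evidence above for polynomial bases of increasing degree on synthetic data; the prior settings and data are assumptions made for the example, and the evidence should peak near the degree that generated the data rather than at the most flexible model.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(X, y, w0, V0, nu0, s0sq):
    """Closed-form log evidence for Bayesian linear regression under an
    NIX prior (w0, V0, nu0, s0sq), following the formula above (a sketch)."""
    n = len(y)
    V0_inv = np.linalg.inv(V0)
    Vn_inv = V0_inv + X.T @ X
    Vn = np.linalg.inv(Vn_inv)
    wn = Vn @ (V0_inv @ w0 + X.T @ y)
    nun = nu0 + n
    nun_snsq = nu0 * s0sq + y @ y + w0 @ V0_inv @ w0 - wn @ Vn_inv @ wn
    return (-0.5 * n * np.log(np.pi)
            + 0.5 * (np.linalg.slogdet(Vn)[1] - np.linalg.slogdet(V0)[1])
            + gammaln(nun / 2) - gammaln(nu0 / 2)
            + 0.5 * nu0 * np.log(nu0 * s0sq) - 0.5 * nun * np.log(nun_snsq))

# Compare polynomial bases of different degree on synthetic quadratic data
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 1.5 * x**2 + 0.3 * rng.standard_normal(x.size)

for degree in (1, 2, 3, 6):
    X = np.vander(x, degree + 1, increasing=True)   # design matrix of basis functions
    d = X.shape[1]
    lml = log_marginal_likelihood(X, y, w0=np.zeros(d), V0=10.0 * np.eye(d),
                                  nu0=1.0, s0sq=1.0)
    print(f"degree {degree}: log p(y | X) = {lml:.2f}")
```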
The Laplace Approximation
When the marginal likelihood is intractable, the Laplace approximation offers a closed-form estimate. The idea is to approximate the log posterior by a second-order Taylor expansion around the MAP estimate $\hat{\theta}_{\text{MAP}}$:

$$\log p(\theta \mid \mathcal{D}) \approx \log p(\hat{\theta}_{\text{MAP}} \mid \mathcal{D}) - \frac{1}{2}\, (\theta - \hat{\theta}_{\text{MAP}})^\top \Sigma^{-1} (\theta - \hat{\theta}_{\text{MAP}}),$$

where $\Sigma = -\mathbf{H}^{-1}$ is the negative inverse Hessian of the log posterior at the mode (equivalently of the log joint $\log p(\mathcal{D}, \theta)$, since the two differ only by a constant in $\theta$). The gradient term vanishes because $\hat{\theta}_{\text{MAP}}$ is a stationary point. This approximation gives a Gaussian posterior $p(\theta \mid \mathcal{D}) \approx \mathcal{N}(\theta \mid \hat{\theta}_{\text{MAP}}, \Sigma)$.
The Laplace approximation is theoretically justified by the Bernstein–von Mises theorem: as $n \to \infty$, the posterior converges to a Gaussian centered on the true parameter $\theta^*$ with covariance $\tfrac{1}{n}\, \mathcal{I}(\theta^*)^{-1}$, where

$$\mathcal{I}(\theta) = -\mathbb{E}_{x \sim p(x \mid \theta)}\!\left[\nabla_\theta^2 \log p(x \mid \theta)\right]$$

is the Fisher information.
Approximating the log marginal likelihood. Substituting the Laplace approximation into the marginal likelihood integral gives:

$$\log p(\mathcal{D}) \approx \log p(\mathcal{D} \mid \hat{\theta}_{\text{MAP}}) + \log p(\hat{\theta}_{\text{MAP}}) + \frac{D}{2} \log 2\pi + \frac{1}{2} \log |\Sigma|.$$

Replacing the MAP estimate by the MLE, approximating the Hessian by $n$ times the Fisher information (so that $\log|\Sigma| \approx -D \log n$ up to constants), and dropping terms that do not grow with $n$ leads to the Bayesian information criterion (BIC):

$$\text{BIC} = \log p(\mathcal{D} \mid \hat{\theta}_{\text{MLE}}) - \frac{D}{2} \log n,$$

a penalized maximum likelihood score where $D$ is the number of parameters.
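The sketch below applies this to a small made-up model (Bernoulli observations with a Gaussian prior on the log-odds): it finds the MAP estimate numerically, builds the Laplace estimate of $\log p(\mathcal{D})$ from the Hessian at the mode, and compares it with BIC and with brute-force quadrature; all settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Illustrative model (not from the text): y_i ~ Bernoulli(sigmoid(theta)),
# theta ~ N(0, 1).  We approximate log p(D) with Laplace and compare to BIC
# and to brute-force quadrature.
rng = np.random.default_rng(1)
n = 50
y = rng.random(n) < 0.8            # data generated with true rate 0.8

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_lik(t):
    p = sigmoid(t)
    return np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

def log_joint(t):                  # log p(D | theta) + log p(theta)
    return log_lik(t) + norm.logpdf(t, 0.0, 1.0)

# MAP estimate and (finite-difference) Hessian of the log joint at the mode
theta_map = minimize_scalar(lambda t: -log_joint(t), bounds=(-10, 10), method="bounded").x
eps = 1e-4
hess = (log_joint(theta_map + eps) - 2 * log_joint(theta_map)
        + log_joint(theta_map - eps)) / eps**2           # negative at a maximum

# Laplace: log p(D) ~= log p(D, theta_map) + (1/2) log(2*pi) - (1/2) log(-hess)
log_Z_laplace = log_joint(theta_map) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(-hess)

# BIC: log p(D | theta_mle) - (D/2) log n, with D = 1 parameter
theta_mle = minimize_scalar(lambda t: -log_lik(t), bounds=(-10, 10), method="bounded").x
bic = log_lik(theta_mle) - 0.5 * np.log(n)

# Quadrature reference on a fine grid
grid = np.linspace(-10, 10, 20001)
log_Z_quad = np.log(np.trapz(np.exp([log_joint(t) for t in grid]), grid))

print(f"Laplace   : {log_Z_laplace:.3f}")
print(f"BIC       : {bic:.3f}")
print(f"Quadrature: {log_Z_quad:.3f}")
```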
Importance Sampling Estimates
For an unbiased Monte Carlo estimate of the marginal likelihood, draw parameters from the prior and average the likelihoods:

$$p(\mathcal{D}) = \mathbb{E}_{p(\theta)}\big[p(\mathcal{D} \mid \theta)\big] \approx \frac{1}{S} \sum_{s=1}^{S} p(\mathcal{D} \mid \theta^{(s)}), \qquad \theta^{(s)} \sim p(\theta).$$

This estimate is unbiased but can have very high variance when the prior and posterior are misaligned. Importance sampling reduces variance by using a proposal $q(\theta)$ that targets high-likelihood regions:

$$p(\mathcal{D}) = \mathbb{E}_{q(\theta)}\!\left[\frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{q(\theta)}\right] \approx \frac{1}{S} \sum_{s=1}^{S} \frac{p(\mathcal{D} \mid \theta^{(s)})\, p(\theta^{(s)})}{q(\theta^{(s)})}, \qquad \theta^{(s)} \sim q(\theta).$$
The optimal proposal is the posterior itself (giving zero variance), but that is not available in practice.
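Both estimators are easy to see in a toy conjugate model where the exact evidence is available for comparison; in the sketch below the model, the proposal, and all settings are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.special import logsumexp

# Toy conjugate model (assumed for illustration): y_i ~ N(theta, 1), theta ~ N(0, 1),
# so the exact evidence is available in closed form.
rng = np.random.default_rng(2)
n = 30
y = rng.normal(2.0, 1.0, size=n)
S = 20_000

def log_lik(theta):
    # vectorized over a 1-D array of theta samples
    return np.sum(norm.logpdf(y[:, None], loc=theta, scale=1.0), axis=0)

# (1) Naive estimator: average likelihoods under prior draws
theta_prior = rng.normal(0.0, 1.0, size=S)
log_Z_naive = logsumexp(log_lik(theta_prior)) - np.log(S)

# (2) Importance sampling with a proposal centered on the posterior mean
post_mean, post_std = np.sum(y) / (n + 1), np.sqrt(1.0 / (n + 1))
q_mean, q_std = post_mean, 2.0 * post_std            # deliberately over-dispersed proposal
theta_q = rng.normal(q_mean, q_std, size=S)
log_w = (log_lik(theta_q) + norm.logpdf(theta_q, 0.0, 1.0)
         - norm.logpdf(theta_q, q_mean, q_std))
log_Z_is = logsumexp(log_w) - np.log(S)

# Exact evidence: y ~ N(0, I + 11^T) after integrating theta out
log_Z_exact = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                         cov=np.eye(n) + np.ones((n, n)))
print(f"naive (prior) : {log_Z_naive:.3f}")
print(f"importance    : {log_Z_is:.3f}")
print(f"exact         : {log_Z_exact:.3f}")
```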
Annealed importance sampling (AIS) (Neal, 2001) constructs a good proposal by defining a sequence of distributions that anneal from the prior to the posterior:

$$p_k(\theta) \propto p(\theta)\, p(\mathcal{D} \mid \theta)^{\beta_k}, \qquad 0 = \beta_0 < \beta_1 < \cdots < \beta_K = 1,$$

so that $p_0$ is the prior and $p_K$ is the posterior. Samples are propagated through this sequence via MCMC transition operators, yielding an importance weight $w = \prod_{k=1}^{K} p(\mathcal{D} \mid \theta_{k-1})^{\beta_k - \beta_{k-1}}$ (where $\theta_{k-1}$ is the sample before the $k$-th transition) whose average over independent runs provides an unbiased estimate of the marginal likelihood.
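A compact sketch of AIS for the same kind of toy Gaussian-mean model (the annealing schedule, step size, and number of runs are arbitrary choices): each run accumulates the incremental log-weights and then moves the sample with a Metropolis step that leaves the current tempered distribution invariant.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

# Toy model for illustration: y_i ~ N(theta, 1), theta ~ N(0, 1).
rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.0, size=30)

def log_prior(t):
    return norm.logpdf(t, 0.0, 1.0)

def log_lik(t):
    return np.sum(norm.logpdf(y, loc=t, scale=1.0))

betas = np.linspace(0.0, 1.0, 101)   # annealing schedule from prior (0) to posterior (1)
n_runs, step = 200, 0.5

log_weights = np.zeros(n_runs)
for r in range(n_runs):
    theta = rng.normal(0.0, 1.0)                       # start from the prior
    for k in range(1, len(betas)):
        # accumulate the incremental importance weight f_k / f_{k-1}
        log_weights[r] += (betas[k] - betas[k - 1]) * log_lik(theta)
        # one Metropolis step that leaves p(theta) p(D|theta)^{beta_k} invariant
        prop = theta + step * rng.standard_normal()
        log_acc = (log_prior(prop) + betas[k] * log_lik(prop)
                   - log_prior(theta) - betas[k] * log_lik(theta))
        if np.log(rng.random()) < log_acc:
            theta = prop

# Average the unbiased weights in log space; for this toy model the result can
# be checked against the exact evidence as in the previous sketch.
log_Z_ais = logsumexp(log_weights) - np.log(n_runs)
print(f"AIS estimate of log p(D): {log_Z_ais:.3f}")
```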
Empirical Bayes
Rather than fixing hyperparameters $\alpha$ and $\beta$ in advance, empirical Bayes (also called type-II maximum likelihood) chooses them by maximizing the marginal likelihood:

$$\hat{\alpha}, \hat{\beta} = \operatorname*{arg\,max}_{\alpha, \beta}\; p(\mathcal{D} \mid \alpha, \beta) = \operatorname*{arg\,max}_{\alpha, \beta} \int p(\mathcal{D} \mid \theta)\, p(\theta \mid \alpha, \beta)\, d\theta.$$
For exponential families this objective is available in closed form; for more complex models, the Laplace approximation or other methods can be used. Optimization is typically done via gradient descent.
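The sketch below illustrates type-II maximum likelihood for a made-up collection of Bernoulli groups that share a Beta$(\alpha, \beta)$ prior; for simplicity it maximizes the closed-form evidence with a derivative-free optimizer rather than gradients.

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

# Illustrative setting (not from the text): several Bernoulli groups share a
# Beta(alpha, beta) prior; empirical Bayes fits (alpha, beta) by maximizing
# the marginal likelihood summed over groups.
k = np.array([3, 5, 2, 7, 4, 6])        # successes per group
n = np.array([10, 12, 9, 11, 10, 13])   # trials per group

def neg_log_marginal(log_params):
    alpha, beta = np.exp(log_params)    # optimize on the log scale for positivity
    return -np.sum(betaln(alpha + k, beta + n - k) - betaln(alpha, beta))

res = minimize(neg_log_marginal, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"type-II ML hyperparameters: alpha = {alpha_hat:.2f}, beta = {beta_hat:.2f}")
print(f"implied prior mean rate   : {alpha_hat / (alpha_hat + beta_hat):.3f}")
```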
Caveats
Bayesian model comparison via the marginal likelihood requires a proper prior. As the prior becomes increasingly diffuse the marginal likelihood goes to zero, and under a truly improper prior it is defined only up to an arbitrary scale, so comparisons lose meaning.
It is most meaningful for finite, discrete sets of models $\mathcal{M}$.
The marginal likelihood does not measure generalization. It measures the expected probability of the observed data under the prior, not the probability of new data under the posterior.
Research on Bayesian model comparison and marginal likelihood estimation remains active (Lotfi et al., 2022).
Posterior Predictive Checks
Posterior Predictive Distribution
Given a fitted model, the posterior predictive distribution for a new observation $y_{\text{new}}$ at covariates $x_{\text{new}}$ is:

$$p(y_{\text{new}} \mid x_{\text{new}}, \mathcal{D}) = \int p(y_{\text{new}} \mid x_{\text{new}}, \theta)\, p(\theta \mid \mathcal{D})\, d\theta.$$
This can be approximated via Monte Carlo in general, or computed in closed form for conjugate models (e.g., Bayesian linear regression yields a Student-t predictive distribution).
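Here is a minimal Monte Carlo version for Bayesian linear regression with known noise variance (the prior scale, noise level, and data are assumptions): draw weights from the Gaussian posterior, then draw predictions at a new input.

```python
import numpy as np

# Monte Carlo posterior predictive for Bayesian linear regression with known
# noise variance (an illustrative sketch; prior scale and noise are assumptions).
rng = np.random.default_rng(4)
sigma, tau = 0.3, 2.0                    # observation noise sd, prior sd on weights

# Training data from a noisy line
x = np.linspace(0, 1, 25)
X = np.column_stack([np.ones_like(x), x])
y = 0.5 + 1.2 * x + sigma * rng.standard_normal(x.size)

# Conjugate Gaussian posterior over weights: N(w_n, V_n)
V_n = np.linalg.inv(np.eye(2) / tau**2 + X.T @ X / sigma**2)
w_n = V_n @ X.T @ y / sigma**2

# Posterior predictive at a new input via Monte Carlo:
# draw w ~ p(w | D), then y_new ~ p(y_new | x_new, w)
x_new = np.array([1.0, 0.8])
w_samples = rng.multivariate_normal(w_n, V_n, size=5000)
y_new = w_samples @ x_new + sigma * rng.standard_normal(5000)

print(f"predictive mean   : {y_new.mean():.3f}")
print(f"predictive 95% CI : [{np.quantile(y_new, 0.025):.3f}, {np.quantile(y_new, 0.975):.3f}]")
```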
Posterior Predictive Checks (PPCs)
Posterior predictive checks compare the observed data to data replicated from the posterior predictive distribution. The procedure is:
Draw $\theta^{(s)} \sim p(\theta \mid y)$ from the posterior.
Draw a replicated dataset $y^{\text{rep}\,(s)} \sim p(y^{\text{rep}} \mid \theta^{(s)})$.
Compare $y^{\text{rep}\,(s)}$ to the observed data $y$.
If the model is well-specified, the observed data should look like a plausible draw from the posterior predictive distribution.
Example — Newcomb’s speed of light. Using a simple Gaussian model with a flat prior, we can generate replicated datasets and compare their histograms to the original data. Systematic discrepancies indicate model misspecification.
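A sketch of that comparison, using stand-in data with a heavy left tail in place of Newcomb's measurements and the standard noninformative prior $p(\mu, \sigma^2) \propto 1/\sigma^2$ (an assumption about what 'flat prior' means here):

```python
import numpy as np
import matplotlib.pyplot as plt

# Posterior predictive check for a Gaussian model under the noninformative
# prior p(mu, sigma^2) ~ 1/sigma^2.  The data below are a stand-in with a
# heavy left tail, mimicking Newcomb's measurements.
rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(26, 5, size=60), [-44, -2]])
n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)

def draw_replicate():
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)     # sigma^2 | y ~ scaled Inv-chi^2
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))       # mu | sigma^2, y
    return rng.normal(mu, np.sqrt(sigma2), size=n)   # y_rep | mu, sigma^2

# Compare a few replicated histograms to the observed data
fig, axes = plt.subplots(1, 5, figsize=(15, 3), sharex=True, sharey=True)
axes[0].hist(y, bins=20)
axes[0].set_title("observed")
for ax in axes[1:]:
    ax.hist(draw_replicate(), bins=20)
    ax.set_title("replicated")
plt.show()
```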
Test Statistics
Rather than comparing full datasets, it is often easier to compare test statistics (or discrepancy measures) $T(y, \theta)$:
Compute $T(y, \theta^{(s)})$ for the observed data under each posterior draw.
Compute $T(y^{\text{rep}\,(s)}, \theta^{(s)})$ for the replicated data.
Compare the distributions.
Example. Using $T(y) = \min(y)$ for Newcomb’s data reveals that the Gaussian model fails to capture the outliers in the left tail — the minimum of the observed data is far smaller than the minimum of any replicated dataset.
The posterior predictive $p$-value formalizes this:

$$p_B = \Pr\!\big(T(y^{\text{rep}}, \theta) \ge T(y, \theta) \,\big|\, y\big),$$

where the probability is taken over $p(y^{\text{rep}}, \theta \mid y) = p(y^{\text{rep}} \mid \theta)\, p(\theta \mid y)$.
A $p$-value near 0 or 1 indicates a poor fit. In practice, the full distribution of $T(y^{\text{rep}}, \theta)$ compared with $T(y, \theta)$ is more informative than the scalar $p$-value.
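The same check expressed numerically (again with stand-in data rather than Newcomb's actual measurements): replicate datasets from the posterior, evaluate $T(y) = \min(y)$ on each, and compute the posterior predictive $p$-value.

```python
import numpy as np

# Posterior predictive p-value with T(y) = min(y) for the same Gaussian model
# and stand-in data as above (illustrative values, not Newcomb's records).
rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(26, 5, size=60), [-44, -2]])
n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)

S = 5000
T_rep = np.empty(S)
for s in range(S):
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)      # draw (mu, sigma^2) from the posterior
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    y_rep = rng.normal(mu, np.sqrt(sigma2), size=n)   # replicated dataset
    T_rep[s] = y_rep.min()                            # test statistic on the replicate

T_obs = y.min()
p_value = np.mean(T_rep >= T_obs)                     # Pr(T(y_rep) >= T(y) | y)
print(f"T(y) observed = {T_obs:.1f}, replicated range = "
      f"[{T_rep.min():.1f}, {T_rep.max():.1f}], p-value = {p_value:.3f}")
```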
Sensitivity Analysis
Beyond goodness-of-fit, it is important to assess how sensitive conclusions are to modeling choices:
Structural choices: try different model families (e.g., a Student-$t$ distribution instead of a Gaussian for robustness to outliers, hierarchical instead of pooled models).
Prior choices: vary prior hyperparameters and check that key inferential conclusions are stable (a minimal version of this check is sketched after this list).
Quantity sensitivity: extreme quantiles and extrapolations are more sensitive than means and interpolations.
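As a minimal example of a prior-sensitivity check (a sketch with a toy conjugate Gaussian-mean model and arbitrary prior scales), rerun the same inference under several priors and compare the resulting intervals:

```python
import numpy as np

# Minimal prior-sensitivity check for a conjugate Gaussian-mean model with
# known noise sd = 1: rerun inference under several prior scales and compare
# the posterior mean and 95% interval (settings are illustrative).
rng = np.random.default_rng(7)
y = rng.normal(1.5, 1.0, size=20)
n, ybar = y.size, y.mean()

for tau in (0.1, 1.0, 10.0):                  # prior: theta ~ N(0, tau^2)
    post_var = 1.0 / (1.0 / tau**2 + n)       # conjugate update with unit noise variance
    post_mean = post_var * n * ybar
    lo, hi = post_mean + np.array([-1.96, 1.96]) * np.sqrt(post_var)
    print(f"tau = {tau:5.1f}: posterior mean = {post_mean:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```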
Conclusion
This chapter covered two pillars of model criticism in the Bayesian framework. The marginal likelihood provides a principled score for comparing models that automatically penalizes unnecessary complexity (Occam’s razor), and it can be approximated via the Laplace approximation, importance sampling, or annealed importance sampling. Posterior predictive checks close the loop of Box’s loop: by generating replicated data from the fitted model and comparing them to the observed data using test statistics, we can detect systematic failures and guide model improvement. Together these tools support the iterative cycle of building, computing, and critiquing probabilistic models.
- Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2), 125–139.
- Lotfi, S., Izmailov, P., Benton, G., Goldblum, M., & Wilson, A. G. (2022). Bayesian Model Selection, the Marginal Likelihood, and Generalization. arXiv Preprint arXiv:2202.11678.