Comparing Bayesian models

Bayesian workshop - STEP 2023

Scott James Perry

University of Alberta

By the end of this lesson you will be able to…

  • explain what the leave-one-out information criterion (LOOIC) is trying to estimate
  • use LOOIC in brms to compare models
  • interpret LOOIC comparisons using rules of thumb from one of its creators

What about Bayes Factors?


  • LOOIC is not the only option for comparing models
  • Bayes Factors are another way to make inferences about models
  • They depend heavily on the choice of priors
  • We won’t discuss them further here

How should we evaluate a model?

LOOIC estimates out-of-sample predictive performance

  • LOOIC approximates how well our model will predict new data
  • It gives us the expected log pointwise predictive density (ELPD)
  • Comparisons come with a measure of uncertainty

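As a sketch of what this looks like in practice: calling loo() on a fitted brms model returns the ELPD estimate via the PSIS-LOO approximation. The model object `fit` below is hypothetical (any brmsfit works), and the commented brm() call is only an illustration.

```r
library(brms)

# Hypothetical fitted model, e.g.:
# fit <- brm(rt ~ condition + (1 | subject), data = d, family = lognormal())

loo_fit <- loo(fit)   # PSIS-LOO approximation of leave-one-out CV
print(loo_fit)        # reports elpd_loo, its SE, p_loo, and looic
```

Note that looic is just -2 * elpd_loo, placing the estimate on the familiar deviance scale.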



IMPORTANT:
Prediction and inference are two different goals. LOOIC will happily pick the ‘wrong’ model if it has better predictions.

We get a difference in ELPD plus its uncertainty

loo(m1, m2, m3, m4)


   elpd_diff se_diff
m4       0.0     0.0
m3     -25.2     8.8
m2    -171.0    19.4
m1    -505.9    26.7
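One rule of thumb from Aki Vehtari (one of the creators of the loo package) is that an |elpd_diff| smaller than about 4 is negligible; for larger differences, compare elpd_diff against its standard error. A base-R sketch of reading the table above (values copied from the comparison):

```r
# elpd_diff and se_diff copied from the loo comparison above
elpd_diff <- c(m4 = 0.0, m3 = -25.2, m2 = -171.0, m1 = -505.9)
se_diff   <- c(m4 = 0.0, m3 =   8.8, m2 =   19.4, m1 =   26.7)

# How many standard errors each model sits below the best one (m4);
# m4 itself is 0/0 and dropped
ratio <- elpd_diff[-1] / se_diff[-1]
round(ratio, 1)
```

Here every difference is several standard errors below zero, so m4 is clearly preferred on predictive grounds.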

LOOIC warnings - problematic observations


Observations with high Pareto k values threaten the validity of the LOOIC approximation



  • We can set moment_match = TRUE
  • We can refit the model for each problematic observation (reloo = TRUE)
  • We can perform K-fold CV
  • We can perform exact LOO CV
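These options map onto brms arguments roughly as follows; the model object `fit` is hypothetical, and each call is shown independently (you would pick one):

```r
library(brms)

# fit is a hypothetical brmsfit that produced high Pareto k warnings

loo(fit, moment_match = TRUE)  # moment matching for problematic draws
loo(fit, reloo = TRUE)         # refit once per problematic observation
kfold(fit, K = 10)             # 10-fold cross-validation
```

Exact LOO CV is the limiting case of K-fold CV with K equal to the number of observations, which is the most reliable option but also the most expensive.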

Let’s find out whether a lognormal model of reaction times predicts better



Open up script S4_E2_model_comparison.R