Bayesian workshop - STEP 2023
University of Alberta
We sample from a distribution proportional to the posterior
Your job:
You have:
https://chi-feng.github.io/mcmc-demo/app.html
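To build intuition for sampling from a distribution known only up to a proportionality constant, here is a minimal random-walk Metropolis sketch in R (an illustration only; Stan itself uses Hamiltonian Monte Carlo / NUTS). The target log_post() and the proposal scale 0.5 are made-up values for illustration:

# Random-walk Metropolis: sample from a density known only up to a constant.
# log_post() is a hypothetical unnormalized log posterior (standard normal here).
log_post <- function(theta) -0.5 * theta^2

set.seed(1)
n_iter <- 5000
draws  <- numeric(n_iter)
theta  <- 0                                            # starting value

for (i in seq_len(n_iter)) {
  proposal   <- rnorm(1, mean = theta, sd = 0.5)       # propose a nearby value
  log_accept <- log_post(proposal) - log_post(theta)   # normalizing constant cancels
  if (log(runif(1)) < log_accept) theta <- proposal    # accept, or keep current value
  draws[i] <- theta
}

hist(draws, breaks = 50)   # the draws approximate the target distribution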
brm()
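A hedged sketch of a brm() call; the data frame dat, the reaction-time outcome RT, and the scaled_Freq predictor are assumptions chosen to match the parameter names (b_Intercept, b_scaled_Freq, sigma) that appear in the draws later in this section:

library(brms)

# Fit a simple Gaussian regression with brm(); dat, RT, and scaled_Freq are hypothetical.
fit <- brm(
  formula = RT ~ scaled_Freq,
  data    = dat,
  family  = gaussian(),
  prior   = prior(normal(0, 100), class = b),   # illustrative prior, not prescriptive
  seed    = 2023
)

summary(fit)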
Let’s say you have a difficult math problem:
How confident are you going to be with any answer?
Same problem:
Would we all be more confident in the answer?
Iterations:
Warmup:
iter
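A hedged sketch of how these settings map onto brm() arguments; the numbers are illustrative choices, not prescriptions:

# Each chain runs `iter` total iterations; the first `warmup` are used to tune
# the sampler and then discarded. Here 4 chains x (2000 - 1000) = 4000 kept draws.
fit <- brm(
  RT ~ scaled_Freq,
  data   = dat,      # hypothetical data frame from the sketch above
  chains = 4,
  iter   = 2000,     # total iterations per chain
  warmup = 1000,     # discarded tuning iterations per chain
  cores  = 4         # run chains in parallel
)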
We want:
Exceeding max treedepth
Low ESS:
Stan has excellent documentation, and warnings often direct you to helpful pages
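A hedged sketch of common first responses to these warnings (rules of thumb, not the only fixes): raise adapt_delta and max_treedepth through the control argument, and run longer chains when ESS is low:

fit <- brm(
  RT ~ scaled_Freq,
  data    = dat,
  chains  = 4,
  iter    = 4000,                    # more post-warmup draws helps low ESS
  control = list(
    adapt_delta   = 0.95,            # smaller steps -> fewer divergences
    max_treedepth = 12               # allow deeper trajectories per iteration
  )
)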
After discarding the warmup draws, we are left with thousands of samples from the posterior distribution
What do we do with them?
Because we have samples from the posterior, making inferences is like doing descriptive statistics:
as_draws_df()
tidybayes
for processing models in tidyverse-style code
# A draws_df: 6 iterations, 1 chains, and 5 variables
b_Intercept b_scaled_Freq sigma lprior lp__
1 691 -82 235 -6.3 -1367
2 707 -83 220 -6.3 -1367
3 698 -92 263 -6.5 -1372
4 681 -62 207 -6.2 -1369
5 724 -80 211 -6.2 -1370
6 676 -57 243 -6.4 -1369
# ... hidden reserved variables {'.chain', '.iteration', '.draw'}
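A hedged sketch of producing draws like those shown above and summarizing them as descriptive statistics; fit is the hypothetical brms model from the earlier sketches:

library(posterior)

draws <- as_draws_df(fit)    # one row per post-warmup draw
head(draws)                  # prints a table like the one above

# Inference as descriptive statistics on the draws:
mean(draws$b_scaled_Freq)                          # posterior mean of the slope
quantile(draws$b_scaled_Freq, c(0.025, 0.975))     # 95% credible interval
mean(draws$b_scaled_Freq < 0)                      # posterior probability the slope is negative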
as_draws_df()
ggdist
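A hedged sketch of the tidybayes + ggdist workflow on the same hypothetical model; spread_draws() pulls named parameters into a tidy data frame, and stat_halfeye() draws the posterior as a density with a point and intervals:

library(tidybayes)
library(ggdist)
library(ggplot2)

fit |>
  spread_draws(b_scaled_Freq) |>      # tidy data frame of the slope draws
  ggplot(aes(x = b_scaled_Freq)) +
  stat_halfeye()                      # density + median + credible intervals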
We are going to practice handling and visualizing posterior draws in script S2_E2_processing_draws.R