Last updated: 2026-01-12
Checks: 7
Knit directory: fiveMinuteStats/analysis/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it's best to always run the code in an empty environment.
The command set.seed(12345) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version e3df0f7. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
working directory clean
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/bayes_conjugate_normal_mean.Rmd) and HTML (docs/bayes_conjugate_normal_mean.html) files. If you've configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | e3df0f7 | Peter Carbonetto | 2026-01-12 | Updates to the bayes_conjugate_normal_mean vignette. |
| html | a221240 | Peter Carbonetto | 2026-01-09 | Push a bunch of updates to the webpages. |
| Rmd | ee39c95 | Peter Carbonetto | 2026-01-09 | Created pdf version of bayes_conjugate_normal_mean vignette. |
| html | 5f62ee6 | Matthew Stephens | 2019-03-31 | Build site. |
| Rmd | 0cd28bd | Matthew Stephens | 2019-03-31 | workflowr::wflow_publish(all = TRUE) |
| html | f995d0a | stephens999 | 2018-04-23 | Build site. |
| Rmd | a7ec7b7 | stephens999 | 2018-04-23 | workflowr::wflow_publish(c("analysis/index.Rmd", |
See here for a PDF version of this vignette.
We consider computing the posterior distribution of \(\mu\) given data \(X \sim N(\mu,\sigma^2)\), where \(\sigma^2\) is known. You should be familiar with the idea of a conjugate prior.
This problem is really about algebraic manipulation.
There are two tricks to making the algebra a bit simpler. The first is to work with the precision \(\tau=1/\sigma^2\) instead of the variance \(\sigma^2\). So consider \(X \sim N(\mu,1/\tau)\).
The second trick is to rewrite the normal density slightly. First, let us recall the usual form for the normal density. If \(Y \sim N(\mu, 1/\tau)\), then it has density \[ p(y) = (\tau/2\pi)^{1/2} \textstyle \exp(-\frac{\tau}{2} (y-\mu)^2). \]
We can rewrite this as \[ p(y) \propto \textstyle \exp(-\frac{1}{2}\tau y^2 + \tau \mu y), \] or equivalently \[ p(y) \propto \textstyle \exp(-\frac{1}{2}Ay^2 + By), \] where \(A = \tau\) and \(B=\tau \mu\).
Thus, if \(p(y) \propto \exp(-\frac{1}{2}Ay^2 + By)\), then \(Y\) is normal with precision \(\tau= A\) and mean \(\mu= B/A\).
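This identity is easy to check numerically. The sketch below (with arbitrary illustrative values of \(A\) and \(B\), not taken from the text) normalizes \(\exp(-\frac{1}{2}Ay^2 + By)\) on a grid and compares it with dnorm evaluated at mean \(B/A\) and standard deviation \(1/\sqrt{A}\):

```r
# Numerical check of the quadratic-form trick: once normalized,
# exp(-A*y^2/2 + B*y) should match dnorm(y, mean = B/A, sd = sqrt(1/A)).
# A and B are arbitrary illustrative choices.
A <- 2.5
B <- 1.2
y  <- seq(-4, 4, length.out = 2001)
dy <- y[2] - y[1]
f <- exp(-0.5 * A * y^2 + B * y)
f <- f / (sum(f) * dy)                  # normalize on the grid
g <- dnorm(y, mean = B / A, sd = sqrt(1 / A))
max(abs(f - g))                         # should be close to zero
```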
Now let’s go back to the problem. Assume we observe a single data point, \(X \sim N(\mu, 1/\tau)\), with \(\tau\) known, and our goal is to do Bayesian inference for the mean \(\mu\).
As we will see, the conjugate prior for the mean \(\mu\) turns out to be a normal distribution. So we will assume the prior \[ \mu \sim N(\mu_0, 1/\tau_0). \] (Here, the “0” subscripts are used to indicate that \(\mu_0, \tau_0\) are parameters in the prior.)
Now we can compute the posterior density for \(\mu\) using Bayes Theorem: \[ \begin{aligned} p(\mu \mid X) &\propto p(X \mid \mu) \, p(\mu) \\ &\propto \textstyle \exp[-\frac{\tau}{2}(X-\mu)^2] \times \exp[-\frac{\tau_0}{2} (\mu-\mu_0)^2] \\ &\propto \textstyle \exp[-\frac{1}{2}(\tau+\tau_0)\mu^2 + (X\tau + \mu_0\tau_0)\mu]. \end{aligned} \] Using the result in “Preliminaries”, we obtain \[ \mu \mid X \sim N(\mu_1, 1/\tau_1), \] where \[ \begin{aligned} \tau_1 &= \tau + \tau_0 \\ \mu_1 &= \frac{X\tau+\mu_0 \tau_0}{\tau + \tau_0}. \end{aligned} \]
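As a sanity check on this derivation, the following sketch computes \(\tau_1\) and \(\mu_1\) from the formulas above and compares the analytic posterior mean with a brute-force grid posterior (likelihood times prior, renormalized). All numerical values here are illustrative choices, not taken from the text.

```r
# Conjugate posterior for mu given one observation X ~ N(mu, 1/tau),
# with prior mu ~ N(mu0, 1/tau0). Illustrative parameter values.
tau  <- 1     # data precision (known)
mu0  <- 0     # prior mean
tau0 <- 0.25  # prior precision
X    <- 3     # observed data point

tau1 <- tau + tau0
mu1  <- (X * tau + mu0 * tau0) / (tau + tau0)

# Grid check: posterior is proportional to likelihood times prior.
mu.grid <- seq(-5, 8, length.out = 4001)
d <- mu.grid[2] - mu.grid[1]
post <- dnorm(X, mu.grid, sqrt(1 / tau)) * dnorm(mu.grid, mu0, sqrt(1 / tau0))
post <- post / (sum(post) * d)
c(analytic = mu1, grid = sum(mu.grid * post) * d)
```

The two posterior means should agree up to the grid's discretization error.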
Although the algebra may look a little messy the first time you see it, in fact this result has some simple and elegant interpretations.
First, let us deal with the precision. Note that the posterior precision (\(\tau_1\)) is the sum of the data precision (\(\tau\)) and the prior precision (\(\tau_0\)). This makes sense: the more precise your data, and the more precise your prior information, the more precise your posterior information. This also means that the data always improve your posterior precision over the prior precision: noisy data (small \(\tau\)) improve it only a little, whereas precise data improve it a lot.
Second, let us deal with the mean. We can rewrite the posterior mean as \[ \mu_1 = w X + (1-w) \mu_0, \] where \(w = \tau/(\tau+\tau_0)\). Thus \(\mu_1\) is a weighted average of the data \(X\) and the prior mean \(\mu_0\). And the weights \(w, 1 - w\) depend on the relative precision of the data and the prior: if the data are precise compared with the prior (\(\tau \gg \tau_0\)), the weight \(w\) will be close to 1 and the posterior mean will be close to the data; if the data are imprecise compared with the prior (\(\tau \ll \tau_0\)), the weight \(w\) will be close to zero and the posterior mean will be close to the prior mean.
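To make the weighted-average form concrete, here is a small check (with the same kind of illustrative values, not from the text) that \(wX + (1-w)\mu_0\) with \(w = \tau/(\tau+\tau_0)\) reproduces the posterior-mean formula derived above:

```r
# The posterior mean as a weighted average of the data and the prior mean.
# Illustrative values: data precision tau, prior precision tau0.
tau <- 1; tau0 <- 0.25; mu0 <- 0; X <- 3
w <- tau / (tau + tau0)                      # weight on the data
mean.weighted <- w * X + (1 - w) * mu0
mean.formula  <- (X * tau + mu0 * tau0) / (tau + tau0)
c(w = w, weighted = mean.weighted, formula = mean.formula)
```

Here the data precision is four times the prior precision, so \(w = 0.8\) and the posterior mean sits much closer to \(X\) than to \(\mu_0\).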
You can see a visual illustration of this result in this shiny app.
sessionInfo()
# R version 4.3.3 (2024-02-29)
# Platform: aarch64-apple-darwin20 (64-bit)
# Running under: macOS 15.7.1
#
# Matrix products: default
# BLAS: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib
# LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0
#
# locale:
# [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#
# time zone: America/Chicago
# tzcode source: internal
#
# attached base packages:
# [1] stats graphics grDevices utils datasets methods base
#
# loaded via a namespace (and not attached):
# [1] vctrs_0.6.5 cli_3.6.5 knitr_1.50 rlang_1.1.6
# [5] xfun_0.52 stringi_1.8.7 promises_1.3.3 jsonlite_2.0.0
# [9] workflowr_1.7.1 glue_1.8.0 rprojroot_2.0.4 git2r_0.33.0
# [13] htmltools_0.5.8.1 httpuv_1.6.14 sass_0.4.10 rmarkdown_2.29
# [17] evaluate_1.0.4 jquerylib_0.1.4 tibble_3.3.0 fastmap_1.2.0
# [21] yaml_2.3.10 lifecycle_1.0.4 whisker_0.4.1 stringr_1.5.1
# [25] compiler_4.3.3 fs_1.6.6 Rcpp_1.1.0 pkgconfig_2.0.3
# [29] later_1.4.2 digest_0.6.37 R6_2.6.1 pillar_1.11.0
# [33] magrittr_2.0.3 bslib_0.9.0 tools_4.3.3 cachem_1.1.0