Last updated: 2026-01-14

Checks: 6 passed, 1 warning

Knit directory: fiveMinuteStats/analysis/

This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


The R Markdown file has unstaged changes. To know which version of the R Markdown file created these results, you’ll want to first commit it to the Git repo. If you’re still working on the analysis, you can ignore this warning. When you’re finished, you can run wflow_publish to commit the R Markdown file and build the HTML.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(12345) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version ec23313. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Unstaged changes:
    Modified:   analysis/bayes_beta_binomial.Rmd

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/bayes_beta_binomial.Rmd) and HTML (docs/bayes_beta_binomial.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
html ec23313 Peter Carbonetto 2026-01-14 Build site.
Rmd 29fc90b Peter Carbonetto 2026-01-14 Fixed use of propto for the likelihood.
Rmd d5f5166 Peter Carbonetto 2026-01-12 Fixed a link.
html 9f0f1a7 Peter Carbonetto 2026-01-12 Ran wflow_publish("analysis/bayes_beta_binomial.Rmd").
Rmd 98191c4 Peter Carbonetto 2026-01-12 A few more small updates to the bayes_beta_binomial vignette.
Rmd 2853350 Peter Carbonetto 2026-01-12 A few minor updates for the bayes_beta_binomial vignette.
Rmd dfbb324 Peter Carbonetto 2026-01-12 Updated links in bayes_beta_binomial vignette.
html a221240 Peter Carbonetto 2026-01-09 Push a bunch of updates to the webpages.
Rmd 25e1cf5 Peter Carbonetto 2026-01-08 Adding pdf versions of three other vignettes.
html 5f62ee6 Matthew Stephens 2019-03-31 Build site.
Rmd 0cd28bd Matthew Stephens 2019-03-31 workflowr::wflow_publish(all = TRUE)
html 34bcc51 John Blischak 2017-03-06 Build site.
Rmd 5fbc8b5 John Blischak 2017-03-06 Update workflowr project with wflow_update (version 0.4.0).
html 8e61683 Marcus Davy 2017-03-03 rendered html using wflow_build(all=TRUE)
Rmd d674141 Marcus Davy 2017-02-26 typos, refs
html aedeb92 stephens999 2017-02-19 Build site.
Rmd e7101d7 stephens999 2017-02-19 Files commited by wflow_commit.
html 7bc1873 stephens999 2017-01-28 Build site.
Rmd 35d9a16 stephens999 2017-01-28 Files commited by wflow_commit.
html 5d88119 stephens999 2017-01-25 Build site.
Rmd b48dd9c stephens999 2017-01-25 Files commited by wflow_commit.

See here for a PDF version of this vignette.

Overview

This vignette illustrates how to perform Bayesian inference for a continuous parameter, in this case a binomial proportion. In particular, it illustrates the mechanics of how we actually calculate the posterior distribution.

You should be familiar with the concepts of the likelihood function and Bayesian inference for discrete random variables. You should also be familiar with the binomial distribution and the Beta distribution.

Motivation

Suppose we sample 100 elephants from a population, and measure their DNA at one location (“locus”) in their genome, where there are two types (“alleles”). We label these alleles as “0” and “1”.

In our sample, we observe that 30 of the elephants have the 1 allele, and 70 have the 0 allele. What can we say about the frequency (\(q\)) of the 1 allele in the population?

Technical note: To simplify this problem, I have assumed that elephants are haploid, which they are not. If you do not know what this means, you can simply ignore this comment.

Bayesian inference: calculating the posterior

Here we are doing inference for a parameter, \(q\), that can, in principle, take any value between 0 and 1. That is, we are doing inference for a “continuous” parameter. Bayesian inference for a continuous parameter proceeds in essentially the same way as Bayesian inference for a discrete quantity except that probability mass functions get replaced by densities.

Remember Bayes Theorem: \[ \text{posterior} \propto \text{likelihood} \times \text{prior}. \] To apply this, we need to have both the prior distribution and the likelihood.

Likelihood

Here, the likelihood for \(q\) is \[ L(q) := p(D \mid q) = q^{30} (1-q)^{70}, \] where \(D\) here denotes the data. This expression comes from the fact that the data consist of 30 “1” alleles (each of which occur with probability \(q\)) and 70 “0” alleles (each of which occur with probability \(1-q\)), and we assume that the samples are independent. (You might have heard this likelihood called the “binomial likelihood”, because it arises when the data come from a binomial distribution.)
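To get a feel for this likelihood, we can evaluate it on a grid of values of \(q\). Here is a quick sketch in R (the function and variable names here are our own, for illustration):

```r
# Unnormalized binomial likelihood for 30 "1" alleles and 70 "0" alleles.
lik <- function (q) q^30 * (1 - q)^70

# Evaluate the likelihood on a fine grid of allele frequencies.
q <- seq(0, 1, length.out = 1001)
plot(q, lik(q), type = "l", xlab = "q", ylab = "L(q)")

# The likelihood is maximized at the sample frequency, 30/100 = 0.3.
q[which.max(lik(q))]
```

Note that the likelihood is peaked near the observed allele frequency of 0.3, which is the maximum likelihood estimate of \(q\).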

Prior

Recall that the prior distribution is a distribution that is supposed to reflect what we know about \(q\) before (“prior to”) seeing the data. For illustration, we will assume a uniform prior on \(q\), \[ q \sim U[0,1]. \] That is, \[ p(q) = 1, \quad q \in [0,1]. \]
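As an aside that will be useful below, the uniform distribution on \([0,1]\) is the special case \(\mathrm{Beta}(1,1)\) of the Beta distribution. A quick check in R:

```r
# The U[0,1] density is identically 1 on [0,1], and coincides with the
# Beta(1, 1) density.
q <- seq(0.05, 0.95, by = 0.05)
all.equal(dunif(q), dbeta(q, 1, 1))
```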

This prior says many things. For example, it says that, before seeing the data, a \(q\) less than 0.5 is just as plausible as a \(q\) greater than 0.5. It also says that a \(q\) less than 0.1 is just as plausible as a \(q\) greater than 0.9 or a \(q\) between 0.45 and 0.55. If for some reason these are not equally plausible, then you should use a different prior. However, in practice it is sometimes (but not always!) the case that the results of Bayesian inference are robust to the choice of prior distribution, and in such cases it is common not to worry too much about minor discrepancies between what you believe and what the prior implies.

For now, we are simply aiming to show how the Bayesian calculations are done under this prior.

Posterior calculation

Using Bayes Theorem to combine the prior distribution and the likelihood, \[ p(q \mid D) \propto p(D \mid q) \, p(q), \] we obtain \[ p(q \mid D) \propto q^{30} (1-q)^{70}. \] Because \(q\) is a continuous parameter, this is called the posterior density for \(q\).

Now the final trick is to notice that this density, \(q^{30} (1-q)^{70}\), is exactly the density of a Beta distribution (up to a constant of proportionality). Specifically, it is the density of a \(\mathrm{Beta}(31, 71)\) distribution. So the posterior distribution for \(q\) is \(\mathrm{Beta}(31, 71)\), which we write as \(q \mid D \sim \mathrm{Beta}(31, 71)\).
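We can check this “recognition” numerically: the unnormalized posterior \(q^{30} (1-q)^{70}\) should differ from the \(\mathrm{Beta}(31, 71)\) density only by a constant factor. A sketch in R:

```r
# Unnormalized posterior density.
post <- function (q) q^30 * (1 - q)^70

# The ratio to the Beta(31, 71) density should be the same constant for
# every value of q.
q <- c(0.1, 0.2, 0.3, 0.4, 0.5)
post(q) / dbeta(q, 31, 71)

# That constant is the normalizing constant, the Beta function B(31, 71).
beta(31, 71)
```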

This kind of trick is common in Bayesian inference: you look at the posterior density and “recognize” it as a distribution you know. It turns out that the number of distributions in common use is relatively small, so you only need to learn a few distributions to get sufficiently good at this trick for practical purposes. For example, it is a good start to be able to recognize the following distributions: exponential, binomial, Poisson, Gamma, Beta, Dirichlet and normal. If your posterior distribution does not look like one of these, then you may be in a situation where you need to use computational methods like importance sampling or Markov chain Monte Carlo (MCMC) to do your computations.

In this case, we were lucky: the posterior distribution is a distribution that we recognize, and this means we can do many calculations very easily. R has many built-in functions for calculations with the Beta distribution, and many analytic properties have been derived (e.g., see the Wikipedia page). And we can use this result to summarize and interpret the posterior distribution, as we illustrate here.
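For example, using R’s built-in Beta distribution functions, we can summarize the \(\mathrm{Beta}(31, 71)\) posterior in a few lines (a sketch):

```r
# Posterior mean of q | D ~ Beta(31, 71) is a/(a + b) = 31/102.
31 / (31 + 71)

# Posterior probability that q is less than 0.5.
pbeta(0.5, 31, 71)

# An equal-tailed 95% credible interval for q.
qbeta(c(0.025, 0.975), 31, 71)
```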

Summary

  • To compute the posterior density of a continuous parameter, up to a normalizing constant, you multiply the likelihood by the prior density.

  • In simple cases, you may find that the result is the density of a distribution you recognize. If so, you can often use known properties of that distribution to compute quantities of interest. See here for an example.

  • In cases where you do not recognize the posterior distribution, you may need to use computational methods like importance sampling or Markov chain Monte Carlo to compute quantities of interest.
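To illustrate the last point, here is a minimal random-walk Metropolis sampler for this posterior, written only as a sketch (the step size and chain length are arbitrary choices). In this example we already know the exact posterior, \(\mathrm{Beta}(31, 71)\), which gives us a check on the sampler:

```r
set.seed(1)

# Unnormalized log-posterior, log[q^30 (1 - q)^70], for q in (0, 1).
log_post <- function (q)
  if (q <= 0 || q >= 1) -Inf else 30*log(q) + 70*log(1 - q)

n     <- 10000
qs    <- numeric(n)
qs[1] <- 0.5
for (i in 2:n) {
  prop <- qs[i-1] + rnorm(1, sd = 0.05)   # random-walk proposal
  # Accept with probability min(1, posterior ratio).
  if (log(runif(1)) < log_post(prop) - log_post(qs[i-1]))
    qs[i] <- prop
  else
    qs[i] <- qs[i-1]
}

# The sample mean should be close to the exact posterior mean, 31/102.
mean(qs)
```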


sessionInfo()
# R version 4.3.3 (2024-02-29)
# Platform: aarch64-apple-darwin20 (64-bit)
# Running under: macOS 15.7.1
# 
# Matrix products: default
# BLAS:   /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib 
# LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.11.0
# 
# locale:
# [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
# 
# time zone: America/Chicago
# tzcode source: internal
# 
# attached base packages:
# [1] stats     graphics  grDevices utils     datasets  methods   base     
# 
# loaded via a namespace (and not attached):
#  [1] vctrs_0.6.5       cli_3.6.5         knitr_1.50        rlang_1.1.6      
#  [5] xfun_0.52         stringi_1.8.7     promises_1.3.3    jsonlite_2.0.0   
#  [9] workflowr_1.7.1   glue_1.8.0        rprojroot_2.0.4   git2r_0.33.0     
# [13] htmltools_0.5.8.1 httpuv_1.6.14     sass_0.4.10       rmarkdown_2.29   
# [17] evaluate_1.0.4    jquerylib_0.1.4   tibble_3.3.0      fastmap_1.2.0    
# [21] yaml_2.3.10       lifecycle_1.0.4   whisker_0.4.1     stringr_1.5.1    
# [25] compiler_4.3.3    fs_1.6.6          Rcpp_1.1.0        pkgconfig_2.0.3  
# [29] later_1.4.2       digest_0.6.37     R6_2.6.1          pillar_1.11.0    
# [33] magrittr_2.0.3    bslib_0.9.0       tools_4.3.3       cachem_1.1.0