Last updated: 2026-01-23
Checks: 7 passed, 0 failed
Knit directory: fiveMinuteStats/analysis/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(12345) was run prior to running the
code in the R Markdown file. Setting a seed ensures that any results
that rely on randomness, e.g. subsampling or permutations, are
reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 415a5ca. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for
the analysis have been committed to Git prior to generating the results
(you can use wflow_publish or
wflow_git_commit). workflowr only checks the R Markdown
file, but you know if there are other scripts or data files that it
depends on. Below is the status of the Git repository when the results
were generated:
Untracked files:
Untracked: figure/
Untracked: temp.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were
made to the R Markdown (analysis/MH_intro.Rmd) and HTML
(docs/MH_intro.html) files. If you’ve configured a remote
Git repository (see ?wflow_git_remote), click on the
hyperlinks in the table below to view the files as they were in that
past version.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | 415a5ca | Peter Carbonetto | 2026-01-23 | A few more updates to the MH_intro vignette. |
| Rmd | 9431f0f | Peter Carbonetto | 2026-01-23 | Updates to the MH_intro vignette. |
| Rmd | 656ff76 | Peter Carbonetto | 2026-01-23 | Some basic formatting fix to the MH_intro vignette. |
| html | 656ff76 | Peter Carbonetto | 2026-01-23 | Some basic formatting fix to the MH_intro vignette. |
| html | 5f62ee6 | Matthew Stephens | 2019-03-31 | Build site. |
| Rmd | 0cd28bd | Matthew Stephens | 2019-03-31 | workflowr::wflow_publish(all = TRUE) |
| html | 6053d26 | stephens999 | 2018-04-23 | Build site. |
| Rmd | 1077c97 | stephens999 | 2018-04-23 | workflowr::wflow_publish(c("analysis/index.Rmd", "analysis/MH_intro.Rmd", |
See here for a PDF version of this vignette.
You should be familiar with the concept of a Markov chain and its stationary distribution.
The Metropolis-Hastings algorithm is a beautifully simple algorithm for producing samples from distributions that may otherwise be difficult to sample from.
Suppose we want to sample from a distribution \(\pi\), which we will call the “target distribution”. For simplicity we assume that \(\pi\) is a one-dimensional distribution on the real line, although it is easy to extend to more than one dimension (see below).
The M-H algorithm works by simulating a Markov chain whose stationary distribution is \(\pi\). This means that, in the long run, the samples from the Markov chain look like samples from \(\pi\). As we will see, the algorithm is incredibly simple and flexible. Its main limitation is that, for difficult problems, “in the long run” may mean after a very long time. However, for simple problems the algorithm can work well.
To implement the M-H algorithm, the user (you!) must provide a “transition kernel”, \(Q\). A transition kernel is simply a way of moving, randomly, to a new position in space (\(y\) say), given a current position (\(x\) say). That is, \(Q\) is a distribution on \(y\) given \(x\), and we will write it \(Q(y \mid x)\). In many applications, \(Q\) will be a continuous distribution, in which case \(Q(y \mid x)\) will be a density on \(y\), and so \(\int Q(y \mid x) \, dy = 1\) (for all \(x\)).
For example, a very simple way to generate a new position \(y\) from a current position \(x\) is to add an \(N(0,1)\) random number to \(x\). That is, set \(y = x + N(0,1)\), or \(y \mid x \sim N(x,1)\). So we have \[ Q(y \mid x) = \frac{1}{\sqrt{2\pi}} \times \textstyle \exp\big[-\frac{1}{2}(y-x)^2\big]. \] This kind of transition kernel, which adds some random number to the current position \(x\) to obtain \(y\), is often used in practice and is called a “random walk” transition kernel.
Because of the role \(Q\) plays in the M-H algorithm (see below), it is also sometimes called the “proposal distribution”. And the example given above would be called a “random walk proposal”.
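To make this concrete, here is a small R sketch of the random-walk proposal; the function names `propose` and `Q` are our own, chosen for this illustration:

```r
# Sample a proposed position y given the current position x
# (random-walk proposal: y | x ~ N(x, 1)).
propose <- function (x)
  x + rnorm(1)

# Evaluate the proposal density Q(y | x).
Q <- function (y, x)
  dnorm(y, mean = x, sd = 1)
```

Note that `Q(y, x)` equals `Q(x, y)` for any `x` and `y`; that is, this proposal is symmetric.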
The M-H algorithm for sampling from a target distribution \(\pi\), using transition kernel \(Q\), consists of the following steps:
Initialize the Markov chain, \(X_1 = x_1\).
For \(t = 1,2,\ldots\)
Sample \(y\) from \(Q(y \mid x_t)\). Think of \(y\) as a “proposed” value for \(x_{t+1}\).
Compute \(A = \min \big\{1, \frac{\pi(y)Q(x_t \mid y)}{\pi(x_t)Q(y \mid x_t)} \big\}.\) \(A\) is often called the “acceptance probability”.
With probability \(A\), “accept” the proposed value, and set \(x_{t+1} = y\). Otherwise, set \(x_{t+1} = x_t\).
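The steps above can be sketched as a single R function for one update of the chain. The argument names `target` (computes \(\pi\)), `rQ` (samples from \(Q(\cdot \mid x)\)), and `dQ` (evaluates \(Q(y \mid x)\)) are hypothetical, chosen for this sketch:

```r
# One Metropolis-Hastings update: propose y from Q(. | x), then
# accept it with probability A; otherwise stay at x.
mh_step <- function (x, target, rQ, dQ) {
  y <- rQ(x)
  A <- min(1, target(y) * dQ(x, y) / (target(x) * dQ(y, x)))
  if (runif(1) < A) y else x
}
```

Iterating `mh_step` produces the Markov chain \(X_1, X_2, \ldots\).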
Notice that the example random walk proposal \(Q\) given above satisfies \(Q(y \mid x) = Q(x \mid y)\) for all \(x, y\). Any proposal that satisfies this is called “symmetric”. When \(Q\) is symmetric, the formula for \(A\) in the M-H algorithm simplifies to \[ A = \min \bigg\{1, \frac{\pi(y)}{\pi(x_t)} \bigg\}. \]
This special case of the algorithm, with \(Q\) symmetric, was first presented by Metropolis et al (1953), and for this reason it is sometimes called the “Metropolis algorithm”.
In 1970, Hastings presented the more general version, now known as the M-H algorithm, which allows \(Q\) to be asymmetric. Specifically, Hastings modified the acceptance probability by introducing the ratio \(Q(x_t \mid y)/Q(y \mid x_t)\). This ratio is sometimes called the “Hastings ratio”.
To help understand the M-H algorithm we now do a simple example: we implement the algorithm to sample from an exponential distribution with \(\lambda = 1\): \[ \pi(x) = e^{-x}, \quad x \geq 0. \] (Of course it would be much easier to sample from an exponential distribution in other ways; we are just using this to illustrate the algorithm.)
Remember that \(\pi\) is called the “target” distribution, so we call our function to compute \(\pi\) “target”:
target <- function (x)
  ifelse(x < 0, 0, exp(-x))
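As a quick sanity check (our addition, not part of the original analysis), this density integrates to 1 over its support:

```r
# The target defined above, repeated here for self-containment.
target <- function (x)
  ifelse(x < 0, 0, exp(-x))

# Numerically integrate the density over [0, Inf); the result
# should be (approximately) 1.
integrate(target, 0, Inf)
```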
Now we implement the M-H algorithm using the simple normal random walk transition kernel \(Q\) mentioned above. Since this \(Q\) is symmetric, the Hastings ratio is 1, and we get the simpler form for the acceptance probability \(A\) in the Metropolis algorithm.
Here is the code:
x <- rep(0, 10000)  # storage for the samples
x[1] <- 3           # initial value of the chain
for (i in 2:10000) {
  current_x <- x[i-1]
  proposed_x <- current_x + rnorm(1)  # random walk proposal
  # Acceptance probability; since Q is symmetric the Hastings
  # ratio is 1. (A may exceed 1, but the comparison below still
  # accepts with probability min(1, A).)
  A <- target(proposed_x)/target(current_x)
  u <- runif(1)
  if (u < A)
    x[i] <- proposed_x  # accept the proposal
  else
    x[i] <- current_x   # reject; stay at the current value
}
Having run this code, we can plot the locations visited by the Markov chain, \(x\) (sometimes called a “trace plot”).
plot(x, xlab = "time", main = "values of x visited by the M-H algorithm")

Remember that we designed this algorithm to sample from an exponential distribution. This means that — provided we ran the algorithm for long enough! — the histogram of \(x\) should look like an exponential distribution. Here we check this:
hist(x, xlim = c(0,10), probability = TRUE, main = "")
xx <- seq(0, 10, length = 100)
lines(xx, target(xx), col = "red")

So we see that the histogram of the values in \(x\) indeed provides a close fit to an exponential distribution.
One particularly useful feature of the M-H algorithm is that it can be implemented even when \(\pi\) is known only up to a constant: that is, \(\pi(x) = c f(x)\) for some known \(f\) with unknown constant \(c\). This is because the algorithm depends on \(\pi\) only through the ratio: \[ \frac{\pi(y)}{\pi(x_t)} = \frac{cf(y)}{cf(x_t)} = \frac{f(y)}{f(x_t)}. \]
This issue arises in Bayesian applications, where the posterior distribution is proportional to the prior times the likelihood, but the constant of proportionality is often unknown. So the M-H algorithm is particularly useful for sampling from posterior distributions in Bayesian calculations that are analytically intractable.
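To see the cancellation concretely, consider a hypothetical unnormalized version \(f(x) = 2e^{-x}\) of the exponential target from before. The acceptance ratio computed from \(f\) is identical to the one computed from \(\pi\):

```r
target <- function (x)   # normalized target, pi(x) = exp(-x)
  ifelse(x < 0, 0, exp(-x))

f <- function (x)        # unnormalized version: f(x) = 2 * pi(x)
  ifelse(x < 0, 0, 2 * exp(-x))

# The constant 2 cancels in the ratio, so the M-H algorithm run
# with f accepts and rejects exactly as it would with target.
f(1) / f(2)              # = exp(1)
target(1) / target(2)    # = exp(1)
```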
sessionInfo()
# R version 4.3.3 (2024-02-29)
# Platform: aarch64-apple-darwin20 (64-bit)
# Running under: macOS 15.7.1
#
# Matrix products: default
# BLAS: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib
# LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0
#
# locale:
# [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#
# time zone: America/Chicago
# tzcode source: internal
#
# attached base packages:
# [1] stats graphics grDevices utils datasets methods base
#
# loaded via a namespace (and not attached):
# [1] vctrs_0.6.5 cli_3.6.5 knitr_1.50 rlang_1.1.6
# [5] xfun_0.52 stringi_1.8.7 promises_1.3.3 jsonlite_2.0.0
# [9] workflowr_1.7.1 glue_1.8.0 rprojroot_2.0.4 git2r_0.33.0
# [13] htmltools_0.5.8.1 httpuv_1.6.14 sass_0.4.10 rmarkdown_2.29
# [17] evaluate_1.0.4 jquerylib_0.1.4 tibble_3.3.0 fastmap_1.2.0
# [21] yaml_2.3.10 lifecycle_1.0.4 whisker_0.4.1 stringr_1.5.1
# [25] compiler_4.3.3 fs_1.6.6 Rcpp_1.1.0 pkgconfig_2.0.3
# [29] later_1.4.2 digest_0.6.37 R6_2.6.1 pillar_1.11.0
# [33] magrittr_2.0.3 bslib_0.9.0 tools_4.3.3 cachem_1.1.0