  • Introduction
  • Preliminaries
  • Posterior Calculation
  • Interpretation

Last updated: 2019-03-31


Introduction

We consider computing the posterior distribution of $\mu$ given data $X \sim N(\mu, \sigma^2)$, where $\sigma^2$ is known. You should be familiar with the idea of a conjugate prior.

Preliminaries

This problem is really about algebraic manipulation.

There are two tricks that make the algebra a bit simpler. The first is to work with the precision $\tau = 1/\sigma^2$ instead of the variance $\sigma^2$. So consider $X \sim N(\mu, 1/\tau)$.

The second trick is to rewrite the normal density slightly. First, let us recall the usual form for the normal density. If $Y \sim N(\mu, 1/\tau)$ then it has density
$$p(y) = (\tau/2\pi)^{0.5} \exp\left(-0.5\,\tau (y-\mu)^2\right).$$

We can rewrite this as $p(y) \propto \exp(-0.5\,\tau y^2 + \tau\mu y)$. Or, equivalently, $p(y) \propto \exp(-0.5\,A y^2 + B y)$ where $A = \tau$ and $B = \tau\mu$.

Thus if $p(y) \propto \exp(-0.5\,A y^2 + B y)$ then $Y$ is normal with precision $\tau = A$ and mean $\mu = B/A$.
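For completeness, here is the algebra behind that rewriting: expanding the square and dropping factors that do not involve $y$,
$$p(y) = (\tau/2\pi)^{0.5}\exp\left(-0.5\,\tau(y-\mu)^2\right) \propto \exp\left(-0.5\,\tau y^2 + \tau\mu y - 0.5\,\tau\mu^2\right) \propto \exp\left(-0.5\,\tau y^2 + \tau\mu y\right),$$
so matching coefficients with $\exp(-0.5\,A y^2 + B y)$ gives $A = \tau$ and $B = \tau\mu$.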

Posterior Calculation

Now, back to the problem. Assume we observe a single data point $X \sim N(\mu, 1/\tau)$, with $\tau$ known, and our goal is to do Bayesian inference for the mean $\mu$.

As we will see, the conjugate prior for the mean $\mu$ turns out to be a normal distribution. So we will assume a prior $\mu \sim N(\mu_0, 1/\tau_0)$. (Here the 0 subscript is being used to indicate that $\mu_0, \tau_0$ are parameters in the prior.)

Now we can compute the posterior density for $\mu$ using Bayes' Theorem:
$$p(\mu \mid X) \propto p(X \mid \mu)\, p(\mu) \propto \exp\left[-0.5\,\tau (X-\mu)^2\right] \exp\left[-0.5\,\tau_0 (\mu-\mu_0)^2\right] \propto \exp\left[-0.5\,(\tau+\tau_0)\mu^2 + (X\tau + \mu_0\tau_0)\mu\right].$$

From the result in "Preliminaries" above we see that $\mu \mid X \sim N(\mu_1, 1/\tau_1)$, where $\tau_1 = \tau + \tau_0$ and $\mu_1 = (X\tau + \mu_0\tau_0)/(\tau + \tau_0)$.
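As a quick numerical check, here is a minimal R sketch (not part of the original analysis; the function name norm_posterior and the example values are illustrative) that computes these posterior parameters and compares the result with a brute-force grid approximation of the posterior:

norm_posterior <- function(x, tau, mu0, tau0) {
  # Posterior of mu under prior N(mu0, 1/tau0), given one observation x ~ N(mu, 1/tau)
  tau1 <- tau + tau0                      # posterior precision
  mu1  <- (x * tau + mu0 * tau0) / tau1   # posterior mean
  list(mean = mu1, precision = tau1)
}

post <- norm_posterior(x = 2, tau = 1, mu0 = 0, tau0 = 4)
post$mean       # (2*1 + 0*4) / 5 = 0.4
post$precision  # 1 + 4 = 5

# Brute-force check: normalize prior x likelihood on a grid and compare
# with the claimed N(mu1, 1/tau1) density.
mu.grid   <- seq(-5, 5, length.out = 10000)
unnorm    <- dnorm(2, mean = mu.grid, sd = 1) * dnorm(mu.grid, mean = 0, sd = 0.5)
grid.post <- unnorm / (sum(unnorm) * diff(mu.grid)[1])
max(abs(grid.post - dnorm(mu.grid, post$mean, 1/sqrt(post$precision))))  # ~ 0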

Interpretation

Although the algebra may look a little messy the first time you see it, this result has some simple and elegant interpretations.

First, let us deal with the precision. Note that the posterior precision ($\tau_1$) is the sum of the data precision ($\tau$) and the prior precision ($\tau_0$). This makes sense: the more precise your data, and the more precise your prior information, the more precise your posterior information. Also, this means that the data always improves your posterior precision compared with the prior: noisy data (small $\tau$) improves it only a little, whereas precise data improves it a lot.

Second, let us deal with the mean. We can rewrite the posterior mean as $\mu_1 = wX + (1-w)\mu_0$, where $w = \tau/(\tau+\tau_0)$. Thus $\mu_1$ is a weighted average of the data $X$ and the prior mean $\mu_0$, and the weights depend on the relative precision of the data and the prior. If the data are precise compared with the prior ($\tau \gg \tau_0$) then the weight $w$ will be close to 1 and the posterior mean will be close to the data.

In contrast, if the data are imprecise compared with the prior ($\tau \ll \tau_0$) then the weight $w$ will be close to 0 and the posterior mean will be close to the prior mean.
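To make this concrete, here is a small R sketch (the values of x, mu0, tau, and tau0 below are arbitrary illustrations, not taken from the text) showing how the weight $w$ pulls the posterior mean toward the data or the prior:

# Posterior mean mu1 = w*X + (1-w)*mu0 with w = tau / (tau + tau0)
x <- 2; mu0 <- 0; tau0 <- 1

# Precise data relative to the prior (tau >> tau0): w close to 1
tau <- 100
w   <- tau / (tau + tau0)
c(w = w, posterior_mean = w * x + (1 - w) * mu0)  # w ~ 0.99, mean ~ 1.98 (near x)

# Imprecise data relative to the prior (tau << tau0): w close to 0
tau <- 0.01
w   <- tau / (tau + tau0)
c(w = w, posterior_mean = w * x + (1 - w) * mu0)  # w ~ 0.01, mean ~ 0.02 (near mu0)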

You can see a visual illustration of this result in this shiny app.
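The shiny app itself is not reproduced here, but a minimal base-R sketch of the same picture (again with arbitrary illustrative values for x, tau, mu0, and tau0) plots the prior, the likelihood as a function of $\mu$, and the posterior:

x <- 2; tau <- 1; mu0 <- 0; tau0 <- 4
tau1 <- tau + tau0
mu1  <- (x * tau + mu0 * tau0) / tau1

mu <- seq(-3, 4, length.out = 500)
plot(mu, dnorm(mu, mu1, 1/sqrt(tau1)), type = "l",
     xlab = expression(mu), ylab = "density")                  # posterior
lines(mu, dnorm(mu, mu0, 1/sqrt(tau0)), col = "blue")          # prior
lines(mu, dnorm(x, mean = mu, sd = 1/sqrt(tau)), col = "red")  # likelihood
legend("topleft", legend = c("posterior", "prior", "likelihood"),
       col = c("black", "blue", "red"), lty = 1)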



sessionInfo()
R version 3.5.2 (2018-12-20)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Mojave 10.14.1

Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
 [1] workflowr_1.2.0 Rcpp_1.0.0      digest_0.6.18   rprojroot_1.3-2
 [5] backports_1.1.3 git2r_0.24.0    magrittr_1.5    evaluate_0.12  
 [9] stringi_1.2.4   fs_1.2.6        whisker_0.3-2   rmarkdown_1.11 
[13] tools_3.5.2     stringr_1.3.1   glue_1.3.0      xfun_0.4       
[17] yaml_2.2.0      compiler_3.5.2  htmltools_0.3.6 knitr_1.21     

This site was created with R Markdown