  • Introduction
  • Preliminaries
  • Posterior Calculation
  • Interpretation

Introduction

We consider computing the posterior distribution of $\mu$ given data $X \sim N(\mu, \sigma^2)$ where $\sigma^2$ is known. You should be familiar with the idea of a conjugate prior.

Preliminaries

This problem is really about algebraic manipulation.

There are two tricks to making the algebra a bit simpler. The first is to work with the precision $\tau = 1/\sigma^2$ instead of the variance $\sigma^2$. So consider $X \sim N(\mu, 1/\tau)$.

The second trick is to rewrite the normal density slightly. First, let us recall the usual form for the normal density. If $Y \sim N(\mu, 1/\tau)$ then it has density:
$$p(y) = \left(\frac{\tau}{2\pi}\right)^{0.5} \exp\left(-0.5\tau(y-\mu)^2\right).$$

We can rewrite this by expanding the square and dropping the factor $\exp(-0.5\tau\mu^2)$, which does not depend on $y$: $p(y) \propto \exp(-0.5\tau y^2 + \tau\mu y)$. Or, equivalently: $p(y) \propto \exp(-0.5Ay^2 + By)$ where $A = \tau$ and $B = \tau\mu$.

Thus if $p(y) \propto \exp(-0.5Ay^2 + By)$ then $Y$ is normal with precision $\tau = A$ and mean $\mu = B/A$.
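This identification is easy to check numerically. Here is a minimal R sketch (the values of $A$ and $B$ are arbitrary choices of mine, not from the text): it normalizes $\exp(-0.5Ay^2 + By)$ on a grid and compares the result with the density of $N(B/A, 1/A)$.

```r
# Check that exp(-0.5*A*y^2 + B*y), once normalized, matches the
# N(B/A, 1/A) density. A plays the role of the precision tau.
A <- 2.5                                    # precision
B <- 1.2                                    # so the mean should be B/A = 0.48
y <- seq(-4, 4, length.out = 1000)
f <- exp(-0.5 * A * y^2 + B * y)
f <- f / (sum(f) * diff(y)[1])              # numerical normalization
g <- dnorm(y, mean = B / A, sd = 1 / sqrt(A))
max(abs(f - g))                             # should be near 0
```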

Posterior Calculation

Now, back to the problem. Assume we observe a single data point $X \sim N(\mu, 1/\tau)$, with $\tau$ known, and our goal is to do Bayesian inference for the mean $\mu$.

As we will see, the conjugate prior for the mean $\mu$ turns out to be a normal distribution. So we will assume a prior: $\mu \sim N(\mu_0, 1/\tau_0)$. (Here the 0 subscript is being used to indicate that $\mu_0, \tau_0$ are parameters in the prior.)

Now we can compute the posterior density for $\mu$ using Bayes' Theorem:
$$\begin{aligned}
p(\mu \mid X) &\propto p(X \mid \mu)\, p(\mu) \\
&\propto \exp\left[-0.5\tau(X-\mu)^2\right] \exp\left[-0.5\tau_0(\mu-\mu_0)^2\right] \\
&\propto \exp\left[-0.5(\tau+\tau_0)\mu^2 + (X\tau + \mu_0\tau_0)\mu\right].
\end{aligned}$$

From the result in “Preliminaries” above we see that $\mu \mid X \sim N(\mu_1, 1/\tau_1)$ where $\tau_1 = \tau + \tau_0$ and $\mu_1 = (X\tau + \mu_0\tau_0)/(\tau + \tau_0)$.
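This update is simple enough to capture in a few lines of R. The sketch below is mine, not from the original text, and the function name `posterior_norm_mean` is a hypothetical choice:

```r
# Conjugate update for a normal mean with known precision:
# prior mu ~ N(mu0, 1/tau0), one observation X ~ N(mu, 1/tau).
posterior_norm_mean <- function(X, tau, mu0, tau0) {
  tau1 <- tau + tau0                       # posterior precision
  mu1  <- (X * tau + mu0 * tau0) / tau1    # posterior mean
  list(mean = mu1, precision = tau1)
}

# Example: X = 2 observed with precision 1, prior N(0, 1)
posterior_norm_mean(X = 2, tau = 1, mu0 = 0, tau0 = 1)
# mean 1, precision 2: the posterior mean sits halfway between X and mu0
```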

Interpretation

Although the algebra may look a little messy the first time you see this, in fact this result has some simple and elegant interpretations.

First, let us deal with the precision. Note that the posterior precision ($\tau_1$) is the sum of the data precision ($\tau$) and the prior precision ($\tau_0$). This makes sense: the more precise your data, and the more precise your prior information, the more precise your posterior information. Also, this means that the data always improves your posterior precision compared with the prior: noisy data (small $\tau$) improves it only a little, whereas precise data improves it a lot.

Second, let us deal with the mean. We can rewrite the posterior mean as: $\mu_1 = wX + (1-w)\mu_0$, where $w = \tau/(\tau+\tau_0)$. Thus $\mu_1$ is a weighted average of the data $X$ and the prior mean $\mu_0$. And the weights depend on the relative precision of the data and the prior. If the data are precise compared with the prior ($\tau \gg \tau_0$) then the weight $w$ will be close to 1 and the posterior mean will be close to the data.

In contrast, if the data are imprecise compared with the prior ($\tau \ll \tau_0$) then the weight $w$ will be close to 0 and the posterior mean will be close to the prior mean.
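To see these two limits concretely, here is a short R illustration, again a sketch of my own that reuses the hypothetical `posterior_norm_mean` function from the earlier snippet:

```r
w <- function(tau, tau0) tau / (tau + tau0)   # weight on the data X
X <- 2; mu0 <- 0

# Precise data (tau >> tau0): w near 1, posterior mean near X
w(tau = 100, tau0 = 1)                                        # 0.990
posterior_norm_mean(X, tau = 100, mu0 = mu0, tau0 = 1)$mean   # 1.980

# Imprecise data (tau << tau0): w near 0, posterior mean near mu0
w(tau = 0.01, tau0 = 1)                                       # 0.0099
posterior_norm_mean(X, tau = 0.01, mu0 = mu0, tau0 = 1)$mean  # 0.0198
```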

You can see a visual illustration of this result in this Shiny app.

