Last updated: 2017-03-06
Code version: c7339fc
Be familiar with the concepts of a joint distribution and a conditional distribution. Ideally also be familiar with the concept of a Markov chain and its stationary distribution.
Gibbs sampling is a very useful way of simulating from distributions that are difficult to simulate from directly. However, in this introduction to the key concept, we will use a Gibbs sampler to simulate from a very simple distribution that could be simulated from in other ways.
Suppose X and Y are two binary random variables with joint distribution Pr(X=x,Y=y)=pX,Y(x,y) given by the following table:
X \ Y | 0 | 1 |
---|---|---|
0 | 0.6 | 0.1 |
1 | 0.15 | 0.15 |
That is, for example, pX,Y(0,0)=0.6.
The conditional distribution of X given any value of Y is easy to compute by the usual formula for conditional probability, Pr(A|B)=Pr(A∩B)/Pr(B). For example: Pr(X=0|Y=0)=Pr(X=0,Y=0)/Pr(Y=0)=0.6/0.75=0.8, and so Pr(X=1|Y=0)=1−0.8=0.2. Similarly, Pr(X=0|Y=1)=0.1/0.25=0.4, and so Pr(X=1|Y=1)=0.6.
We can just as easily compute the conditional distribution of Y for any given value of X:
Pr(Y=0|X=0)=0.6/0.70=6/7
Pr(Y=1|X=0)=0.1/0.70=1/7
Pr(Y=0|X=1)=0.15/0.30=1/2
Pr(Y=1|X=1)=0.15/0.30=1/2
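These conditionals can be read straight off the joint table by normalizing its columns and rows. As a quick sketch in R (the names `joint`, `PXgivenY`, and `PYgivenX` are my own; `prop.table` does the normalization):

```r
# Joint distribution Pr(X=x, Y=y) from the table above
joint = matrix(c(0.60, 0.10,
                 0.15, 0.15),
               nrow = 2, byrow = TRUE,
               dimnames = list(X = c("0", "1"), Y = c("0", "1")))

# Pr(X=x | Y=y): normalize each column (margin = 2) to sum to 1
PXgivenY = prop.table(joint, margin = 2)

# Pr(Y=y | X=x): normalize each row (margin = 1) to sum to 1
PYgivenX = prop.table(joint, margin = 1)

PXgivenY  # column Y=0 is (0.8, 0.2); column Y=1 is (0.4, 0.6)
PYgivenX  # row X=0 is (6/7, 1/7); row X=1 is (0.5, 0.5)
```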
Now the question: what if we start at some value of (X,Y) and proceed to iterate the following steps?
1. Simulate a new value of X from Pr(X|Y=y), where y is the current value of Y.
2. Simulate a new value of Y from Pr(Y|X=x), where x is the current value of X (generated in step 1).
What happens?
Let’s try it:
# returns 1 with probability p, and 0 with probability 1-p
rbernoulli = function(p){return(1*(runif(1)<p))}
# sample from the conditional distribution of X given Y above
sample_XgivenY = function(y){
  if(y==0){
    x = rbernoulli(0.2) # Pr(X=1|Y=0)=0.2
  } else {
    x = rbernoulli(0.6) # Pr(X=1|Y=1)=0.6
  }
  return(x)
}
# sample from the conditional distribution of Y given X above
sample_YgivenX = function(x){
  if(x==0){
    y = rbernoulli(1/7) # Pr(Y=1|X=0)=1/7
  } else {
    y = rbernoulli(0.5) # Pr(Y=1|X=1)=0.5
  }
  return(y)
}
set.seed(100)
niter = 1000
X = rep(0,niter)
Y = rep(0,niter)
X[1]=1
Y[1]=1 # start from (1,1)
for(i in 2:niter){
  X[i] = sample_XgivenY(Y[i-1])
  Y[i] = sample_YgivenX(X[i])
}
res = data.frame(X=X,Y=Y)
Here is what happens for the first 20 iterations:
head(res,20)
X Y
1 1 1
2 1 1
3 1 1
4 1 1
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
10 0 0
11 0 0
12 0 0
13 0 0
14 0 0
15 0 0
16 0 0
17 0 0
18 0 0
19 0 0
20 1 0
And here is a summary of what proportion of the rows are of each type:
table(data.frame(X=X,Y=Y))/niter
Y
X 0 1
0 0.617 0.092
1 0.154 0.137
As you can see, the proportion of iterations in which X=x and Y=y is very close to Pr(X=x,Y=y)=pX,Y(x,y). This is not a coincidence!
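One way to convince yourself this is not a coincidence: rerun the same sampler with many more iterations and check that the empirical proportions get even closer to the target. A self-contained sketch (repeating the sampler definitions so it runs on its own; `niter = 1e5` is an arbitrary choice):

```r
# Same Gibbs sampler as above, with many more iterations
rbernoulli = function(p) 1*(runif(1) < p)
sample_XgivenY = function(y) if(y==0) rbernoulli(0.2) else rbernoulli(0.6)
sample_YgivenX = function(x) if(x==0) rbernoulli(1/7) else rbernoulli(0.5)

set.seed(100)
niter = 1e5
X = rep(0, niter)
Y = rep(0, niter)
X[1] = 1
Y[1] = 1                      # start from (1,1)
for(i in 2:niter){
  X[i] = sample_XgivenY(Y[i-1])
  Y[i] = sample_YgivenX(X[i])
}

empirical = table(X, Y)/niter
target = matrix(c(0.60, 0.10,
                  0.15, 0.15), nrow = 2, byrow = TRUE)
max(abs(empirical - target))  # should now be quite small
```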
What we have done here is simulate a Markov chain (X1,Y1),(X2,Y2),(X3,Y3),…, whose stationary distribution is Pr(X=x,Y=y)=pX,Y(x,y).
To see that the pairs (X,Y) form a Markov chain, note that the simulation of Xi is done using only the previous value Yi−1, and the simulation of Yi is done using only Xi. So simulation of (Xi,Yi) depends on the previous states only through the immediate previous state (Xi−1,Yi−1), which means it is a Markov chain. (And in fact it only depends on Yi−1, but that is not so important.)
To see why it has stationary distribution pX,Y(x,y) imagine simulating X1,Y1 from this distribution, so Pr(X1=x,Y1=y)=pX,Y(x,y), and in particular Pr(Y1=y)=∑xpX,Y(x,y)=pY(y).
Now, what is Pr(X2=x,Y1=y)? We know Pr(X2=x,Y1=y)=Pr(Y1=y)Pr(X2=x|Y1=y). And we know from above that Pr(Y1=y)=pY(y). And we know that given Y1=y, X2 was simulated from the conditional distribution pX|Y(x|y), so Pr(X2=x|Y1=y)=pX|Y(x|y). Putting these together we have Pr(X2=x,Y1=y)=pY(y)pX|Y(x|y)=pX,Y(x,y).
Now, essentially the same argument shows that Pr(X2=x,Y2=y)=pX,Y(x,y). [Exercise]
Thus, we have shown that if Pr(X1=x,Y1=y)=pX,Y(x,y) then also Pr(X2=x,Y2=y)=pX,Y(x,y). That is exactly what it means for pX,Y(x,y) to be the stationary distribution: if we start the chain by simulating from that distribution then it remains in that distribution after one step, and so it remains in that distribution for ever.
Of course, we did not start the chain at that distribution. But the above argument shows that this is indeed the stationary distribution. There is a general result that discrete Markov chains converge to their stationary distribution, provided that they are what is called “irreducible and aperiodic” (which this Markov chain is). That is, for large enough n, we should see Pr(Xn=x,Yn=y)≈pX,Y(x,y) no matter where we start. Furthermore, in the long-run, the proportion of iterations spent in each state will also converge to this distribution.
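Both claims can be checked numerically. One full Gibbs update moves (x,y) to (x′,y′) with probability Pr(X=x′|Y=y)Pr(Y=y′|X=x′), since X′ is drawn given the current Y and then Y′ is drawn given the new X′. A sketch that builds this 4×4 transition matrix and checks stationarity and convergence (the state ordering and variable names are my own choices):

```r
# States of the chain, ordered as "xy": (0,0), (0,1), (1,0), (1,1)
states = c("00", "01", "10", "11")
PXgY = matrix(c(0.8, 0.4,    # PXgY[x'+1, y+1]  = Pr(X = x' | Y = y)
                0.2, 0.6), nrow = 2, byrow = TRUE)
PYgX = matrix(c(6/7, 1/7,    # PYgX[x'+1, y'+1] = Pr(Y = y' | X = x')
                1/2, 1/2), nrow = 2, byrow = TRUE)

# P[(x,y), (x',y')] = Pr(X = x' | Y = y) * Pr(Y = y' | X = x')
P = matrix(0, 4, 4, dimnames = list(states, states))
for(x in 0:1) for(y in 0:1) for(xp in 0:1) for(yp in 0:1)
  P[paste0(x, y), paste0(xp, yp)] = PXgY[xp+1, y+1] * PYgX[xp+1, yp+1]

pXY = c(0.6, 0.1, 0.15, 0.15)  # the target joint, in the same state order
pXY %*% P                      # equals pXY: the target is stationary

# Convergence: the rows of P^n all approach pXY, whatever the start state
Pn = diag(4)
for(n in 1:50) Pn = Pn %*% P
Pn                             # every row is (approximately) pXY
```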
This explains the simulation result.
sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 14.04.5 LTS
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] workflowr_0.4.0 rmarkdown_1.3.9004
loaded via a namespace (and not attached):
[1] backports_1.0.5 magrittr_1.5 rprojroot_1.2 htmltools_0.3.5
[5] tools_3.3.2 yaml_2.1.14 Rcpp_0.12.9 stringi_1.1.2
[9] knitr_1.15.1 git2r_0.18.0 stringr_1.2.0 digest_0.6.12
[13] evaluate_0.10
This site was created with R Markdown