Last updated: 2017-03-03
Code version: d6741417d44f168473b77d41bba75ddf1acce30b
Prerequisites: an understanding of matrix multiplication and matrix powers.
Here we provide a quick introduction to discrete Markov Chains.
A Markov chain is a discrete stochastic process with the Markov property: P(Xt | Xt−1, …, X1) = P(Xt | Xt−1). It is fully determined by a probability transition matrix P, which defines the transition probabilities (Pij = P(Xt = j | Xt−1 = i)), and an initial probability distribution specified by the vector x, where xi = P(X0 = i). The time-dependent random variable Xt describes the state of our probabilistic system at time-step t.
In Sheldon Ross’s Introduction to Probability Models, Example 4.3 models Gary’s mood as a Markov chain. Gary alternates between 3 states: Cheery (X=1), So-So (X=2), and Glum (X=3). Here we input the P matrix given by Ross along with an arbitrary initial probability vector.
# Define prob transition matrix
# (note matrix() takes vectors in column form so there is a transpose here to switch col's to row's)
P=t(matrix(c(c(0.5,0.4,0.1),c(0.3,0.4,0.3),c(0.2,0.3,0.5)),nrow=3))
# Check each row sums to 1
apply(P,1,sum)
[1] 1 1 1
# Define initial probability vector
x0=c(0.1,0.2,0.7)
# Check sums to 1
sum(x0)
[1] 1
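As an aside (our own sketch, not part of Ross’s example), the transition matrix can also be used to simulate a sample path of the chain directly, drawing each new state from the row of P for the current state:

```r
# Sketch: simulate one sample path of Gary's mood chain.
# P and x0 are redefined here so the snippet is self-contained;
# the seed and number of steps are arbitrary choices of ours.
set.seed(1)
P <- t(matrix(c(0.5, 0.4, 0.1,
                0.3, 0.4, 0.3,
                0.2, 0.3, 0.5), nrow = 3))
x0 <- c(0.1, 0.2, 0.7)

n_steps <- 10
path <- numeric(n_steps + 1)
path[1] <- sample(1:3, 1, prob = x0)           # draw initial state from x0
for (t in 1:n_steps) {
  # next state drawn from the row of P for the current state
  path[t + 1] <- sample(1:3, 1, prob = P[path[t], ])
}
path
```

Tabulating a long sample path gives empirical state frequencies that should approach the limiting probabilities discussed below.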
If the initial probability distribution x0 is a 3×1 column vector, then x0ᵀP = x1ᵀ. In R, the %*% operator automatically promotes a vector to the appropriate matrix to make the arguments conformable. When multiplying a length-3 vector by a 3×3 matrix, it takes the vector to be a row vector. This means our math can look simple:
# After one step
x0%*%P
[,1] [,2] [,3]
[1,] 0.25 0.33 0.42
And after two time-steps:
## The two-step prob trans matrix
P%*%P
[,1] [,2] [,3]
[1,] 0.39 0.39 0.22
[2,] 0.33 0.37 0.30
[3,] 0.29 0.35 0.36
## Multiplied by the initial state probability
x0%*%P%*%P
[,1] [,2] [,3]
[1,] 0.308 0.358 0.334
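As a sanity check (our own, not from the text), propagating the distribution one step at a time gives the same two-step result, since (x0ᵀP)P = x0ᵀ(PP):

```r
# P and x0 redefined so the snippet is self-contained.
P <- t(matrix(c(0.5, 0.4, 0.1,
                0.3, 0.4, 0.3,
                0.2, 0.3, 0.5), nrow = 3))
x0 <- c(0.1, 0.2, 0.7)

x1 <- x0 %*% P   # distribution after one step
x2 <- x1 %*% P   # distribution after two steps
x2               # same as x0 %*% P %*% P: 0.308 0.358 0.334
```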
To generalize to an arbitrary number of time steps into the future, we can compute the matrix power. In R, this can be done easily with the package expm. Let’s load the library and verify the second power is the same as we saw for P%*%P above.
# Load library
library(expm)
Loading required package: Matrix
Attaching package: 'expm'
The following object is masked from 'package:Matrix':
expm
# Verify the second power is P%*%P
P%^%2
[,1] [,2] [,3]
[1,] 0.39 0.39 0.22
[2,] 0.33 0.37 0.30
[3,] 0.29 0.35 0.36
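If expm is unavailable, a base-R fallback (a sketch of our own) is repeated multiplication via Reduce, which we can use to cross-check the result of P %^% 2:

```r
# P redefined so the snippet is self-contained.
P <- t(matrix(c(0.5, 0.4, 0.1,
                0.3, 0.4, 0.3,
                0.2, 0.3, 0.5), nrow = 3))

# Matrix power by repeated multiplication (fine for small n;
# expm's %^% is preferable for large n).
mat_pow <- function(M, n) Reduce(`%*%`, replicate(n, M, simplify = FALSE))

mat_pow(P, 2)   # same as P %*% P
```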
And now let’s push this: what does the state of the chain look like after many steps, say 100? First let’s look at the probability transition matrix…
P%^%100
[,1] [,2] [,3]
[1,] 0.3387097 0.3709677 0.2903226
[2,] 0.3387097 0.3709677 0.2903226
[3,] 0.3387097 0.3709677 0.2903226
What do you notice about the rows? And let’s see what this does for various starting distributions:
c(1,0,0) %*%(P%^%100)
[,1] [,2] [,3]
[1,] 0.3387097 0.3709677 0.2903226
c(0.2,0.5,0.3) %*%(P%^%100)
[,1] [,2] [,3]
[1,] 0.3387097 0.3709677 0.2903226
Note that after a large number of steps the initial state no longer matters: the probability of the chain being in any state j is independent of where we started. This is our first view of the equilibrium distribution of a Markov chain, also known as the limiting probabilities or the stationary distribution.
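The stationary distribution can also be computed directly, without taking a large matrix power. This sketch (our own, not from the text) finds it as the left eigenvector of P for eigenvalue 1, normalized to sum to 1:

```r
# P redefined so the snippet is self-contained.
P <- t(matrix(c(0.5, 0.4, 0.1,
                0.3, 0.4, 0.3,
                0.2, 0.3, 0.5), nrow = 3))

# Left eigenvectors of P are right eigenvectors of t(P).
e <- eigen(t(P))
# Pick the eigenvector whose eigenvalue is closest to 1.
i <- which.min(abs(e$values - 1))
pi_vec <- Re(e$vectors[, i])
pi_vec <- pi_vec / sum(pi_vec)   # normalize to a probability vector
pi_vec   # approx 0.3387 0.3710 0.2903, matching the rows of P %^% 100
```

Dividing by the sum also fixes the arbitrary sign of the eigenvector, so pi_vec comes out as a proper probability distribution satisfying pi = pi P.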
sessionInfo()
R version 3.3.0 (2016-05-03)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.10.5 (Yosemite)
locale:
[1] en_NZ.UTF-8/en_NZ.UTF-8/en_NZ.UTF-8/C/en_NZ.UTF-8/en_NZ.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] expm_0.999-0 Matrix_1.2-6 workflowr_0.3.0 rmarkdown_1.3
loaded via a namespace (and not attached):
[1] Rcpp_0.12.5 lattice_0.20-33 digest_0.6.9 rprojroot_1.2
[5] grid_3.3.0 backports_1.0.5 magrittr_1.5 evaluate_0.10
[9] stringi_1.1.2 tools_3.3.0 stringr_1.2.0 yaml_2.1.14
[13] htmltools_0.3.5 knitr_1.15.1
This site was created with R Markdown