Last updated: 2017-01-02

Code version: 55e11cf8f7785ad926b716fb52e4e87b342f38e1

Pre-requisites

This vignette builds on the Introduction to Discrete Markov chains vignette. It assumes an understanding of matrix multiplication, matrix powers, and eigendecomposition. We also do not explain the notion of an ergodic Markov chain (but we hope to add a vignette on this soon!).

Overview

The stationary distribution of a Markov chain is an important feature of the chain. One way to compute the stationary distribution is via an eigendecomposition of the transition matrix. The eigendecomposition is also useful because it suggests how we can quickly compute matrix powers like \(P^n\) and how we can assess the rate of convergence to the stationary distribution.

Stationary distribution of a Markov Chain

As part of the definition of a Markov chain, there is some probability distribution on the states at time \(0\). At each time step the distribution over states evolves, as dictated by \(P\): some states may become more likely and others less likely. The stationary distribution of a Markov chain describes the distribution of \(X_t\) after a sufficiently long time that the distribution of \(X_t\) no longer changes. To put this notion in equation form, let \(\pi\) be a column vector of probabilities on the states that a Markov chain can visit. Then, \(\pi\) is the stationary distribution if it has the property \[\pi^T= \pi^T P.\]
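
Written out componentwise, this matrix equation says that the probability of each state is unchanged by one step of the chain: \[\pi_j = \sum_i \pi_i P_{ij} \quad \text{for every state } j.\] These are the global balance equations that we return to at the end of this vignette.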

Not all Markov chains have a stationary distribution, but for some classes of probability transition matrix (those defining ergodic Markov chains), a stationary distribution is guaranteed to exist.

Example: Gary’s mood

In Sheldon Ross’s Introduction to Probability Models, there is an example (Example 4.3) of a Markov chain for modeling Gary’s mood. Gary alternates between three states: Cheery (\(X=1\)), So-So (\(X=2\)), or Glum (\(X=3\)). Here we input the \(P\) matrix given by Ross and an arbitrary initial probability vector.

# Define prob transition matrix 
# (note matrix() takes vectors in column form so there is a transpose here to switch col's to row's)
P=t(matrix(c(c(0.5,0.4,0.1),c(0.3,0.4,0.3),c(0.2,0.3,0.5)),nrow=3))
# Check that each row sums to 1
apply(P,1,sum)  
[1] 1 1 1
# Define initial probability vector
x0=c(0.1,0.2,0.7)
# Check it sums to 1
sum(x0)
[1] 1
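
As a quick illustration of how the distribution evolves (an addition to the original code, using only the P and x0 defined above), we can propagate x0 forward by right-multiplying by P:

# Distribution over states at times 1 and 2
x1 <- x0 %*% P
x1
     [,1] [,2] [,3]
[1,] 0.25 0.33 0.42
x2 <- x1 %*% P
x2
      [,1]  [,2]  [,3]
[1,] 0.308 0.358 0.334

Already the distribution looks quite different from x0, and, as we will see below, it is headed towards the stationary distribution.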

Solving for stationary distributions

The stationary distribution has the property \(\pi^T= \pi^T P\)

Brute-force solution

A brute-force way to find the stationary distribution is simply to raise the transition matrix to a high power and then extract any row (here we use the %^% matrix-power operator from the expm package).

library(expm)
pi_bru <- (P%^%100)[1,]
pi_bru
[1] 0.3387097 0.3709677 0.2903226

We can test whether the resulting vector is a stationary distribution by checking that it satisfies \(\pi^{T}=\pi^{T}P\) (i.e. \(\pi^{T}-\pi^{T}P = 0\)).

pi_bru - pi_bru%*%P
             [,1]          [,2] [,3]
[1,] 5.551115e-17 -5.551115e-17    0

As we can see, up to some very small numerical errors, our solution for this example checks out.

Solving via eigendecomposition

Note that the equation \(\pi^T P=\pi^T\) implies that the vector \(\pi\) is a left eigenvector of \(P\) with eigenvalue equal to \(1\) (recall that \(xA=\lambda x\), where \(x\) is a row vector, is the definition of a left eigenvector, as opposed to the more standard right eigenvector \(Ax=\lambda x\)). In what follows, we use the eigenvector functions in R to extract the solution.

library(MASS)
# Get the eigenvectors of P, note: R returns right eigenvectors
r=eigen(P)
rvec=r$vectors
# left eigenvectors are the rows of the inverse of the matrix of right eigenvectors
lvec=ginv(r$vectors)
# The eigenvalues
lam<-r$values
# Two ways of checking the spectral decomposition:
## Standard definition
rvec%*%diag(lam)%*%ginv(rvec)
     [,1] [,2] [,3]
[1,]  0.5  0.4  0.1
[2,]  0.3  0.4  0.3
[3,]  0.2  0.3  0.5
## With left eigenvectors (trivial change)
rvec%*%diag(lam)%*%lvec
     [,1] [,2] [,3]
[1,]  0.5  0.4  0.1
[2,]  0.3  0.4  0.3
[3,]  0.2  0.3  0.5
lam 
[1] 1.00000000 0.34142136 0.05857864

We see the first eigenvalue is \(1\) and so the first left eigenvector, suitably normalized, should contain the stationary distribution:

pi_eig<-lvec[1,]/sum(lvec[1,])
pi_eig
[1] 0.3387097 0.3709677 0.2903226
sum(pi_eig)
[1] 1
pi_eig %*% P
          [,1]      [,2]      [,3]
[1,] 0.3387097 0.3709677 0.2903226

And we see the procedure checks out.

As a side-note: we can also obtain the left eigenvectors as the transposes of the right eigenvectors of t(P).

r<-eigen(t(P))
V<-r$vectors
lam<-r$values
V%*%diag(lam)%*%ginv(V)
     [,1] [,2] [,3]
[1,]  0.5  0.3  0.2
[2,]  0.4  0.4  0.3
[3,]  0.1  0.3  0.5
# Note how we are pulling columns here. 
pi_eig2 <- V[,1]/sum(V[,1])
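
As a quick check (an addition to the original code), this should agree with the eigendecomposition solution obtained above:

pi_eig2
[1] 0.3387097 0.3709677 0.2903226
all.equal(pi_eig, pi_eig2)
[1] TRUE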

Rate of approach to the stationary distribution

The size of the first non-unit eigenvalue (\(\lambda_2\)) indicates the rate of approach to equilibrium because it describes how quickly the largest of the vanishing terms (i.e. those with \(\lambda_i<1\)) will approach zero.

This is most easily seen by recalling that the eigendecomposition of \(P^n\) can be written as \[P^n = \sum_i \lambda_i^n r_i l_i^T,\] where \(r_i\), \(l_i\), and \(\lambda_i\) are the right eigenvectors, left eigenvectors, and eigenvalues of the matrix \(P\), respectively. So, as \(\lambda_2^n\) approaches \(0\), the only term left in the eigendecomposition will be the term corresponding to the first eigenvalue, i.e. the stationary distribution! As a rough rule of thumb, a number \(x\) less than \(1\) raised to the \(n\)'th power will approach \(0\) once \(n\) is larger than some small multiple of \(1/x\) time-steps (e.g. if \(n > 4/x\)).

For our example, \(1/\lambda_2\) is approximately 3 time-steps:

1/lam[2]
[1] 2.928932

This implies we will reach equilibrium fairly quickly, much more quickly than the 100 time-steps we used for our brute-force solution to the stationary distribution. As a test, let’s see how \(P^{12}\) (i.e. approximately \(4/\lambda_2\)) looks:

P%^%12
          [,1]      [,2]      [,3]
[1,] 0.3387108 0.3709682 0.2903210
[2,] 0.3387095 0.3709677 0.2903228
[3,] 0.3387086 0.3709673 0.2903241
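
The entries already agree with the stationary distribution to roughly five decimal places, consistent with the size of the largest vanishing term, \(\lambda_2^{12}\) (a quick check we add here, using the lam computed above):

lam[2]^12
[1] 2.508928e-06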

Indeed - Gary’s mood will return to its stationary distribution relatively quickly after any perturbation!

A side-note: Computational advantage of using an eigendecomposition for matrix powers

Thanks to the eigendecomposition, to obtain the matrix power \(P^n\) we just need to take the powers of the eigenvalues. Compare the following lines of code to \(P\), \(P^2\), and \(P^{100}\) computed above. And note: this is much faster than naively doing the matrix multiplication over and over to obtain the powers.

rvec%*%diag(lam)%*%lvec
     [,1] [,2] [,3]
[1,]  0.5  0.4  0.1
[2,]  0.3  0.4  0.3
[3,]  0.2  0.3  0.5
rvec%*%diag(lam^2)%*%lvec
     [,1] [,2] [,3]
[1,] 0.39 0.39 0.22
[2,] 0.33 0.37 0.30
[3,] 0.29 0.35 0.36
rvec%*%diag(lam^100)%*%lvec
          [,1]      [,2]      [,3]
[1,] 0.3387097 0.3709677 0.2903226
[2,] 0.3387097 0.3709677 0.2903226
[3,] 0.3387097 0.3709677 0.2903226
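
To make this convenient, we can wrap the idea in a small helper function (a sketch of our own; Ppow is a hypothetical name, and it reuses the rvec, lam, and lvec computed above):

# Hypothetical helper: computes P^n from the precomputed eigendecomposition.
# After the one-time cost of eigen(), each new power needs only a diagonal
# scaling and two small matrix products.
Ppow <- function(n) rvec %*% diag(lam^n) %*% lvec
Ppow(12)    # should match P %^% 12 computed earlier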

Miscellaneous: Solving via a system of linear equations

Another approach is to solve the system of linear equations \(\pi^{T}=\pi^{T}P\). These equations are known as the global balance equations, and this approach is introduced in the vignette Discrete Markov Chains: Finding the Stationary Distribution via solution of the global balance equations. We include it here for comparison to the eigendecomposition approach on the same example.

K<-3
# Rearrange pi^T = pi^T P as (I - P)^T pi = 0
A_basic <- t(diag(rep(1,K))-P)
b_basic <- rep(0,K)

# Now add the constraint that the probabilities sum to 1
A_constr <- rbind(A_basic,rep(1,K))
b_constr <- c(b_basic,1)

# The constrained system is overdetermined (K+1 equations, K unknowns),
# so solve the normal equations to get the least-squares solution
pi_lineq <- t(solve(t(A_constr)%*%A_constr,t(A_constr)%*%b_constr))
pi_lineq%*%P
          [,1]      [,2]      [,3]
[1,] 0.3387097 0.3709677 0.2903226

And the solution checks out!
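
Finally, as a sanity check (our addition), we can stack the three solutions and confirm they agree:

# Each row should be the same stationary distribution:
# 0.3387097 0.3709677 0.2903226
rbind(brute=pi_bru, eigen=pi_eig, lineq=c(pi_lineq))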

Session information

sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 14.04.5 LTS

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] MASS_7.3-45    expm_0.999-0   Matrix_1.2-7.1 rmarkdown_1.1 

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.7     lattice_0.20-34 gtools_3.5.0    digest_0.6.9   
 [5] assertthat_0.1  grid_3.3.2      formatR_1.4     magrittr_1.5   
 [9] evaluate_0.9    stringi_1.1.1   tools_3.3.2     stringr_1.0.0  
[13] yaml_2.1.13     htmltools_0.3.5 knitr_1.14      tibble_1.2     
