**Last updated:** 2019-03-31

**Checks:** 6 passed, 0 failed

**Knit directory:** `fiveMinuteStats/analysis/`

This reproducible R Markdown analysis was created with workflowr (version 1.2.0). The *Report* tab describes the reproducibility checks that were applied when the results were created. The *Past versions* tab lists the development history.

The command `set.seed(12345)` was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use `wflow_publish` or `wflow_git_commit`). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:

```
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/.Rhistory
Ignored: analysis/bernoulli_poisson_process_cache/
Untracked files:
Untracked: _workflowr.yml
Untracked: analysis/CI.Rmd
Untracked: analysis/gibbs_structure.Rmd
Untracked: analysis/libs/
Untracked: analysis/results.Rmd
Untracked: analysis/shiny/tester/
Untracked: docs/MH_intro_files/
Untracked: docs/citations.bib
Untracked: docs/figure/MH_intro.Rmd/
Untracked: docs/hmm_files/
Untracked: docs/libs/
Untracked: docs/shiny/tester/
```

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.

These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see `?wflow_git_remote`), click on the hyperlinks in the table below to view them.

| File | Version | Author | Date | Message |
|---|---|---|---|---|
| html | 34bcc51 | John Blischak | 2017-03-06 | Build site. |
| Rmd | 5fbc8b5 | John Blischak | 2017-03-06 | Update workflowr project with wflow_update (version 0.4.0). |
| Rmd | 391ba3c | John Blischak | 2017-03-06 | Remove front and end matter of non-standard templates. |
| html | fb0f6e3 | stephens999 | 2017-03-03 | Merge pull request #33 from mdavy86/f/review |
| html | 0713277 | stephens999 | 2017-03-03 | Merge pull request #31 from mdavy86/f/review |
| Rmd | d674141 | Marcus Davy | 2017-02-27 | typos, refs |
| html | c3b365a | John Blischak | 2017-01-02 | Build site. |
| Rmd | 67a8575 | John Blischak | 2017-01-02 | Use external chunk to set knitr chunk options. |
| Rmd | 5ec12c7 | John Blischak | 2017-01-02 | Use session-info chunk. |
| Rmd | 8bcd7f9 | mfrasco | 2016-04-04 | changed references to four groups to five groups |
| Rmd | 9156677 | stephens999 | 2016-01-19 | add multiclass example |

**Pre-requisites:** Be familiar with the computation of the posterior probability on classes for the 2-class case.

Suppose we have an observation \(X\), which we believe to have come from one of \(K\) models, \(M_1, \dots, M_K\). Suppose we can compute the likelihood of the observation under each model. This document lays out the computation of the posterior probability that the observation came from model \(M_k\), and emphasizes that the result depends only on the likelihood ratios. This is a straightforward extension of the 2-class calculations.

In a previous example we considered the question of whether a tusk came from one of two classes, a savanna elephant or a forest elephant, based on its DNA. In practice we might be interested in finer-scale distinctions than this. For example, forest elephants from West Africa actually differ somewhat genetically from those in Central Africa. And savanna elephants from North Africa differ from those in the East and the South. (Actually, elephant genetics within each subspecies varies roughly continuously across the continent, and any division into discrete groups can be seen as a convenient approximation.)

Suppose now we have allele frequencies for “North Savanna”, “South Savanna”, “East Savanna”, “West Forest”, and “Central Forest” groups. How do we decide which group a tusk likely came from? Now we have five models, but the calculation is the same for any \(K\) models, so we may as well do it for general \(K\). Here is the general outline:

Suppose we are presented with a series of observations \(x_1,\dots,x_n\), each of which was generated from a model \(M_k\) for some \(k \in \{1,\dots,K\}\). Let \(Z_i \in \{1,\dots,K\}\) indicate which model the \(i\)th observation came from, and let \(\pi_k\) denote the proportion of the observations that came from model \(M_k\). Bayes Theorem says that \[\Pr(Z_i = k | x_i) = \Pr(x_i | Z_i = k) \Pr(Z_i=k)/ \Pr(x_i).\]

Recall that, by the law of total probability, \(\Pr(x_i) = \sum_{k'} \Pr(x_i, Z_i=k') = \sum_{k'} \Pr(x_i | Z_i=k')\Pr(Z_i=k')\). Also note that \(\Pr(x_i | Z_i=k)\) is the “likelihood” for model \(k\) given data \(x_i\), so we write it as \(L_{ik}\), and we write \(\pi_k\) for \(\Pr(Z_i=k)\). Putting this together gives: \[\Pr(Z_i = k | x_i) = L_{ik} \pi_k / \sum_{k'} L_{ik'} \pi_{k'}.\]

Note that the denominator \(\Pr(x_i)=\sum_{k'} L_{ik'} \pi_{k'}\) is the same for all \(k\). So an equivalent way of laying out this calculation is to write \[\Pr(Z_i = k | x_i) \propto L_{ik} \pi_k\] and to note that the constant of proportionality is determined by the fact that the probabilities must sum to 1. This way of applying Bayes theorem is very common and convenient in practice, so you should get used to it. In words, this formula can be stated as \[\text{Posterior} \propto \text{likelihood} \times \text{prior}.\]

Here is an example of the calculation in practice. The five rows of the matrix `ref_freqs` represent the allele frequencies in five groups: “North Savanna”, “South Savanna”, “East Savanna”, “West Forest”, and “Central Forest”. The calculation presented here assumes that the population of tusks we are looking at is equally drawn from all five groups, so \(\pi=(0.2,0.2,0.2,0.2,0.2)\), but it would of course be easy to change to any other value of \(\pi\).

```
x = c(1,0,1,0,0,1)
ref_freqs = rbind(
  c(0.39, 0.14, 0.22, 0.12, 0.03, 0.38),
  c(0.41, 0.10, 0.18, 0.12, 0.02, 0.28),
  c(0.40, 0.11, 0.22, 0.11, 0.01, 0.30),
  c(0.75, 0.25, 0.11, 0.18, 0.25, 0.25),
  c(0.85, 0.15, 0.11, 0.16, 0.21, 0.26)
)
# define functions for computing posterior from likelihood vector and pi vector
normalize = function(x){ return(x/sum(x)) }
posterior_prob = function(L_vec, pi_vec){ return(normalize(L_vec*pi_vec)) }
# define likelihood function
L = function(f,x){ prod(f^x*(1-f)^(1-x)) }
# compute the likelihoods for each model by applying L to rows of ref_freqs
L_vec = apply(ref_freqs, 1, L, x=x)
print(L_vec)
```

```
[1] 0.023934466 0.016038570 0.020702326 0.009513281 0.013712299
```

```
posterior_prob(L_vec, c(0.2,0.2,0.2,0.2,0.2))
```

```
[1] 0.2852705 0.1911608 0.2467472 0.1133871 0.1634344
```

Remember that when comparing two models, only the likelihood ratios matter, not the actual likelihoods. In fact, the same is true when comparing \(K\) models, as we can see by examining the calculation above. Specifically, imagine multiplying all the likelihoods by some positive constant \(c\), and notice that this would not change the final answer, because of the normalization step.
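To make this scale-invariance concrete, here is a small sketch reusing the `normalize` and `posterior_prob` helpers defined above, with an arbitrary constant \(c = 1000\) (the illustrative likelihood values here are hypothetical):

```r
# Scaling every likelihood by the same positive constant c leaves the
# posterior unchanged: c cancels in the normalization step.
normalize = function(x){ return(x/sum(x)) }
posterior_prob = function(L_vec, pi_vec){ return(normalize(L_vec*pi_vec)) }

L_vec = c(0.0239, 0.0160, 0.0207, 0.0095, 0.0137)  # illustrative likelihoods
pi_vec = rep(0.2, 5)

p1 = posterior_prob(L_vec, pi_vec)
p2 = posterior_prob(1000 * L_vec, pi_vec)  # multiply all likelihoods by c = 1000
all.equal(p1, p2)  # TRUE: only likelihood ratios matter
```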

Notice that, just as with the 2-model case, the calculation involves weighing the relative support from the data for each model (from the likelihood function) against the “prior” plausibility of each model (from the vector \(\pi\)).

In practice we might not know \(\pi\). And although in such a case it might seem natural to assume that all the values of \(\pi\) are equal, one has to be careful to note that this is still an assumption, and such assumptions may have unforeseen implications. For example, in this case, this assumption implies that 60% of the tusks are from savanna elephants and 40% from forest elephants, not 50%-50% (because three of our five groups are savanna). The difference between 60-40 and 50-50 is probably not a big deal in most applications, but imagine that we had 20 different savanna groups and 2 forest groups. Would we still be happy to assume that every group was equally common (and so savanna tusks are 10 times as common as forest tusks)? The answer would depend on the context, but quite possibly not.
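As an illustration of how such an assumption can be changed, one alternative is a \(\pi\) that makes savanna and forest tusks equally common overall, splitting each half equally among that subspecies' groups. This is a hypothetical sketch, reusing the helpers and the likelihoods computed in the `ref_freqs` example above (group order: three savanna groups, then two forest groups):

```r
# pi encoding a 50-50 savanna/forest split: the three savanna groups share
# half the prior mass, the two forest groups share the other half.
normalize = function(x){ return(x/sum(x)) }
posterior_prob = function(L_vec, pi_vec){ return(normalize(L_vec*pi_vec)) }

# likelihoods computed earlier for the example tusk
L_vec = c(0.023934466, 0.016038570, 0.020702326, 0.009513281, 0.013712299)
pi_5050 = c(1/6, 1/6, 1/6, 1/4, 1/4)  # sums to 1

posterior_prob(L_vec, pi_5050)
```

Compared with the uniform prior, this shifts posterior mass toward the two forest groups, showing that the choice of \(\pi\) is consequential.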

`sessionInfo()`

```
R version 3.5.2 (2018-12-20)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Mojave 10.14.1

Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

loaded via a namespace (and not attached):
 [1] workflowr_1.2.0 Rcpp_1.0.0      digest_0.6.18   rprojroot_1.3-2
 [5] backports_1.1.3 git2r_0.24.0    magrittr_1.5    evaluate_0.12
 [9] stringi_1.2.4   fs_1.2.6        whisker_0.3-2   rmarkdown_1.11
[13] tools_3.5.2     stringr_1.3.1   glue_1.3.0      xfun_0.4
[17] yaml_2.2.0      compiler_3.5.2  htmltools_0.3.6 knitr_1.21

This site was created with R Markdown