Last updated: 2017-01-02

Code version: 55e11cf8f7785ad926b716fb52e4e87b342f38e1

Summary

This document introduces the idea that drawing conclusions from likelihood ratios (or Bayes Factors) from fully specified models is context dependent. In particular, it involves weighing the information in the data against the relative (prior) plausibility of the models.

Pre-requisites

You should understand the concept of using the likelihood ratio, for both discrete and continuous data, to compare support for two fully specified models.

Overview

Recall that a fully specified model is one with no free parameters. So a fully specified model for \(X\) is simply a probability distribution \(p(x|M)\). And, given observed data \(X=x\), the likelihood ratio for comparing two fully specified models, \(M_1\) vs \(M_0\), is defined as the ratio of the probabilities of the observed data under each model:

\[LR(M_1,M_0) := p(x | M_1) / p(x | M_0).\]

For fully specified models, the likelihood ratio is also known as the “Bayes Factor” (BF), so we could also define the Bayes Factor for \(M_1\) vs \(M_0\) as \[BF(M_1,M_0) := p(x | M_1) / p(x | M_0).\] When comparing fully specified models the LR and BF are just two different names for the same thing.
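As a concrete illustration, here is a minimal R sketch computing the LR for one observation under two made-up fully specified models (the normal densities are toy choices of ours, not from the examples below):

```r
# Two fully specified models for a single observation x:
# M1: x ~ N(1, 1);  M0: x ~ N(0, 1). These densities are illustrative only.
x <- 0.8
LR <- dnorm(x, mean = 1, sd = 1) / dnorm(x, mean = 0, sd = 1)
LR  # values > 1 favor M1; values < 1 favor M0
```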

In the example here we considered the problem of determining whether an elephant tusk came from a savanna elephant or a forest elephant, based on examining DNA data. Specifically, we computed the LR (or BF) comparing two models, \(M_S\) and \(M_F\), where \(M_S\) denotes the model that the tusk came from a savanna elephant and \(M_F\) the model that it came from a forest elephant. In our example we found LR=1.8, so the data favor \(M_S\) by a factor of 1.8. We commented that a factor of 1.8 is relatively modest: not sufficient to convince us that the tusk definitely came from a savanna elephant.

In the example here we considered the problem of testing a patient for a disease based on measuring the concentration of a particular diagnostic protein in the blood. Specifically, we computed the LR (or BF) comparing two models, \(M_n\) and \(M_d\), where \(M_n\) denotes the model that the patient is normal and \(M_d\) the model that the patient has the disease. In our example we found the LR for \(M_n\) vs \(M_d\) to be approximately 0.07, so the data favor \(M_d\) by a factor of about 14 (the reciprocal of 0.07). This is a much larger LR than in the first case, but is it convincing? Can we confidently conclude that the patient has the disease?

More generally, we would like to address the question: what value of the LR should we treat as
“convincing” evidence for one model vs another?

It is crucial to recognize that the answer to this question has to be context dependent. In particular, the extent to which we should be “convinced” by a particular LR value has to depend on the relative plausibility of the models being compared. In statistics there are many situations where we want to compare models that are not equally plausible. Suppose that model \(M_1\) is much less plausible than \(M_0\). Then we should surely demand stronger evidence from the data (a larger LR) to be “convinced” that the data arose from model \(M_1\) rather than \(M_0\) than we would in contexts where \(M_1\) and \(M_0\) are equally plausible.

In the disease screening example, the interpretation depends on the frequency of the disease in the population being screened. For example, suppose that only 5% of people screened actually have the disease. Then to outweigh that fact, we would have to demand a high LR to “convince” us that a particular person has the disease. In contrast, if 50% of people screened have the disease then we might be convinced by a much smaller LR.

Calculations using Bayes Theorem

The following calculation formalizes this intuition. Suppose we are presented with a series of observations \(x_1,\dots,x_n\), some of which are generated from model \(M_0\) and the others of which are generated from model \(M_1\). Let \(Z_i \in \{0,1\}\) indicate which model \(x_i\) was generated from (\(Z_i=0\) for \(M_0\), \(Z_i=1\) for \(M_1\)), and let \(\pi_j\) denote the proportion of the observations that came from model \(M_j\) (\(j=0,1\)); that is, \(\pi_j = \Pr(Z_i=j)\). So in the disease screening example, \(\pi_1\) would be the proportion of screened individuals who have the disease.

Bayes Theorem says that \[\Pr(Z_i = 1 | x_i) = \Pr(x_i | Z_i = 1) \Pr(Z_i=1)/ \Pr(x_i).\] Also, \[\Pr(x_i) = \Pr(x_i | Z_i=0)\Pr(Z_i=0) + \Pr(x_i | Z_i=1)\Pr(Z_i=1).\]

Putting these together, substituting \(\pi_j\) for \(\Pr(Z_i=j)\), and dividing numerator and denominator by \(\Pr(x_i | Z_i=0)\) yields:

\[\Pr(Z_i = 1 | x_i) = \pi_1 LR_i / (\pi_0 + \pi_1 LR_i)\]

where \(LR_i\) denotes the likelihood ratio for \(M_1\) vs \(M_0\) computed for observation \(x_i\).
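For reference, here is this formula as a small R helper (the function name `posterior_prob` is ours, for illustration; it is not from the earlier vignettes):

```r
# Posterior probability that observation x_i came from M_1, given its
# likelihood ratio LR (for M_1 vs M_0) and the prior proportion pi1 = Pr(Z_i = 1).
posterior_prob <- function(LR, pi1) {
  pi0 <- 1 - pi1
  pi1 * LR / (pi0 + pi1 * LR)
}
```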

Example

Suppose that only 5% of screened people have the disease. Then an LR of 14 in favor of disease yields: \[\Pr(Z_i=1 | x_i) = 0.05 \times 14/(0.95 + 0.05 \times 14) \approx 0.424.\]

In contrast, if 50% of screened people have the disease then the LR of 14 yields \[\Pr(Z_i=1 | x_i) = 0.5 \times 14/(0.5 + 0.5 \times 14) \approx 0.933.\]

Thus in the first case, of patients with \(LR=14\), less than half would actually have the disease. In the second case, of patients with \(LR=14\), more than 90% would have the disease!
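We can check both numbers with the `posterior_prob` function sketched above:

```r
posterior_prob(LR = 14, pi1 = 0.05)  # 0.4242424
posterior_prob(LR = 14, pi1 = 0.5)   # 0.9333333
```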

A useful formula

There is another way of laying out this kind of calculation, which may be slightly easier to interpret and remember, and also has the advantage of holding even when more than two models are under consideration. From Bayes theorem we have

\[\Pr(Z_i = 1 | x_i) = \Pr(x_i | Z_i = 1) \Pr(Z_i=1)/ \Pr(x_i).\]

and

\[\Pr(Z_i = 0 | x_i) = \Pr(x_i | Z_i = 0) \Pr(Z_i=0)/ \Pr(x_i).\]

Taking the ratio of these gives \[\Pr(Z_i = 1 | x_i)/\Pr(Z_i = 0 | x_i) = [\Pr(Z_i=1)/\Pr(Z_i=0)] \times [\Pr(x_i | Z_i = 1)/\Pr(x_i | Z_i = 0)].\]

This formula can be conveniently stated in words, using the notion of “odds”, as follows: \[\text{Posterior Odds} = \text{Prior Odds} \times \text{LR}\] or, recalling that the LR is sometimes referred to as the Bayes Factor (BF), \[\text{Posterior Odds} = \text{Prior Odds} \times \text{BF}.\]

Note that the “Odds” of an event \(E_1\) vs an event \(E_2\) means the ratio of their probabilities. So \(\Pr(Z_i=1)/\Pr(Z_i=0)\) is the “Odds” of \(Z_i=1\) vs \(Z_i=0\). It is referred to as the “Prior Odds”, because it is the odds prior to seeing the data \(x\). Similarly the Posterior Odds refers to the Odds of \(Z_i=1\) vs \(Z_i=0\) “posterior to” (after) seeing the data \(x\).

Example

Suppose that only 5% of screened people have the disease. Then the prior odds for disease are \(0.05/0.95 = 1/19\). And an LR of 14 in favor of disease yields posterior odds of \(14/19\) (or “14 to 19”).

Suppose that 50% of screened people have the disease. Then the prior odds for disease are 1. And an LR of 14 in favor of disease yields posterior odds of 14 (or “14 to 1”).

In cases where there are only two possibilities, as here, the posterior odds determine the posterior probabilities: if the posterior odds are \(PO\), then \(\Pr(Z_i=1 | x_i) = PO/(1+PO)\). For example, posterior odds of \(14/19\) give \(\Pr(Z_i=1 | x_i) = 14/33 \approx 0.424\), matching the calculation above.
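A quick R check that the odds route reproduces the earlier posterior probability:

```r
# Posterior odds = prior odds x LR; convert odds to probability via PO / (1 + PO).
prior_odds <- 0.05 / 0.95      # 1/19
post_odds  <- prior_odds * 14  # 14/19
post_odds / (1 + post_odds)    # 0.4242424, matching the earlier calculation
```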

Exercise

  1. Write a function to simulate data for the medical screening example above. The function should take as input the proportion of individuals who have the disease and the number of individuals to simulate. It should output a table, one row per individual, with two columns: the first column (x) is the protein concentration for that individual, and the second column (z) is an indicator of disease status (1=disease, 0=normal). (A starting sketch is given after this list.)

  2. Write a function to compute the likelihood ratio in the medical screening example.

  3. Use the above functions to answer the following question by simulation. Suppose we screen a population of individuals, 20% of whom are diseased, and compute the LR for each individual screened. Among individuals with an LR “near” \(c\), what proportion are truly diseased? Denoting this proportion \(q_D(c)\), make a plot of \(\log_{10}(c)\) [\(x\) axis] vs \(q_D(c)\) [\(y\) axis], with \(c\) varying from \(1/10\) to \(10\), say (so \(\log_{10}(c)\) varies from \(-1\) to \(1\)), or a wider range if you like (the wider the range, the larger the simulation study you will need to get reliable results).

  4. Use the computations introduced in this vignette to compute the theoretical value for \(q_D(c)\), and plot it on the same graph as your simulation results to demonstrate that your simulations match the theory. [You should see good agreement, provided your simulation is large enough.]

  5. Repeat the above, but in the case where only 2% of the screened population are diseased.
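If you want a starting point for Exercise 1, here is a minimal sketch. The two Gamma distributions below are placeholders we chose for illustration; substitute the protein-concentration distributions from the medical screening vignette.

```r
# Simulate n screened individuals: z is disease status (1 = disease, 0 = normal),
# x is the protein concentration. The Gamma densities below are placeholder
# assumptions, not the distributions from the original screening example.
simulate_screen <- function(p_disease, n) {
  z <- rbinom(n, size = 1, prob = p_disease)
  x <- ifelse(z == 1,
              rgamma(n, shape = 2, scale = 1),    # diseased (placeholder)
              rgamma(n, shape = 0.5, scale = 1))  # normal (placeholder)
  data.frame(x = x, z = z)
}

# LR for "diseased" vs "normal" at each concentration (same placeholder densities).
compute_LR <- function(x) {
  dgamma(x, shape = 2, scale = 1) / dgamma(x, shape = 0.5, scale = 1)
}

# Example usage for Exercise 3:
# sim <- simulate_screen(p_disease = 0.2, n = 1e5)
# LR  <- compute_LR(sim$x)
```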

Session information

sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 14.04.5 LTS

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] rmarkdown_1.1

loaded via a namespace (and not attached):
 [1] magrittr_1.5    assertthat_0.1  formatR_1.4     htmltools_0.3.5
 [5] tools_3.3.2     yaml_2.1.13     tibble_1.2      Rcpp_0.12.7    
 [9] stringi_1.1.1   knitr_1.14      stringr_1.0.0   digest_0.6.9   
[13] gtools_3.5.0    evaluate_0.9   

This site was created with R Markdown