Last updated: 2019-07-17

Checks: 2 passed, 0 failed

Knit directory: lieb/

This reproducible R Markdown analysis was created with workflowr (version 1.3.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .DS_Store
    Ignored:    .Rhistory
    Ignored:    analysis/.Rhistory
    Ignored:    analysis/pairwise_fitting_cache/
    Ignored:    docs/.DS_Store
    Ignored:    output/.DS_Store

Unstaged changes:
    Modified:   analysis/_site.yml
    Modified:   analysis/candidate_latent_classes.Rmd
    Modified:   analysis/pairwise_fitting.Rmd
    Modified:   analysis/priors.Rmd

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view them.

| File | Version | Author      | Date       | Message                 |
|------|---------|-------------|------------|-------------------------|
| html | 2177dfa | hillarykoch | 2019-07-17 | update about page       |
| Rmd  | 8b30694 | hillarykoch | 2019-07-17 | update about page       |
| html | 8b30694 | hillarykoch | 2019-07-17 | update about page       |
| Rmd  | e36b267 | hillarykoch | 2019-07-17 | update about page       |
| html | e36b267 | hillarykoch | 2019-07-17 | update about page       |
| Rmd  | a991668 | hillarykoch | 2019-07-17 | update about page       |
| html | a991668 | hillarykoch | 2019-07-17 | update about page       |
| html | a36d893 | hillarykoch | 2019-07-17 | update about page       |
| html | e8e54b7 | hillarykoch | 2019-07-17 | update about page       |
| Rmd  | f47c013 | hillarykoch | 2019-07-17 | update about page       |
| html | f47c013 | hillarykoch | 2019-07-17 | update about page       |
| Rmd  | 50cf23e | hillarykoch | 2019-07-17 | make skeleton           |
| html | 50cf23e | hillarykoch | 2019-07-17 | make skeleton           |
| html | a519159 | hillarykoch | 2019-07-17 | Build site.             |
| Rmd  | b81d6ff | hillarykoch | 2019-07-17 | Start workflowr project. |

The Limited Information Empirical Bayes (LIEB) methodology was designed for Gaussian mixture models in which the behavior of each mixture component is dictated by a latent class \(h_m\) of the form \(h_m=(h_{[1]},\,h_{[2]},\ldots,h_{[D]})\), where \(h_{[d]} \in\{-1,\,0,\,1\}\) for \(d\in\{1,\ldots,D\}\) and \(D\) is the dimension of the data. For even moderate dimensions, this model becomes computationally intractable to fit directly because the number of candidate latent classes is \(3^D\). The LIEB procedure rigorously circumvents this computational challenge, identifying the likely latent classes for the user and fitting the model in an empirical Bayesian hierarchical framework with Markov chain Monte Carlo (MCMC).
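To see why direct fitting becomes intractable, all \(3^D\) candidate latent classes can be enumerated explicitly for a small \(D\); this is a minimal illustration in base R, not part of the LIEB package itself:

```r
# Enumerate all 3^D candidate latent classes for a small D.
# Each row is one class h = (h[1], ..., h[D]) with entries in {-1, 0, 1}.
D <- 3
classes <- expand.grid(rep(list(c(-1, 0, 1)), D))
nrow(classes)  # 3^3 = 27 candidate classes

# The class count grows exponentially in D:
sapply(c(5, 10, 15), function(d) 3^d)
```

Already at \(D = 15\) there are over 14 million candidate classes, which is why LIEB prunes the candidate list before fitting the full model.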

LIEB proceeds in 4 major steps:

  1. Pairwise fitting of the Gaussian mixture over pairs of dimensions
  2. Enumerating candidate latent classes based on the output of the pairwise fits
  3. Pruning the candidate list of latent classes based on computed prior probabilities of each class’s mixing weight
  4. Fitting the final model using MCMC
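The four steps above could be orchestrated roughly as follows. This is only an illustrative sketch: the function names (`pairwise_fit`, `enumerate_classes`, `prune_classes`, `fit_mcmc`) are hypothetical placeholders, not the actual LIEB API.

```r
# Hypothetical sketch of the LIEB workflow; all four fitting functions
# below are placeholders for illustration, not real package functions.
# dat: an n-by-D matrix of observations.

# Step 1: fit a Gaussian mixture over each pair of dimensions (parallelizable)
pairs <- combn(ncol(dat), 2, simplify = FALSE)
pair_fits <- lapply(pairs, function(p) pairwise_fit(dat[, p]))

# Step 2: enumerate candidate latent classes consistent with the pairwise fits
candidates <- enumerate_classes(pair_fits)

# Step 3: prune candidates using prior probabilities of each class's
# mixing weight (parallelizable)
pruned <- prune_classes(candidates, dat)

# Step 4: fit the final empirical Bayes model by MCMC
final_fit <- fit_mcmc(dat, pruned)
```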

This analysis can be tricky because the steps have different computational profiles: Steps 1 and 3 require little memory but can be parallelized, while Steps 2 and 4 cannot be parallelized, and Step 2 in particular can require a lot of memory. To run a LIEB analysis efficiently, we recommend splitting the steps into separate jobs accordingly.
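Because the pairwise fits in Step 1 are mutually independent, they can be distributed across cores with the `parallel` package. In this minimal sketch, a simple covariance estimate stands in for the actual pairwise mixture fit, which is a placeholder assumption; note that `mclapply` forks processes and so runs serially on Windows:

```r
library(parallel)

set.seed(1)
D <- 4
dat <- matrix(rnorm(200 * D), ncol = D)

# All choose(D, 2) pairs of dimensions
pairs <- combn(D, 2, simplify = FALSE)

# Placeholder per-pair computation standing in for the pairwise mixture fit;
# each job is independent, so the jobs can run on separate cores.
pair_fits <- mclapply(pairs, function(p) cov(dat[, p]), mc.cores = 2)
length(pair_fits)  # choose(4, 2) = 6 pairwise fits
```

The same pattern applies to Step 3, with one job per candidate class; on a cluster, each job can instead be submitted as a separate low-memory task.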

Each step of LIEB has its own page, and we walk through a complete analysis of a simulated dataset across these 4 pages. Though the simple analysis provided here can be run on a personal laptop, real genomic datasets will in practice require access to a computing cluster. We assume the user knows how to request multiple cores for parallel jobs or more memory for high-memory tasks. Navigate to any step's page to learn more.