Last updated: 2019-07-17
Checks: 2 passed, 0 failed
Knit directory: lieb/
This reproducible R Markdown analysis was created with workflowr (version 1.3.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use `wflow_publish` or `wflow_git_commit`). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .DS_Store
Ignored: .Rhistory
Ignored: analysis/.Rhistory
Ignored: analysis/pairwise_fitting_cache/
Ignored: docs/.DS_Store
Ignored: output/.DS_Store
Untracked files:
Untracked: docs/figure/
Unstaged changes:
Modified: analysis/priors.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see `?wflow_git_remote`), click on the hyperlinks in the table below to view them.
File | Version | Author | Date | Message |
---|---|---|---|---|
html | 2faf446 | hillarykoch | 2019-07-17 | update about page |
html | 2177dfa | hillarykoch | 2019-07-17 | update about page |
Rmd | 8b30694 | hillarykoch | 2019-07-17 | update about page |
html | 8b30694 | hillarykoch | 2019-07-17 | update about page |
Rmd | e36b267 | hillarykoch | 2019-07-17 | update about page |
html | e36b267 | hillarykoch | 2019-07-17 | update about page |
Rmd | a991668 | hillarykoch | 2019-07-17 | update about page |
html | a991668 | hillarykoch | 2019-07-17 | update about page |
html | a36d893 | hillarykoch | 2019-07-17 | update about page |
html | e8e54b7 | hillarykoch | 2019-07-17 | update about page |
Rmd | f47c013 | hillarykoch | 2019-07-17 | update about page |
html | f47c013 | hillarykoch | 2019-07-17 | update about page |
Rmd | 50cf23e | hillarykoch | 2019-07-17 | make skeleton |
html | 50cf23e | hillarykoch | 2019-07-17 | make skeleton |
html | a519159 | hillarykoch | 2019-07-17 | Build site. |
Rmd | b81d6ff | hillarykoch | 2019-07-17 | Start workflowr project. |
The Limited Information Empirical Bayes (LIEB) methodology was designed for Gaussian mixture models in which the behavior of each mixture component is dictated by a latent class h_m of the form h_m = (h[1], h[2], …, h[D]), where h[d] ∈ {−1, 0, 1} for d ∈ {1, …, D} and D is the dimension of the data. For even moderate dimensions, this model becomes computationally intractable to fit directly because the number of candidate latent classes is 3^D. The LIEB procedure rigorously circumvents this computational challenge, identifying the likely latent classes for the user and fitting the model in an empirical Bayesian hierarchical framework with Markov chain Monte Carlo (MCMC).
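To make the combinatorial explosion concrete, here is a small illustrative sketch (not part of LIEB itself) that enumerates every candidate latent class h = (h[1], …, h[D]) with h[d] ∈ {−1, 0, 1} for a small D, and shows how quickly 3^D grows:

```r
# Illustrative only: enumerate all candidate latent classes for small D.
D <- 3
classes <- expand.grid(rep(list(c(-1L, 0L, 1L)), D))
nrow(classes)   # 3^D = 27 candidate classes when D = 3

# The count grows exponentially with the dimension; for genomic-scale D,
# direct enumeration (and hence direct fitting) is intractable:
3^10   # already 59,049 classes
3^20   # roughly 3.5 billion classes
```

This is exactly why a naive mixture fit over all 3^D components is infeasible, and why LIEB instead narrows down the likely classes before fitting.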
LIEB proceeds in 4 major steps:
This analysis can be tricky: some parts (e.g., Steps 1 and 3) require little memory but can be parallelized, while Steps 2 and 4 cannot be parallelized, and Step 2 in particular can require substantial memory. To run a LIEB analysis efficiently, we recommend splitting the steps up accordingly.
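As a hedged sketch of what "parallelizing the low-memory steps" could look like (assuming, hypothetically, that Step 1 consists of independent pairwise fits, one per pair of dimensions), base R's `parallel` package can distribute such jobs across the cores requested from a cluster:

```r
library(parallel)

# Hypothetical setup: one independent job per pair of dimensions.
# fit_pair() is a placeholder standing in for a real pairwise fit.
D <- 4
pairs <- combn(D, 2, simplify = FALSE)   # choose(4, 2) = 6 pairs
fit_pair <- function(p) sum(p)           # placeholder computation

cl <- makeCluster(2)                     # use 2 cores (e.g., as requested on a cluster)
fits <- parLapply(cl, pairs, fit_pair)   # run the independent jobs in parallel
stopCluster(cl)

length(fits)   # one result per pair: choose(D, 2) = 6
```

The non-parallelizable steps would instead be submitted as single-core jobs, with Step 2 allocated extra memory.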
Each step of LIEB has its own page, and we walk through a complete analysis of a simulated dataset across these 4 pages. Though the simple analysis provided here can be run on a personal laptop, real genomic datasets will in practice require access to a computing cluster; we assume the user knows how to request multiple cores for parallel jobs or additional memory for high-memory tasks. Navigate to any step’s page to learn more.