Last updated: 2021-04-17
Checks: 2 passed, 0 failed
Knit directory: lieb/
This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version a493056. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .DS_Store
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/.DS_Store
Ignored: analysis/.Rhistory
Ignored: analysis/pairwise_fitting_cache/
Ignored: analysis/preprocessing_cache/
Ignored: analysis/running_mcmc_cache/
Ignored: data/.DS_Store
Ignored: data/.Rhistory
Ignored: output/.DS_Store
Unstaged changes:
Modified: analysis/candidate_latent_classes.Rmd
Modified: analysis/downstream.Rmd
Modified: analysis/preprocessing.Rmd
Modified: analysis/priors.Rmd
Modified: analysis/running_mcmc.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/index.Rmd) and HTML (docs/index.html) files. If you've configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
html | f3413aa | hillarykoch | 2021-04-17 | Build site. |
html | dd0b829 | hillarykoch | 2020-10-15 | replace cache |
html | ed24c1b | hillarykoch | 2020-05-08 | update toc |
html | 1c43a7f | hillarykoch | 2020-04-29 | rebuilds the html because I forgot to |
Rmd | db94e3b | hillarykoch | 2020-04-29 | update landing page and remove excess output files |
html | 16778b1 | hillarykoch | 2020-04-28 | add some downstream analysis |
Rmd | bf87fa0 | hillarykoch | 2020-04-28 | add some downstream analysis |
html | cc98d2b | hillarykoch | 2020-04-24 | replace cache and update to describe flex_mu |
Rmd | 828e725 | hillarykoch | 2020-04-23 | add a preprocessing section |
html | 828e725 | hillarykoch | 2020-04-23 | add a preprocessing section |
html | c5fd6fc | hillarykoch | 2020-04-23 | change nav bar to accomodate a menu |
html | 5d5c3dd | hillarykoch | 2020-03-07 | Build site. |
html | 7c42345 | hillarykoch | 2020-02-20 | Build site. |
html | 2a5b7c0 | hillarykoch | 2020-02-20 | Build site. |
html | 3a9bf9d | hillarykoch | 2020-02-20 | Build site. |
html | 72e6fec | hillarykoch | 2020-02-20 | Build site. |
html | 862bc02 | hillarykoch | 2020-02-07 | big updates for mcmc processors |
html | 8b3e556 | hillarykoch | 2019-12-03 | Build site. |
html | 39383ac | hillarykoch | 2019-12-03 | Build site. |
Rmd | 645d408 | hillarykoch | 2019-12-03 | workflowr::wflow_publish(files = "*") |
Rmd | e467a51 | hillarykoch | 2019-08-16 | different shiny location |
html | e467a51 | hillarykoch | 2019-08-16 | different shiny location |
html | d58e1a6 | hillarykoch | 2019-08-16 | resource data |
Rmd | 38fb1c0 | hillarykoch | 2019-08-16 | edit shiny |
html | 38fb1c0 | hillarykoch | 2019-08-16 | edit shiny |
Rmd | 550fb1e | hillarykoch | 2019-08-16 | add shiny |
html | 550fb1e | hillarykoch | 2019-08-16 | add shiny |
html | 228123d | hillarykoch | 2019-07-17 | update up to obtaining the hyperparameters |
html | c1dc0c1 | hillarykoch | 2019-07-17 | update up to obtaining the hyperparameters |
html | 674120c | hillarykoch | 2019-07-17 | update about page |
html | e67a3a2 | hillarykoch | 2019-07-17 | update about page |
html | 2faf446 | hillarykoch | 2019-07-17 | update about page |
html | 2177dfa | hillarykoch | 2019-07-17 | update about page |
Rmd | 8b30694 | hillarykoch | 2019-07-17 | update about page |
html | 8b30694 | hillarykoch | 2019-07-17 | update about page |
Rmd | e36b267 | hillarykoch | 2019-07-17 | update about page |
html | e36b267 | hillarykoch | 2019-07-17 | update about page |
Rmd | a991668 | hillarykoch | 2019-07-17 | update about page |
html | a991668 | hillarykoch | 2019-07-17 | update about page |
html | a36d893 | hillarykoch | 2019-07-17 | update about page |
html | e8e54b7 | hillarykoch | 2019-07-17 | update about page |
Rmd | f47c013 | hillarykoch | 2019-07-17 | update about page |
html | f47c013 | hillarykoch | 2019-07-17 | update about page |
Rmd | 50cf23e | hillarykoch | 2019-07-17 | make skeleton |
html | 50cf23e | hillarykoch | 2019-07-17 | make skeleton |
html | a519159 | hillarykoch | 2019-07-17 | Build site. |
Rmd | b81d6ff | hillarykoch | 2019-07-17 | Start workflowr project. |
The Composite LIkelihood eMpirical Bayes (CLIMB) methodology was designed for Gaussian mixture models in which the behavior of each mixture component is dictated by a latent class $h_m$ of the form $h_m = (h_m[1], h_m[2], \ldots, h_m[D])$, where $h_m[d] \in \{-1, 0, 1\}$ for $d \in \{1, \ldots, D\}$ and $D$ is the dimension of the data. For even moderate dimensions, this model becomes computationally intractable to fit directly because the number of candidate latent classes is $3^D$. The CLIMB procedure circumvents this computational challenge by estimating which latent classes are supported by the data, then fitting the model in an empirical Bayesian hierarchical framework with Markov chain Monte Carlo (MCMC).
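To see why direct fitting quickly becomes infeasible, here is a minimal base-R sketch (not part of the CLIMB package) that enumerates every candidate latent class for a small $D$ and counts how quickly the class set grows:

```r
# Enumerate all candidate latent classes h = (h[1], ..., h[D]) with
# entries in {-1, 0, 1}. Base R only; for illustration, not analysis.
enumerate_classes <- function(D) {
  as.matrix(expand.grid(rep(list(c(-1L, 0L, 1L)), D)))
}

# For D = 3 there are already 3^3 = 27 candidate classes...
head(enumerate_classes(3))

# ...and the count grows exponentially with the dimension D.
sapply(1:10, function(D) nrow(enumerate_classes(D)))
#> 3  9  27  81  243  729  2187  6561  19683  59049
```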
CLIMB occurs in 4 major steps:

1. Pairwise fitting of the composite likelihood for each pair of dimensions
2. Identifying the candidate latent classes supported by the data
3. Computing prior hyperparameters for the empirical Bayesian model
4. Running the MCMC on the full model
Each step of a CLIMB analysis requires some attention because of its memory and processor requirements. Steps 1 and 3 require little memory but can be parallelized for faster performance. Meanwhile, Steps 2 and 4 cannot be parallelized, and Step 2 in particular may require high memory in some cases. To run a CLIMB analysis efficiently, we recommend splitting the steps up accordingly, as in the sketch below.
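As a rough illustration of that split, the following sketch distributes an embarrassingly parallel step (such as the pairwise fits of Step 1) across cores with the base parallel package. `fit_one_pair()` is a hypothetical stand-in for the fitting function used in your analysis, not a CLIMB function:

```r
library(parallel)

# Hypothetical per-pair fitting function standing in for Step 1;
# replace its body with the actual pairwise fit used in your analysis.
fit_one_pair <- function(pair, data) {
  cor(data[, pair[1]], data[, pair[2]])  # placeholder computation
}

D <- 5
data <- matrix(rnorm(1000 * D), ncol = D)  # toy data
pairs <- combn(D, 2, simplify = FALSE)     # all D * (D - 1) / 2 dimension pairs

# Steps 1 and 3 are embarrassingly parallel: one core per pair works well.
# mclapply() forks, so on Windows set mc.cores = 1 or use parLapply() instead.
ncores <- max(1L, detectCores() - 1L)
pairwise_fits <- mclapply(pairs, fit_one_pair, data = data, mc.cores = ncores)

# Steps 2 and 4 run on a single core; for those jobs, request more
# memory rather than more processors.
```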
Each step of CLIMB has its own page in the "Four-step procedure" menu, and we walk through a complete analysis of a simulated dataset across these 4 pages. Though the simple analysis provided here can be run on a personal laptop, real genomic datasets will typically require access to a computing cluster. We assume the user knows how to request multiple cores for parallel jobs or more memory for high-memory tasks. Navigate to any step's page to learn more.
CLIMB's output is rich for exploration. We demonstrate how to reproduce some of the results and figures from the CLIMB manuscript here.