Last updated: 2020-02-20

Checks: 5 passed, 2 warnings

Knit directory: lieb/

This reproducible R Markdown analysis was created with workflowr (version 1.6.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20190717) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Recording the operating system, R version, and package versions is critical for reproducibility. To record the session information, add sessioninfo: "sessionInfo()" to _workflowr.yml. Alternatively, you could use devtools::session_info() or sessioninfo::session_info(). Lastly, you can manually add a code chunk to this file to run any one of these commands and then disable the automatic insertion by changing the workflowr setting to sessioninfo: "".

The following chunks had caches available:
  • unnamed-chunk-2
  • unnamed-chunk-3

To ensure reproducibility of the results, delete the cache directory pairwise_fitting_cache and re-run the analysis. To have workflowr automatically delete the cache directory prior to building the file, set delete_cache = TRUE when running wflow_build() or wflow_publish().

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .DS_Store
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    analysis/.DS_Store
    Ignored:    analysis/.Rhistory
    Ignored:    analysis/pairwise_fitting_cache/
    Ignored:    analysis/running_mcmc_cache/
    Ignored:    output/.DS_Store
    Ignored:    output/chip_shiny/.DS_Store
    Ignored:    output/chip_shiny/Data/.DS_Store
    Ignored:    output/vision_shiny/.DS_Store
    Ignored:    output/vision_shiny/.Rhistory
    Ignored:    output/vision_shiny/Data/.DS_Store

Untracked files:
    Untracked:  analysis/downstream.Rmd

Unstaged changes:
    Modified:   analysis/_site.yml

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view them.

File Version Author Date Message
Rmd 862bc02 hillarykoch 2020-02-07 big updates for mcmc processors
html 862bc02 hillarykoch 2020-02-07 big updates for mcmc processors
html 8b3e556 hillarykoch 2019-12-03 Build site.
html 39383ac hillarykoch 2019-12-03 Build site.
Rmd 645d408 hillarykoch 2019-12-03 workflowr::wflow_publish(files = "*")
html e467a51 hillarykoch 2019-08-16 different shiny location
html d58e1a6 hillarykoch 2019-08-16 resource data
html 38fb1c0 hillarykoch 2019-08-16 edit shiny
html 228123d hillarykoch 2019-07-17 update up to obtaining the hyperparameters
html c1dc0c1 hillarykoch 2019-07-17 update up to obtaining the hyperparameters
html 674120c hillarykoch 2019-07-17 update about page
html e67a3a2 hillarykoch 2019-07-17 update about page
Rmd 2faf446 hillarykoch 2019-07-17 update about page
html 2faf446 hillarykoch 2019-07-17 update about page
html da65141 hillarykoch 2019-07-17 update about page
Rmd 8278253 hillarykoch 2019-07-17 update about page
html 2177dfa hillarykoch 2019-07-17 update about page
Rmd 8b30694 hillarykoch 2019-07-17 update about page
html 8b30694 hillarykoch 2019-07-17 update about page
Rmd e36b267 hillarykoch 2019-07-17 update about page
html e36b267 hillarykoch 2019-07-17 update about page
html a991668 hillarykoch 2019-07-17 update about page
html a36d893 hillarykoch 2019-07-17 update about page
html e8e54b7 hillarykoch 2019-07-17 update about page
html f47c013 hillarykoch 2019-07-17 update about page
Rmd 50cf23e hillarykoch 2019-07-17 make skeleton
html 50cf23e hillarykoch 2019-07-17 make skeleton

Special considerations: this portion is highly parallelizable.

Here, we describe how to execute the first step of CLIMB: pairwise fitting (a composite likelihood method).

First, load the package and the simulated dataset. This toy dataset has \(n=1500\) observations across \(D=3\) conditions (that is, dimensions). Thus, we need to fit \(\binom{D}{2}=3\) pairwise models.

# load the package
library(CLIMB)

# load the toy data
data("sim")

The fitting of each pairwise model can be done in parallel, which saves substantial computing time when the dimension is large. This can be done (in parallel or sequentially) with the function get_pairwise_fits(). Note that the input data should be \(z\)-scores (or data arising from some other scoring mechanism, transformed appropriately to \(z\)-scores).
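If your upstream analysis produces p-values rather than \(z\)-scores, they need to be transformed before fitting. A minimal sketch of one such transformation, assuming one-sided p-values (the vector p below is hypothetical example data, not part of the CLIMB package):

```r
# Hypothetical one-sided p-values from some upstream per-observation test
p <- c(0.001, 0.04, 0.5, 0.93)

# Transform to upper-tail z-scores via the standard normal quantile function;
# small p-values map to large positive z-scores
z <- qnorm(p, lower.tail = FALSE)
```

The appropriate transformation depends on how your scores were generated (one-sided vs. two-sided tests, sign conventions), so treat this as a template rather than a prescription.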

get_pairwise_fits() runs the pairwise analysis at the default settings used in the CLIMB manuscript. The user can select a few settings with this function:

  1. nlambda: how many tuning parameters to try (defaults to 10)

  2. parallel: logical indicating whether or not to do the analysis in parallel

  3. ncores: if in parallel, how many cores to use (defaults to 10)

  4. bound: a lower bound on the estimated non-null means (defaults to zero, and must be non-negative)

With all of this in place, one can obtain the pairwise fits as follows:

fits <- get_pairwise_fits(z = sim$data, parallel = FALSE)

Calling names(fits) tells us which pair of dimensions each fit belongs to.

names(fits)
[1] "1_2" "1_3" "2_3"

Each fit contains additional information, including the length-2 association patterns estimated for the given pair of dimensions, the posterior probability of each observation belonging to each of these classes, and the corresponding estimated means and covariances.
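The posterior probabilities can be used, for example, to hard-classify each observation into its most probable association pattern. A minimal sketch using a small stand-in matrix (the matrix post and its pattern labels here are hypothetical; the actual element names and layout inside a CLIMB fit object may differ):

```r
# Hypothetical n x K matrix of posterior probabilities:
# rows are observations, columns are candidate association patterns
post <- matrix(c(0.9, 0.1,
                 0.2, 0.8,
                 0.6, 0.4), ncol = 2, byrow = TRUE)
colnames(post) <- c("(0,0)", "(1,1)")   # hypothetical pattern labels

# Assign each observation to the pattern with the largest posterior;
# max.col() returns the index of the row-wise maximum
assignment <- colnames(post)[max.col(post)]
```

Inspecting a real fit with str(fits[["1_2"]]) is a good way to see the actual structure before writing code like this against it.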

Finally, save this output before moving on to the next step, as it is needed for many parts of the downstream analyses.

save(fits, file = "pwfits.Rdata")
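In a later session, load() will recreate the fits object from this file. A self-contained sketch of the round trip, using a stand-in list in place of the real fits and a temporary file path (both hypothetical, for illustration only):

```r
# Stand-in for the real output of get_pairwise_fits()
fits <- list("1_2" = "fit placeholder")
path <- file.path(tempdir(), "pwfits.Rdata")
save(fits, file = path)

rm(fits)      # simulate starting a fresh session
load(path)    # recreates `fits` in the global environment
names(fits)
```

Note that load() restores the object under its original name, so downstream scripts can refer to fits directly after loading.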