Now that we have our pairwise fits, we need to determine the remaining candidate latent classes. These can be found using the path enumeration and pruning algorithm. This is a graph-based algorithm that can require a lot of memory (especially when the dimension of the data is large). Given sufficient memory, however, it runs very quickly: in our experience, in under 5 minutes.
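To build intuition for what the pruning accomplishes, below is a minimal conceptual sketch: enumerate every sign pattern in {-1, 0, 1}^D and discard any pattern containing a coordinate pair that the corresponding pairwise fit does not support. The allowed_pairs object here is hypothetical and chosen purely for illustration; the actual algorithm in get_reduced_classes() performs this pruning on a graph via the underlying C software (using the LEMON graph format) rather than by brute force.

# Conceptual sketch only: not the graph-based implementation used by
# get_reduced_classes(). Each pairwise fit is assumed to yield a set of
# supported sign pairs; the allowed_pairs below are hypothetical.
allowed_pairs <- list(
  "1_2" = rbind(c(1, 0), c(0, 0), c(0, -1), c(-1, 0), c(-1, -1)),
  "1_3" = rbind(c(1, 1), c(0, 0), c(0, -1), c(-1, 1)),
  "2_3" = rbind(c(0, 1), c(0, 0), c(-1, 0), c(-1, 1))
)
D_toy <- 3

# Enumerate all 3^D sign patterns in {-1, 0, 1}^D
candidates <- as.matrix(expand.grid(rep(list(c(-1, 0, 1)), D_toy)))

# Keep a pattern only if every coordinate pair is supported by the
# corresponding pairwise fit
pair_ok <- function(z, i, j) {
  key <- paste(i, j, sep = "_")
  any(apply(allowed_pairs[[key]], 1, function(p) all(p == z[c(i, j)])))
}
keep <- apply(candidates, 1, function(z) {
  all(combn(D_toy, 2, function(ij) pair_ok(z, ij[1], ij[2])))
})
candidates[keep, , drop = FALSE]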
First, load in the previously obtained pairwise fits.
data("fits")
Then, we can obtain a reduced list of candidate latent classes with get_reduced_classes(). This function has 3 arguments:
- the pairwise fits
- the dimension of the data
- the name of an output “LEMON graph format” file (here, called lgf.txt). This file is not meant to be edited by the user; it is produced as input for the underlying C software.
# This finds the dimension of the data directly from the pairwise fits
D <- as.numeric(strsplit(tail(names(fits), 1), "_")[[1]][2])

# Get the list of candidate latent classes
red_class <- get_reduced_classes(fits, D, "output/lgf.txt", split_in_two = FALSE)
Writing LGF file...done!
Finding latent classes...done!
# write the output to a text file
readr::write_tsv(data.frame(red_class), file = "output/red_class.txt", col_names = FALSE)
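As an optional check, the saved file can be read back in to confirm that its contents match red_class:

# Optional: read the saved classes back in to verify the file contents
readr::read_tsv("output/red_class.txt", col_names = FALSE)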
Each row of red_class corresponds to a candidate latent class across the 3 dimensions. The remaining candidate latent classes are as follows:
red_class
[,1] [,2] [,3]
[1,] 1 0 1
[2,] 1 0 0
[3,] 1 0 -1
[4,] 0 1 -1
[5,] 0 0 1
[6,] 0 0 0
[7,] 0 0 -1
[8,] 0 -1 1
[9,] 0 -1 0
[10,] -1 0 1
[11,] -1 0 0
[12,] -1 -1 1
[13,] -1 -1 0
which is a subset of the 3^D = 27 candidate latent classes. Next, we need to determine the hyperparameters for the priors in our Bayesian Gaussian mixture model. The most important of these are the hyperparameters for the prior on the mixing weights. Some classes (especially when the dimension is larger) will have a mixing weight small enough to result in a degenerate mixing distribution, and such classes can be further pruned from the model. This is discussed in the next step.
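Before moving on, a quick sanity check comparing the size of the reduced set against the full enumeration:

# 13 remaining classes out of the 3^D = 27 possible sign patterns
nrow(red_class)
3^D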