Last updated: 2021-07-14
Checks: 7 passed, 0 failed
Knit directory: implementGMSinCassava/
This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20210504)
was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
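For example, re-setting the same seed before a randomized step reproduces it exactly:

```r
# same seed, same result from a random subsampling step
set.seed(20210504)
x1 <- sample(1:10)
set.seed(20210504)
x2 <- sample(1:10)
identical(x1, x2)  # TRUE
```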
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 772750a. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .DS_Store
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/accuracies.png
Ignored: analysis/fig2.png
Ignored: analysis/fig3.png
Ignored: analysis/fig4.png
Ignored: code/.DS_Store
Ignored: data/.DS_Store
Untracked files:
Untracked: accuracies.png
Untracked: analysis/docs/
Untracked: analysis/inputsForSimulation.Rmd
Untracked: analysis/speedUpPredCrossVar.Rmd
Untracked: code/AlphaAssign-Python/
Untracked: code/calcGameticLD.cpp
Untracked: code/col_sums.cpp
Untracked: code/convertDart2vcf.R
Untracked: code/helloworld.cpp
Untracked: code/imputationFunctions.R
Untracked: code/matmult.cpp
Untracked: code/misc.cpp
Untracked: code/test.cpp
Untracked: data/CassavaGeneticMap/
Untracked: data/DatabaseDownload_2021May04/
Untracked: data/GBSdataMasterList_31818.csv
Untracked: data/IITA_GBStoPhenoMaster_33018.csv
Untracked: data/NRCRI_GBStoPhenoMaster_40318.csv
Untracked: data/PedigreeGeneticGainCycleTime_aafolabi_01122020.xls
Untracked: data/blups_forCrossVal.rds
Untracked: data/chr1_RefPanelAndGSprogeny_ReadyForGP_72719.fam
Untracked: data/dosages_IITA_filtered_2021May13.rds
Untracked: data/genmap_2021May13.rds
Untracked: data/haps_IITA_filtered_2021May13.rds
Untracked: data/recombFreqMat_1minus2c_2021May13.rds
Untracked: fig2.png
Untracked: fig3.png
Untracked: figure/
Untracked: output/IITA_CleanedTrialData_2021May10.rds
Untracked: output/IITA_ExptDesignsDetected_2021May10.rds
Untracked: output/IITA_blupsForModelTraining_twostage_asreml_2021May10.rds
Untracked: output/IITA_trials_NOT_identifiable.csv
Untracked: output/crossValPredsA.rds
Untracked: output/crossValPredsAD.rds
Untracked: output/cvAD_5rep5fold_markerEffects.rds
Untracked: output/cvAD_5rep5fold_meanPredAccuracy.rds
Untracked: output/cvAD_5rep5fold_parentfolds.rds
Untracked: output/cvAD_5rep5fold_predMeans.rds
Untracked: output/cvAD_5rep5fold_predVars.rds
Untracked: output/cvAD_5rep5fold_varPredAccuracy.rds
Untracked: output/cvDirDom_5rep5fold_markerEffects.rds
Untracked: output/cvDirDom_5rep5fold_meanPredAccuracy.rds
Untracked: output/cvDirDom_5rep5fold_parentfolds.rds
Untracked: output/cvDirDom_5rep5fold_predMeans.rds
Untracked: output/cvDirDom_5rep5fold_predVars.rds
Untracked: output/cvDirDom_5rep5fold_varPredAccuracy.rds
Untracked: output/cvMeanPredAccuracyA.rds
Untracked: output/cvMeanPredAccuracyAD.rds
Untracked: output/cvPredMeansA.rds
Untracked: output/cvPredMeansAD.rds
Untracked: output/cvVarPredAccuracyA.rds
Untracked: output/cvVarPredAccuracyAD.rds
Untracked: output/estimateSelectionError.rds
Untracked: output/genomicPredictions_ModelAD.rds
Untracked: output/genomicPredictions_ModelDirDom.rds
Untracked: output/kinship_A_IITA_2021May13.rds
Untracked: output/kinship_D_IITA_2021May13.rds
Untracked: output/markEffsTest.rds
Untracked: output/markerEffects.rds
Untracked: output/markerEffectsA.rds
Untracked: output/markerEffectsAD.rds
Untracked: output/maxNOHAV_byStudy.csv
Untracked: output/obsCrossMeansAndVars.rds
Untracked: output/parentfolds.rds
Untracked: output/ped2check_genome.rds
Untracked: output/ped2genos.txt
Untracked: output/pednames2keep.txt
Untracked: output/pednames_Prune100_25_pt25.log
Untracked: output/pednames_Prune100_25_pt25.nosex
Untracked: output/pednames_Prune100_25_pt25.prune.in
Untracked: output/pednames_Prune100_25_pt25.prune.out
Untracked: output/potential_dams.txt
Untracked: output/potential_sires.txt
Untracked: output/predVarTest.rds
Untracked: output/samples2keep_IITA_2021May13.txt
Untracked: output/samples2keep_IITA_MAFpt01_prune50_25_pt98.log
Untracked: output/samples2keep_IITA_MAFpt01_prune50_25_pt98.nosex
Untracked: output/samples2keep_IITA_MAFpt01_prune50_25_pt98.prune.in
Untracked: output/samples2keep_IITA_MAFpt01_prune50_25_pt98.prune.out
Untracked: output/verified_ped.txt
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/05-CrossValidation.Rmd) and HTML (docs/05-CrossValidation.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 772750a | wolfemd | 2021-07-14 | DirDom model and selection index calc fully integrated functions. |
Rmd | 97db806 | wolfemd | 2021-07-11 | Work shown finding and fixing a bug where at least one getMarkEffs model failed. Problem was with use of plan(multicore) + OpenBLAS both using forking. Instead use plan(multisession). |
Rmd | 2bc9644 | wolfemd | 2021-07-09 | Re-run cross-val with meanPredAccuracy SELIND handling fixed, but debug work not shown anymore. |
Rmd | 889d98a | wolfemd | 2021-07-09 | test and fix bug in meanPredAccuracy() output when SIwts contain only subset of traits predicted. |
Rmd | 4308b87 | wolfemd | 2021-07-08 | Full run 5-reps x 5-fold parent-wise cross-val both models DirDom and AD. |
Rmd | 7888dee | wolfemd | 2021-07-08 | Work fully shown, testing and integrating DirDom model into crossval funcs. Now using R inside a singularity via rocker. Controlling OpenBLAS inside R session with RhpcBLASctl::blas_set_num_threads() and much more. |
html | 5e45aac | wolfemd | 2021-06-18 | Build site. |
Rmd | fa20501 | wolfemd | 2021-06-18 | Initial results are ready to publish and share with colleagues for |
Rmd | 12cc368 | wolfemd | 2021-06-18 | runParentWiseCrossVal for 1 full rep, 5 folds. Found issue with CBSU R compilation but NOT with my code! |
html | e66bdad | wolfemd | 2021-06-10 | Build site. |
Rmd | a8452ba | wolfemd | 2021-06-10 | Initial build of the entire page upon completion of all |
Rmd | 6a5ef32 | wolfemd | 2021-06-09 | meanPredAccuracy() now also included with function moved to “parentWiseCrossVal.R”. NOTE on previous commit: cross-validation functions are NOT in “predCrossVar.R”. |
Rmd | 63067f7 | wolfemd | 2021-06-07 | Function varPredAccuracy() debugged / tested and moved to predCrossVar.R |
Rmd | 66c0bde | wolfemd | 2021-06-07 | Remove old and unused code. STILL IN PROGRESS at the computeVarPredAccuracy step. |
Rmd | 3c085ee | wolfemd | 2021-06-07 | Cross-validation code IN PROGRESS. Currently working on computeVarPredAccuracy. |
In the manuscript, the cross-validation spans many pages and scripts; it is documented here.
For ongoing GS, I have a function runCrossVal() that manages all inputs and outputs, making it easy to work with pre-computed accuracies.
The goal here is to make an analogous function, runParentWiseCrossVal(), or at least make progress towards developing one.
However, for computational reasons, I imagine it might still be best to separate the task into a few functions.
My goal is to simplify and integrate into the pipeline used for NextGen Cassava. In the paper, I used multi-trait Bayesian ridge regression (MtBRR) to obtain marker effects, and also stored posterior matrices on disk to later compute posterior mean variances (PMV). This was computationally expensive and differs from my standard univariate REML approach. I think MtBRR and PMV are probably the least biased way to go… but…
For the sake of testing a simple integration into the in-use pipeline, I want to try univariate REML to get the marker effects, which I’ll subsequently use for the cross-validation.
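As a toy illustration (not the pipeline's actual code, which estimates variance components by REML via sommer), univariate marker effects can be obtained by a ridge-regression (RR-BLUP-style) solve on centered dosages; the shrinkage parameter lambda below is an assumption fixed by hand, where REML would estimate it:

```r
# Toy RR-BLUP-style solve for univariate marker effects on simulated data.
# lambda is assumed fixed here; REML would estimate it from variance components.
set.seed(1)
n <- 50; m <- 200
M <- matrix(rbinom(n * m, size = 2, prob = 0.3), nrow = n)  # SNP dosages (0/1/2)
a <- rnorm(m, 0, 0.05)                                      # simulated marker effects
y <- as.vector(M %*% a + rnorm(n, 0, 0.5))                  # simulated phenotypes
Mc <- scale(M, center = TRUE, scale = FALSE)                # center dosage columns
lambda <- 1
ahat <- solve(crossprod(Mc) + diag(lambda, m), crossprod(Mc, y - mean(y)))
```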
I revised the functions in package:predCrossVar to increase computational efficiency. They are not yet included in the actual R package, but are instead sourced from code/predCrossVar.R. Additional speed increases were achieved by extra testing to optimize the balance between the OMP_NUM_THREADS setting (multi-core BLAS) and parallel processing of the crosses-being-predicted. These improvements will benefit users predicting with REML / Bayesian-VPM, but probably less so with Bayesian-PMV.
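The arithmetic of that balance is simple; the sketch below (with illustrative numbers, not a prescription) just checks that the total BLAS threads across all parallel workers stay within the machine's core count:

```r
# Illustrative only: 20 parallel workers x 5 BLAS threads each = 100 threads,
# which should not exceed the machine's core count (112 on cbsulm17).
ncores <- 20
nBLASthreads <- 5
totalThreads <- ncores * nBLASthreads
stopifnot(totalThreads <= 112)
# each worker would then call, e.g.:
# RhpcBLASctl::blas_set_num_threads(nBLASthreads)
```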
I use a singularity image from the rocker project, as recommended by Qi Sun, to get an OpenBLAS-linked R environment in which packages can easily be installed.
This first chunk is one-time only and doesn’t take long. It saves a ~650 Mb *.sif file to the server’s /workdir/.
# copy the project data
cd /home/jj332_cas/marnin/;
cp -R implementGMSinCassava /home/$USER;
# the project directory can be in my networked folder for 2 reasons:
# 1) singularity will automatically recognize and be able to access it
# 2) my analyses are not read/write intensive; don't break server rules/etiquette
# set up a working directory on the remote machine
mkdir /workdir/$USER
cd /workdir/$USER/;
# pull a singularity image and save it in the file rocker.sif
# next time, reuse the rocker.sif file to start the container
singularity pull rocker.sif docker://rocker/tidyverse:latest;
For analysis, run each R session inside a singularity Linux shell, itself inside a screen session.
# 1) start a screen shell
screen;
# 2) start the singularity Linux shell inside that
singularity shell /workdir/$USER/rocker.sif;
# cd to the project directory, so R will use it as the working dir.
cd /home/mw489/implementGMSinCassava/;
# 3) Start R
R
The fully tested runParentWiseCrossVal() and component functions are in the code/parentWiseCrossVal.R script. Below, I source it and use it for a full cross-validation run.
# install.packages(c("RhpcBLASctl","here","rsample","sommer","psych","future.callr","furrr","lme4"))
# install.packages('future.callr')
require(tidyverse); require(magrittr);
# 5 threads per R session for matrix math (OpenBLAS)
RhpcBLASctl::blas_set_num_threads(5)
# SOURCE CORE FUNCTIONS
source(here::here("code","parentWiseCrossVal.R"))
source(here::here("code","predCrossVar.R"))
# PEDIGREE
ped<-read.table(here::here("output","verified_ped.txt"),
                header = T, stringsAsFactors = F) %>%
  rename(GID=FullSampleName,
         damID=DamID,
         sireID=SireID) %>%
  dplyr::select(GID,sireID,damID)
# Keep only families with _at least_ 2 offspring
ped %<>%
  semi_join(ped %>% count(sireID,damID) %>% filter(n>1) %>% ungroup())
# BLUPs
blups<-readRDS(file=here::here("data","blups_forCrossVal.rds")) %>%
  dplyr::select(-varcomp)

# GENOMIC RELATIONSHIP MATRICES (GRMs)
grms<-list(A=readRDS(file=here::here("output","kinship_A_IITA_2021May13.rds")),
           D=readRDS(file=here::here("output",
                                     "kinship_domGenotypic_IITA_2021July5.rds")))
## using A+domGenotypic (instead of domClassic used previously)
## will achieve appropriate dom effects for predicting family mean TGV
## but resulting add effects WILL NOT represent allele sub. effects and thus
## predictions won't equal GEBV; allele sub. effects will be post-computed
## as alpha = a + d(q-p)
# DOSAGE MATRIX
dosages<-readRDS(file=here::here("data",
                                 "dosages_IITA_filtered_2021May13.rds"))
# RECOMBINATION FREQUENCY MATRIX
recombFreqMat<-readRDS(file=here::here("data",
                                       "recombFreqMat_1minus2c_2021May13.rds"))
# HAPLOTYPE MATRIX
## keep only haplos for parents-in-the-pedigree
## those which will be used in prediction; saves memory
haploMat<-readRDS(file=here::here("data","haps_IITA_filtered_2021May13.rds"))
parents<-union(ped$sireID,ped$damID)
parenthaps<-sort(c(paste0(parents,"_HapA"),
                   paste0(parents,"_HapB")))
haploMat<-haploMat[parenthaps,colnames(recombFreqMat)]

# SELECTION INDEX WEIGHTS
## from IYR+IK
## note that not ALL predicted traits are on the index
SIwts<-c(logFYLD=20,
         HI=10,
         DM=15,
         MCMDS=-10,
         logRTNO=12,
         logDYLD=20,
         logTOPYLD=15,
         PLTHT=10)
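To make the index concrete: a clone's SELIND value is just the weighted sum of its (predicted) trait values under SIwts. The clones and predicted values below are made up for illustration:

```r
SIwts <- c(logFYLD=20, HI=10, DM=15, MCMDS=-10,
           logRTNO=12, logDYLD=20, logTOPYLD=15, PLTHT=10)
# two hypothetical clones with (standardized) predicted trait values
preds <- rbind(clone1 = c(logFYLD= 0.5, HI=0.1, DM=-0.2, MCMDS= 0.3,
                          logRTNO=0.4, logDYLD=0.6, logTOPYLD=0.2, PLTHT=-0.1),
               clone2 = c(logFYLD=-0.3, HI=0.2, DM= 0.1, MCMDS=-0.5,
                          logRTNO=0.0, logDYLD=0.1, logTOPYLD=0.3, PLTHT= 0.4))
# matrix product = weighted sum of trait values per clone
SELIND <- preds[, names(SIwts)] %*% SIwts
```

Traits predicted but not on the index simply carry no weight.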
Server 1: modelType=“DirDom”
cbsulm17 - 112 cores, 512 GB RAM
cvDirDom_5rep5fold<-runParentWiseCrossVal(nrepeats=5,nfolds=5,seed=84,
                                          modelType="DirDom",
                                          ncores=20,nBLASthreads=5,
                                          outName="output/cvDirDom_5rep5fold",
                                          ped=ped,
                                          blups=blups,
                                          dosages=dosages,
                                          haploMat=haploMat,
                                          grms=grms,
                                          recombFreqMat = recombFreqMat,
                                          selInd = TRUE, SIwts = SIwts)
saveRDS(cvDirDom_5rep5fold,here::here("output","cvDirDom_5rep5fold_predAccuracy.rds"))
# [1] "Marker-effects Computed. Took 2.3851 hrs"
# [1] "Predicting cross variances and covariances"
# Joining, by = c("Repeat", "Fold")
# [1] "Done predicting fam vars. Took 59.08 mins for 198 crosses"
# [1] "Done predicting fam vars. Took 18.63 mins for 198 crosses"
# [1] "Done predicting fam vars. Took 64.82 mins for 216 crosses"
# [1] "Done predicting fam vars. Took 20.41 mins for 216 crosses"
# [1] "Done predicting fam vars. Took 46.42 mins for 156 crosses"
# [1] "Done predicting fam vars. Took 14.94 mins for 156 crosses"
# [1] "Done predicting fam vars. Took 63.45 mins for 210 crosses"
# [1] "Done predicting fam vars. Took 19.8 mins for 210 crosses"
# [1] "Done predicting fam vars. Took 50.62 mins for 171 crosses"
# [1] "Done predicting fam vars. Took 16.26 mins for 171 crosses"
# [1] "Done predicting fam vars. Took 49.87 mins for 163 crosses"
# [1] "Done predicting fam vars. Took 16.2 mins for 163 crosses"
# [1] "Done predicting fam vars. Took 73.37 mins for 253 crosses"
# [1] "Done predicting fam vars. Took 23.59 mins for 253 crosses"
# [1] "Done predicting fam vars. Took 56.32 mins for 190 crosses"
# [1] "Done predicting fam vars. Took 18.44 mins for 190 crosses"
# [1] "Done predicting fam vars. Took 47.33 mins for 161 crosses"
# [1] "Done predicting fam vars. Took 15.79 mins for 161 crosses"
# [1] "Done predicting fam vars. Took 59.18 mins for 189 crosses"
# [1] "Done predicting fam vars. Took 18.67 mins for 189 crosses"
# [1] "Done predicting fam vars. Took 64.72 mins for 205 crosses"
# [1] "Done predicting fam vars. Took 21.17 mins for 205 crosses"
# [1] "Done predicting fam vars. Took 63.97 mins for 213 crosses"
# [1] "Done predicting fam vars. Took 20.04 mins for 213 crosses"
# [1] "Done predicting fam vars. Took 53.03 mins for 180 crosses"
# [1] "Done predicting fam vars. Took 17.28 mins for 180 crosses"
# [1] "Done predicting fam vars. Took 58.67 mins for 199 crosses"
# [1] "Done predicting fam vars. Took 19.03 mins for 199 crosses"
# ....
# estimate 20 more hours, complete on July 12 very early AM?
# [1] "Accuracies predicted. Took 34.37369 hrs total.Goodbye!"
# Warning message:
# In for (ii in 1L:length(res)) { : closing unused connection 3 (localhost)
# > saveRDS(cvDirDom_5rep5fold,here::here("output","cvDirDom_5rep5fold_predAccuracy.rds"))
Server 2: modelType=“AD”
cbsulm29 - 104 cores, 512 GB RAM
grmsAD<-list(A=readRDS(file=here::here("output","kinship_A_IITA_2021May13.rds")),
             D=readRDS(file=here::here("output",
                                       "kinship_D_IITA_2021May13.rds")))
rm(grms)
cvAD_5rep5fold<-runParentWiseCrossVal(nrepeats=5,nfolds=5,seed=84,
                                      modelType="AD",
                                      ncores=20,
                                      outName="output/cvAD_5rep5fold",
                                      ped=ped,
                                      blups=blups,
                                      dosages=dosages,
                                      haploMat=haploMat,
                                      grms=grmsAD,
                                      recombFreqMat = recombFreqMat,
                                      selInd = TRUE, SIwts = SIwts)
saveRDS(cvAD_5rep5fold,here::here("output","cvAD_5rep5fold_predAccuracy.rds"))
# [1] "Marker-effects Computed. Took 1.81086 hrs"
# [1] "Done predicting fam vars. Took 43.11 mins for 198 crosses"
# [1] "Done predicting fam vars. Took 47.04 mins for 216 crosses"
# .....
# [1] "Accuracies predicted. Took 19.68694 hrs total.\n Goodbye!"
# [1] "Accuracies predicted. Took 19.73242 hrs total.Goodbye!"
# > saveRDS(cvAD_5rep5fold,here::here("output","cvAD_5rep5fold_predAccuracy.rds"))
#' @param byGroup logical; if TRUE, assumes a column named "Group" is present, which uniquely classifies each GID into some genetic grouping.
#' @param modelType string; "A", "AD" or "ADE", representing models with Additive effects only, Additive plus Dominance, and Additive plus Dominance plus Add-by-Dom Epistasis, respectively.
#' @param grms list of GRMs where each element is named A, D, or AD. The matrices supplied must match those required by the A, AD and ADE models. For ADE, grms=list(A=A,D=D,AD=AD)...
#' @param augmentTP option to supply an additional set of training data, which will be added to each training model but never included in the test set.
#' @param TrainTestData data.frame with de-regressed BLUPs, BLUPs and weights (WT) for training and test. If byGroup==TRUE, a column with "Group" as the header, uniquely classifying GIDs into genetic groups, is expected.
runCrossVal<-function(TrainTestData,modelType,grms,nrepeats,nfolds,ncores=1,
                      byGroup=FALSE,augmentTP=NULL,gid="GID",...){
  require(sommer); require(rsample)
  # Set-up replicated cross-validation folds
  # splitting by clone (if a clone is in the training set, it can't be in the test set)
  if(byGroup){
    cvsamples<-tibble(GroupName=unique(TrainTestData$Group))
  } else { cvsamples<-tibble(GroupName="None") }
  cvsamples<-cvsamples %>%
    mutate(Splits=map(GroupName,function(GroupName){
      if(GroupName!="None"){
        thisgroup<-TrainTestData %>%
          filter(Group==GroupName) } else { thisgroup<-TrainTestData }
      out<-tibble(repeats=1:nrepeats,
                  splits=rerun(nrepeats,group_vfold_cv(thisgroup, group = gid, v = nfolds))) %>%
        unnest(splits)
      return(out)
    })) %>%
    unnest(Splits)
  ## Internal function
  ## fits prediction model and calcs. accuracy for each train-test split
  fitModel<-possibly(function(splits,modelType,augmentTP,TrainTestData,GroupName,grms){
    starttime<-proc.time()[3]
    # Set-up training set
    trainingdata<-training(splits)
    ## Make sure, if there is an augmentTP, no GIDs in test-sets
    if(!is.null(augmentTP)){
      ## remove any test-set members from augment TP before adding to training data
      training_augment<-augmentTP %>% filter(!(!!sym(gid) %in% testing(splits)[[gid]]))
      trainingdata<-bind_rows(trainingdata,training_augment) }
    if(GroupName!="None"){ trainingdata<-bind_rows(trainingdata,
                                                   TrainTestData %>%
                                                     filter(Group!=GroupName,
                                                            !(!!sym(gid) %in% testing(splits)[[gid]]))) }
    # Subset kinship matrices
    traintestgids<-union(trainingdata[[gid]],testing(splits)[[gid]])
    A1<-grms[["A"]][traintestgids,traintestgids]
    trainingdata[[paste0(gid,"a")]]<-factor(trainingdata[[gid]],levels=rownames(A1))
    if(modelType %in% c("AD","ADE")){
      D1<-grms[["D"]][traintestgids,traintestgids]
      trainingdata[[paste0(gid,"d")]]<-factor(trainingdata[[gid]],levels=rownames(D1))
      if(modelType=="ADE"){
        #AA1<-grms[["AA"]][traintestgids,traintestgids]
        AD1<-grms[["AD"]][traintestgids,traintestgids]
        diag(AD1)<-diag(AD1)+1e-06
        #DD1<-grms[["DD"]][traintestgids,traintestgids]
        #trainingdata[[paste0(gid,"aa")]]<-factor(trainingdata[[gid]],levels=rownames(AA1))
        trainingdata[[paste0(gid,"ad")]]<-factor(trainingdata[[gid]],levels=rownames(AD1))
        #trainingdata[[paste0(gid,"dd")]]<-factor(trainingdata[[gid]],levels=rownames(DD1))
      }
    }
    # Set-up random model statements
    randFormula<-paste0("~vs(",gid,"a,Gu=A1)")
    if(modelType %in% c("AD","ADE")){
      randFormula<-paste0(randFormula,"+vs(",gid,"d,Gu=D1)")
      if(modelType=="ADE"){
        randFormula<-paste0(randFormula,"+vs(",gid,"ad,Gu=AD1)")
        #"+vs(",gid,"aa,Gu=AA1)",
        #"+vs(",gid,"ad,Gu=AD1)")
        #"+vs(",gid,"dd,Gu=DD1)")
      }
    }
    # Fit genomic prediction model
    fit <- mmer(fixed = drgBLUP ~1,
                random = as.formula(randFormula),
                weights = WT,
                data=trainingdata)
    # Gather the BLUPs
    gblups<-tibble(GID=as.character(names(fit$U[[paste0("u:",gid,"a")]]$drgBLUP)),
                   GEBV=as.numeric(fit$U[[paste0("u:",gid,"a")]]$drgBLUP))
    if(modelType %in% c("AD","ADE")){
      gblups %<>% mutate(GEDD=as.numeric(fit$U[[paste0("u:",gid,"d")]]$drgBLUP))
      if(modelType=="ADE"){
        gblups %<>% mutate(#GEEDaa=as.numeric(fit$U[[paste0("u:",gid,"aa")]]$drgBLUP),
                           GEEDad=as.numeric(fit$U[[paste0("u:",gid,"ad")]]$drgBLUP))
        #GEEDdd=as.numeric(fit$U[[paste0("u:",gid,"dd")]]$drgBLUP))
      }
    }
    # Calc GETGVs
    ## Note that for modelType=="A", GEBV==GETGV
    gblups %<>%
      mutate(GETGV=rowSums(.[,grepl("GE",colnames(.))]))
    # Test set validation data
    validationData<-TrainTestData %>%
      dplyr::select(gid,BLUP) %>%
      filter(GID %in% testing(splits)[[gid]])
    # Measure accuracy in test set
    ## cor(GEBV,BLUP)
    ## cor(GETGV,BLUP)
    accuracy<-gblups %>%
      mutate(GETGV=rowSums(.[,grepl("GE",colnames(.))])) %>%
      filter(GID %in% testing(splits)[[gid]]) %>%
      left_join(validationData) %>%
      summarize(accGEBV=cor(GEBV,BLUP, use = 'complete.obs'),
                accGETGV=cor(GETGV,BLUP, use = 'complete.obs'))
    computeTime<-proc.time()[3]-starttime
    accuracy %<>% mutate(computeTime=computeTime)
    return(accuracy)
  },otherwise = NA)
  ## Run models across all train-test splits
  ## Parallelize
  require(furrr); plan(multicore, workers = ncores)
  options(future.globals.maxSize=+Inf); options(future.rng.onMisuse="ignore")
  cvsamples<-cvsamples %>%
    mutate(accuracy=future_map2(splits,GroupName,
                                ~fitModel(splits=.x,GroupName=.y,
                                          modelType=modelType,augmentTP=NULL,
                                          TrainTestData=TrainTestData,grms=grms),
                                .progress = FALSE)) %>%
    unnest(accuracy)
  return(cvsamples)
}
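The key property of the group_vfold_cv() splits above is that all records of a clone land in exactly one fold, so a clone never appears in both training and test sets of a split. A minimal base-R sketch of the same idea (hypothetical data, not the pipeline's code):

```r
# Assign each clone (not each record) to a fold, so train/test never share clones.
set.seed(42)
dat <- data.frame(GID = rep(paste0("clone", 1:10), each = 3))  # 10 clones x 3 records
gids <- unique(dat$GID)
nfolds <- 5
foldOfGID <- setNames(sample(rep(1:nfolds, length.out = length(gids))), gids)
dat$fold <- foldOfGID[dat$GID]
# e.g. the first train/test split:
testset  <- dat[dat$fold == 1, ]
trainset <- dat[dat$fold != 1, ]
# no clone appears in both training and test
stopifnot(length(intersect(trainset$GID, testset$GID)) == 0)
```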
See Results: Home for plots and summary tables.
sessionInfo()