Last updated: 2020-12-04
Checks: 7 passed, 0 failed
Knit directory: IITA_2020GS/
This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20200915)
was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 79b6430. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
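For reference, a minimal sketch of publishing with workflowr, using the Rmd path from this analysis; the commit message here is illustrative only:

```r
library(workflowr)
# Commit the source Rmd, build the site, and commit the generated HTML in one step:
# wflow_publish("analysis/07-cleanTPdata_Dec2020.Rmd",
#               message = "Refresh BLUPs and GBLUPs with trials harvested so far")
```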
Ignored files:
Ignored: .DS_Store
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/.DS_Store
Ignored: data/.DS_Store
Ignored: output/.DS_Store
Untracked files:
Untracked: data/GEBV_IITA_OutliersRemovedTRUE_73119.csv
Untracked: data/PedigreeGeneticGainCycleTime_aafolabi_01122020.csv
Untracked: data/iita_blupsForCrossVal_outliersRemoved_73019.rds
Untracked: output/DosageMatrix_IITA_2020Sep16.rds
Untracked: output/IITA_CleanedTrialData_2020Dec03.rds
Untracked: output/IITA_ExptDesignsDetected_2020Dec03.rds
Untracked: output/Kinship_AA_IITA_2020Sep16.rds
Untracked: output/Kinship_AD_IITA_2020Sep16.rds
Untracked: output/Kinship_A_IITA_2020Sep16.rds
Untracked: output/Kinship_DD_IITA2020Sep16.rds
Untracked: output/Kinship_D_IITA_2020Sep16.rds
Untracked: output/cvresults_ModelADE_chunk1.rds
Untracked: output/cvresults_ModelADE_chunk2.rds
Untracked: output/cvresults_ModelADE_chunk3.rds
Untracked: output/genomicPredictions_ModelADE_threestage_IITA_2020Sep21.rds
Untracked: output/genomicPredictions_ModelADE_twostage_IITA_2020Dec03.rds
Untracked: output/genomicPredictions_ModelA_threestage_IITA_2020Sep21.rds
Untracked: output/iita_blupsForModelTraining_twostage_asreml_2020Dec03.rds
Untracked: output/model_meangetgvs_vs_year.csv
Untracked: output/model_rawgetgvs_vs_year.csv
Untracked: output/training_data_summary.csv
Untracked: workflowr_log.R
Unstaged changes:
Modified: output/IITA_ExptDesignsDetected.rds
Modified: output/iita_blupsForModelTraining.rds
Modified: output/maxNOHAV_byStudy.csv
Modified: output/meanGETGVbyYear_IITA_2020Dec03.csv
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/07-cleanTPdata_Dec2020.Rmd) and HTML (docs/07-cleanTPdata_Dec2020.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 79b6430 | wolfemd | 2020-12-04 | Update the analysis of rate-of-gain. Include regressions and output |
html | 7bae38d | wolfemd | 2020-12-03 | Build site. |
html | b9bb6f8 | wolfemd | 2020-12-03 | Build site. |
Rmd | 9718666 | wolfemd | 2020-12-03 | Refresh BLUPs and GBLUPs with trials harvested so far. Include |
Follow the outlined GenomicPredictionChecklist and the previous pipeline to process Cassavabase data for genomic prediction.
Below we clean and format the training data.
This updates the genomic predictions relative to the earlier analysis done in Sept. 2020.
Downloaded all IITA field trials with studyYear 2018, 2019, 2020 (DatabaseDownload_2020Dec03/, uploaded to the Cassavabase FTP server).

2018 trials: probably redundant with those previously downloaded in July 2019 for the genomic prediction of GS C4. In case some trials weren’t harvested as of July 2019, use the 2018 trials downloaded here instead of the ones from 2019.
2019 trials: all trials harvested and uploaded as of now (Dec. 3rd, 2020) are to be added to refresh the genomic predictions.
2020 trials: if any current trials already have e.g. disease data, will use it.

Previously downloaded all IITA field trials (DatabaseDownload_2020Sep15/, uploaded to the Cassavabase FTP server).

Possible database bug? The entire >500 Mb phenotype dataset for IITA downloaded without a problem. However, I got a “server error” message trying to download the corresponding meta-data in one chunk.

Solution: combine the meta-data downloaded for “all” trials in July 2019 with the meta-data download for the 2018–2020 period done Dec. 3rd, 2020, and feed the joined file to readDBdata().
metadata19 <- read.csv("ftp://ftp.cassavabase.org/marnin_datasets/NGC_BigData/DatabaseDownload_72419/2019-07-24T144144metadata_download.csv",
                       na.strings = c("#VALUE!", NA, ".", "", " ", "-", "\""), stringsAsFactors = F)
metadata20 <- read.csv("ftp://ftp.cassavabase.org/marnin_datasets/NGC_BigData/DatabaseDownload_2020Dec03/2020-12-03T094057metadata_download.csv",
                       na.strings = c("#VALUE!", NA, ".", "", " ", "-", "\""), stringsAsFactors = F)
metadata19 %>%
  # remove lines for trials in the 2020 download
  filter(!studyName %in% metadata20$studyName) %>%
  bind_rows(metadata20) %>%
  # ensure no duplicate lines
  distinct %>%
  # write to disk
  write.csv(., here::here("output", "all_iita_metadata_Dec2020.csv"), row.names = F)
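The filter-then-bind pattern above can be illustrated on toy data (hypothetical studyNames): a trial present in both downloads keeps only its December row.

```r
library(dplyr)
# Hypothetical toy metadata: studyName "B" appears in both downloads
meta_old <- tibble(studyName = c("A", "B"), src = "Jul2019")
meta_new <- tibble(studyName = c("B", "C"), src = "Dec2020")
meta_old %>%
  filter(!studyName %in% meta_new$studyName) %>%  # drop rows superseded by the new download
  bind_rows(meta_new)
# "A" keeps its Jul2019 row; "B" comes from Dec2020; "C" is new
```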
rm(list = ls())
library(tidyverse)
library(magrittr)
source(here::here("code", "gsFunctions.R"))
Read DB data directly from the Cassavabase FTP server.
## From September
dbdata_sep2020 <- readDBdata(phenotypeFile = paste0("ftp://ftp.cassavabase.org/marnin_datasets/NGC_BigData/",
                                                    "DatabaseDownload_2020Sep15/2020-09-15T185453phenotype_download.csv"),
                             metadataFile = here::here("output", "all_iita_metadata.csv"))
## New, December download
dbdata_dec2020 <- readDBdata(phenotypeFile = paste0("ftp://ftp.cassavabase.org/marnin_datasets/NGC_BigData/",
                                                    "DatabaseDownload_2020Dec03/2020-12-03T102130phenotype_download.csv"),
                             metadataFile = here::here("output", "all_iita_metadata_Dec2020.csv"))
# dbdata_dec2020 %>% filter(!is.na(plantingDate)) %>% count(studyName,studyYear,locationName)
# dbdata_dec2020 %>% select(studyYear,studyDbId)
# table(unique(dbdata_dec2020$studyName) %in% unique(dbdata_sep2020$studyName))
# FALSE  TRUE
#    42   371   # 42 'new' trials
# table(dbdata_dec2020$observationUnitDbId %in% dbdata_sep2020$observationUnitDbId)
# FALSE   TRUE
#  5823 175964
# length(dbdata_dec2020$observationUnitDbId)==length(unique(dbdata_dec2020$observationUnitDbId))
# any of the 'new' plots register as being for trials already in the Sep. data?
# yes, 1 trial (see below)
# table(unique(dbdata_dec2020$studyName[!dbdata_dec2020$observationUnitDbId %in%
#   dbdata_sep2020$observationUnitDbId]) %in% unique(dbdata_sep2020$studyName))
# FALSE  TRUE
#    42     1
### probably doesn't matter, just replace the old with the new
dbdata <- bind_rows(dbdata_sep2020 %>%
                      mutate(replicate = as.integer(replicate),
                             rowNumber = as.integer(rowNumber),
                             colNumber = as.integer(colNumber)) %>%
                      mutate(across(contains("CO_334"), as.numeric)) %>%
                      anti_join(dbdata_dec2020 %>%
                                  distinct(observationUnitDbId, studyName, studyYear, studyDbId)),
                    dbdata_dec2020)
Make TrialType Variable
dbdata <- makeTrialTypeVar(dbdata)
dbdata %>% count(TrialType)
TrialType n
1 AYT 53226
2 CET 70458
3 Conservation 997
4 CrossingBlock 1546
5 ExpCET 1865
6 GeneticGain 51905
7 NCRP 3872
8 PYT 61254
9 SN 155596
10 UYT 74378
11 <NA> 77651
Looking at the studyNames of trials that get NA for TrialType and therefore can’t be classified at present.
Here is the list of trials I am not including.
dbdata %>% filter(is.na(TrialType)) %$% unique(studyName) %>%
  write.csv(., file = here::here("output", "iita_trials_NOT_identifiable_Dec2020.csv"), row.names = F)
Wrote to disk a CSV in the output/ sub-directory.
Should any of these trials have been included?
Especially the following new trials (post 2018)?
dbdata %>% filter(is.na(TrialType), as.numeric(studyYear) > 2018) %$% unique(studyName)
[1] "18Hawaii_Parents" "19CB1IB"
[3] "19CVS12Chitala" "19CVS12Mkondezi"
[5] "19flowexpPGR22UB" "19flowexpRedLight22UB"
[7] "19flowLightIntensityUB" "19flowPGRFeminizationIB"
[9] "19flowPGRFreqIB" "19flowPGRRatioIB"
[11] "19flowPGRRtflwrIB" "19GhanaGermplasmUB"
[13] "19GRCgermplasmUB" "19.GS.C1.C2.C3.SelGain.AB"
[15] "19HarvTimeKabangwe" "19LocalGermplasmUB"
[17] "19SN5968Chitala" "2019GXEBUKEMBA"
[19] "2019GXEMUHANGA" "2019GXENGOMA"
[21] "2019GXENYAGATARE" "2019GXERUBIRIZI"
[23] "2019GXERUBONA" "20CSV12Chitala"
[25] "20CSV12Mkondezi" "20GRCgermplasmIB"
[27] "20LocalGermplasmIB" "20PTY49Kabangwe"
[29] "Hawaii_IITA_seed_2019" "Hawaii_seed_Asia_2019"
[31] "Hawaii_seed_CIAT_2019"
dbdata %<>% filter(!is.na(TrialType))
dbdata %>% group_by(programName) %>% summarize(N = n())
# A tibble: 1 x 2
programName N
<chr> <int>
1 IITA 475097
# 475097 plots (~155K are seedling nurseries which will be excluded from most
# analyses)
Making a table of abbreviations for renaming. Since July 2019 version: added chromometer traits (L, a, b) and added branching levels count (BRLVLS) at IYR’s request.
traitabbrevs <- tribble(~TraitAbbrev, ~TraitName,
                        "CMD1S", "cassava.mosaic.disease.severity.1.month.evaluation.CO_334.0000191",
                        "CMD3S", "cassava.mosaic.disease.severity.3.month.evaluation.CO_334.0000192",
                        "CMD6S", "cassava.mosaic.disease.severity.6.month.evaluation.CO_334.0000194",
                        "CMD9S", "cassava.mosaic.disease.severity.9.month.evaluation.CO_334.0000193",
                        "CGM", "Cassava.green.mite.severity.CO_334.0000033",
                        "CGMS1", "cassava.green.mite.severity.first.evaluation.CO_334.0000189",
                        "CGMS2", "cassava.green.mite.severity.second.evaluation.CO_334.0000190",
                        "DM", "dry.matter.content.percentage.CO_334.0000092",
                        "PLTHT", "plant.height.measurement.in.cm.CO_334.0000018",
                        "BRNHT1", "first.apical.branch.height.measurement.in.cm.CO_334.0000106",
                        "BRLVLS", "branching.level.counting.CO_334.0000079",
                        "SHTWT", "fresh.shoot.weight.measurement.in.kg.per.plot.CO_334.0000016",
                        "RTWT", "fresh.storage.root.weight.per.plot.CO_334.0000012",
                        "RTNO", "root.number.counting.CO_334.0000011",
                        "TCHART", "total.carotenoid.by.chart.1.8.CO_334.0000161",
                        "LCHROMO", "L.chromometer.value.CO_334.0002065",
                        "ACHROMO", "a.chromometer.value.CO_334.0002066",
                        "BCHROMO", "b.chromometer.value.CO_334.0002064",
                        "NOHAV", "plant.stands.harvested.counting.CO_334.0000010")
traitabbrevs %>% rmarkdown::paged_table()
Run the function renameAndSelectCols() to rename columns and remove everything unnecessary.
dbdata <- renameAndSelectCols(traitabbrevs, indata = dbdata, customColsToKeep = "TrialType")
Standard code, recycled… should this be a function?
dbdata <- dbdata %>%
  mutate(CMD1S = ifelse(CMD1S < 1 | CMD1S > 5, NA, CMD1S),
         CMD3S = ifelse(CMD3S < 1 | CMD3S > 5, NA, CMD3S),
         CMD6S = ifelse(CMD6S < 1 | CMD6S > 5, NA, CMD6S),
         CMD9S = ifelse(CMD9S < 1 | CMD9S > 5, NA, CMD9S),
         CGM = ifelse(CGM < 1 | CGM > 5, NA, CGM),
         CGMS1 = ifelse(CGMS1 < 1 | CGMS1 > 5, NA, CGMS1),
         CGMS2 = ifelse(CGMS2 < 1 | CGMS2 > 5, NA, CGMS2),
         DM = ifelse(DM > 100 | DM <= 0, NA, DM),
         RTWT = ifelse(RTWT == 0 | NOHAV == 0 | is.na(NOHAV), NA, RTWT),
         SHTWT = ifelse(SHTWT == 0 | NOHAV == 0 | is.na(NOHAV), NA, SHTWT),
         RTNO = ifelse(RTNO == 0 | NOHAV == 0 | is.na(NOHAV), NA, RTNO),
         NOHAV = ifelse(NOHAV == 0, NA, NOHAV),
         NOHAV = ifelse(NOHAV > 42, NA, NOHAV),
         RTNO = ifelse(!RTNO %in% 1:10000, NA, RTNO))
dbdata <- dbdata %>% mutate(HI = RTWT/(RTWT + SHTWT))
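As a sanity check on the harvest-index formula, a plot with hypothetical values of 25 kg of storage roots and 15 kg of fresh shoots:

```r
RTWT <- 25   # fresh storage root weight, kg/plot (hypothetical)
SHTWT <- 15  # fresh shoot weight, kg/plot (hypothetical)
RTWT / (RTWT + SHTWT)  # harvest index
# [1] 0.625
```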
I anticipate this will not be necessary as it will be computed before or during data upload.
For calculating fresh root yield:
dbdata <- dbdata %>%
  mutate(PlotSpacing = ifelse(programName != "IITA", 1,
                              ifelse(studyYear < 2013, 1,
                                     ifelse(TrialType %in% c("CET", "GeneticGain", "ExpCET"), 1, 0.8))))
maxNOHAV_byStudy <- dbdata %>%
  group_by(programName, locationName, studyYear, studyName, studyDesign) %>%
  summarize(MaxNOHAV = max(NOHAV, na.rm = T)) %>%
  ungroup() %>%
  mutate(MaxNOHAV = ifelse(MaxNOHAV == "-Inf", NA, MaxNOHAV))
write.csv(maxNOHAV_byStudy %>% arrange(studyYear),
          file = here::here("output", "maxNOHAV_byStudy.csv"), row.names = F)
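The MaxNOHAV == "-Inf" recode handles trials where NOHAV is entirely missing: in R, max() over an all-NA vector with na.rm = TRUE returns -Inf (with a warning), not NA.

```r
# max() with na.rm = TRUE on an all-NA group returns -Inf, which we recode to NA
suppressWarnings(max(c(NA_real_, NA_real_), na.rm = TRUE))
# [1] -Inf
```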
# I log transform yield traits to satisfy the homoskedastic residuals assumption
# of linear mixed models
dbdata <- left_join(dbdata, maxNOHAV_byStudy) %>%
  mutate(RTWT = ifelse(NOHAV > MaxNOHAV, NA, RTWT),
         SHTWT = ifelse(NOHAV > MaxNOHAV, NA, SHTWT),
         RTNO = ifelse(NOHAV > MaxNOHAV, NA, RTNO),
         HI = ifelse(NOHAV > MaxNOHAV, NA, HI),
         FYLD = RTWT/(MaxNOHAV * PlotSpacing) * 10,
         DYLD = FYLD * (DM/100),
         logFYLD = log(FYLD),
         logDYLD = log(DYLD),
         logTOPYLD = log(SHTWT/(MaxNOHAV * PlotSpacing) * 10),
         logRTNO = log(RTNO),
         PropNOHAV = NOHAV/MaxNOHAV)
# remove non-transformed / per-plot (instead of per-area) traits
dbdata %<>% select(-RTWT, -SHTWT, -RTNO, -FYLD, -DYLD)
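A worked example of the fresh-yield conversion, using hypothetical plot values; the ×10 scales kg/m² to t/ha (1 kg/m² = 10 t/ha):

```r
RTWT <- 25          # kg of roots harvested from the plot (hypothetical)
MaxNOHAV <- 20      # most stands harvested in any plot of the trial (hypothetical)
PlotSpacing <- 0.8  # m^2 per plant
RTWT / (MaxNOHAV * PlotSpacing) * 10  # FYLD in t/ha
# [1] 15.625
```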
dbdata <- dbdata %>%
  mutate(MCMDS = rowMeans(.[, c("CMD1S", "CMD3S", "CMD6S", "CMD9S")], na.rm = T)) %>%
  select(-CMD1S, -CMD3S, -CMD6S, -CMD9S)
This step is mostly copy-pasted from previous processing of IITA-specific data.
It uses 3 flat files, which are available e.g. here: specifically, IITA_GBStoPhenoMaster_33018.csv, GBSdataMasterList_31818.csv, and NRCRI_GBStoPhenoMaster_40318.csv. I copy them to the data/ sub-directory for the current analysis.
In addition, DArT-only samples are now expected to also have phenotypes. Therefore, I also check for matches in new flat files deposited in the data/ sub-directory (see code below).
library(tidyverse); library(magrittr)
gbs2phenoMaster <- dbdata %>%
  select(germplasmName) %>%
  distinct %>%
  left_join(read.csv(here::here("data","NRCRI_GBStoPhenoMaster_40318.csv"),
                     stringsAsFactors = F)) %>%
  mutate(FullSampleName=ifelse(grepl("C2a",germplasmName,ignore.case = T) &
                                 is.na(FullSampleName),germplasmName,FullSampleName)) %>%
  filter(!is.na(FullSampleName)) %>%
  select(germplasmName,FullSampleName) %>%
  bind_rows(dbdata %>%
              select(germplasmName) %>%
              distinct %>%
              left_join(read.csv(here::here("data","IITA_GBStoPhenoMaster_33018.csv"),
                                 stringsAsFactors = F)) %>%
              filter(!is.na(FullSampleName)) %>%
              select(germplasmName,FullSampleName)) %>%
  bind_rows(dbdata %>%
              select(germplasmName) %>%
              distinct %>%
              left_join(read.csv(here::here("data","GBSdataMasterList_31818.csv"),
                                 stringsAsFactors = F) %>%
                          select(DNASample,FullSampleName) %>%
                          rename(germplasmName=DNASample)) %>%
              filter(!is.na(FullSampleName)) %>%
              select(germplasmName,FullSampleName)) %>%
  bind_rows(dbdata %>%
              select(germplasmName) %>%
              distinct %>%
              mutate(germplasmSynonyms=ifelse(grepl("^UG",germplasmName,ignore.case = T),
                                              gsub("UG","Ug",germplasmName),germplasmName)) %>%
              left_join(read.csv(here::here("data","GBSdataMasterList_31818.csv"),
                                 stringsAsFactors = F) %>%
                          select(DNASample,FullSampleName) %>%
                          rename(germplasmSynonyms=DNASample)) %>%
              filter(!is.na(FullSampleName)) %>%
              select(germplasmName,FullSampleName)) %>%
  bind_rows(dbdata %>%
              select(germplasmName) %>%
              distinct %>%
              mutate(germplasmSynonyms=ifelse(grepl("^TZ",germplasmName,ignore.case = T),
                                              gsub("TZ","",germplasmName),germplasmName)) %>%
              left_join(read.csv(here::here("data","GBSdataMasterList_31818.csv"),
                                 stringsAsFactors = F) %>%
                          select(DNASample,FullSampleName) %>%
                          rename(germplasmSynonyms=DNASample)) %>%
              filter(!is.na(FullSampleName)) %>%
              select(germplasmName,FullSampleName)) %>%
  distinct %>%
  left_join(read.csv(here::here("data","GBSdataMasterList_31818.csv"),
                     stringsAsFactors = F) %>%
              select(FullSampleName,OrigKeyFile,Institute) %>%
              rename(OriginOfSample=Institute)) %>%
  mutate(OrigKeyFile=ifelse(grepl("C2a",germplasmName,ignore.case = T),
                            ifelse(is.na(OrigKeyFile),"LavalGBS",OrigKeyFile),
                            OrigKeyFile),
         OriginOfSample=ifelse(grepl("C2a",germplasmName,ignore.case = T),
                               ifelse(is.na(OriginOfSample),"NRCRI",OriginOfSample),
                               OriginOfSample))
## NEW: check for germName-DArT name matches
germNamesWithoutGBSgenos <- dbdata %>%
  select(programName,germplasmName) %>%
  distinct %>%
  left_join(gbs2phenoMaster) %>%
  filter(is.na(FullSampleName)) %>%
  select(-FullSampleName)
germNamesWithDArT <- germNamesWithoutGBSgenos %>%
  inner_join(read.table(here::here("data","chr1_RefPanelAndGSprogeny_ReadyForGP_72719.fam"),
                        header = F, stringsAsFactors = F)$V2 %>%
               grep("TMS16|TMS17|TMS18|TMS19|TMS20",.,value = T, ignore.case = T) %>%
               tibble(dartName=.) %>%
               separate(dartName,c("germplasmName","dartID"),"_",extra = 'merge',remove = F)) %>%
  group_by(germplasmName) %>%
  slice(1) %>%
  ungroup() %>%
  rename(FullSampleName=dartName) %>%
  mutate(OrigKeyFile="DArTseqLD", OriginOfSample="IITA") %>%
  select(-dartID)
print(paste0(nrow(germNamesWithDArT)," germNames with DArT-only genos"))
[1] "2401 germNames with DArT-only genos"
# first, filter to just program-DNAorigin matches
germNamesWithGenos <- dbdata %>%
  select(programName,germplasmName) %>%
  distinct %>%
  left_join(gbs2phenoMaster) %>%
  filter(!is.na(FullSampleName))
print(paste0(nrow(germNamesWithGenos)," germNames with GBS genos"))
[1] "9323 germNames with GBS genos"
# program-germNames with locally sourced GBS samples
germNamesWithGenos_HasLocalSourcedGBS <- germNamesWithGenos %>%
  filter(programName==OriginOfSample) %>%
  select(programName,germplasmName) %>%
  semi_join(germNamesWithGenos,.) %>%
  # select one DNA per germplasmName per program
  group_by(programName,germplasmName) %>%
  slice(1) %>% ungroup()
print(paste0(nrow(germNamesWithGenos_HasLocalSourcedGBS)," germNames with local GBS genos"))
[1] "8257 germNames with local GBS genos"
# the rest (program-germNames) with GBS but coming from a different breeding program
germNamesWithGenos_NoLocalSourcedGBS <- germNamesWithGenos %>%
  filter(programName==OriginOfSample) %>%
  select(programName,germplasmName) %>%
  anti_join(germNamesWithGenos,.) %>%
  # select one DNA per germplasmName per program
  group_by(programName,germplasmName) %>%
  slice(1) %>% ungroup()
print(paste0(nrow(germNamesWithGenos_NoLocalSourcedGBS)," germNames without local GBS genos"))
[1] "167 germNames without local GBS genos"
genosForPhenos <- bind_rows(germNamesWithGenos_HasLocalSourcedGBS,
                            germNamesWithGenos_NoLocalSourcedGBS) %>%
  bind_rows(germNamesWithDArT)
print(paste0(nrow(genosForPhenos)," total germNames with genos either GBS or DArT"))
[1] "10825 total germNames with genos either GBS or DArT"
dbdata %<>%
  left_join(genosForPhenos)
# Create a new identifier, GID
## Equals the SNP data name (FullSampleName) where SNP data exist,
## else the germplasmName
dbdata %<>%
  mutate(GID=ifelse(is.na(FullSampleName),germplasmName,FullSampleName))
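The GID fallback can be sketched on hypothetical names: germplasm with SNP data gets its FullSampleName, the rest keep their germplasmName.

```r
FullSampleName <- c("IITA-TMS-IBA980581_A01", NA)  # hypothetical SNP-data names
germplasmName  <- c("IITA-TMS-IBA980581", "TMS14F1001P0002")
ifelse(is.na(FullSampleName), germplasmName, FullSampleName)
# [1] "IITA-TMS-IBA980581_A01" "TMS14F1001P0002"
```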
# going to check against SNP data
# snps<-readRDS(file=url(paste0('ftp://ftp.cassavabase.org/marnin_datasets/NGC_BigData/',
#                               'DosageMatrix_RefPanelAndGSprogeny_ReadyForGP_73019.rds')))
# rownames_snps<-rownames(snps); rm(snps); gc()
# current matches to SNP data
# dbdata %>% distinct(GID,germplasmName,FullSampleName) %>%
#   semi_join(tibble(GID=rownames_snps)) %>% nrow() # 10707
# dbdata %>% distinct(GID,germplasmName,FullSampleName) %>%
#   semi_join(tibble(GID=rownames_snps)) %>%
#   filter(grepl('TMS13|2013_',GID,ignore.case = F)) %>% nrow() # 2424 TMS13
# dbdata %>% distinct(GID,germplasmName,FullSampleName) %>%
#   semi_join(tibble(GID=rownames_snps)) %>%
#   filter(grepl('TMS14',GID,ignore.case = F)) %>% nrow() # 2236 TMS14
# dbdata %>% distinct(GID,germplasmName,FullSampleName) %>%
#   semi_join(tibble(GID=rownames_snps)) %>%
#   filter(grepl('TMS15',GID,ignore.case = F)) %>% nrow() # 2287 TMS15
# dbdata %>% distinct(GID,germplasmName,FullSampleName) %>%
#   semi_join(tibble(GID=rownames_snps)) %>%
#   filter(grepl('TMS18',GID,ignore.case = F)) %>% nrow() # 2401 TMS18
WARNING: User input required! If I had preselected locations before downloading, this wouldn’t have been necessary.
Based on previous locations used for IITA analysis, but adding based on the list of locations used in IYR’s trial list data/2019_GS_PhenoUpload.csv: “Ago-Owu” wasn’t used last year.
dbdata %<>% filter(locationName %in% c("Abuja", "Ago-Owu", "Ibadan", "Ikenne", "Ilorin",
                                       "Jos", "Kano", "Malam Madori", "Mokwa", "Ubiaja",
                                       "Umudike", "Warri", "Zaria"))
nrow(dbdata)
[1] 432408
saveRDS(dbdata, file = here::here("output", "IITA_CleanedTrialData_2020Dec03.rds"))
sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Catalina 10.15.7
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] magrittr_2.0.1 forcats_0.5.0 stringr_1.4.0 dplyr_1.0.2
[5] purrr_0.3.4 readr_1.4.0 tidyr_1.1.2 tibble_3.0.4
[9] ggplot2_3.3.2 tidyverse_1.3.0 workflowr_1.6.2
loaded via a namespace (and not attached):
[1] tidyselect_1.1.0 xfun_0.19 haven_2.3.1 colorspace_2.0-0
[5] vctrs_0.3.5 generics_0.1.0 htmltools_0.5.0 yaml_2.2.1
[9] utf8_1.1.4 rlang_0.4.9 later_1.1.0.1 pillar_1.4.7
[13] withr_2.3.0 glue_1.4.2 DBI_1.1.0 dbplyr_2.0.0
[17] modelr_0.1.8 readxl_1.3.1 lifecycle_0.2.0 munsell_0.5.0
[21] gtable_0.3.0 cellranger_1.1.0 rvest_0.3.6 evaluate_0.14
[25] knitr_1.30 ps_1.4.0 httpuv_1.5.4 fansi_0.4.1
[29] broom_0.7.2 Rcpp_1.0.5 promises_1.1.1 backports_1.2.0
[33] scales_1.1.1 formatR_1.7 jsonlite_1.7.1 fs_1.5.0
[37] hms_0.5.3 digest_0.6.27 stringi_1.5.3 rprojroot_2.0.2
[41] grid_4.0.2 here_1.0.0 cli_2.2.0 tools_4.0.2
[45] crayon_1.3.4 whisker_0.4 pkgconfig_2.0.3 ellipsis_0.3.1
[49] xml2_1.3.2 reprex_0.3.0 lubridate_1.7.9.2 rstudioapi_0.13
[53] assertthat_0.2.1 rmarkdown_2.5 httr_1.4.2 R6_2.5.0
[57] git2r_0.27.1 compiler_4.0.2