Last updated: 2023-12-04

Checks: 7 passed, 0 failed

Knit directory: WP1/

This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20210216) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version f36a8ac. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    data/MUR/
    Ignored:    data/analyses/CCI_all.RData
    Ignored:    data/analyses/OISST_all.RData
    Ignored:    data/analyses/clean_all.RData
    Ignored:    data/analyses/clean_all_clean.RData
    Ignored:    data/analyses/ice_4km_proc.RData
    Ignored:    data/full_data/
    Ignored:    data/model/
    Ignored:    data/pg_data/
    Ignored:    data/restricted/
    Ignored:    data/sst_CCI_sval.RData
    Ignored:    data/sst_CCI_trom.RData
    Ignored:    data/sst_gland.RData
    Ignored:    data/sst_sval.RData
    Ignored:    data/sst_trom.RData
    Ignored:    metadata/globalfishingwatch_API_key.RData
    Ignored:    metadata/is_gfw_database.RData
    Ignored:    metadata/is_mst_database.RData
    Ignored:    metadata/pangaea_parameters.tab
    Ignored:    metadata/pg_EU_ref_meta.csv
    Ignored:    poster/SSC_2021_landscape_files/paged-0.15/
    Ignored:    presentations/2023_Ilico.html
    Ignored:    presentations/2023_fjord_intercomp.html
    Ignored:    presentations/2023_seminar.html
    Ignored:    presentations/2023_summary.html
    Ignored:    presentations/ASSW_2023.html
    Ignored:    presentations/ASSW_side_2023.html
    Ignored:    shiny/dataAccess/coastline_hi_sub.csv
    Ignored:    shiny/dataAccess/full_data
    Ignored:    shiny/kongCTD/credentials.RData
    Ignored:    shiny/kongCTD/data/
    Ignored:    shiny/test_data/

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
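For reference, here is a minimal sketch of the two commands mentioned in the note above, assuming this project’s layout (the helper-script path is purely hypothetical):

    library(workflowr)

    # Commit a supporting file that the analysis depends on but that workflowr
    # does not check automatically (this path is hypothetical)
    wflow_git_commit("code/helper_functions.R", message = "Add helper functions")

    # Commit, rebuild, and publish the R Markdown file together with its HTML output
    wflow_publish("analysis/FAIR_data.Rmd", message = "Update FAIR data notes")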


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/FAIR_data.Rmd) and HTML (docs/FAIR_data.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File  Version  Author  Date        Message
html  610ff35  Robert  2023-08-30  Build site.
Rmd   7b9b273  Robert  2023-08-30  Re-built site.
Rmd   7c58866  Robert  2023-08-30  Starting on outlines for FAIR slides and FjordLight poster
html  7c58866  Robert  2023-08-30  Starting on outlines for FAIR slides and FjordLight poster

What is FAIR?

As the volume of data grows ever higher, it becomes more and more important that some sort of consistent scheme is followed. While there are many philosophies about what exactly this should look like, most people now agree that the FAIR Principles provide the best overall approach to the issues of data production, management, and reuse.

In brief (and quoting from the website), FAIR stands for:

Findable “The first step in (re)using data is to find them. Metadata and data should be easy to find for both humans and computers. Machine-readable metadata are essential for automatic discovery of datasets and services, so this is an essential component of the FAIRification process.”

Accessible “Once the user finds the required data, she/he/they need to know how they can be accessed, possibly including authentication and authorisation.”

Interoperable “The data usually need to be integrated with other data. In addition, the data need to interoperate with applications or workflows for analysis, storage, and processing.”

Reusable “The ultimate goal of FAIR is to optimise the reuse of data. To achieve this, metadata and data should be well-described so that they can be replicated and/or combined in different settings.”

Where is FAIR?

While some online data repositories (e.g. Zenodo) can quickly and conveniently provide a DOI (which generally makes a deposit acceptable for project proposals, etc.), many of these repositories do not ensure that the data undergo any quality control.

In the FAIR data scheme, Zenodo allows data to be Findable and Accessible, but the main shortcomings concern the Interoperability and Reusability of the data. Because Zenodo has no requirements for what can be uploaded, it is a “Wild West” situation where a user never knows exactly what they may have to work with.
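That said, the Findable and Accessible sides do work well: Zenodo exposes a public REST search API that can be queried directly from R. Below is a rough sketch using the httr and jsonlite packages; the endpoint is Zenodo’s documented records search, but the query term is only a placeholder and not part of this analysis.

    library(httr)
    library(jsonlite)

    # Query Zenodo's public search API for records matching a free-text term
    res <- GET("https://zenodo.org/api/records",
               query = list(q = "fjord temperature", size = 5))
    stop_for_status(res)

    recs <- fromJSON(content(res, as = "text", encoding = "UTF-8"),
                     simplifyDataFrame = FALSE)
    recs$hits$total          # number of matching records
    recs$hits$hits[[1]]$doi  # DOI of the first hit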

As for PANGAEA, even though it takes much longer to get one’s data hosted there, it has very strict requirements on data quality and formatting. There is a sophisticated search platform on the website, as well as an R package that allows data to be searched and downloaded directly from R/RStudio. Part of the quality control is ensuring that all data are classified under pre-existing parameter names and units, which helps users integrate existing datasets into their future projects. Without this, the Interoperability (I) and Reusability (R) of the data would be greatly diminished.
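As an illustration, here is a minimal sketch using the rOpenSci pangaear package (the R package referred to above); the search term and the choice of the first result are placeholders only.

    library(pangaear)

    # Free-text search of the PANGAEA catalogue
    res <- pg_search(query = "Kongsfjorden temperature", count = 5)
    res  # matching datasets, with DOIs and citations

    # Download the first matching dataset directly into R via its DOI
    dat <- pg_data(doi = res$doi[1])
    str(dat[[1]]$data)  # the data table, using PANGAEA's standardised parameter names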

In the context of the FACE-IT project specifically, a large portion of the budget was allocated to hosting data on PANGAEA, and support for uploading those data is available from WP1, which is why it is the preferred platform. Without these two things, it is understandable why Zenodo might be preferable; it is arguably the best option when one only needs to quickly generate a DOI for a given dataset and nothing more. The Zenodo website also notes that it is funded by Horizon 2020.

All of that being said, we are not absolutely required to host everything on PANGAEA. Other data-hosting platforms with some sort of institutional affiliation, for example NMDC, NPDC, SIOS, or GEM, are also fine.


R version 4.3.2 (2023-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.6 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1 
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1

locale:
 [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_GB.UTF-8        LC_COLLATE=en_GB.UTF-8    
 [5] LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8   
 [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C       

time zone: UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] kableExtra_1.3.4 lubridate_1.9.3  forcats_1.0.0    stringr_1.5.1   
 [5] dplyr_1.1.4      purrr_1.0.2      readr_2.1.4      tidyr_1.3.0     
 [9] tibble_3.2.1     ggplot2_3.4.4    tidyverse_2.0.0  workflowr_1.7.1 

loaded via a namespace (and not attached):
 [1] sass_0.4.7        utf8_1.2.4        generics_0.1.3    xml2_1.3.5       
 [5] stringi_1.8.2     hms_1.1.3         digest_0.6.33     magrittr_2.0.3   
 [9] timechange_0.2.0  evaluate_0.23     grid_4.3.2        fastmap_1.1.1    
[13] rprojroot_2.0.4   jsonlite_1.8.7    processx_3.8.2    whisker_0.4.1    
[17] ps_1.7.5          promises_1.2.1    rvest_1.0.3       httr_1.4.7       
[21] fansi_1.0.5       viridisLite_0.4.2 scales_1.3.0      jquerylib_0.1.4  
[25] cli_3.6.1         rlang_1.1.2       munsell_0.5.0     withr_2.5.2      
[29] cachem_1.0.8      yaml_2.3.7        tools_4.3.2       tzdb_0.4.0       
[33] colorspace_2.1-0  webshot_0.5.5     httpuv_1.6.12     vctrs_0.6.5      
[37] R6_2.5.1          lifecycle_1.0.4   git2r_0.33.0      fs_1.6.3         
[41] pkgconfig_2.0.3   callr_3.7.3       pillar_1.9.0      bslib_0.6.1      
[45] later_1.3.1       gtable_0.3.4      glue_1.6.2        Rcpp_1.0.11      
[49] systemfonts_1.0.5 xfun_0.41         tidyselect_1.2.0  rstudioapi_0.15.0
[53] knitr_1.45        htmltools_0.5.7   svglite_2.1.2     rmarkdown_2.25   
[57] compiler_4.3.2    getPass_0.2-2