Last updated: 2020-11-13


Knit directory: website/

This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 8e98482. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/

Unstaged changes:
    Modified:   .gitignore

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
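As noted above, the way to tie these results to an exact code version is to commit the relevant files before building, using wflow_publish or wflow_git_commit. A minimal sketch with workflowr's own helpers; the script path and commit messages are illustrative placeholders, and only analysis/index.Rmd is a file known to exist in this project:

    library(workflowr)

    # Show which analysis files have uncommitted changes
    wflow_status()

    # Commit supporting files the analysis depends on but that workflowr
    # does not check automatically (the path here is a placeholder)
    wflow_git_commit("code/helper_functions.R", "Add helper script")

    # Commit the R Markdown source, rebuild the HTML, and commit the
    # rebuilt site in one step
    wflow_publish("analysis/index.Rmd", "Update index")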


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/index.Rmd) and HTML (docs/index.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
html ed31ccb L-ENA 2020-11-06 Build site.
html a34fbc8 L-ENA 2020-10-30 Build site.
html b5505f2 L-ENA 2020-10-30 Build site.
Rmd e826d62 L-ENA 2020-10-30 Updated index
html e826d62 L-ENA 2020-10-30 Updated index
html 93bdfca L-ENA 2020-10-30 Build site.
html 7bafffe L-ENA 2020-10-30 Build site.
html 26e1d9b L-ENA 2020-10-30 Build site.
html c16ec66 L-ENA 2020-10-30 Build site.
html c86cb34 L-ENA 2020-10-30 Build site.
html e727571 L-ENA 2020-10-30 Build site.
html 798d57b L-ENA 2020-10-30 Build site.
html 7074ef2 L-ENA 2020-10-30 Build site.
Rmd aa66c47 L-ENA 2020-10-30 Customised about/publications etc
html 02b0296 L-ENA 2020-10-30 Build site.
html 5061dd3 L-ENA 2020-10-30 Build site.
html 581260b L-ENA 2020-10-30 Build site.
Rmd 7fbed26 L-ENA 2020-10-30 Start workflowr project.
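As mentioned above, the hyperlinks in this table only resolve once a remote Git repository has been configured. A minimal sketch using wflow_git_remote(); the user and repository names are placeholders, not this project's actual settings:

    library(workflowr)

    # Register a GitHub remote named "origin" (user/repo are placeholders)
    wflow_git_remote(remote = "origin", user = "your-github-user",
                     repo = "your-repo", protocol = "https")

    # Push the committed Rmd and HTML files to that remote
    wflow_git_push()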

Data extraction methods for systematic review (semi)automation: A living review

This living review looks at data extraction methods for systematic review (semi)automation. On this website you will find the latest updates to the review, as well as additional information about the team, related publications, and the review's software.

Abstract of the protocol

Background:

Researchers in evidence-based medicine cannot keep up with the volume of both old and newly published primary research articles. Support for the early stages of the systematic review process (searching and screening studies for eligibility) is necessary because it is currently impossible to search for relevant research with high precision. Better automated data extraction may not only facilitate the stage of a review traditionally labelled ‘data extraction’, but also change earlier phases of the review process by making it possible to identify relevant research. Exponential improvements in computational processing speed and data storage are fostering the development of data mining models and algorithms. This, in combination with quicker pathways to publication, has led to a large landscape of tools and methods for data mining and extraction.

Objective:

To review published methods and tools for data extraction to (semi)automate the systematic reviewing process.

Methods:

We propose to conduct a living review. With this methodology we aim to maintain constant evidence surveillance, with bi-monthly search updates and review updates every six months if new evidence permits. In a cross-sectional analysis we will extract methodological characteristics and assess the quality of reporting in the included papers.

Conclusions:

We aim to increase transparency in the reporting and assessment of automation technologies, to the benefit of data scientists, systematic reviewers, and funders of health research. This living review will help to reduce duplicated effort among data scientists who develop data mining methods. It will also inform systematic reviewers about options for supporting their data extraction.