Last updated: 2019-08-19

Checks: 6 passed, 1 warning

Knit directory: polymeRID/

This reproducible R Markdown analysis was created with workflowr (version 1.4.0.9001). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


The R Markdown file has unstaged changes. To know which version of the R Markdown file created these results, you’ll want to first commit it to the Git repo. If you’re still working on the analysis, you can ignore this warning. When you’re finished, you can run wflow_publish to commit the R Markdown file and build the HTML.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20190729) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rprofile
    Ignored:    .Rproj.user/
    Ignored:    analysis/library.bib
    Ignored:    fun/
    Ignored:    output/20190810_1538/
    Ignored:    output/20190810_1546/
    Ignored:    output/20190810_1609/
    Ignored:    output/20190813_1044/
    Ignored:    output/logs/
    Ignored:    output/natural/
    Ignored:    output/nnet/
    Ignored:    output/svm/
    Ignored:    output/testRunII/
    Ignored:    output/testRunIII/
    Ignored:    packrat/lib-R/
    Ignored:    packrat/lib-ext/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/00LOCK-curl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/BH/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/FactoMineR/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/IDPmisc/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/KernSmooth/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/MASS/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Matrix/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/MatrixModels/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ModelMetrics/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/R6/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RColorBrewer/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RCurl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rcpp/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppArmadillo/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppEigen/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppGSL/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppZiggurat/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rfast/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rgtsvm/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rmisc/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/SQUAREM/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/SparseM/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/abind/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/askpass/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/assertthat/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/backports/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/base64enc/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/baseline/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/bit/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/bit64/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/bitops/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/boot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/callr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/car/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/carData/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/caret/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cellranger/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/class/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cli/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/clipr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cluster/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/codetools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/colorspace/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/config/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cowplot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/crayon/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/crosstalk/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/curl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/data.table/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/dendextend/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/digest/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/doParallel/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/dplyr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/e1071/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ellipse/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ellipsis/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/evaluate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/factoextra/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/fansi/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/flashClust/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/forcats/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/foreach/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/foreign/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/fs/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/generics/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/getPass/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggplot2/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggpubr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggrepel/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggsci/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggsignif/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/git2r/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/glue/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/gower/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/gridExtra/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/gtable/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/haven/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/hexbin/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/highr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/hms/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/htmltools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/htmlwidgets/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/httpuv/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/httr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ipred/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/iterators/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/jsonlite/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/keras/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/kerasR/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/knitr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/labeling/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/later/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lattice/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lava/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lazyeval/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/leaps/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lme4/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lubridate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/magrittr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/maptools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/markdown/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/mgcv/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/mime/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/minqa/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/munsell/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/nlme/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/nloptr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/nnet/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/numDeriv/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/openssl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/openxlsx/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/packrat/tests/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/pbkrtest/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/pillar/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/pkgconfig/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/plogr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/plotly/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/plyr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/polynom/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/prettyunits/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/processx/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/prodlim/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/progress/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/promises/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/prospectr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ps/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/purrr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/quantreg/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/randomForest/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/readr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/readxl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/recipes/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rematch/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/reshape2/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/reticulate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rio/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rlang/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rmarkdown/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rpart/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rprojroot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rsconnect/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rstudioapi/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/scales/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/scatterplot3d/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/shiny/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/sourcetools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/sp/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/stringi/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/stringr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/survival/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/sys/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tensorflow/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tfruns/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tibble/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tidyr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tidyselect/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/timeDate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tinytex/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/utf8/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/vctrs/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/viridis/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/viridisLite/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/whisker/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/withr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/workflowr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/xfun/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/xtable/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/yaml/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/zeallot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/zip/
    Ignored:    packrat/src/
    Ignored:    polymeRID.Rproj
    Ignored:    smp/20190812_1723_NNET/files/
    Ignored:    smp/20190812_1723_NNET/plots/
    Ignored:    smp/20190812_1729_NNET/files/
    Ignored:    smp/20190812_1729_NNET/plots/
    Ignored:    smp/20190812_1731_NNET/files/
    Ignored:    smp/20190812_1731_NNET/plots/
    Ignored:    smp/20190812_1733_NNET/files/
    Ignored:    smp/20190812_1733_NNET/plots/
    Ignored:    smp/20190815_1847_FUSION/
    Ignored:    website/

Untracked files:
    Untracked:  analysis/cnn_crossvalidation.Rmd
    Untracked:  smp/120619_W2_1000_1.txt
    Untracked:  smp/120619_W2_1000_2.txt
    Untracked:  smp/120619_W2_300_1.txt
    Untracked:  smp/120619_W2_300_2.txt
    Untracked:  smp/120619_W2_300_3.txt
    Untracked:  smp/120619_W2_300_4.txt
    Untracked:  smp/120619_W2_300_5.txt
    Untracked:  smp/120619_W2_500_1.txt
    Untracked:  smp/120619_W2_500_2.txt
    Untracked:  smp/120619_W2_500_3.txt
    Untracked:  smp/120619_W2_500_4.txt
    Untracked:  smp/120619_W2_500_5.txt
    Untracked:  smp/120619_W2_500_6.txt
    Untracked:  smp/120619_W2_500_7.txt

Unstaged changes:
    Deleted:    Rplots.pdf
    Deleted:    analysis/cnn_calibration.Rmd
    Modified:   analysis/cnn_exploration.Rmd
    Modified:   analysis/exploration.Rmd
    Modified:   analysis/index.Rmd
    Modified:   analysis/preparation.Rmd
    Modified:   analysis/rf_exploration.Rmd
    Modified:   analysis/svm_exploration.Rmd
    Modified:   classification.R
    Modified:   code/cnn_cv_K70.R
    Modified:   code/functions.R
    Modified:   code/nnet.R
    Modified:   code/plot_functions.R
    Modified:   code/shiny_apps/app.R
    Deleted:    code/shiny_apps/rsconnect/documents/spectra.Rmd/shinyapps.io/goergen95/spectra.dcf
    Modified:   code/shiny_apps/rsconnect/shinyapps.io/goergen95/spectra.dcf
    Deleted:    code/shiny_apps/spectra_app.tar.gz

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view them.

File Version Author Date Message
Rmd c0b21db goergen95 2019-08-15 proceeded on CNN exploration
html c0b21db goergen95 2019-08-15 proceeded on CNN exploration

Overview

Convolutional Neural Networks (CNNs) are mainly used in image processing tasks (Rawat and Wang 2017). However, they can also be applied to one-dimensional data sets, such as time series or spectral data (Liu et al. 2017; Ismail Fawaz et al. 2019; Ghosh et al. 2019; Berisha et al. 2019). They consist of three main types of layers, which are usually stacked into a sequential model that learns patterns from the input data in order to model the desired output. The most important type is the convolutional layer, which serves as a feature extractor for the input (Rawat and Wang 2017). Convolutional layers work with a specified number of neurons, or filters, each serving as a mapping function for a specific range of the input data, referred to as the kernel size. They do not only map features from the input data but also detect features in the output of previous convolutional layers. This is achieved by adjusting the weights associated with each filter based on a non-linear activation function. Additionally, pooling layers are commonly inserted after some of the convolutional layers. These layers reduce the feature maps of the previous layers and are associated with a function that decides which values are preserved; nowadays, max-pooling layers are most common, preserving the maximum signal of a feature map. Lastly, most CNNs end with a fully connected layer, which maps the last feature map to the output; its structure therefore depends on the problem at hand. For regression problems this layer may contain only a single neuron, while for classification problems it may contain n neurons, each modelling the output of a specific class. The network learns through backpropagation: the training data is repeatedly presented to the network, the distance to the desired output is measured by a loss function, an optimizer updates the weights associated with the filters, and a new epoch of presenting the training data to the CNN is started.
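For 1D data, the shrinking of the feature maps along the network can be worked out directly: a convolution without padding reduces the length by the kernel size minus one, and a max-pooling layer reduces it further according to its pool size and stride. The small helper below is purely illustrative (it is not part of the analysis code) and reproduces the output shapes shown in the model summary further down, assuming the kernel size of 50 and the 1,863 input variables used there.

# illustrative only: output length of an unpadded 1D convolution and of max pooling
convOutLen <- function(lenIn, kernel) lenIn - kernel + 1
poolOutLen <- function(lenIn, poolSize = 5, strides = 2) floor((lenIn - poolSize) / strides) + 1

convOutLen(1863, 50)              # 1814 after the first convolution (block1_conv1)
poolOutLen(convOutLen(1814, 50))  # 881 after the second convolution and max pooling (end of block 1)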

Model Architecture

Unlike with the random forest and support vector machine models, we did not test for noise in the dataset here, mainly because of limitations in computation time. Computation time was reduced substantially by installing the keras package in GPU mode based on Nvidia's CUDA library. However, CNNs remain computationally intensive, since depending on the architecture several thousand weights have to be trained. Here, we developed a simple two-block architecture with 4 convolutional layers in total. The number of filters, or feature extractors, increases by a factor of 2 with each layer. We chose this architecture with 4 layers and only small numbers of filters because it delivered relatively high accuracies at short computation times. The code below defines a function to set up and compile a CNN for a given kernel size.

# expects that you have installed keras and tensorflow properly
library(keras)

buildCNN <- function(kernel, nVariables, nOutcome){
  model = keras_model_sequential()
  model %>%
    # block 1
    layer_conv_1d(filters = 8,
                  kernel_size = kernel,
                  input_shape = c(nVariables,1),
                  name = "block1_conv1",) %>%
    layer_activation_relu(name="block1_relu1") %>%
    layer_conv_1d(filters = 16,
                  kernel_size = kernel,
                  name = "block1_conv2") %>%
    layer_activation_relu(name="block1_relu2") %>%
    layer_max_pooling_1d(strides=2,
                         pool_size = 5,
                         name="block1_max_pool1") %>%
    
    # block 2
    layer_conv_1d(filters = 32,
                  kernel_size = kernel,
                  name = "block2_conv1") %>%
    layer_activation_relu(name="block2_relu1") %>%
    layer_conv_1d(filters = 64,
                  kernel_size = kernel,
                  name = "block2_conv2") %>%
    layer_activation_relu(name="block2_relu2") %>%
    layer_max_pooling_1d(strides=2,
                         pool_size = 5,
                         name="block2_max_pool1") %>%
    
    # exit block
    layer_global_max_pooling_1d(name="exit_max_pool") %>%
    layer_dropout(rate=0.5) %>%
    layer_dense(units = nOutcome, activation = "softmax")
  
  # compile for classification with the categorical crossentropy loss function
  # and use adam as the optimizer
  compile(model, loss="categorical_crossentropy", optimizer="adam", metrics="accuracy")
  return(model)
}

The function expects three arguments. The first is the kernel size, which specifies the width of the window used to extract features from the input data and from subsequent layer outputs; note that the kernel size is held constant throughout the network. The second argument expects an integer giving the number of input variables, which corresponds to the number of wavenumbers in the present case. The third argument also expects an integer, this time the number of desired output classes. Each convolutional layer is associated with a ReLU activation function. At the end of each block we added a max-pooling layer with stride = 2, which takes the maximum values of its respective input and discards the rest, effectively reducing the feature space by half. The exit block consists of a global max-pooling layer followed by a dropout layer, which randomly silences half of the neurons to reduce overfitting. The last layer is a fully connected layer which maps its input to nOutcome classes via the softmax activation function. The last line of code compiles the model so it is ready for training. We use categorical crossentropy as the loss function because we currently have 14 different classes, which lend themselves well to one-hot encoding; if the number of classes were very high, for example in speech recognition problems, sparse categorical crossentropy would be the loss function of choice. As the optimizer we chose adam, which adapts the learning rate and its decay during training. Finally, we tell the model to monitor the training process based on overall accuracy. We can now compile a first model and take a look at its structure:

model = buildCNN(kernel = 50, nVariables = 1863, nOutcome = 12)
model
Model
Model: "sequential"
___________________________________________________________________________
Layer (type)                     Output Shape                  Param #     
===========================================================================
block1_conv1 (Conv1D)            (None, 1814, 8)               408         
___________________________________________________________________________
block1_relu1 (ReLU)              (None, 1814, 8)               0           
___________________________________________________________________________
block1_conv2 (Conv1D)            (None, 1765, 16)              6416        
___________________________________________________________________________
block1_relu2 (ReLU)              (None, 1765, 16)              0           
___________________________________________________________________________
block1_max_pool1 (MaxPooling1D)  (None, 881, 16)               0           
___________________________________________________________________________
block2_conv1 (Conv1D)            (None, 832, 32)               25632       
___________________________________________________________________________
block2_relu1 (ReLU)              (None, 832, 32)               0           
___________________________________________________________________________
block2_conv2 (Conv1D)            (None, 783, 64)               102464      
___________________________________________________________________________
block2_relu2 (ReLU)              (None, 783, 64)               0           
___________________________________________________________________________
block2_max_pool1 (MaxPooling1D)  (None, 390, 64)               0           
___________________________________________________________________________
exit_max_pool (GlobalMaxPooling1 (None, 64)                    0           
___________________________________________________________________________
dropout (Dropout)                (None, 64)                    0           
___________________________________________________________________________
dense (Dense)                    (None, 12)                    780         
===========================================================================
Total params: 135,700
Trainable params: 135,700
Non-trainable params: 0
___________________________________________________________________________

In total, the current network consists of 135,700 trainable weights. In the output shape column we can follow the transformation of the input: the 1,863 input variables are mapped to a 1D feature map of length 1,814 with 8 filters after the first convolutional layer, and finally to an output of shape 12.
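The parameter counts reported in the summary can be checked by hand: a 1D convolutional layer holds filters × (kernel size × input channels) weights plus one bias per filter, and the dense layer holds input units × classes weights plus one bias per class. The following lines are only an illustrative check of the numbers above, not part of the analysis code.

# illustrative check of the parameter counts in the model summary
convParams  <- function(filters, kernel, inChannels) filters * kernel * inChannels + filters
denseParams <- function(inUnits, nClasses) inUnits * nClasses + nClasses

convParams(8, 50, 1)    #    408 (block1_conv1)
convParams(16, 50, 8)   #   6416 (block1_conv2)
convParams(32, 50, 16)  #  25632 (block2_conv1)
convParams(64, 50, 32)  # 102464 (block2_conv2)
denseParams(64, 12)     #    780 (dense)
# sum: 135,700 trainable parameters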

Training a CNN

We can use our reference database to start a training process with the CNN defined above. First, the input data needs to be transformed into arrays that the keras::fit() function understands; here, we use keras backend functionality to achieve this. Additionally, every training process needs to be initiated with the number of epochs for which the training data will be presented. We used a fixed value of 300 epochs because beyond that value no substantial gain in accuracy was observed. The training data is presented in batches of 10 observations each.

# `ref` holds the path to the directory containing the reference database
data = read.csv(file = paste0(ref, "reference_database.csv"), header = TRUE)

K <- keras::backend()
x_train = as.matrix(data[, -ncol(data)])  # all columns except the class label
x = K$expand_dims(x_train, axis = 2L)     # add the channel dimension expected by Conv1D layers
x_train = K$eval(x)
y_train = keras::to_categorical(as.numeric(data$class) - 1, length(unique(data$class)))  # one-hot encoding

history = keras::fit(model, x = x_train, y = y_train,
                     epochs = 300,
                     batch_size = 10)
history
plot(history)
Trained on 147 samples (batch_size=10, epochs=300)
Final epoch (plot to see history):
loss: 0.05043
 acc: 0.9796 

Fig. 1: Accuracy and loss values for an exemplary training process.
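Note that the fit() call above trains on all reference spectra, whereas the accuracies discussed in the next section refer to held-out validation data. As a purely illustrative sketch, one way to obtain such validation accuracies is to pass validation_split to keras::fit(); the 20% split below is an assumption and not necessarily the procedure used to generate the reported results.

# illustrative sketch: hold out 20% of the samples for validation during training
history_val = keras::fit(model, x = x_train, y = y_train,
                         epochs = 300,
                         batch_size = 10,
                         validation_split = 0.2)
plot(history_val)  # shows acc and val_acc per epoch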

Results

We can now plot the results, with increasing kernel sizes on the x-axis, the accuracy values for the validation data set on the y-axis, and different lines for the data transformations (Fig. 2).

Fig. 2: Accuracy results for different kernel sizes.

We observe that the accuracy pattern is highly variable, both with kernel size and between data transformations. For example, using the second derivative of the Savitzky-Golay filtered data yields very low accuracies across all kernel sizes. To aid the selection of an appropriate kernel size and data transformation, we can calculate some descriptive statistics to find optimal configurations. For example, we can look for the kernel size which delivers the highest accuracy on average, and we can search for the combinations of data transformation and kernel size which yielded the highest accuracies.

# `results` holds the validation accuracy (val_acc) per preprocessing type and kernel size
kernelAcc = aggregate(val_acc ~ kernel, results, mean)  # mean accuracy per kernel size
kernelAcc = kernelAcc[order(-kernelAcc$val_acc), ]      # order kernel sizes by accuracy

type = results[which(results$kernel == kernelAcc$kernel[1]), ]  # runs using the best kernel size
type = type[order(-type$val_acc), ]

highest = results[order(-results$val_acc), ]  # all runs ordered by validation accuracy

On average, a kernel size of 50 delivered the highest accuracy of 0.81 (Tab. 1). A kernel size of 90 yielded the second-highest accuracy.

Tab. 1: Average performance of kernel size across data preprocessing types.
   kernel   val_acc
 5     50 0.8071429
 9     90 0.7940476
 3     30 0.7690476
 2     20 0.7654762
 6     60 0.7630952
 7     70 0.7559524
 8     80 0.7511905
 4     40 0.7511905
10    100 0.7500000
12    150 0.7416667
11    125 0.7333333
13    175 0.7250000
14    200 0.7214286
 1     10 0.6904762

When we order the results by the highest accuracy values achieved, we observe that only 4 combinations of preprocessing type and kernel size yielded an accuracy of 0.9 or higher (Tab. 2): the second derivative of the normalised data with an accuracy of 0.91 at a kernel size of 90, the simple Savitzky-Golay smoothed data with an accuracy of 0.9 at a kernel size of 70, the second derivative of the raw data with an accuracy of 0.9 at a kernel size of 90, and the first derivative of the normalised data with an accuracy of 0.9 at a kernel size of 150.

Tab. 2: The ten highest accuracy results for different preprocessing types at varying kernel sizes.
      X      types kernel      loss       acc  val_loss   val_acc
163 163    norm.d2     90 0.2898488 0.8961039 0.8323456 0.9142857
 35  35         sg     70 0.2537513 0.9350649 1.3861195 0.9000000
135 135     raw.d2     90 0.1529336 0.9350649 1.5874773 0.9000000
152 152    norm.d1    150 0.1060385 0.9740260 0.8856056 0.9000000
 65  65     raw.d1     90 0.1509757 0.9350649 1.0225650 0.8857143
101 101 sg.norm.d1     30 0.1817555 0.9480519 0.4789599 0.8857143
104 104 sg.norm.d1     60 0.2074841 0.9350649 0.6712436 0.8857143
162 162    norm.d2     80 0.0823583 0.9610389 0.5753851 0.8857143
166 166    norm.d2    150 0.0143513 1.0000000 1.1288143 0.8857143
 27  27       norm    175 0.0565004 0.9870130 1.4892539 0.8714285

Cross Validation

After finding the optimal kernel sizes for the different preprocessing techniques, a cross-validation approach was used to find the configuration with the best generalization potential. The documentation of the results can be found here.
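As an illustration only, such a cross-validation could be organised as in the sketch below, assuming x_train, y_train, and buildCNN() as defined above and an arbitrary choice of 5 folds; the actual fold number, kernel sizes, and preprocessing variants are described in the linked documentation.

# illustrative k-fold cross-validation sketch (assumes x_train, y_train, and buildCNN from above)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(x_train)))  # random fold assignment per sample
cvAcc <- numeric(k)

for (i in 1:k) {
  cnn <- buildCNN(kernel = 50, nVariables = dim(x_train)[2], nOutcome = ncol(y_train))
  keras::fit(cnn, x = x_train[folds != i, , , drop = FALSE],
             y = y_train[folds != i, ],
             epochs = 300, batch_size = 10, verbose = 0)
  metrics <- keras::evaluate(cnn, x = x_train[folds == i, , , drop = FALSE],
                             y = y_train[folds == i, ], verbose = 0)
  cvAcc[i] <- metrics$acc  # metric may be named "accuracy" in newer keras versions
}
mean(cvAcc)  # average held-out accuracy across folds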

Citations on this page

Berisha, Sebastian, Mahsa Lotfollahi, Jahandar Jahanipour, Ilker Gurcan, Michael Walsh, Rohit Bhargava, Hien Van Nguyen, and David Mayerich. 2019. “Deep learning for FTIR histology: leveraging spatial and spectral features with convolutional neural networks.” Analyst 144 (5). Royal Society of Chemistry: 1642–53. https://doi.org/10.1039/c8an01495g.

Ghosh, Kunal, Annika Stuke, Milica Todorovic, Peter Bjørn Jørgensen, Mikkel N Schmidt, Aki Vehtari, Patrick Rinke, et al. 2019. “Deep Learning Spectroscopy: Neural Networks for Molecular Excitation Spectra.” Advanced Science 6: 1801367. https://doi.org/10.1002/advs.201801367.

Ismail Fawaz, Hassan, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre Alain Muller. 2019. “Deep learning for time series classification: a review.” Data Mining and Knowledge Discovery 33 (4): 917–63. https://doi.org/10.1007/s10618-019-00619-1.

Liu, Jinchao, Margarita Osadchy, Lorna Ashton, Michael Foster, Christopher J. Solomon, and Stuart J. Gibson. 2017. “Deep convolutional neural networks for Raman spectrum recognition: A unified solution.” Analyst 142 (21): 4067–74. https://doi.org/10.1039/c7an01371j.

Rawat, Waseem, and Zenghui Wang. 2017. “Deep convolutional neural networks for image classification: A comprehensive review.” Neural Computation 29 (9): 2352–2449. https://doi.org/10.1162/NECO_a_00990.


sessionInfo()
R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Linux Mint 19.1

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=de_DE.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=de_DE.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] plotly_4.9.0              tensorflow_1.14.0        
 [3] abind_1.4-5               e1071_1.7-2              
 [5] keras_2.2.4.1             workflowr_1.4.0.9001     
 [7] baseline_1.2-1            gridExtra_2.3            
 [9] stringr_1.4.0             prospectr_0.1.3          
[11] RcppArmadillo_0.9.600.4.0 openxlsx_4.1.0.1         
[13] magrittr_1.5              ggplot2_3.2.0            
[15] reshape2_1.4.3            dplyr_0.8.3              

loaded via a namespace (and not attached):
 [1] httr_1.4.1        tidyr_0.8.3       jsonlite_1.6     
 [4] viridisLite_0.3.0 foreach_1.4.7     shiny_1.3.2      
 [7] assertthat_0.2.1  highr_0.8         yaml_2.2.0       
[10] pillar_1.4.2      backports_1.1.4   lattice_0.20-38  
[13] glue_1.3.1        reticulate_1.13   digest_0.6.20    
[16] promises_1.0.1    colorspace_1.4-1  htmltools_0.3.6  
[19] httpuv_1.5.1      Matrix_1.2-17     plyr_1.8.4       
[22] pkgconfig_2.0.2   SparseM_1.77      purrr_0.3.2      
[25] xtable_1.8-4      scales_1.0.0      whisker_0.3-2    
[28] later_0.8.0       git2r_0.26.1      tibble_2.1.3     
[31] generics_0.0.2    withr_2.1.2       lazyeval_0.2.2   
[34] crayon_1.3.4      mime_0.7          evaluate_0.14    
[37] fs_1.3.1          class_7.3-15      tools_3.6.1      
[40] data.table_1.12.2 munsell_0.5.0     zip_2.0.3        
[43] compiler_3.6.1    rlang_0.4.0       grid_3.6.1       
[46] iterators_1.0.12  htmlwidgets_1.3   crosstalk_1.0.0  
[49] base64enc_0.1-3   labeling_0.3      rmarkdown_1.14   
[52] gtable_0.3.0      codetools_0.2-16  R6_2.4.0         
[55] tfruns_1.4        knitr_1.24        zeallot_0.1.0    
[58] rprojroot_1.3-2   stringi_1.4.3     Rcpp_1.0.2       
[61] tidyselect_0.2.5  xfun_0.8