Last updated: 2019-08-15

Checks: 6 passed, 1 warning

Knit directory: polymeRID/

This reproducible R Markdown analysis was created with workflowr (version 1.4.0.9001). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


The R Markdown is untracked by Git. To know which version of the R Markdown file created these results, you’ll want to first commit it to the Git repo. If you’re still working on the analysis, you can ignore this warning. When you’re finished, you can run wflow_publish to commit the R Markdown file and build the HTML.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20190729) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rprofile
    Ignored:    .Rproj.user/
    Ignored:    analysis/library.bib
    Ignored:    fun/
    Ignored:    output/20190810_1538/
    Ignored:    output/20190810_1546/
    Ignored:    output/20190810_1609/
    Ignored:    output/20190813_1044/
    Ignored:    output/logs/
    Ignored:    output/natural/
    Ignored:    output/nnet/
    Ignored:    output/svm/
    Ignored:    output/testRunII/
    Ignored:    output/testRunIII/
    Ignored:    packrat/lib-R/
    Ignored:    packrat/lib-ext/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/BH/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/FactoMineR/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/IDPmisc/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/KernSmooth/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/MASS/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Matrix/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/MatrixModels/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ModelMetrics/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/R6/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RColorBrewer/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rcpp/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppArmadillo/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppEigen/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppGSL/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppZiggurat/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rfast/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rgtsvm/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rmisc/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/SQUAREM/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/SparseM/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/abind/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/askpass/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/assertthat/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/backports/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/base64enc/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/baseline/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/bit/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/bit64/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/boot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/callr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/car/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/carData/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/caret/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cellranger/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/class/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cli/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/clipr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cluster/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/codetools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/colorspace/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/config/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/cowplot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/crayon/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/crosstalk/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/curl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/data.table/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/dendextend/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/digest/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/doParallel/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/dplyr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/e1071/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ellipse/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ellipsis/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/evaluate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/factoextra/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/fansi/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/flashClust/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/forcats/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/foreach/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/foreign/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/fs/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/generics/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/getPass/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggplot2/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggpubr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggrepel/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggsci/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggsignif/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/git2r/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/glue/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/gower/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/gridExtra/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/gtable/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/haven/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/hexbin/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/highr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/hms/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/htmltools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/htmlwidgets/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/httpuv/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/httr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ipred/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/iterators/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/jsonlite/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/keras/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/kerasR/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/knitr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/labeling/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/later/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lattice/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lava/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lazyeval/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/leaps/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lme4/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/lubridate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/magrittr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/maptools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/markdown/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/mgcv/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/mime/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/minqa/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/munsell/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/nlme/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/nloptr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/nnet/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/numDeriv/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/openssl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/openxlsx/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/packrat/tests/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/pbkrtest/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/pillar/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/pkgconfig/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/plogr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/plotly/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/plyr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/polynom/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/prettyunits/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/processx/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/prodlim/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/progress/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/promises/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/prospectr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/ps/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/purrr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/quantreg/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/randomForest/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/readr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/readxl/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/recipes/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rematch/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/reshape2/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/reticulate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rio/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rlang/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rmarkdown/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rpart/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rprojroot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/rstudioapi/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/scales/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/scatterplot3d/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/shiny/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/sourcetools/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/sp/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/stringi/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/stringr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/survival/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/sys/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tensorflow/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tfruns/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tibble/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tidyr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tidyselect/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/timeDate/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/tinytex/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/utf8/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/vctrs/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/viridis/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/viridisLite/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/whisker/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/withr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/workflowr/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/xfun/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/xtable/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/yaml/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/zeallot/
    Ignored:    packrat/lib/x86_64-pc-linux-gnu/3.6.1/zip/
    Ignored:    packrat/src/
    Ignored:    polymeRID.Rproj
    Ignored:    smp/20190812_1723_NNET/files/
    Ignored:    smp/20190812_1723_NNET/plots/
    Ignored:    smp/20190812_1729_NNET/files/
    Ignored:    smp/20190812_1729_NNET/plots/
    Ignored:    smp/20190812_1731_NNET/files/
    Ignored:    smp/20190812_1731_NNET/plots/
    Ignored:    smp/20190812_1733_NNET/files/
    Ignored:    smp/20190812_1733_NNET/plots/
    Ignored:    website/analysis/
    Ignored:    website/code/
    Ignored:    website/docs/
    Ignored:    website/output/
    Ignored:    website/run/
    Ignored:    website/smp/

Untracked files:
    Untracked:  analysis/cnn_calibration.Rmd
    Untracked:  analysis/cnn_exploration.Rmd
    Untracked:  code/cnn_cv_K70.R
    Untracked:  docs/figure/
    Untracked:  website/mod/

Unstaged changes:
    Modified:   code/functions.R
    Modified:   code/nnet.R
    Deleted:    website/ref/reference_Nylon.csv

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


There are no past versions. Publish this analysis with wflow_publish() to start tracking its development.


Interesting stuff about CNNs.

Unlike with the random forest and the support vector machine, we did not test the influence of noise in the dataset here, mainly due to limitations in computation time. The computation time was already reduced significantly by installing the keras package in GPU mode based on Nvidia’s CUDA library. Readers interested in setting up a local machine for GPU computations with the R implementation of keras are referred to the About section of this website. Still, CNNs remain computationally intensive, since depending on the architecture several thousand weights have to be trained. Here, we developed a simple two-block architecture with 4 convolutional layers in total. The number of filters, or feature extractors, doubles with each layer. We chose a rather ‘deep’ network architecture with 4 layers and only small numbers of filters. The code below defines a function to set up and compile a CNN for a given kernel size.

# expects that you have installed keras and tensorflow properly
library(keras)

buildCNN <- function(kernel, nVariables, nOutcome){
  model = keras_model_sequential()
  model %>%
    # block 1
    layer_conv_1d(filters = 8,
                  kernel_size = kernel,
                  input_shape = c(nVariables,1),
                  name = "block1_conv1",) %>%
    layer_activation_relu(name="block1_relu1") %>%
    layer_conv_1d(filters = 16,
                  kernel_size = kernel,
                  name = "block1_conv2") %>%
    layer_activation_relu(name="block1_relu2") %>%
    layer_max_pooling_1d(strides=2,
                         pool_size = 5,
                         name="block1_max_pool1") %>%
    
    # block 2
    layer_conv_1d(filters = 32,
                  kernel_size = kernel,
                  name = "block2_conv1") %>%
    layer_activation_relu(name="block2_relu1") %>%
    layer_conv_1d(filters = 64,
                  kernel_size = kernel,
                  name = "block2_conv2") %>%
    layer_activation_relu(name="block2_relu2") %>%
    layer_max_pooling_1d(strides=2,
                         pool_size = 5,
                         name="block2_max_pool1") %>%
    
    # exit block
    layer_global_max_pooling_1d(name="exit_max_pool") %>%
    layer_dropout(rate=0.5) %>%
    layer_dense(units = nOutcome, activation = "softmax")
  
  # compile for classification with the categorical crossentropy loss
  # and the adam optimizer (compile() modifies the model in place)
  compile(model, loss="categorical_crossentropy", optimizer="adam", metrics="accuracy")

  # return the compiled model
  model
}

The function expects three arguments. The first is the kernel size, which specifies the width of the window that extracts features from the input data and from the subsequent layer outputs. Note that the kernel size is held constant throughout the network. The second argument expects an integer giving the number of variables of the input, which corresponds to the number of wavenumbers in the present case. The third argument also expects an integer, this time the number of desired output classes. Each convolutional layer is followed by a ReLU activation function. At the end of each block we added a max pooling layer with a pool size of 5 and a stride of 2, which keeps only the maximum value of each window and thereby roughly halves the length of the feature maps. The exit block consists of a global max pooling layer followed by a dropout layer, which randomly silences half of the neurons to reduce overfitting. The last layer is a fully-connected layer which maps its input to nOutcome classes via the softmax activation function. The last line of code compiles the model so it is ready for training. We use categorical crossentropy as the loss function because we currently have 14 different classes, which lend themselves well to one-hot encoding. If the number of classes were much larger, for example in speech recognition problems, sparse categorical crossentropy would be the loss function of choice. As the optimizer we chose adam, which adapts the learning rate during training. Finally, we tell the model to report the overall accuracy as its training metric. Let’s build a model and take a look at its parameters:

model = buildCNN(kernel = 50, nVariables = 1863, nOutcome = 12)
model
Model
Model: "sequential"
___________________________________________________________________________
Layer (type)                     Output Shape                  Param #     
===========================================================================
block1_conv1 (Conv1D)            (None, 1814, 8)               408         
___________________________________________________________________________
block1_relu1 (ReLU)              (None, 1814, 8)               0           
___________________________________________________________________________
block1_conv2 (Conv1D)            (None, 1765, 16)              6416        
___________________________________________________________________________
block1_relu2 (ReLU)              (None, 1765, 16)              0           
___________________________________________________________________________
block1_max_pool1 (MaxPooling1D)  (None, 881, 16)               0           
___________________________________________________________________________
block2_conv1 (Conv1D)            (None, 832, 32)               25632       
___________________________________________________________________________
block2_relu1 (ReLU)              (None, 832, 32)               0           
___________________________________________________________________________
block2_conv2 (Conv1D)            (None, 783, 64)               102464      
___________________________________________________________________________
block2_relu2 (ReLU)              (None, 783, 64)               0           
___________________________________________________________________________
block2_max_pool1 (MaxPooling1D)  (None, 390, 64)               0           
___________________________________________________________________________
exit_max_pool (GlobalMaxPooling1 (None, 64)                    0           
___________________________________________________________________________
dropout (Dropout)                (None, 64)                    0           
___________________________________________________________________________
dense (Dense)                    (None, 12)                    780         
===========================================================================
Total params: 135,700
Trainable params: 135,700
Non-trainable params: 0
___________________________________________________________________________
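
As a sanity check, the shapes and the parameter counts in this summary can be reproduced by hand. With the Keras default padding = "valid", a convolution with kernel size k shortens a sequence of length L to L - k + 1, max pooling with pool_size = 5 and strides = 2 maps L to floor((L - 5) / 2) + 1, and a Conv1D layer with f_in input and f_out output filters carries (k * f_in + 1) * f_out weights. A minimal check in R, using only these defaults and the numbers shown above:

conv_len <- function(len, kernel) len - kernel + 1                      # Conv1D with "valid" padding
pool_len <- function(len, pool = 5, stride = 2) floor((len - pool) / stride) + 1
conv_par <- function(kernel, f_in, f_out) (kernel * f_in + 1) * f_out   # weights incl. bias

# block 1: 1863 -> 1814 -> 1765 -> 881
pool_len(conv_len(conv_len(1863, 50), 50))

# block 2: 881 -> 832 -> 783 -> 390
pool_len(conv_len(conv_len(881, 50), 50))

# trainable weights: 408 + 6416 + 25632 + 102464 + 780 = 135,700
conv_par(50, 1, 8) + conv_par(50, 8, 16) + conv_par(50, 16, 32) +
  conv_par(50, 32, 64) + (64 + 1) * 12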

In total, the current network consists of 135,700 weights to be trained. In the output shape column we can follow the transformation of the input data from a 1D array of length 1863, to one of length 1814 after the first convolutional layer with its 8 filters, down to a final output of length 12, one entry per class. We can now take a look at how the model performs on our data. But first we need to transform the input data into arrays that can be understood by the keras::fit() function. We use keras backend functionality for this.

data = read.csv(file = paste0(ref, "reference_database.csv"), header = TRUE)

K <- keras::backend()
x_train = as.matrix(data[, 1:(ncol(data)-1)])   # predictors: every column except the last (class label)
x = K$expand_dims(x_train, axis = 2L)           # add a channel dimension for Conv1D
x_train = K$eval(x)
y_train = keras::to_categorical(as.numeric(data$class)-1, length(unique(data$class)))   # one-hot labels

history = keras::fit(model, x = x_train, y = y_train,
                               epochs=300,
                               batch_size = 10)
history
plot(history)
Trained on 147 samples (batch_size=10, epochs=300)
Final epoch (plot to see history):
loss: 0.05043
 acc: 0.9796 

We achieved an accuracy of 0.98 within 300 epochs. Still, this single value is hardly an indicator of the generalization potential of the CNN, because we did not use an independent validation data set to evaluate the performance of the model on unseen data. Before evaluating the generalization performance, we therefore analyse how the CNN reacts to different kernel sizes as well as to some specific data transformations. To save computation time we only evaluated a handful of transformations that performed well during the training of RF and SVM: the raw data itself, the normalised data, the Savitzky-Golay smoothed representations of these two, and, for comparison, the first derivative of the raw spectrum, which did not prove as robust during the training of SVM and RF. We use a loop to calculate all the different combinations. Note also that we call set.seed() before splitting the data. This way, every combination of kernel size and data transformation trains and evaluates on exactly the same samples. This would not be valid if the generalization potential of the model were being assessed, but since we are interested in the relative performance of different kernel sizes and data transformations, training the different models on the same data is actually beneficial for comparison. Otherwise, it would not be possible to attribute variations in performance to the parameters rather than to a different training and validation split.

kernels = c(10,20,30,40,50,60,70,80,90,100,125,150,175,200)
types = c("raw","norm","sg","sg.norm","raw.d1")
results = data.frame(types = rep(0, length(kernels) * length(types)),
                     kernel =rep(0, length(kernels) * length(types)),
                     loss = rep(0, length(kernels) * length(types)),
                     acc = rep(0, length(kernels) * length(types)),
                     val_loss=rep(0, length(kernels) * length(types)),
                     val_acc=rep(0, length(kernels) * length(types)))

variables = ncol(data)-1
counter = 1

for (type in types){
    if (type == "raw"){
      tmp = data
      variables = ncol(data)-1
    }else{
      tmp = preprocess(data[, 1:(ncol(data)-1)], type = type)   # transform the predictors only
      variables = ncol(tmp)
      tmp$class = data$class
    }
  for (kernel in kernels){

    # splitting between training and test
    set.seed(42)
    index = caret::createDataPartition(y=tmp$class,p=.5)
    training = tmp[index$Resample1,]
    validation = tmp[-index$Resample1,]

    # splitting predictors and labels
    x_train = training[,1:variables]
    y_train = training[,1+variables]
    x_test = validation[,1:variables]
    y_test = validation[,1+variables]

    # number of unique target classes
    nOutcome = length(levels(y_train))

    # function to get keras array for dataframes
    K <- keras::backend()
    df_to_karray <- function(df){
      d = as.matrix(df)
      d = K$expand_dims(d, axis = 2L)
      d = K$eval(d)
    }

    # coerce data to keras structure
    x_train = df_to_karray(x_train)
    x_test = df_to_karray(x_test)
    y_train = keras::to_categorical(as.numeric(y_train)-1,nOutcome)
    y_test = keras::to_categorical(as.numeric(y_test)-1,nOutcome)
    # contstruction of "large" neural network
    model = prepNNET(kernel, variables, nOutcome)
    history = keras::fit(model, x = x_train, y = y_train,
                          epochs=200, validation_data = list(x_test,y_test),
                          #callbacks =  callback_tensorboard(paste0(output,"nnet/logs")),
                          batch_size = 10 )
    results$types[counter] = type
    results$kernel[counter] = kernel
    results$loss[counter] = history$metrics$loss[100]
    results$acc[counter] = history$metrics$acc[100]
    results$val_loss[counter] = history$metrics$val_loss[100]
    results$val_acc[counter] = history$metrics$val_acc[100]
    write.csv(results, file = paste0(output,"nnet/kernels.csv"))
    counter = counter + 1
  }

  print(results)
}
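
As an aside, the keras backend round trip inside df_to_karray() (expand_dims() followed by eval()) is not strictly necessary. A minimal alternative sketch in base R that produces the same three-dimensional array (same helper name as above, shown here only as an option):

df_to_karray <- function(df){
  m = as.matrix(df)
  dim(m) = c(nrow(m), ncol(m), 1)   # samples x wavenumbers x 1 channel
  m
}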

We can now plot the results, with kernel size on the x-axis, validation accuracy on the y-axis, and one line per data transformation.

library(ggplot2)
library(plotly)
kernelPlot = ggplot(data = results, aes(x = kernel, y = val_acc))+
  geom_line(aes(color=types,group=types), size = 1.5)+
  ylab("validation accuracy")+
  xlab("kernel size")+
  theme_minimal()
ggplotly(kernelPlot)

We can observe that the accuracy varies considerably with the kernel size, as well as within and between the different data transformations. To aid the selection of an appropriate kernel size and data transformation, we calculate a few summary statistics to find the optimal configuration.

# mean validation accuracy per kernel size, sorted in decreasing order
kernelAcc = aggregate(val_acc ~ kernel, results, mean)
kernelAcc = kernelAcc[order(-kernelAcc$val_acc), ]

# all transformations for the best-performing kernel size, sorted by validation accuracy
type = results[which(results$kernel == kernelAcc$kernel[1]), ]
type = type[order(-type$val_acc), ]

kernelAcc
type
kernel val_acc
7 70 0.8285714
6 60 0.8228571
3 30 0.8171429
2 20 0.8171429
5 50 0.8085714
8 80 0.8057143
10 100 0.8057143
9 90 0.8000000
4 40 0.7828571
13 175 0.7800000
11 125 0.7742857
1 10 0.7628571
12 150 0.7600000
14 200 0.7514286
X types kernel loss acc val_loss val_acc
35 35 sg 70 0.2537513 0.9350649 1.3861195 0.9000000
21 21 norm 70 0.0427878 0.9870130 0.9719612 0.8428571
49 49 sg.norm 70 0.1419386 0.9610389 0.8783187 0.8142857
7 7 raw 70 0.3521424 0.8311688 0.9944996 0.8000000
63 63 raw.d1 70 0.4158809 0.8311688 0.8798401 0.7857143

On average, a kernel size of 70 delivered the highest accuracy values, yielding a mean validation accuracy of 0.83. For this kernel size, the Savitzky-Golay smoothed data set yielded the highest accuracy of 0.90. We will thus proceed with the smoothed data and a kernel size of 70.
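
To wrap up, here is a minimal sketch of how the selected configuration could be fitted on the full reference data set. It reuses the ref path, the preprocess() helper and the buildCNN() function from the chunks above; the file name in the last line is purely illustrative.

# fit the final CNN with the chosen configuration:
# Savitzky-Golay smoothed spectra and a kernel size of 70
data = read.csv(file = paste0(ref, "reference_database.csv"), header = TRUE)
sg = preprocess(data[, 1:(ncol(data)-1)], type = "sg")

x = as.matrix(sg)
dim(x) = c(nrow(x), ncol(x), 1)   # add a channel dimension for Conv1D
y = keras::to_categorical(as.numeric(data$class) - 1, length(unique(data$class)))

finalModel = buildCNN(kernel = 70, nVariables = ncol(sg), nOutcome = length(unique(data$class)))
keras::fit(finalModel, x = x, y = y, epochs = 300, batch_size = 10)

# keras::save_model_hdf5(finalModel, "final_cnn_sg_k70.h5")   # illustrative file name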



sessionInfo()
R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Linux Mint 19.1

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=de_DE.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=de_DE.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] plotly_4.9.0              tensorflow_1.14.0        
 [3] abind_1.4-5               e1071_1.7-2              
 [5] keras_2.2.4.1             workflowr_1.4.0.9001     
 [7] baseline_1.2-1            gridExtra_2.3            
 [9] stringr_1.4.0             prospectr_0.1.3          
[11] RcppArmadillo_0.9.600.4.0 openxlsx_4.1.0.1         
[13] magrittr_1.5              ggplot2_3.2.0            
[15] reshape2_1.4.3            dplyr_0.8.3              

loaded via a namespace (and not attached):
 [1] httr_1.4.1        tidyr_0.8.3       jsonlite_1.6     
 [4] viridisLite_0.3.0 foreach_1.4.7     shiny_1.3.2      
 [7] assertthat_0.2.1  highr_0.8         yaml_2.2.0       
[10] pillar_1.4.2      backports_1.1.4   lattice_0.20-38  
[13] glue_1.3.1        reticulate_1.13   digest_0.6.20    
[16] promises_1.0.1    colorspace_1.4-1  htmltools_0.3.6  
[19] httpuv_1.5.1      Matrix_1.2-17     plyr_1.8.4       
[22] pkgconfig_2.0.2   SparseM_1.77      purrr_0.3.2      
[25] xtable_1.8-4      scales_1.0.0      whisker_0.3-2    
[28] later_0.8.0       git2r_0.26.1      tibble_2.1.3     
[31] generics_0.0.2    withr_2.1.2       lazyeval_0.2.2   
[34] crayon_1.3.4      mime_0.7          evaluate_0.14    
[37] fs_1.3.1          class_7.3-15      tools_3.6.1      
[40] data.table_1.12.2 munsell_0.5.0     zip_2.0.3        
[43] compiler_3.6.1    rlang_0.4.0       grid_3.6.1       
[46] iterators_1.0.12  htmlwidgets_1.3   crosstalk_1.0.0  
[49] base64enc_0.1-3   labeling_0.3      rmarkdown_1.14   
[52] gtable_0.3.0      codetools_0.2-16  R6_2.4.0         
[55] tfruns_1.4        knitr_1.24        zeallot_0.1.0    
[58] rprojroot_1.3-2   stringi_1.4.3     Rcpp_1.0.2       
[61] tidyselect_0.2.5  xfun_0.8