Last updated: 2019-09-19
Checks: 7 passed, 0 failed
Knit directory: polymeRID/
This reproducible R Markdown analysis was created with workflowr (version 1.4.0.9001). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20190729)
was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rhistory
Ignored: .Rprofile
Ignored: .Rproj.user/
Ignored: analysis/library.bib
Ignored: docs/figure/
Ignored: fun/
Ignored: output/20190810_1538/
Ignored: output/20190810_1546/
Ignored: output/20190810_1609/
Ignored: output/20190813_1044/
Ignored: output/logs/
Ignored: output/natural/
Ignored: output/nnet/
Ignored: output/svm/
Ignored: output/testRunII/
Ignored: output/testRunIII/
Ignored: packrat/lib-R/
Ignored: packrat/lib-ext/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/BH/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/FactoMineR/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/IDPmisc/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/KernSmooth/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/MASS/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/Matrix/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/MatrixModels/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ModelMetrics/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/R6/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/RColorBrewer/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/RCurl/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rcpp/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppArmadillo/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppEigen/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppGSL/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/RcppZiggurat/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rfast/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rgtsvm/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/Rmisc/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/SQUAREM/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/SparseM/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/abind/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/askpass/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/assertthat/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/backports/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/base64enc/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/baseline/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/bit/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/bit64/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/bitops/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/boot/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/brew/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/callr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/car/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/carData/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/caret/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/cellranger/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/class/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/cli/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/clipr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/clisymbols/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/cluster/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/codetools/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/colorspace/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/commonmark/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/config/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/cowplot/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/crayon/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/crosstalk/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/curl/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/data.table/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/dendextend/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/desc/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/devtools/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/digest/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/doParallel/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/dplyr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/e1071/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ellipse/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ellipsis/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/evaluate/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/factoextra/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/fansi/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/flashClust/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/forcats/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/foreach/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/foreign/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/fs/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/generics/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/getPass/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggplot2/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggpubr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggrepel/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggsci/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ggsignif/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/gh/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/git2r/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/glue/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/gower/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/gridExtra/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/gtable/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/haven/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/hexbin/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/highr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/hms/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/htmltools/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/htmlwidgets/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/httpuv/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/httr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ini/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ipred/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/iterators/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/jsonlite/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/keras/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/kerasR/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/knitr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/labeling/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/later/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/lattice/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/lava/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/lazyeval/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/leaps/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/lme4/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/lubridate/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/magrittr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/maptools/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/markdown/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/memoise/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/mgcv/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/mime/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/minqa/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/munsell/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/nlme/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/nloptr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/nnet/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/numDeriv/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/openssl/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/openxlsx/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/packrat/tests/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/pbkrtest/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/pillar/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/pkgbuild/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/pkgconfig/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/pkgload/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/plogr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/plotly/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/plyr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/polynom/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/praise/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/prettyunits/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/processx/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/prodlim/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/progress/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/promises/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/prospectr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/ps/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/purrr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/quantreg/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/randomForest/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rcmdcheck/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/readr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/readxl/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/recipes/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rematch/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/remotes/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/reshape2/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/reticulate/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rio/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rlang/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rmarkdown/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/roxygen2/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rpart/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rprojroot/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rsconnect/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/rstudioapi/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/scales/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/scatterplot3d/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/sessioninfo/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/shiny/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/sourcetools/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/sp/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/stringi/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/stringr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/survival/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/sys/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/tensorflow/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/testthat/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/tfruns/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/tibble/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/tidyr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/tidyselect/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/timeDate/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/tinytex/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/usethis/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/utf8/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/vctrs/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/viridis/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/viridisLite/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/whisker/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/withr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/workflowr/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/xfun/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/xml2/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/xopen/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/xtable/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/yaml/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/zeallot/
Ignored: packrat/lib/x86_64-pc-linux-gnu/3.6.1/zip/
Ignored: packrat/src/
Ignored: polymeRID.Rproj
Ignored: smp/20190812_1723_NNET/files/
Ignored: smp/20190812_1723_NNET/plots/
Ignored: smp/20190812_1729_NNET/files/
Ignored: smp/20190812_1729_NNET/plots/
Ignored: smp/20190812_1731_NNET/files/
Ignored: smp/20190812_1731_NNET/plots/
Ignored: smp/20190812_1733_NNET/files/
Ignored: smp/20190812_1733_NNET/plots/
Ignored: smp/20190815_1847_FUSION/
Ignored: smp/20190905_1602_FUSION/
Ignored: smp/20190905_1618_RFRAW/
Ignored: smp/20190905_1637_CNND2/
Ignored: smp/20190905_1708_FUSION/
Ignored: smp/20190910_1805_FUSION/
Ignored: website/
Untracked files:
Untracked: Rplots.pdf
Untracked: analysis/elsevier-harvard.csl
Unstaged changes:
Modified: analysis/assets/images/seperators.jpg
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view them.
File | Version | Author | Date | Message |
---|---|---|---|---|
html | 75bc270 | goergen95 | 2019-09-05 | Build site. |
Rmd | a848def | goergen95 | 2019-09-05 | changed citation style |
html | c26a428 | goergen95 | 2019-08-22 | Build site. |
Rmd | 7e9eddd | goergen95 | 2019-08-22 | wflow_publish(files = c(“analysis/cnn_crossvalidation.Rmd”, “analysis/cnn_exploration.Rmd”, |
html | f2ee83c | goergen95 | 2019-08-19 | Build site. |
html | d960dc2 | goergen95 | 2019-08-19 | included calibration |
html | b846f0b | goergen95 | 2019-08-19 | Build site. |
Rmd | de84a71 | goergen95 | 2019-08-19 | large update for website |
html | de84a71 | goergen95 | 2019-08-19 | large update for website |
Rmd | c0b21db | goergen95 | 2019-08-15 | proceeded on CNN exploration |
html | c0b21db | goergen95 | 2019-08-15 | proceeded on CNN exploration |
Convolutional Neural Networks (CNN) are mainly used in image processing tasks (Rawat and Wang, 2017). However, they can also be applied to one-dimensional data such as time series or spectral data (Berisha et al., 2019; Ghosh et al., 2019; Ismail Fawaz et al., 2019; Liu et al., 2017). CNNs consist of three main types of layers, which are stacked into a sequential model that learns patterns from the input data in order to model the desired output. The most important is the convolutional layer, which acts as a feature extractor (Rawat and Wang, 2017). It works with a specified number of filters (also called neurons), each mapping a window of the input data; the width of this window is referred to as the kernel size. Convolutional layers not only extract features from the raw input data but also detect features in the output of previous convolutional layers, and each filter output is passed through a non-linear activation function. In addition, pooling layers are commonly placed after some of the convolutional layers. They downsample the feature maps of the previous layers according to a pooling function; nowadays max-pooling is most common, preserving the maximum signal within each pooling window. Finally, most CNNs end with a fully-connected layer, which transforms the last feature map into the output. For regression problems this layer may contain only a single neuron, while for classification problems it contains n neurons, one for each class. The network learns through what is called “backpropagation”: the training data is repeatedly presented to the network, a loss function measures the distance between the predictions and the desired output, and an optimizer updates the weights associated with the filters before a new epoch of presenting the training data to the CNN is started.
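As a toy illustration (not part of the analysis below), the following snippet shows how a single filter with kernel size 3 slides over a short, made-up spectrum with valid padding, and how a ReLU activation is applied to the resulting feature map:
# Toy illustration of a 1D convolution (valid padding) with one filter
spectrum <- c(0.2, 0.5, 0.9, 0.4, 0.1, 0.3)   # made-up intensities
weights  <- c(-1, 2, -1)                       # one filter with kernel size 3
kernel   <- length(weights)
# each output value is the weighted sum over one window of the input
feature_map <- sapply(seq_len(length(spectrum) - kernel + 1), function(i) {
  sum(spectrum[i:(i + kernel - 1)] * weights)
})
# ReLU activation sets negative responses to zero
relu <- pmax(feature_map, 0)
relu  # length = 6 - 3 + 1 = 4
With valid padding the feature map is shorter than its input by kernel size minus one, which is why the layer outputs in the model summary further below shrink with every convolution.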
In contrast to random forest and support vector machines, the effect of noise on the classification outcome was not tested for the CNN, mainly because of limitations in computation time. Computation time was substantially reduced by installing the keras package in GPU mode based on Nvidia's CUDA library. However, CNNs remain computationally intensive, since, depending on the architecture, several thousand weights have to be trained. Here, we developed a simple two-block architecture with four convolutional layers in total, where the number of filters, or feature extractors, doubles with each layer. We chose this architecture with four layers and only a small number of filters because it delivered relatively high accuracies in short computation times. The code below defines a function to set up and compile a CNN for a given kernel size.
# expects that you have installed keras and tensorflow properly
library(keras)

buildCNN <- function(kernel, nVariables, nOutcome){
  model = keras_model_sequential()
  model %>%
    # block 1
    layer_conv_1d(filters = 8,
                  kernel_size = kernel,
                  input_shape = c(nVariables, 1),
                  name = "block1_conv1") %>%
    layer_activation_relu(name = "block1_relu1") %>%
    layer_conv_1d(filters = 16,
                  kernel_size = kernel,
                  name = "block1_conv2") %>%
    layer_activation_relu(name = "block1_relu2") %>%
    layer_max_pooling_1d(strides = 2,
                         pool_size = 5,
                         name = "block1_max_pool1") %>%
    # block 2
    layer_conv_1d(filters = 32,
                  kernel_size = kernel,
                  name = "block2_conv1") %>%
    layer_activation_relu(name = "block2_relu1") %>%
    layer_conv_1d(filters = 64,
                  kernel_size = kernel,
                  name = "block2_conv2") %>%
    layer_activation_relu(name = "block2_relu2") %>%
    layer_max_pooling_1d(strides = 2,
                         pool_size = 5,
                         name = "block2_max_pool1") %>%
    # exit block
    layer_global_max_pooling_1d(name = "exit_max_pool") %>%
    layer_dropout(rate = 0.5) %>%
    layer_dense(units = nOutcome, activation = "softmax")
  # compile for classification with the categorical crossentropy loss function
  # and adam as the optimizer
  compile(model, loss = "categorical_crossentropy", optimizer = "adam", metrics = "accuracy")
}
The function expects three arguments. The first is the kernel size, which specifies the width of the window used to extract features from the input data and from subsequent layer outputs; note that the kernel size is held constant throughout the network. The second argument is an integer giving the number of input variables, which corresponds to the number of wavenumbers in the present case. The third argument is an integer specifying the number of classes in the output. Each convolutional layer is followed by a ReLU activation function. At the end of each block we added a max-pooling layer with strides = 2, which takes the maximum of each pooling window and discards the remaining values, effectively halving the length of the feature map. The exit block consists of a global max-pooling layer followed by a dropout layer, which randomly silences half of the neurons to reduce overfitting. The last layer is a fully-connected layer which maps its input to nOutcome classes via the softmax activation function. The final line of code compiles the model so it is ready for training. We use categorical crossentropy as the loss function because we currently have 14 different classes, which are well suited to one-hot encoding; if the number of classes were very large, for example in speech recognition problems, sparse categorical crossentropy would be the loss function of choice. As the optimizer we chose adam because it adapts the learning rate and decay during training. Finally, we tell the model to report overall accuracy during training. We can now build a first model and take a look at its structure:
model = buildCNN(kernel = 50, nVariables = 1863, nOutcome = 12)
model
Model
Model: "sequential"
___________________________________________________________________________
Layer (type) Output Shape Param #
===========================================================================
block1_conv1 (Conv1D) (None, 1814, 8) 408
___________________________________________________________________________
block1_relu1 (ReLU) (None, 1814, 8) 0
___________________________________________________________________________
block1_conv2 (Conv1D) (None, 1765, 16) 6416
___________________________________________________________________________
block1_relu2 (ReLU) (None, 1765, 16) 0
___________________________________________________________________________
block1_max_pool1 (MaxPooling1D) (None, 881, 16) 0
___________________________________________________________________________
block2_conv1 (Conv1D) (None, 832, 32) 25632
___________________________________________________________________________
block2_relu1 (ReLU) (None, 832, 32) 0
___________________________________________________________________________
block2_conv2 (Conv1D) (None, 783, 64) 102464
___________________________________________________________________________
block2_relu2 (ReLU) (None, 783, 64) 0
___________________________________________________________________________
block2_max_pool1 (MaxPooling1D) (None, 390, 64) 0
___________________________________________________________________________
exit_max_pool (GlobalMaxPooling1 (None, 64) 0
___________________________________________________________________________
dropout (Dropout) (None, 64) 0
___________________________________________________________________________
dense (Dense) (None, 12) 780
===========================================================================
Total params: 135,700
Trainable params: 135,700
Non-trainable params: 0
___________________________________________________________________________
In total, the current network has 135,700 weights to be trained. In the Output Shape column we can follow how the input is transformed: the first convolutional layer turns the 1863-variable spectrum into a 1D feature map of length 1814, the feature maps shrink with every subsequent convolution and pooling step, and the final dense layer outputs a vector of length 12, one value per class.
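These shapes follow directly from the layer parameters: a convolution with valid padding shortens its input by kernel size minus one, and a max-pooling layer with pool size 5 and stride 2 roughly halves it. The parameter counts follow the same logic, e.g. block1_conv1 has 50 × 1 × 8 weights plus 8 biases, i.e. 408 parameters. The short check below is an illustration added here (not part of the original analysis) that reproduces the lengths in the summary for kernel = 50 and 1863 input variables.
conv_len <- function(n, kernel) n - kernel + 1                  # 1D convolution, valid padding
pool_len <- function(n, pool = 5, stride = 2) floor((n - pool) / stride) + 1

n <- 1863
n <- conv_len(n, 50)   # block1_conv1     -> 1814
n <- conv_len(n, 50)   # block1_conv2     -> 1765
n <- pool_len(n)       # block1_max_pool1 ->  881
n <- conv_len(n, 50)   # block2_conv1     ->  832
n <- conv_len(n, 50)   # block2_conv2     ->  783
n <- pool_len(n)       # block2_max_pool1 ->  390
n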
We can use our database to train the CNN defined above. First, the input data needs to be transformed into arrays that the keras::fit() function understands; here we used the keras backend to achieve this. Additionally, every training run needs to know for how many epochs the training data will be presented to the network. We used a fixed value of 300 epochs because beyond that value no substantial gain in accuracy was observed. The training data is presented in batches of 10 observations.
# `ref` holds the path to the reference spectra directory (defined earlier in the project)
data = read.csv(file = paste0(ref, "reference_database.csv"), header = TRUE)
K <- keras::backend()
# drop the class column and add a third dimension so keras gets shape (samples, variables, 1)
x_train = as.matrix(data[, 1:(ncol(data) - 1)])
x = K$expand_dims(x_train, axis = 2L)
x_train = K$eval(x)
# one-hot encode the class labels
y_train = keras::to_categorical(as.numeric(data$class) - 1, length(unique(data$class)))
history = keras::fit(model, x = x_train, y = y_train,
epochs=300,
batch_size = 10)
history
plot(history)
Trained on 147 samples (batch_size=10, epochs=300)
Final epoch (plot to see history):
loss: 0.05043
acc: 0.9796
We achieved an accuracy of 0.98 after 300 epochs (Fig. 1). This single value is hardly an indicator of the generalization potential of the CNN, because we did not use an independent validation dataset to evaluate the performance of the model on unseen data. Before evaluating the generalization performance, we analyze how the CNN reacts to different kernel sizes as well as to specific data transformations, namely normalization, a Savitzky-Golay filter, and the first and second derivatives of the different representations. We then use a loop to calculate all of these combinations. Note that we call set.seed() before splitting the data, so all combinations of kernel size and data transformation train and evaluate on exactly the same split. This would not be valid if the generalization potential of the model were being assessed, but since we are interested in comparing kernel sizes and data transformation techniques, training the different models on the same data is actually beneficial: otherwise it would not be possible to attribute variations in performance either to the kernel size or to a different split into training and validation sets.
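The preprocess() function used in the loop below comes from the project's fun/ scripts and is not shown here. Purely as an illustration, a minimal helper for the normalization, Savitzky-Golay and derivative transformations could be built on prospectr::savitzkyGolay; the window size and polynomial order below are assumed values, not necessarily those used in the project:
# Illustrative sketch only -- the project's own preprocess() may differ.
# Window size (w = 11) and polynomial order (p = 3) are assumed values.
preprocessSketch <- function(spectra, type = c("norm", "sg", "sg.d1", "sg.d2")) {
  type <- match.arg(type)
  x <- as.matrix(spectra)
  out <- switch(type,
    norm  = x / apply(abs(x), 1, max),                           # scale each spectrum by its maximum
    sg    = prospectr::savitzkyGolay(x, m = 0, p = 3, w = 11),   # smoothing only
    sg.d1 = prospectr::savitzkyGolay(x, m = 1, p = 3, w = 11),   # first derivative
    sg.d2 = prospectr::savitzkyGolay(x, m = 2, p = 3, w = 11)    # second derivative
  )
  as.data.frame(out)
}
The actual comparison of kernel sizes and transformations then runs as follows: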
kernels = c(10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 125, 150, 175, 200)
types = c("raw", "norm", "sg", "sg.norm", "raw.d1", "sg.norm.d1", "sg.norm.d2",
          "raw.d1", "raw.d2", "norm.d1", "norm.d2")
results = data.frame(types = rep(0, length(kernels) * length(types)),
                     kernel = rep(0, length(kernels) * length(types)),
                     loss = rep(0, length(kernels) * length(types)),
                     acc = rep(0, length(kernels) * length(types)),
                     val_loss = rep(0, length(kernels) * length(types)),
                     val_acc = rep(0, length(kernels) * length(types)))
variables = ncol(data) - 1
counter = 1
for (type in types){
  if (type == "raw"){
    tmp = data
    variables = ncol(data) - 1
  } else {
    tmp = preprocess(data[, 1:(ncol(data) - 1)], type = type)
    variables = ncol(tmp)
    tmp$class = data$class
  }

  for (kernel in kernels){
    # splitting between training and validation
    set.seed(42)
    index = caret::createDataPartition(y = tmp$class, p = .5)
    training = tmp[index$Resample1, ]
    validation = tmp[-index$Resample1, ]
    # splitting predictors and labels
    x_train = training[, 1:variables]
    y_train = training[, 1 + variables]
    x_test = validation[, 1:variables]
    y_test = validation[, 1 + variables]
    # number of unique target classes
    nOutcome = length(levels(y_train))
    # helper to coerce a data frame into a keras array of shape (samples, variables, 1)
    K <- keras::backend()
    df_to_karray <- function(df){
      d = as.matrix(df)
      d = K$expand_dims(d, axis = 2L)
      d = K$eval(d)
    }
    # coerce data to keras structure
    x_train = df_to_karray(x_train)
    x_test = df_to_karray(x_test)
    y_train = keras::to_categorical(as.numeric(y_train) - 1, nOutcome)
    y_test = keras::to_categorical(as.numeric(y_test) - 1, nOutcome)
    # construction of the neural network defined above
    model = buildCNN(kernel, variables, nOutcome)
    history = keras::fit(model, x = x_train, y = y_train,
                         epochs = 200, validation_data = list(x_test, y_test),
                         #callbacks = callback_tensorboard(paste0(output, "nnet/logs")),
                         batch_size = 10)
    results$types[counter] = type
    results$kernel[counter] = kernel
    results$loss[counter] = history$metrics$loss[100]
    results$acc[counter] = history$metrics$acc[100]
    results$val_loss[counter] = history$metrics$val_loss[100]
    results$val_acc[counter] = history$metrics$val_acc[100]
    # `output` points to the project's output directory (defined elsewhere)
    write.csv(results, file = paste0(output, "nnet/kernels.csv"))
    counter = counter + 1
  }
  print(results)
}
We can now plot the results, with increasing kernel sizes on the x-axis, accuracy on the validation dataset on the y-axis, and one line per data transformation (Fig. 2).
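The plotting code is not shown in the original analysis; a minimal ggplot2 sketch that produces such a figure from the results data frame of the loop above could look like this:
library(ggplot2)

# one line per data transformation, validation accuracy against kernel size
ggplot(results, aes(x = kernel, y = val_acc, colour = types)) +
  geom_line() +
  geom_point() +
  labs(x = "kernel size", y = "validation accuracy", colour = "transformation")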
We observe that the accuracy pattern is highly variable, both with kernel size and between data transformations. For example, the second derivative of the Savitzky-Golay filtered data yields very low accuracies across all kernel sizes. To aid the selection of an appropriate kernel size and data transformation, we calculated some descriptive statistics to find optimal configurations. One indicator is the kernel size that delivers the highest accuracy on average; another is the highest accuracy achieved in absolute terms.
# mean validation accuracy per kernel size, sorted from best to worst
kernelAcc = aggregate(val_acc ~ kernel, results, mean)
kernelAcc = kernelAcc[order(-kernelAcc$val_acc), ]
# transformations at the best kernel size, sorted by validation accuracy
type = results[which(results$kernel == kernelAcc$kernel[1]), ]
type = type[order(-type$val_acc), ]
# all runs sorted by absolute validation accuracy
highest = results[order(-results$val_acc), ]
On average, a kernel size of 50 delivered the highest accuracy of 0.81 (Tab. 1); a kernel size of 90 yielded the second-highest accuracy.
 | kernel | mean val_acc |
---|---|---|
5 | 50 | 0.8071429 |
9 | 90 | 0.7940476 |
3 | 30 | 0.7690476 |
2 | 20 | 0.7654762 |
6 | 60 | 0.7630952 |
7 | 70 | 0.7559524 |
8 | 80 | 0.7511905 |
4 | 40 | 0.7511905 |
10 | 100 | 0.7500000 |
12 | 150 | 0.7416667 |
11 | 125 | 0.7333333 |
13 | 175 | 0.7250000 |
14 | 200 | 0.7214286 |
1 | 10 | 0.6904762 |
When we order the results by absolute accuracy, only four combinations of pre-processing type and kernel size reach a validation accuracy of 0.9 or higher (Tab. 2): the second derivative of the normalized data reached 0.91 at a kernel size of 90, the Savitzky-Golay smoothed data reached 0.90 at a kernel size of 70, the second derivative of the raw data reached 0.90 at a kernel size of 90, and the first derivative of the normalized data reached 0.90 at a kernel size of 150.
 | X | types | kernel | loss | acc | val_loss | val_acc |
---|---|---|---|---|---|---|---|
163 | 163 | norm.d2 | 90 | 0.2898488 | 0.8961039 | 0.8323456 | 0.9142857 |
35 | 35 | sg | 70 | 0.2537513 | 0.9350649 | 1.3861195 | 0.9000000 |
135 | 135 | raw.d2 | 90 | 0.1529336 | 0.9350649 | 1.5874773 | 0.9000000 |
152 | 152 | norm.d1 | 150 | 0.1060385 | 0.9740260 | 0.8856056 | 0.9000000 |
65 | 65 | raw.d1 | 90 | 0.1509757 | 0.9350649 | 1.0225650 | 0.8857143 |
101 | 101 | sg.norm.d1 | 30 | 0.1817555 | 0.9480519 | 0.4789599 | 0.8857143 |
104 | 104 | sg.norm.d1 | 60 | 0.2074841 | 0.9350649 | 0.6712436 | 0.8857143 |
162 | 162 | norm.d2 | 80 | 0.0823583 | 0.9610389 | 0.5753851 | 0.8857143 |
166 | 166 | norm.d2 | 150 | 0.0143513 | 1.0000000 | 1.1288143 | 0.8857143 |
27 | 27 | norm | 175 | 0.0565004 | 0.9870130 | 1.4892539 | 0.8714285 |
After finding the optimal kernel sizes for the different pre-processing techniques, a cross-validation approach was used to find the configuration with the best generalization potential. The documentation of these results can be found here.
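The cross-validation itself is documented on its own page; purely to illustrate the general idea, folds could be created with caret::createFolds and each fold held out once, assuming the full-dataset arrays x_train and y_train built for the first training run above (before the kernel loop) are still available:
# Sketch of a k-fold cross-validation loop (illustration only; the actual
# procedure and settings are documented on the cross-validation page)
set.seed(42)
folds <- caret::createFolds(data$class, k = 5)
cv_acc <- sapply(folds, function(holdout) {
  model <- buildCNN(kernel = 50, nVariables = ncol(data) - 1,
                    nOutcome = length(unique(data$class)))
  history <- keras::fit(model,
                        x = x_train[-holdout, , , drop = FALSE],
                        y = y_train[-holdout, ],
                        validation_data = list(x_train[holdout, , , drop = FALSE],
                                               y_train[holdout, ]),
                        epochs = 200, batch_size = 10)
  tail(history$metrics$val_acc, 1)   # validation accuracy of the final epoch
})
mean(cv_acc)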
Berisha, S., Lotfollahi, M., Jahanipour, J., Gurcan, I., Walsh, M., Bhargava, R., Van Nguyen, H., Mayerich, D., 2019. Deep learning for FTIR histology: leveraging spatial and spectral features with convolutional neural networks. Analyst 144, 1642–1653. https://doi.org/10.1039/c8an01495g
Ghosh, K., Stuke, A., Todorovic, M., Jørgensen, P.B., Schmidt, M.N., Vehtari, A., Rinke, P., 2019. Deep Learning Spectroscopy: Neural Networks for Molecular Excitation Spectra. Advanced Science 6, 1801367. https://doi.org/10.1002/advs.201801367
Ismail Fawaz, H., Forestier, G., Weber, J., Idoumghar, L., Muller, P.A., 2019. Deep learning for time series classification: a review. Data Mining and Knowledge Discovery 33, 917–963. https://doi.org/10.1007/s10618-019-00619-1
Liu, J., Osadchy, M., Ashton, L., Foster, M., Solomon, C.J., Gibson, S.J., 2017. Deep convolutional neural networks for Raman spectrum recognition: A unified solution. Analyst 142, 4067–4074. https://doi.org/10.1039/c7an01371j
Rawat, W., Wang, Z., 2017. Deep convolutional neural networks for image classification: A comprehensive review. Neural Computation 29, 2352–2449. https://doi.org/10.1162/NECO_a_00990
sessionInfo()
R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Linux Mint 19.1
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=de_DE.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=de_DE.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] plotly_4.9.0 tensorflow_1.14.0
[3] abind_1.4-5 e1071_1.7-2
[5] keras_2.2.4.1 workflowr_1.4.0.9001
[7] baseline_1.2-1 gridExtra_2.3
[9] stringr_1.4.0 prospectr_0.1.3
[11] RcppArmadillo_0.9.600.4.0 openxlsx_4.1.0.1
[13] magrittr_1.5 ggplot2_3.2.0
[15] reshape2_1.4.3 dplyr_0.8.3
loaded via a namespace (and not attached):
[1] httr_1.4.1 tidyr_0.8.3 jsonlite_1.6
[4] viridisLite_0.3.0 foreach_1.4.7 shiny_1.3.2
[7] assertthat_0.2.1 highr_0.8 yaml_2.2.0
[10] pillar_1.4.2 backports_1.1.4 lattice_0.20-38
[13] glue_1.3.1 reticulate_1.13 digest_0.6.20
[16] promises_1.0.1 colorspace_1.4-1 htmltools_0.3.6
[19] httpuv_1.5.1 Matrix_1.2-17 plyr_1.8.4
[22] pkgconfig_2.0.2 SparseM_1.77 purrr_0.3.2
[25] xtable_1.8-4 scales_1.0.0 whisker_0.3-2
[28] later_0.8.0 git2r_0.26.1 tibble_2.1.3
[31] generics_0.0.2 withr_2.1.2 lazyeval_0.2.2
[34] crayon_1.3.4 mime_0.7 evaluate_0.14
[37] fs_1.3.1 class_7.3-15 tools_3.6.1
[40] data.table_1.12.2 munsell_0.5.0 zip_2.0.3
[43] compiler_3.6.1 rlang_0.4.0 grid_3.6.1
[46] iterators_1.0.12 htmlwidgets_1.3 crosstalk_1.0.0
[49] base64enc_0.1-3 labeling_0.3 rmarkdown_1.14
[52] gtable_0.3.0 codetools_0.2-16 R6_2.4.0
[55] tfruns_1.4 knitr_1.24 zeallot_0.1.0
[58] rprojroot_1.3-2 stringi_1.4.3 Rcpp_1.0.2
[61] tidyselect_0.2.5 xfun_0.8