Last updated: 2021-05-12
Checks: 7 passed, 0 failed
Knit directory: thesis/analysis/
This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20210321) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 7fd4ff2. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rproj.user/
Ignored: data/DB/
Ignored: data/raster/
Ignored: data/raw/
Ignored: data/vector/
Ignored: docker_command.txt
Ignored: output/acc/
Ignored: output/bayes/
Ignored: output/ffs/
Ignored: output/models/
Ignored: output/plots/
Ignored: output/test-results/
Ignored: renv/library/
Ignored: renv/staging/
Ignored: report/presentation/
Untracked files:
Untracked: analysis/assets/
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/thesis-discussion.Rmd) and HTML (docs/thesis-discussion.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | de8ce5a | Darius Görgen | 2021-04-05 | add content |
| html | de8ce5a | Darius Görgen | 2021-04-05 | add content |
This thesis’s results reveal that there are several trade-offs to consider in predicting violent conflict. The most obvious trade-off is balancing a model’s precision with its sensitivity. For a given model, an increase in one metric will lead to a decrease in the other. Thus, the context in which the model is to be applied governs the decision for either of these metrics. In the present thesis, the decision was made to give more weight to sensitivity than to precision, realized by optimizing towards the \(F_2\)-score. The argument for this decision is that missing actual occurrences of conflict is more critical than falsely flagging peaceful district-months as conflict. This comes at the cost that the models tend to predict the occurrence of conflict more frequently. Once a district has crossed a certain threshold, the models predict conflict for almost the entire prediction horizon, resulting in a very low precision in the temporal distribution of conflicts. However, based on the performance of existing early-warning tools, it is evident that quantitative models should not be relied on as the sole instrument in practical conflict prevention efforts (Cederman and Weidmann, 2017). Instead, predicting the risk of conflict occurrence should be considered one link in a chain of tools for conflict prevention. Predictive models can serve the purpose of delivering information on focus areas where additional quantitative and qualitative analysis would prove most valuable, thereby helping to use limited resources more efficiently.
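The weighting of sensitivity over precision can be made concrete with the generic \(F_\beta\)-score, of which the \(F_2\)-score is the special case \(\beta = 2\). The following sketch (in Python for illustration; the thesis analysis itself is implemented in R) shows how \(\beta > 1\) favors a recall-oriented model over a precision-oriented one with mirrored metrics; the numbers are hypothetical.

```python
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """Generic F-beta score: beta > 1 weights recall (sensitivity) higher."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Two hypothetical models with mirrored precision/recall profiles:
sensitive = f_beta(precision=0.40, recall=0.80)  # recall-oriented -> 0.667
precise = f_beta(precision=0.80, recall=0.40)    # precision-oriented -> 0.444
```

Under \(F_1\) both models would score identically (0.533); under \(F_2\) the recall-oriented model is clearly preferred, which matches the optimization target chosen here.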
In this context, another trade-off becomes evident when comparing the DL models’ performance with related studies. Compared to studies focusing on country-month data sets, the performance of the proposed method is substantially reduced. In relation to Kuzma et al. (2020), who used similar aggregation units on the sub-national scale, the performance is comparable. To confirm this finding beyond doubt, a thorough investigation of the effect of scale is needed. However, there is some indication that a model’s performance decreases as spatial detail increases. Scientists, as well as policymakers, require highly detailed information on future conflicts in the spatial and temporal domain (Chadefaux, 2017). The proposed method of only using data sets that are available in a gridded format allows for almost arbitrary spatial aggregation. It also reduces the complexity of data preparation, because data sets with differing spatiotemporal dimensions can easily be harmonized with free and open-source tools provided by the spatial research community (Brovelli et al., 2017). Additionally, for research focusing on the interaction between environmental change and human societies, remote sensing provides spatially and temporally comprehensive data sets that are currently used to derive a vast number of different environmental variables (Kwok, 2018). The proposed method thus seems beneficial for tailoring prediction models to the specific spatiotemporal demands of real-world applications.
However, implementing DL models leads to reduced interpretability of a model’s predictions. While it is relatively easy to demonstrate why and how a linear model predicts a particular outcome, DL models are sometimes referred to as black-box models (Gilpin et al., 2019). This metaphor indicates that, due to the complex internal structure of DL networks, it is not always explicable how a network arrives at a specific prediction. This seriously limits the effective use of DL in conflict prevention efforts, because political decision-makers require recommendations on how to lower the conflict risk at a particular location. The research community has not yet fully agreed on a concise definition of interpretability; it often depends strongly on the research domain (Molnar et al., 2020). In conflict research, relatively few studies apply machine learning techniques, so robust standards of interpretability have yet to be defined.
Another trade-off is found in the comparison between adm and bas districts. Given the presented problem formulation and predictor variables, evidence has been provided that bas districts perform better on the conflict prediction task. However, most people are more familiar with administrative boundaries. Familiarity is an essential factor that helps people process visual information more quickly (Manahova et al., 2019). Changing the representation of data to something people do not expect, or are less familiar with, makes the data harder to interpret. In this sense, the trade-off consists of achieving higher performance versus making data interpretation more challenging for the audience. Again, this decision needs to be based on the application context of a model. The presented results show that the difference in performance between adm and bas districts can be quite substantial, indicating that changing to a less familiar representation of the Earth’s surface could prove beneficial for the conflict prediction task.
Within the literature on conflict prediction, various definitions of events of interest exist. Some have focused on violent conflict (Rost et al., 2009), terrorism (Uddin et al., 2020), or rebellions and insurgencies (Collier and Hoeffler, 2004), others on international wars (Beck et al., 2000) and irregular leadership changes (Ward and Beger, 2017). All of these applications require a careful semantic differentiation between different types of events. Besides focusing on a specific type of violence, various casualty thresholds have been applied to determine whether an event is included in a study. In this thesis, a district-month was considered as belonging to the conflict class if at least one event was found in the UCDP database; UCDP only includes events with at least 25 casualties (Pettersson and Öberg, 2020). The distinction between three different conflict classes found in the database allows for some interpretation; however, a more profound semantic differentiation, e.g. in terms of involved actors, political goals, etc., was considered out of scope for the present analysis. Yet, backed by the finding that for the cb and os conflict classes the EV predictor set has a higher impact on increasing the prediction performance, some types of violence seem to be better predictable from environmental variables than others. Concentrating future investigations on these types of conflict seems promising for increasing predictive performance. Additionally, the presented results do not distinguish between different modes of conflict, i.e. between a newly occurring conflict in a given district and the continuation of an ongoing conflict history. Analyzing these conflict modes distinctly can generate insights into a model’s ability to differentiate between emerging and ongoing conflicts, as shown by Kuzma et al. (2020), thereby increasing the confidence one can have in a model’s prediction.
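The labeling rule described above can be sketched compactly. This is an illustrative Python sketch, not the thesis code (which is implemented in R); the function name and event field names are hypothetical.

```python
def label_conflict(events, districts, months):
    """Binary response variable: a district-month belongs to the conflict
    class if at least one UCDP event falls into it, regardless of the
    event's conflict class (sb, cb, os)."""
    hits = {(e["district"], e["month"]) for e in events}
    return {(d, m): int((d, m) in hits) for d in districts for m in months}

# Hypothetical example: one state-based event in district "A".
events = [{"district": "A", "month": "2015-01", "class": "sb"}]
labels = label_conflict(events, districts=["A", "B"], months=["2015-01"])
# labels[("A", "2015-01")] == 1, labels[("B", "2015-01")] == 0
```

A semantically richer response (e.g. per conflict class, or emerging vs. ongoing) would refine this mapping rather than the binary membership test used here.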
Such an analysis requires an additional definition of emerging and ongoing conflicts to be applied to the response variable, increasing the complexity of the modeling approach, which is why it was not conducted in this thesis. Future improvements of the current approach should consider this distinction, since the CH theme already proves, to some extent, capable of predicting conflict. Thus, a model’s ability to capture emerging conflicts is of high relevance when comparing the performance of different model configurations.
As stated above, most of the previously cited studies rely on data sets that are collected per year on a national scale and are comprehensively provided by institutions such as the World Bank. One reason to refrain from sub-national analysis might be that considering sub-national units complicates the collection of predictor variables. Spatially disaggregated variables are hard to collect on a large scale, and while disaggregating national statistics to a smaller scale is possible, it adds complexity to data preparation and is associated with additional assumptions not necessarily matching real-world processes (Verstraete, 2017). Disaggregating these administrative-bound variables to sub-basin watersheds is even more challenging because these units tend to cross administrative boundaries. The presented approach to variable selection was thus restricted to gridded data formats, at the cost of deliberately excluding several variables that have been found to be valuable predictors of violent conflict. Among these are variables associated with a population’s health and education status, such as infant mortality or the rate of secondary education; the economic structure on the country level, such as the rate of primary commodity exports in terms of GDP; as well as information on the political system and ethnolinguistic composition of a society, represented by indicators such as the democracy index, the level of repression, or the exclusion of certain groups from power. On the one hand, evidence has been presented that despite these simplifications, notable performances in conflict prediction can be achieved. On the other hand, ignoring these indicators might have reduced the overall potential of the DL models to predict conflicts more accurately.
While most of the indicators mentioned above could easily have been collected for the adm representation of the data, this would have hindered the direct comparison of the performance to the bas representation. However, there are variables originating from research on integrated water resource management that could be collected exclusively for bas districts, such as indicators on the (non-)consumptive use of water, water quality, and governance, as well as additional hydrological indicators characterizing water availability (Pires et al., 2017). In the future, the current approach could be augmented by including adm- and bas-specific indicators to compare the resulting predictive performance of these approaches.
Most of the environmental predictors were derived from the MODIS twin satellites. These were chosen because their mission started in 2001 and is still ongoing, thus covering an extensive time window with only two instruments. Mixing measurements of the same variable from multiple instruments with differing spatiotemporal extents would have introduced additional complexity during data preparation (Pasetto et al., 2018). This underlines the importance that value-added remote sensing products play in research questions such as conflict prediction. Different research questions can be investigated much more quickly and rigorously when institutions deliver standardized products ready for analysis. Currently, such efforts are observed moving towards digital twins of processes on the Earth’s surface and in the atmosphere (Bauer et al., 2021). The importance of analysis-ready data sets also holds for the spatial mapping of socio-economic variables such as population counts and GDP, for which continuance and improvement over the next decades will play a decisive role in enabling innovative spatiotemporal analysis in many research fields (Head et al., 2017).
The basic CNN-LSTM architecture as presented in this thesis proved capable of learning the prediction task. The task was formulated as a time series problem with an increasing sequence length starting from 48 months. Other possibilities to frame the problem exist; for example, evaluating the predictive model with a fixed-size time window could be one option. The window size would need optimization, but the results could inform conflict theory on how much knowledge of the past is needed to make accurate conflict predictions for the future. The training strategy was based on batch gradient descent, meaning that all districts are presented to the network before the weights are updated. Optimizing for different batch sizes was deemed unfeasible for this thesis because it would considerably increase training time. Additionally, because the model is not expected to generalize beyond its current spatial extent, spatial cross-validation was not considered necessary. However, regionalized models, as shown by Kuzma et al. (2020), could improve performance because the conflict pathways cannot be expected to be the same across the entire study domain.
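The difference between the expanding-window framing used here and the fixed-window alternative can be sketched as follows. This is an illustrative Python/NumPy sketch (the thesis pipeline itself is implemented in R); the function name and array shapes are hypothetical.

```python
import numpy as np

def make_windows(series, min_len=48, fixed=None):
    """Frame a (T, n_features) series for a sequence model.
    fixed=None -> expanding windows starting at min_len (the thesis setup);
    fixed=k    -> sliding windows of constant length k (the alternative)."""
    T = series.shape[0]
    if fixed is None:
        return [series[:t] for t in range(min_len, T + 1)]
    return [series[t - fixed:t] for t in range(fixed, T + 1)]

x = np.random.rand(60, 5)            # 60 months, 5 hypothetical predictors
expanding = make_windows(x)          # window lengths 48, 49, ..., 60
sliding = make_windows(x, fixed=12)  # 49 windows, each 12 months long
```

In the fixed-window variant, the window length becomes a tunable hyperparameter whose optimum would indicate how much history the model actually needs.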
There is potential for improvement in the network design choices, especially considering the latest advances in DL. Shih et al. (2019) proposed a mechanism of temporal pattern attention for multivariate time series forecasting to overcome the shortcoming of recurrent networks in memorizing long-term dependencies in the data. Their attention mechanism applies convolutional filters to the hidden state of a recurrent layer at each time step so that the network can learn which variables to pay attention to and which to ignore. They achieve promising results for several multivariate time series problems with this approach. Since the conflict prediction task’s data structure is very similar, temporal pattern attention lends itself to future investigations to improve performance. Another recurrent network architecture worth investigating for the conflict prediction task is the Echo State Network (ESN). These networks consist of a large number of randomly initialized recurrent cells with a trainable output layer. They are more lightweight during training than traditional LSTMs and can model chaotic time-dependent systems (Jaeger, 2001). ESNs have been successfully applied to highly complex time series prediction problems such as wind power forecasting (López et al., 2018), rainfall estimation (Yen et al., 2019), and spatiotemporal modeling of sea surface temperature (McDermott and Wikle, 2017). Because of the high complexity associated with the occurrence of violent conflicts, ESNs could be tested as a viable alternative model architecture.
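To make the ESN idea concrete, a minimal sketch follows: the recurrent reservoir is fixed and random, and only a linear readout is trained, which is what makes training lightweight. This is an illustrative Python/NumPy sketch; the reservoir size, spectral radius, and ridge readout are generic textbook choices, not parameters from any cited study.

```python
import numpy as np

class EchoState:
    """Minimal echo state network: fixed random reservoir, trained
    linear readout (ridge regression over collected reservoir states)."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale recurrent weights so the largest eigenvalue magnitude
        # equals the desired spectral radius (echo state property).
        w *= spectral_radius / max(abs(np.linalg.eigvals(w)))
        self.w = w
        self.w_out = None

    def _states(self, u):
        x = np.zeros(self.w.shape[0])
        out = []
        for u_t in u:  # drive the reservoir with the input sequence
            x = np.tanh(self.w_in @ u_t + self.w @ x)
            out.append(x.copy())
        return np.array(out)

    def fit(self, u, y, ridge=1e-6):
        s = self._states(u)
        # Ridge-regularized least squares for the readout weights only.
        self.w_out = np.linalg.solve(
            s.T @ s + ridge * np.eye(s.shape[1]), s.T @ y
        )

    def predict(self, u):
        return self._states(u) @ self.w_out

# Toy usage: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 1000)
u, y = np.sin(t)[:-1, None], np.sin(t)[1:]
esn = EchoState(n_in=1)
esn.fit(u, y)
pred = esn.predict(u)
```

For the conflict task, the readout would be a classifier over district-month states rather than this toy regression, but the division of labor (untrained reservoir, cheap trained readout) stays the same.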
The influence of different model architectures on the observed differences in mean performance, measured by the \(F_2\)-score, cannot be completely ruled out with the current study setup. To account for variance due to model architecture, training all models on exactly the same architecture would be beneficial. However, the question would arise whether there exists a model architecture that performs equally well for all model configurations, and how to find it. DL, like many other research activities, is a process constrained by available computational power and time. In the setup of this study, hyperparameter optimization was based on the most complex predictor sets. The rationale for this decision was that a model architecture capable of learning in a complex setting should also be capable of learning in simpler contexts, but not necessarily vice versa. Additionally, hyperparameter optimization was applied to adm and bas districts simultaneously under the same computational constraints. With fewer constraints on computational resources, a more elaborate investigation of the influence of model architecture could yield interesting results. However, in the context of this thesis, the presented evidence should not be considered a closing answer to the question of the importance of environmental variables in conflict prediction, but rather the optimized outcome of a research process subject to computational and methodological constraints.
Bauer, P., Dueben, P.D., Hoefler, T., Quintino, T., Schulthess, T.C., Wedi, N.P., 2021. The digital revolution of Earth-system science. Nature Computational Science 1, 104–113. https://doi.org/10.1038/s43588-021-00023-0
Beck, N., King, G., Zeng, L., 2000. Improving Quantitative Studies of International Conflict: A Conjecture. American Political Science Review 94, 21–35. https://doi.org/10.1017/S0003055400220078
Brovelli, M.A., Minghini, M., Moreno-Sanchez, R., Oliveira, R., 2017. Free and open source software for geospatial applications (FOSS4G) to support Future Earth. International Journal of Digital Earth 10, 386–404. https://doi.org/10.1080/17538947.2016.1196505
Cederman, L.-E., Weidmann, N.B., 2017. Predicting armed conflict: Time to adjust our expectations? Science 355, 474–476. https://doi.org/10.1126/science.aal4483
Chadefaux, T., 2017. Conflict forecasting and its limits. Data Science 1, 7–17. https://doi.org/10.3233/DS-170002
Collier, P., Hoeffler, A., 2004. Greed and grievance in civil war. Oxford Economic Papers 56, 563–595. https://doi.org/10.1093/oep/gpf064
Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L., 2019. Explaining Explanations: An Overview of Interpretability of Machine Learning. arXiv:1806.00069 [cs, stat].
Head, A., Manguin, M., Tran, N., Blumenstock, J.E., 2017. Can Human Development be Measured with Satellite Imagery?, in: Proceedings of the Ninth International Conference on Information and Communication Technologies and Development, ICTD ’17. Association for Computing Machinery, New York, NY, USA, pp. 1–11. https://doi.org/10.1145/3136560.3136576
Jaeger, H., 2001. The "echo state" approach to analysing and training recurrent neural networks – with an erratum note. GMD Technical Report 148. German National Research Center for Information Technology, Bonn, Germany.
Kuzma, S., Kerins, P., Saccoccia, E., Whiteside, C., Roos, H., Iceland, C., 2020. Leveraging Water Data in a Machine Learning-Based Model for Forecasting Violent Conflict. Technical note. [WWW Document]. URL https://www.wri.org/publication/leveraging-water-data
Kwok, R., 2018. Ecology’s remote-sensing revolution. Nature 556, 137–138. https://doi.org/10.1038/d41586-018-03924-9
López, E., Valle, C., Allende, H., Gil, E., Madsen, H., 2018. Wind Power Forecasting Based on Echo State Networks and Long Short-Term Memory. Energies 11, 526. https://doi.org/10.3390/en11030526
Manahova, M.E., Spaak, E., de Lange, F.P., 2019. Familiarity Increases Processing Speed in the Visual System. Journal of Cognitive Neuroscience 32, 722–733. https://doi.org/10.1162/jocn_a_01507
McDermott, P.L., Wikle, C.K., 2017. An Ensemble Quadratic Echo State Network for Nonlinear Spatio-Temporal Forecasting. arXiv:1708.05094 [stat].
Molnar, C., Casalicchio, G., Bischl, B., 2020. Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges. arXiv:2010.09337 [cs, stat].
Pasetto, D., Arenas-Castro, S., Bustamante, J., Casagrandi, R., Chrysoulakis, N., Cord, A.F., Dittrich, A., Domingo-Marimon, C., Serafy, G.E., Karnieli, A., Kordelas, G.A., Manakos, I., Mari, L., Monteiro, A., Palazzi, E., Poursanidis, D., Rinaldo, A., Terzago, S., Ziemba, A., Ziv, G., 2018. Integration of satellite remote sensing data in ecosystem modelling at local scales: Practices and trends. Methods in Ecology and Evolution 9, 1810–1821. https://doi.org/10.1111/2041-210X.13018
Pettersson, T., Öberg, M., 2020. Organized violence, 1989–2019. Journal of Peace Research 57, 597–613. https://doi.org/10.1177/0022343320934986
Pires, A., Morato, J., Peixoto, H., Botero, V., Zuluaga, L., Figueroa, A., 2017. Sustainability Assessment of indicators for integrated water resources management. Science of The Total Environment 578, 139–147. https://doi.org/10.1016/j.scitotenv.2016.10.217
Rost, N., Schneider, G., Kleibl, J., 2009. A global risk assessment model for civil wars. Konstanzer Online-Publikations-System (KOPS). University of Konstanz, Germany. 13.
Shih, S.-Y., Sun, F.-K., Lee, H.-y., 2019. Temporal Pattern Attention for Multivariate Time Series Forecasting. arXiv:1809.04206 [cs, stat].
Uddin, M.I., Zada, N., Aziz, F., Saeed, Y., Zeb, A., Ali Shah, S.A., Al-Khasawneh, M.A., Mahmoud, M., 2020. Prediction of Future Terrorist Activities Using Deep Neural Networks. Complexity. https://doi.org/10.1155/2020/1373087
Verstraete, J., 2017. The Spatial Disaggregation Problem: Simulating Reasoning Using a Fuzzy Inference System. IEEE Transactions on Fuzzy Systems 25, 627–641. https://doi.org/10.1109/TFUZZ.2016.2567452
Ward, M.D., Beger, A., 2017. Lessons from near real-time forecasting of irregular leadership changes. Journal of Peace Research 54, 141–156. https://doi.org/10.1177/0022343316680858
Yen, M.-H., Liu, D.-W., Hsin, Y.-C., Lin, C.-E., Chen, C.-C., 2019. Application of the deep learning for the prediction of rainfall in Southern Taiwan. Scientific Reports 9. https://doi.org/10.1038/s41598-019-49242-6
sessionInfo()
R version 3.6.3 (2020-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Debian GNU/Linux 10 (buster)
Matrix products: default
BLAS/LAPACK: /usr/lib/x86_64-linux-gnu/libopenblasp-r0.3.5.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=C
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] lubridate_1.7.9.2 rgdal_1.5-18 countrycode_1.2.0 welchADF_0.3.2
[5] rstatix_0.6.0 ggpubr_0.4.0 scales_1.1.1 RColorBrewer_1.1-2
[9] latex2exp_0.4.0 cubelyr_1.0.0 gridExtra_2.3 ggtext_0.1.1
[13] magrittr_2.0.1 tmap_3.2 sf_0.9-7 raster_3.4-5
[17] sp_1.4-4 forcats_0.5.0 stringr_1.4.0 purrr_0.3.4
[21] readr_1.4.0 tidyr_1.1.2 tibble_3.0.6 tidyverse_1.3.0
[25] huwiwidown_0.0.1 kableExtra_1.3.1 knitr_1.31 rmarkdown_2.7.3
[29] bookdown_0.21 ggplot2_3.3.3 dplyr_1.0.2 devtools_2.3.2
[33] usethis_2.0.0
loaded via a namespace (and not attached):
[1] readxl_1.3.1 backports_1.2.0 workflowr_1.6.2
[4] lwgeom_0.2-5 splines_3.6.3 crosstalk_1.1.0.1
[7] leaflet_2.0.3 digest_0.6.27 htmltools_0.5.1.1
[10] memoise_1.1.0 openxlsx_4.2.3 remotes_2.2.0
[13] modelr_0.1.8 prettyunits_1.1.1 colorspace_2.0-0
[16] rvest_0.3.6 haven_2.3.1 xfun_0.21
[19] leafem_0.1.3 callr_3.5.1 crayon_1.4.0
[22] jsonlite_1.7.2 lme4_1.1-26 glue_1.4.2
[25] stars_0.4-3 gtable_0.3.0 webshot_0.5.2
[28] car_3.0-10 pkgbuild_1.2.0 abind_1.4-5
[31] DBI_1.1.0 Rcpp_1.0.5 viridisLite_0.3.0
[34] gridtext_0.1.4 units_0.6-7 foreign_0.8-71
[37] htmlwidgets_1.5.3 httr_1.4.2 ellipsis_0.3.1
[40] pkgconfig_2.0.3 XML_3.99-0.3 dbplyr_2.0.0
[43] tidyselect_1.1.0 rlang_0.4.10 later_1.1.0.1
[46] tmaptools_3.1 munsell_0.5.0 cellranger_1.1.0
[49] tools_3.6.3 cli_2.3.0 generics_0.1.0
[52] broom_0.7.2 evaluate_0.14 yaml_2.2.1
[55] processx_3.4.5 leafsync_0.1.0 fs_1.5.0
[58] zip_2.1.1 nlme_3.1-150 whisker_0.4
[61] xml2_1.3.2 compiler_3.6.3 rstudioapi_0.13
[64] curl_4.3 png_0.1-7 e1071_1.7-4
[67] testthat_3.0.1 ggsignif_0.6.0 reprex_0.3.0
[70] statmod_1.4.35 stringi_1.5.3 ps_1.5.0
[73] desc_1.2.0 lattice_0.20-41 Matrix_1.2-18
[76] nloptr_1.2.2.2 classInt_0.4-3 vctrs_0.3.6
[79] pillar_1.4.7 lifecycle_0.2.0 data.table_1.13.2
[82] httpuv_1.5.5 R6_2.5.0 promises_1.1.1
[85] KernSmooth_2.23-18 rio_0.5.16 sessioninfo_1.1.1
[88] codetools_0.2-16 dichromat_2.0-0 boot_1.3-25
[91] MASS_7.3-53 assertthat_0.2.1 pkgload_1.1.0
[94] rprojroot_2.0.2 withr_2.4.1 parallel_3.6.3
[97] hms_1.0.0 grid_3.6.3 minqa_1.2.4
[100] class_7.3-17 carData_3.0-4 git2r_0.27.1
[103] base64enc_0.1-3