3 Materials and methods

3.1 Software

3.1.1 Pipeline management

All steps of the process are managed using the R package targets,13 from data extraction to the final report. An example of a pipeline visualization created with targets is shown in Fig. 3.1. This package keeps a record of the random seeds (allowing reproducibility), detects changes in any part of the code (or its dependencies) and re-runs only the branches that need to be updated, and offers several other features to keep the workflow reproducible while avoiding unnecessary repetition.
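
As an illustration only, a minimal `_targets.R` sketch is shown below; the target names and the helper functions `read_record()` and `summarize_records()` are hypothetical and do not correspond to the real pipeline of this thesis.

```r
# _targets.R -- minimal sketch (hypothetical target and function names)
library(targets)

tar_option_set(packages = c("tibble"))  # packages attached for every target

list(
  # 'Stem': a single step, here the list of record files to process
  tar_target(record_files,
             list.files("data", pattern = "\\.hea$", full.names = TRUE)),
  # 'Pattern': dynamic branching creates one branch per record file
  tar_target(raw_records, read_record(record_files),
             pattern = map(record_files)),
  # Another stem that aggregates all branches for the final report
  tar_target(summary_table, summarize_records(raw_records))
)
```

Running `targets::tar_make()` executes only the outdated branches, and `targets::tar_visnetwork()` draws a dependency graph similar to Fig. 3.1.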


FIGURE 3.1: Example of a pipeline visualization using targets. From left to right we see ‘Stems’ (steps that do not create branches), ‘Patterns’ (which contain two or more branches), and the flow of the information. The green color means that the step is up to date with the current code and dependencies.

3.1.2 Reports management

The report is available on the main webpage,14 allowing inspection of previous versions managed by the R package workflowr.15 This package complements the targets package by taking care of the versioning of every report. It works like a logbook that keeps track of every important milestone of the project, while summarizing the computational environment where it was run. Fig. 3.2 shows only a fraction of the generated website, where we can see that this version passed the required checks (system is up-to-date, no caches, session information was recorded, and others) and a table of previous versions.
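
As a brief, hedged example of the typical workflowr cycle (the file name and commit message are illustrative only):

```r
# Minimal workflowr cycle (illustrative file name and message)
library(workflowr)

wflow_status()                            # check which analyses are out of date
wflow_build("analysis/index.Rmd")         # rebuild locally, recording session info
wflow_publish("analysis/index.Rmd",
              message = "Update report")  # commit code and results as a new version
```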


FIGURE 3.2: Fraction of the website generated by workflowr. On top we see that this version passed all checks, and in the middle we see a table referring to the previous versions of the report.

3.1.3 Modeling and parameter tuning

A well-known package for data science in R is caret (short for Classification And REgression Training).16 Nevertheless, the author of caret recognizes several limitations of his (great) package and is now in charge of the development of the tidymodels17 collection. There are certainly other available frameworks and opinions.18 Notwithstanding, this project follows the tidymodels road, for three significant reasons: 1) it is constantly improving and constantly being re-checked for bugs, with large community contribution; 2) it allows plugging in a custom modeling algorithm, which in this case will be the one needed for developing this work; 3) caret is no longer in active development.

3.1.4 Continuous integration

The project pipeline has been set up on GitHub, Inc.,19 leveraging GitHub Actions20 for the continuous integration lifecycle. The repository is available at,19 and the resulting report is available at.14 The roadmap and task status of this thesis are also publicly available on ZenHub.21

3.2 Developed software

3.2.1 Matrix Profile

The Matrix Profile (MP)22 is a state-of-the-art23,24 time series analysis technique that, once computed, allows us to derive frameworks for all sorts of tasks, such as motif discovery, anomaly detection, regime change detection, and others.22

Before the MP, time series analysis relied on what is called the distance matrix (DM), a matrix that stores all the distances between the subsequences of two time series (or of a time series with itself, in the case of a self-join). This was very computationally expensive, and several methods of pruning and dimensionality reduction were researched.25

For brevity, let’s just understand that the MP and the companion Profile Index (PI) are two vectors that hold, respectively, one floating-point value and one integer value for each point of the original time series: (1) the similarity distance between that point in time (let’s call these points “indexes”) and its first nearest neighbor (1-NN), and (2) the index where this 1-NN is located. The original paper has more detailed information.22 The MP is computed using a rolling window, but instead of building the whole DM, only the minimum values and the indexes of these minima are stored (in the MP and PI, respectively). We can get an idea of the relationship between both in Fig. 3.3.
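
To make the definition concrete, the following brute-force base R sketch computes the MP and PI of a single series (self-join) using z-normalized Euclidean distance. It is only an illustration of the relationship between MP, PI, and the 1-NN; it is not the fast algorithm used by the tsmp/matrixprofiler packages, and it assumes non-constant subsequences.

```r
# Naive self-join Matrix Profile / Profile Index (illustrative, O(n^2 * w))
naive_mp <- function(ts, w) {
  n <- length(ts) - w + 1
  # z-normalized subsequences, one per column
  subs <- sapply(seq_len(n), function(i) scale(ts[i:(i + w - 1)]))
  mp <- numeric(n)
  pi <- integer(n)
  excl <- ceiling(w / 2)                      # trivial-match exclusion zone
  for (i in seq_len(n)) {
    d <- sqrt(colSums((subs - subs[, i])^2))  # distance profile of subsequence i
    d[max(1, i - excl):min(n, i + excl)] <- Inf
    mp[i] <- min(d)                           # distance to the 1-NN
    pi[i] <- which.min(d)                     # index where the 1-NN is located
  }
  list(mp = mp, pi = pi)
}
```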


FIGURE 3.3: A distance matrix (top), and a matrix profile (bottom). The matrix profile stores only the minimum values of the distance matrix.

This research has already yielded two R packages implementing the MP algorithms from UCR.26 The first package is called tsmp, and a paper has also been published in the R Journal27 (2020 Journal Impact Factor™ of 3.984). The second package is called matrixprofiler and enhances the first one, using a low-level language to improve computational speed. The author has also joined the Matrix Profile Foundation as co-founder, together with contributors from the Python and Go languages.28,29

This implementation in R is being used for computing the MP and MP-based algorithms of this thesis.

3.3 The data

The dataset currently used is the CinC/Physionet Challenge 2015 public dataset, modified to include only the actual data and the header files so it can be read by the pipeline; it is hosted on Zenodo30 under the same license as Physionet.

The dataset is composed of records from 750 patients, each at least five minutes long. All signals have been resampled (using anti-alias filters) to 12 bit, 250 Hz and have had FIR band-pass (0.05 to 40 Hz) and mains notch filters applied to remove noise. Pacemaker and other artifacts are still present in the ECG.6 Furthermore, this dataset contains at least two ECG derivations and one or more variables like arterial blood pressure, photoplethysmograph readings, and respiration movements.

The events we seek to identify are the life-threatening arrhythmias as defined by Physionet in Table 3.1.

TABLE 3.1: Definition of the five alarm types used in CinC/Physionet Challenge 2015.
| Alarm | Definition |
|---|---|
| Asystole | No QRS for at least 4 seconds |
| Extreme Bradycardia | Heart rate lower than 40 bpm for 5 consecutive beats |
| Extreme Tachycardia | Heart rate higher than 140 bpm for 17 consecutive beats |
| Ventricular Tachycardia | 5 or more ventricular beats with heart rate higher than 100 bpm |
| Ventricular Flutter/Fibrillation | Fibrillatory, flutter, or oscillatory waveform for at least 4 seconds |

The fifth minute is precisely where the alarm has been triggered on the original recording set. To meet the ANSI/AAMI EC13 Cardiac Monitor Standards,31 the onset of the event is within 10 seconds of the alarm (i.e., between 4:50 and 5:00 of the record). That doesn’t mean that there are no other arrhythmias before.

For comparison, Table 3.2 collects the scores of the five best participants of the challenge.32–36

TABLE 3.2: Challenge Results on real-time data. The scores were multiplied by 100.
| Score | Authors |
|---|---|
| 81.39 | Filip Plesinger, Petr Klimes, Josef Halamek, Pavel Jurak |
| 79.44 | Vignesh Kalidas |
| 79.02 | Paula Couto, Ruben Ramalho, Rui Rodrigues |
| 76.11 | Sibylle Fallet, Sasan Yazdani, Jean-Marc Vesin |
| 75.55 | Christoph Hoog Antink, Steffen Leonhardt |

The equation used in this challenge to compute the score of the algorithms is shown in Equation (3.1). It is the accuracy formula, with a penalization of false negatives. The reasoning pointed out by the authors6 is the clinical impact of a genuine life-threatening event being considered unimportant. Accuracy is known to be misleading when there is a high class imbalance.37


\[\begin{equation} Score = \frac{TP+TN}{TP+TN+FP+5*FN} \tag{3.1} \end{equation}\]

Assuming a finite dataset, the pathological cases (1) \(\lim_{TP \to \infty}\) (whenever there is an event, it is positive) or (2) \(\lim_{TN \to \infty}\) (whenever there is an event, it is false) cannot happen. This dataset has 292 true alarms and 458 false alarms. Experimentally, this equation yields:

  • 0.24 if all guesses are on the False class
  • 0.28 if random guesses
  • 0.39 if all guesses are on the True class
  • 0.45 if no false positives plus random guesses on the True class
  • 0.69 if no false negatives plus random guesses on the False class
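
These scenarios can be reproduced with a short base R computation of Equation (3.1); expected counts are used for the “random” cases (this is a sketch of the experiment described above, not the challenge’s official scoring code).

```r
# Challenge score, Equation (3.1)
score <- function(TP, TN, FP, FN) (TP + TN) / (TP + TN + FP + 5 * FN)

pos <- 292  # true alarms in the dataset
neg <- 458  # false alarms in the dataset

score(TP = 0,       TN = neg,     FP = 0,       FN = pos)      # all FALSE          -> ~0.24
score(TP = pos / 2, TN = neg / 2, FP = neg / 2, FN = pos / 2)  # random guesses     -> ~0.28
score(TP = pos,     TN = 0,       FP = neg,     FN = 0)        # all TRUE           -> ~0.39
score(TP = pos / 2, TN = neg,     FP = 0,       FN = pos / 2)  # no FP, random TRUE -> ~0.45
score(TP = pos,     TN = neg / 2, FP = neg / 2, FN = 0)        # no FN, random FALSE-> ~0.69
```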

This small experiment (knowing the data in advance) shows that an algorithm built with “a single line of code and a few minutes of effort”38 could achieve at most a score of 0.39 in this challenge (for the last two cases, the algorithm must be very good on one of the classes).

Nevertheless, this equation will only be useful to allow us to compare the results of this thesis with other algorithms.

3.4 Work structure

3.4.1 Project start

The project started with a literature survey on the databases Scopus, PubMed, Web of Science, and Google Scholar with the following query (the syntax was adapted for each database):


TITLE-ABS-KEY ( algorithm OR ‘point of care’ OR ‘signal processing’ OR ‘computer assisted’ OR ‘support vector machine’ OR ‘decision support system’ OR ‘neural network’ OR ‘automatic interpretation’ OR ‘machine learning’ ) AND TITLE-ABS-KEY ( electrocardiography OR cardiography OR ‘electrocardiographic tracing’ OR ecg OR electrocardiogram OR cardiogram ) AND TITLE-ABS-KEY ( ‘Intensive care unit’ OR ‘cardiologic care unit’ OR ‘intensive care center’ OR ‘cardiologic care center’ )

The inclusion and exclusion criteria were defined as in Table 3.3.

TABLE 3.3: Literature review criteria.
| Inclusion criteria | Exclusion criteria |
|---|---|
| ECG automatic interpretation | Manual interpretation |
| ECG anomaly detection | Publication older than ten years |
| ECG context change detection | Does not attempt to identify life-threatening arrhythmias, namely asystole, extreme bradycardia, extreme tachycardia, ventricular tachycardia, and ventricular flutter/fibrillation |
| Online Stream ECG analysis | No performance measurements reported |
| | Specific diagnosis (like a flutter, hyperkalemia, etc.) |

The survey is being conducted with peer review; all articles in the full-text phase were obtained and assessed for the extraction phase, with the exception of 5 articles that were not available. The survey is currently stalled at the data extraction phase due to external factors.

Fig. 3.4 shows the flow diagram of the resulting screening using PRISMA format.

Flowchart of the literature survey.

FIGURE 3.4: Flowchart of the literature survey.

The peer review is being conducted by the author of this thesis together with another colleague, Dr. Andrew Van Benschoten from the Matrix Profile Foundation.28

The purpose of using Cohen’s \(\kappa\) in such a review is to allow us to gauge the agreement of both reviewers on the task of selecting the articles according to the goal of the survey. The most naive way to verify this would be simply to measure the overall agreement (the number of articles included and excluded by both, divided by the total number of articles). Nevertheless, this would not take into account the agreement we could expect purely by chance.

However, the \(\kappa\) statistic must be assessed carefully. This topic is beyond the scope of this work; therefore, it will only be explained briefly.

While it is widely used, the \(\kappa\) statistic is also well criticized. The direct interpretation of its value depends on several assumptions that are often violated: (1) it is assumed that both reviewers have the same level of experience; (2) the “codes” (include, exclude) are identified with the same accuracy; (3) the prevalence of the “codes” is the same; (4) there is no reviewer bias towards one of the choices.39,40

In addition, the number of “codes” affects the relation between the value of \(\kappa\) and the actual agreement between the reviewers. For example, given equiprobable “codes” and reviewers who are 85% accurate, the values of \(\kappa\) are 0.49, 0.60, 0.66, and 0.69 when the number of codes is 2, 3, 5, and 10, respectively.40,41

In order to take these limitations into account, the agreement between reviewers was calculated using KappaAcc42 from Professor Emeritus Roger Bakeman, Georgia State University, which computes the estimated accuracy of simulated reviewers.

3.4.2 RAW data

In order to better understand the data acquisition, a Single Lead Heart Rate Monitor breakout from Sparkfun™,43 using the AD823244 microchip from Analog Devices Inc. and compatible with Arduino®,45 was acquired for an in-house experiment (Fig. 3.5).


FIGURE 3.5: Single Lead Heart Rate Monitor

The output gives us a RAW signal, as shown in Fig. 3.6.


FIGURE 3.6: RAW output from Arduino at ~300 Hz

After applying the same settings as the Physionet database (collecting the data at 500 Hz, resampling to 250 Hz, band-pass filter, and notch filter), the signal is much better, as shown in Fig. 3.7.


FIGURE 3.7: Gray is RAW, Red is filtered

3.4.3 Preparing the data

Usually, data obtained by sensors needs to be “cleaned” for proper evaluation. That is different from the initial filtering process where the purpose is to enhance the signal. Here we are dealing with artifacts, disconnected cables, wandering baselines and others.

Several SQIs (Signal Quality Indexes) are used in the literature,46 some being trivial measures such as kurtosis, skewness, and median local noise level, others more complex, such as the pcaSQI (the ratio of the sum of the five largest eigenvalues associated with the principal components over the sum of all eigenvalues obtained by principal component analysis applied to the time-aligned ECG segments in the window). An assessment of several different methods to estimate electrocardiogram signal quality was performed by Del Rio et al.47

By experimentation (yet to be validated), a simple formula, shown in Equation (3.2),48 gives us the “complexity” of the signal and correlates well with noisy data.


\[\begin{equation} \sqrt{\sum_{i=1}^w((x_{i+1}-x_i)^2)}, \quad \text{where}\; w \; \text{is the window size} \tag{3.2} \end{equation}\]
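
A minimal rolling-window implementation of Equation (3.2) in base R could look like the following; the function name, example signal, and window size are placeholders, not the code used in the pipeline.

```r
# "Complexity" of the signal, Equation (3.2), over a rolling window of size w
complexity_sqi <- function(x, w) {
  sq_diff <- diff(x)^2                        # (x[i+1] - x[i])^2
  n_win <- length(sq_diff) - w + 1
  sapply(seq_len(n_win), function(i) sqrt(sum(sq_diff[i:(i + w - 1)])))
}

# Illustrative use: the noisy tail yields much higher values than the clean sine
ecg_like <- c(sin(seq(0, 20, by = 0.05)), rnorm(200, sd = 1))
sqi <- complexity_sqi(ecg_like, w = 100)
```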

Fig. 3.8 shows some SQIs and their relation to the data.


FIGURE 3.8: Green line is the “complexity” of the signal

Fig. 3.9 shows that noisy data (probably patient muscle movements) are marked with a blue point and thus are ignored by the algorithm.


FIGURE 3.9: Noisy data marked by the “complexity” filter

Although this “cleaning” step is often used, we will also test whether it is really necessary, and the performance with and without “cleaning” will be reported.

3.4.4 Detecting regime changes

The regime change approach will use the Arc Counts concept from the FLUSS (Fast Low-cost Unipotent Semantic Segmentation) algorithm, as explained by Gharghabi et al.49

The FLUSS algorithm (and FLOSS, its on-line version) is built on top of the Matrix Profile (MP),22 described in section 3.2.1. Recall that the MP and the companion Profile Index (PI) are two vectors holding information about the 1-NN. One can imagine several “arcs” going from one “index” to another. The algorithm is based on the assumption that, within a single regime, the most similar shape (the nearest neighbor) is located on “the same side”, so the number of “arcs” crossing a given point decreases when there is a regime change and increases again afterwards, as shown in Fig. 3.10. This drop in the Arc Counts signals that a change in the shape of the signal has happened.
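
As a naive illustration only (the real FLUSS/FLOSS uses a corrected arc curve, normalized by an idealized baseline, which is not shown here), the raw arc counts can be derived from a Profile Index as follows:

```r
# Raw arc counts from a Profile Index (naive sketch; FLUSS additionally
# normalizes this curve by an idealized parabolic baseline)
arc_counts <- function(pi_idx) {
  n <- length(pi_idx)
  counts <- numeric(n)
  for (i in seq_len(n)) {
    a <- min(i, pi_idx[i])
    b <- max(i, pi_idx[i])
    counts[a:b] <- counts[a:b] + 1  # the arc i <-> pi_idx[i] crosses positions a..b
  }
  counts                            # pronounced local minima suggest regime boundaries
}
```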


FIGURE 3.10: FLUSS algorithm, using arc counts.

The choice of the FLOSS algorithm (on-line version of FLUSS) is founded on the following arguments:

  • Domain Agnosticism: the algorithm makes no assumptions about the data, as opposed to most available algorithms to date.
  • Streaming: the algorithm can provide real-time information.
  • Real-World Data Suitability: the objective is not to explain all the data; therefore, areas marked as “don’t know” are acceptable.
  • Not a change point detection algorithm:50 the interest here is in changes in the shapes of a sequence of measurements.

Other algorithms we can cite are based on Hidden Markov Models (HMMs), which require at least two parameters to be set by domain experts: cardinality and dimensionality reduction. The most attractive alternative could be Autoplait,51 which is also domain agnostic and parameter-free. It segments the time series using Minimum Description Length (MDL) and recursively tests whether a region is best modeled by one or two HMMs. However, Autoplait is designed for batch operation, not streaming, and also requires discrete data. FLOSS was demonstrated to be superior on several datasets in its original paper. In addition, FLOSS is robust to several changes in the data, such as downsampling, bit depth reduction, baseline wandering, noise, smoothing, and even deleting 3% of the data and filling the gaps with simple interpolation. Finally, and most importantly, the algorithm is lightweight and suitable for low-power devices.

In the MP domain, it is also worth mentioning another possible algorithm: Time Series Snippets,52 based on MPdist.53 The latter measures the distance between two sequences considering how many similar sub-sequences they share, no matter the order of matching. It proved to be a useful measure (not a metric) for meaningfully clustering similar sequences. Time Series Snippets exploits MPdist properties to summarize a dataset by extracting the \(k\) sequences that represent most of the data. The final result may seem like an alternative for detecting regime changes, but it is not: the purpose of this algorithm is to find which pattern(s) explain most of the dataset. Also, it is not suitable for streaming data. Lastly, MPdist is quite expensive to compute compared to the trivial Euclidean distance.

The regime change detection will be evaluated following the criteria explained in section 3.5.

3.4.5 Classification of the new regime

The next step towards the objective of this work is to verify whether the new regime detected by the previous step is indeed a life-threatening pattern for which we should trigger the alarm.

First, let’s dismiss some apparent solutions: (1) Clustering. It is well understood that we cannot cluster time series subsequences meaningfully with any distance measure or with any algorithm.54 The main argument is that in a meaningful algorithm the output depends on the input, and this has been proven not to happen in time series subsequence clustering.54 (2) Anomaly detection. In this work we are not looking for surprises, but for patterns that are known to be life-threatening. (3) Forecasting. We may be tempted to make predictions, but clearly this is not the idea here.

The method of choice is classification. The simplest algorithm could be a TRUE/FALSE binary classification. Nevertheless, the five life-threatening patterns have well-defined characteristics, so it may seem more plausible to classify the new regime using some kind of ensemble of binary classifiers or a “six-class” classifier (the sixth class being the FALSE class).

Since the model doesn’t know which life-threatening pattern will be present in the regime (or if it will be a FALSE case), the model will need to check for all five TRUE cases and if none of these cases are identified, it will classify the regime as FALSE.

In order to avoid exceeding processor capacity, an initial set of shapelets55 may be sufficient to build the TRUE/FALSE classifier. To build such a set of shapelets, leveraging the MP, we will use the Contrast Profile.56

The Contrast Profile (CP) looks for patterns that are, at the same time, very similar to their neighbors in class A and very different from their nearest neighbor in class B. In other words, such a pattern represents class A well and may be taken as a “signature” of that class.

In this case we need to compute two MPs: one self-join MP of the positive class, \(MP^{(++)}\) (the class that has the signature we want to find), and one AB-join MP between the positive and negative classes, \(MP^{(+-)}\). Then we subtract the former, \(MP^{(++)}\), from the latter, \(MP^{(+-)}\), resulting in the \(CP\). The high values in the \(CP\) are the locations of the signature candidates we are looking for (the author of the CP calls these segments Platos).

Due to the nature of this approach, the MPs (containing values in Euclidean distance) are truncated at \(\sqrt{2w}\), where \(w\) is the window size, because values above this threshold correspond to negative correlations in the Pearson correlation space. Finally, we normalize the values by \(\sqrt{2w}\). Equation (3.3) synthesizes this computation.


\[\begin{equation} CP_w = \frac{MP_{w}^{(+-)} - MP_{w}^{(++)}}{\sqrt{2w}} \quad \text{where}\; w \; \text{is the window size} \tag{3.3} \end{equation}\]
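
A direct translation of Equation (3.3), assuming the two profiles have already been computed (the variable names, the clipping of negative values, and the example window size are placeholders/simplifications, not the reference implementation):

```r
# Contrast Profile from Equation (3.3); mp_ab and mp_aa are assumed to be
# pre-computed AB-join and self-join Matrix Profiles (Euclidean distance)
contrast_profile <- function(mp_ab, mp_aa, w) {
  lim <- sqrt(2 * w)
  cp <- (pmin(mp_ab, lim) - pmin(mp_aa, lim)) / lim  # truncate, subtract, normalize
  pmax(cp, 0)                                        # negative values are not of interest here
}

# The highest values of the CP point to shapelet ("Plato") candidates, e.g.:
# plato_idx <- which.max(contrast_profile(mp_ab, mp_aa, w = 91))
```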

For a more complete understanding of the process, Fig. 3.11 shows a practical example from the original article.56



FIGURE 3.11: Top to bottom: two weakly-labeled snippets of a larger time series. T(-) contains only normal beats. T(+) also contains PVC (premature ventricular contractions). Next, two Matrix Profiles with window size 91; AB-join is in red and self-join in blue. Bottom, the Contrast Profile showing the highest location.

After extracting candidates for each class signature, a classification algorithm will be fitted and evaluated using the criteria explained in section 3.5.

3.4.6 Summary of the methodology

In order to summarize the steps taken in this thesis to accomplish the main objective, Figs. 3.13, 3.14 and 3.15 show an overview of the processes involved.

First, let us introduce the concept of Nested Resampling.57 It is known that when model complexity increases, overfitting on the training set becomes more likely.58 This is an issue that this work has to counter, as there are many steps that require parameter tuning, even for algorithms that are almost parameter-free, like the MP.

The rule that must be followed is simple: do not evaluate a model on the same resampling split used to perform its own parameter tuning. Using simple cross-validation, information about the test set “leaks” into the evaluation, which leads to overfitting/overtuning and gives us an optimistically biased estimate of the performance. Bischl et al. (2012)57 describe these factors more deeply and also give us a countermeasure: (1) from preprocessing the data to model selection, use only the training set; (2) the test set should be touched only once, in the evaluation step; (3) repeat. This guarantees that “new”, separate data is only used after the model is trained/tuned.

Fig. 3.12 shows us this principle. Steps (1) and (2) described above are part of the Outer resampling, which in each loop splits the data into two sets: the training set and the test set. The training set is then used in the Inner resampling where, for example, the usual cross-validation may be used (creating an Analysis set and an Assessment set, to avoid conflict of terminology), and the best model/parameters are selected. Then, this best model is evaluated against the unseen test set that was created for this resampling.

The resulting (aggregated) performance of all outer samples gives us a more honest estimate of the expected performance on new data.


FIGURE 3.12: Nested resampling. The full dataset is resampled several times (outer resampling), so each branch has its own Test set (yellow). On each branch, the Training set is used as if it were a full dataset, being resampled again (inner resampling); here the Assessment set (blue) is used to test the learning model and tune parameters. The best model then, is finally evaluated on its own Test set.


With Nested Resampling57 understood, the following flowcharts can be better interpreted. Fig. 3.13 starts with the “Full Dataset”, which contains all time series from the dataset described in section 3.3. Each time series corresponds to one file from the database and represents one patient.

The regime change detection will use subsampling (bootstrapping can lead to substantial bias toward more complex models) in the Outer resampling and cross-validation in the Inner resampling. How the evaluation will be performed, and why cross-validation is used, is explained in section 3.5.
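
A hedged sketch of how such a split could be declared with the rsample package from tidymodels; the data frame name, number of repetitions, and fold counts are placeholders, not the final experimental design.

```r
# Nested resampling sketch with rsample (placeholder data and counts)
library(rsample)

folds <- nested_cv(
  patients_df,                              # one row per patient/record (hypothetical)
  outside = mc_cv(times = 10, prop = 0.8),  # outer loop: subsampling (Monte Carlo CV)
  inside  = vfold_cv(v = 10)                # inner loop: 10-fold cross-validation
)
```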


FIGURE 3.13: Pipeline for regime change detection. The full dataset (containing several patients) is divided on a Training set and a Test set. The Training set is then resampled in an Analysis set and an Assessment set. The former is used for training/parameter tuning and the latter for assessing the result. The best parameters are then used for evaluation on the Test set. This may be repeated several times.

Fig. 3.14 shows the process for training the classification model. First, the last ten seconds of each time series will be identified (the event occurs in this segment). Then the dataset will be grouped by class (type of event) and by TRUE/FALSE (alarm), so the Outer/Inner resampling will produce Training/Analysis sets and Test/Assessment sets with class frequencies similar to the full dataset.

The next step will be to extract shapelet candidates using the Contrast Profile and train the classifier.

This pipeline will use subsampling (for the same reason as above) in the Outer resampling and cross-validation in the Inner resampling. How the evaluation will be performed, and why cross-validation is used, is explained in section 3.5.


FIGURE 3.14: Pipeline for alarm classification. The full dataset (containing several patients) is grouped by class and by TRUE/FALSE alarm. This grouping allows resampling to keep a similar frequency of classes and TRUE/FALSE of the full dataset. Then the full dataset is divided on a Training set and a Test set. The Training set is then resampled in an Analysis set and an Assessment set. The former is used for extracting shapelets, training the model and parameter tuning; the latter for assessing the performance of the model. Finally, the best model is evaluated on the Test set. This may be repeated several times.

Finally, Fig. 3.15 shows how the final model will be used in the field. In a streaming scenario, the data will be collected and processed in real time to maintain an up-to-date Matrix Profile. The FLOSS algorithm will be looking for a regime change. When a regime change is detected, a sample of this new regime will be presented to the trained classifier, which will evaluate whether this new regime is a life-threatening condition or not.


FIGURE 3.15: Pipeline of the final process. The streaming data, coming from one patient, is processed to create its Matrix Profile. Then, the FLOSS algorithm is computed for detecting a regime change. When a new regime is detected, a sample of this new regime is analysed by the model and a decision is made. If the new regime is life-threatening, the alarm will be fired.

3.5 Evaluation of the algorithms

The resampling method used in both algorithms, regime change and classification, will be cross-validation, as the learning task will be done in batches.

Other options dismissed:57

  • Leave-One-Out Cross Validation: has better properties for regression than for classification. It has a high variance as an estimator of the mean loss. It is also asymptotically inconsistent and tends to select models that are too complex. It has been demonstrated empirically that 10-fold CV is often superior.

  • Bootstrapping: while it has low variance, it may be optimistically biased toward more complex models. Also, since it resamples with replacement, it can leak information into the assessment set.

  • Subsampling: like bootstrapping, but without replacement. The only argument for not choosing it is that, with cross-validation, we make sure all the data is used for analysis and assessment.

3.5.1 Regime change

A detailed discussion about the evaluation process of segmentation algorithms is given by the FLUSS/FLOSS author.49 Previous research has used precision/recall or derived measures for performance. The main issue is how to decide whether the algorithm was correct. If the ground truth says the change occurred at location 10,000 and the algorithm detects a change at location 10,001, is this a miss?

As pointed out by the author, several independent researchers have suggested a temporal tolerance, which solves one issue but then harshly penalizes any tiny miss beyond this tolerance.

The second issue is the over-penalization of an algorithm in which most of the detections are good but just one (or a few) is poor.

The author proposes the solution depicted in Fig. 3.16. It gives 0 as the best score and 1 as the worst. The function sums the distances between the ground truth locations and the locations suggested by the algorithm. The sum is then divided by the length of the time series to normalize the range to [0, 1].

The goal is minimizing this score.
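
A base R sketch of this scoring idea, mapping each reported location to its nearest ground-truth location (a simplification of the procedure in Fig. 3.16, with illustrative locations only):

```r
# Regime change score sketch: 0 is best, 1 is worst
regime_score <- function(truth, reported, n) {
  # distance from each reported location to its nearest ground-truth location
  dists <- sapply(reported, function(r) min(abs(truth - r)))
  sum(dists) / n                     # normalize by the time series length
}

regime_score(truth = c(5000, 12000), reported = c(5010, 11800, 12150), n = 25000)
```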


FIGURE 3.16: Regime change evaluation. The top line illustrates the ground truth, and the bottom line the locations reported by the algorithm. Note that multiple proposed locations can be mapped to a single ground truth point.

3.5.2 Classification

As described in section 3.4.5, the model for classification will use a set of shapelets to identify whether we have a TRUE (life-threatening) regime or a FALSE (non-life-threatening) regime.

Although the implementation of the final process will be using streaming data, the classification algorithm will work in batches, because it will not be applied on every single data point, but on samples that are extracted when a regime change is detected. During the training phase, the data is also analyzed in batches.

One important factor we must consider is that, in the real world, the majority of regime changes will be FALSE (i.e., not life-threatening). Thus, a performance measure that is robust to class imbalance is needed if we want to be able to assess the model in the field, after it has been trained.

It is well known that the Accuracy measure is not reliable for unbalanced data,37,59 as it returns optimistic results for a classifier biased toward the majority class. A description of common measures used in classification is available elsewhere.37,60 Here we will focus on three candidate measures: the F-score (well discussed in60), the Matthews Correlation Coefficient (MCC),61 and the \(\kappa_m\) statistic.62

The F-score (let’s abbreviate it to F1, as this is the most common setting) is widely used in information retrieval, where items are usually classified as “relevant” and “irrelevant”, and it combines recall (also known as sensitivity) with precision (the positive predictive value). Recall assesses how well the algorithm retrieves relevant examples among the (usually few) relevant items in the dataset, while precision assesses what proportion of the retrieved examples are indeed relevant. It ranges over [0, 1]. It completely ignores the irrelevant items that were not retrieved (usually this set contains many items). In classification tasks, its main weakness is not evaluating the true negatives: if a random classifier is biased towards the TRUE class (increasing the false positives significantly), this score actually gets better, which makes it unsuitable for our case. The F1 score is defined in Equation (3.4).

\[\begin{equation} F_1 = \frac{2 \cdot TP}{2 \cdot TP + FP + FN} = 2 \cdot \frac{precision \cdot recall}{precision + recall} \tag{3.4} \end{equation}\]

The MCC is a good alternative to the F1 when we do care about the true negatives (both were considered to “provide more realistic estimates of real-world model performance”63). It is a method to compute the Pearson product-moment correlation coefficient64 between the actual and predicted values. It ranges over [-1, 1]. The MCC is the only binary classification rate that gives a high score only if the binary classifier was able to correctly classify the majority of both the positive and the negative instances.60 One may argue that Cohen’s \(\kappa\) has the same behavior, but there are two main differences: (1) the MCC is undefined in the case of a majority voter, while Cohen’s \(\kappa\) does not discriminate this case from the random classifier (\(\kappa\) is zero for both); (2) it has been proven that, in a special case where the classifier increases the false negatives, Cohen’s \(\kappa\) does not get worse as expected, while the MCC does not have this issue.64 The MCC is defined in Equation (3.5).

\[\begin{equation} MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP) \cdot (TP + FN) \cdot (TN + FP) \cdot (TN + FN)}} \tag{3.5} \end{equation}\]

The \(\kappa_m\) statistic62 is a measure that takes into account not the random classifier but the majority voter (a classifier that always votes for the larger class). It was introduced by Bifet et al.62 to be used in online settings, where the class balance may change over time. It is defined in Equation (3.6), where \(p_0\) is the observed accuracy and \(p_m\) is the accuracy of the majority voter. The score theoretically ranges over (\(-\infty\), 1]; in practice, negative values indicate that the classifier performs worse than the majority voter and positive values that it performs better, up to the maximum of 1, when the classifier is optimal.

\[\begin{equation} \kappa_m = \frac{p_0 - p_m}{1 - p_m} \tag{3.6} \end{equation}\]
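
A compact base R implementation of the three measures, computed from confusion-matrix counts, is sketched below; the false negative rate (FNR) used as a tie-breaker in the next paragraph is included for completeness, and the example counts are illustrative only.

```r
# Classification measures from confusion-matrix counts (Equations 3.4 to 3.6)
class_metrics <- function(TP, TN, FP, FN) {
  total  <- TP + TN + FP + FN
  f1     <- 2 * TP / (2 * TP + FP + FN)                          # Eq. (3.4)
  mcc    <- (TP * TN - FP * FN) /
            sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))  # Eq. (3.5); NaN for a majority voter
  p0     <- (TP + TN) / total                                    # observed accuracy
  pm     <- max(TP + FN, TN + FP) / total                        # majority-voter accuracy
  kappam <- (p0 - pm) / (1 - pm)                                 # Eq. (3.6)
  fnr    <- FN / (TP + FN)                                       # false negative rate (tie-breaker)
  c(F1 = f1, MCC = mcc, kappa_m = kappam, FNR = fnr)
}

class_metrics(TP = 250, TN = 400, FP = 58, FN = 42)  # illustrative counts only
```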

In the inner resampling (model training/tuning), the classification will be binary, and in our case we know that the data is slightly unbalanced (60% false alarms). For this step, the metric for model selection will be the MCC. Nevertheless, during the optimization process, the algorithm will seek to minimize the False Negative Rate (\(FNR = \frac{FN}{TP+FN}\)), and in case of ties, the smaller FNR wins.

In the outer resampling, the MCC and \(\kappa_m\) of all winning models will be aggregated and reported using the median and interquartile range.

For comparing different classifiers, we will use the Wilcoxon signed-rank test, as this method is known to have low Type I and Type II errors in this kind of comparison.62

3.5.3 Full model (streaming setting)

For the final assessment, the best and the average model of the previous pipelines will be assembled and tested using the whole original dataset.

The algorithm will be tested on each of the five life-threatening event types individually, in order to evaluate its strengths and weaknesses.

For more transparency, the whole confusion matrix will be reported, as well as the MCC, \(\kappa_m\), and the FLOSS evaluation.