Last updated: 2024-06-19

Checks: 7 passed, 0 failed

Knit directory: muse/

This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20200712) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version f08c5d1. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    r_packages_4.3.3/
    Ignored:    r_packages_4.4.0/

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/confusion_matrix_rates.Rmd) and HTML (docs/confusion_matrix_rates.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File  Version  Author     Date        Message
Rmd   f08c5d1  Dave Tang  2024-06-19  R function for calculating confusion matrix rates

I often forget the names and aliases of confusion matrix rates (and how to calculate them) and have to look them up. Eventually I had enough and went looking for a single function that could calculate the most commonly used rates, such as sensitivity and precision, but I couldn’t find one that didn’t require installing an R package. Therefore I wrote my own, called table_metrics, and I briefly describe it in this post.

I have had this Simple guide to confusion matrix terminology bookmarked for many years and I keep referring back to it. It does a great job of explaining the list of rates that are often calculated from a confusion matrix for a binary classifier. If you need a refresher on the confusion matrix rates/metrics, check it out.
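As a quick reference (these are the standard definitions, not code from the guide), the core rates reduce to simple ratios of the four cell counts of a binary confusion matrix:

# standard definitions, expressed as ratios of the four cell counts
sensitivity <- function(TP, FN) TP / (TP + FN)  # true positive rate, recall
specificity <- function(TN, FP) TN / (TN + FP)  # true negative rate
precision   <- function(TP, FP) TP / (TP + FP)  # positive predictive value
accuracy    <- function(TP, TN, FP, FN) (TP + TN) / (TP + TN + FP + FN)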

We can generate the same confusion matrix as the one in the Simple guide with the following code.

generate_example <- function(){
  # 165 cases: 60 with truth "no" and 105 with truth "yes"
  dat <- data.frame(
    n = 1:165,
    truth = c(rep("no", 60), rep("yes", 105)),
    pred = c(rep("no", 50), rep("yes", 10), rep("no", 5), rep("yes", 100))
  )
  # cross-tabulate truth (rows) against predictions (columns)
  table(dat$truth, dat$pred)
}

confusion <- generate_example()
confusion
     
       no yes
  no   50  10
  yes   5 100

I wrote the function confusion_matrix() to generate a confusion matrix from the four case counts (TP, TN, FN, and FP). The same confusion matrix can be generated by sourcing the function from GitHub.

source("https://raw.githubusercontent.com/davetang/learning_r/main/code/confusion_matrix.R")
eg <- confusion_matrix(TP=100, TN=50, FN=5, FP=10)
eg$cm
     
       no yes
  no   50  10
  yes   5 100
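If you would rather not source code from the internet, here is a minimal sketch of how such a function could work (this is an illustration, not the actual source on GitHub, which may differ):

# minimal sketch: rebuild the truth and prediction vectors from the
# four counts and cross-tabulate them
confusion_matrix_sketch <- function(TP, TN, FN, FP){
  truth <- c(rep("no", TN + FP), rep("yes", FN + TP))
  pred  <- c(rep("no", TN), rep("yes", FP), rep("no", FN), rep("yes", TP))
  table(truth, pred)
}
confusion_matrix_sketch(TP = 100, TN = 50, FN = 5, FP = 10)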

To use the table_metrics function I wrote, source it directly from GitHub as well.

source("https://raw.githubusercontent.com/davetang/learning_r/main/code/table_metrics.R")

The function has four parameters, which are described below using roxygen2 syntax (copied and pasted from the source code of the table_metrics function).

#' @param tab Confusion matrix of class table
#' @param pos Name of the positive label
#' @param neg Name of the negative label
#' @param truth Where the truth/known set is stored, `row` or `col`

To use table_metrics() on the example data we generated, we have to provide arguments for the four parameters.

The first parameter is the confusion matrix stored as a table.

The second and third parameters are the names of the positive and negative labels. The example used yes and no, so those are our input arguments.

The fourth parameter indicates whether the truth labels are stored on the rows or the columns. Our truth labels are on the rows, so ‘row’ is specified; if you have generated a confusion matrix with the predictions as the rows and the truth labels as the columns, use ‘col’ instead.
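As a quick sanity check (assuming the function treats the two orientations symmetrically), transposing the table and switching the fourth argument should give the same metrics:

# t() swaps rows and columns, so truth now sits on the columns
table_metrics(t(confusion), 'yes', 'no', 'col')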

table_metrics(confusion, 'yes', 'no', 'row')
$accuracy
[1] 0.909

$misclassifcation_rate
[1] 0.0909

$error_rate
[1] 0.0909

$true_positive_rate
[1] 0.952

$sensitivity
[1] 0.952

$recall
[1] 0.952

$false_positive_rate
[1] 0.167

$true_negative_rate
[1] 0.833

$specificity
[1] 0.833

$precision
[1] 0.909

$prevalance
[1] 0.636

$f1_score
[1] 0.9300032

The function returns a list with the confusion matrix rates/metrics. You can save the list and subset it for the rate/metric you are interested in.

my_metrics <- table_metrics(confusion, 'yes', 'no', 'row')
my_metrics$sensitivity
[1] 0.952

Finally, if you want more significant digits (the default is 3), supply the number as the fifth argument.
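For example, passing the number of significant digits positionally:

# report seven significant digits instead of the default three
table_metrics(confusion, 'yes', 'no', 'row', 7)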

I have some additional notes on machine learning evaluation that may also be of interest. And that’s it!

F1 score

Generate labels.

true_label <- factor(c(rep(1, 80), rep(2, 10), rep(3, 10)), levels = 1:3)
true_label
  [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 [38] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 [75] 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
Levels: 1 2 3

Predictions.

pred_label <- factor(c(2, 3, rep(1, 98)), levels = 1:3)
pred_label
  [1] 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 [38] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 [75] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Levels: 1 2 3

Generate confusion matrix.

cm <- table(truth = true_label, predict = pred_label)
cm
     predict
truth  1  2  3
    1 78  1  1
    2 10  0  0
    3 10  0  0

Using yardstick::f_meas.

if(!require("yardstick")){
  install.packages("yardstick")
}
Loading required package: yardstick

Attaching package: 'yardstick'
The following object is masked from 'package:readr':

    spec
yardstick::f_meas(cm)
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 f_meas  macro          0.292

Using f_meas_vec().

yardstick::f_meas_vec(truth = true_label, estimate = pred_label)
[1] 0.2921348
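To see where 0.292 comes from, we can recompute the macro F1 by hand: calculate F1 for each class one-vs-rest, then take an unweighted mean. (A sketch; it assumes, as the numbers above suggest, that classes with no true positives contribute an F1 of 0.)

# per-class F1 from one-vs-rest counts, then an unweighted mean
per_class_f1 <- sapply(seq_len(nrow(cm)), function(i){
  TP <- cm[i, i]
  FP <- sum(cm[-i, i])  # predicted class i, truth is another class
  FN <- sum(cm[i, -i])  # truth is class i, predicted another class
  if (TP == 0) return(0)
  precision <- TP / (TP + FP)
  recall <- TP / (TP + FN)
  2 * precision * recall / (precision + recall)
})
mean(per_class_f1)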

High accuracy but low F1.

yardstick::accuracy(cm)
# A tibble: 1 × 3
  .metric  .estimator .estimate
  <chr>    <chr>          <dbl>
1 accuracy multiclass      0.78
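Accuracy only counts the diagonal of the confusion matrix, which here is dominated by the majority class, so it looks good even though classes 2 and 3 are never predicted correctly:

# 78 of the 100 cases fall on the diagonal, almost all from class 1
sum(diag(cm)) / sum(cm)
[1] 0.78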

sessionInfo()
R version 4.4.0 (2024-04-24)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 22.04.4 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so;  LAPACK version 3.10.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

time zone: Etc/UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] yardstick_1.3.1 lubridate_1.9.3 forcats_1.0.0   stringr_1.5.1  
 [5] dplyr_1.1.4     purrr_1.0.2     readr_2.1.5     tidyr_1.3.1    
 [9] tibble_3.2.1    ggplot2_3.5.1   tidyverse_2.0.0 workflowr_1.7.1

loaded via a namespace (and not attached):
 [1] sass_0.4.9        utf8_1.2.4        generics_0.1.3    stringi_1.8.4    
 [5] hms_1.1.3         digest_0.6.35     magrittr_2.0.3    timechange_0.3.0 
 [9] evaluate_0.23     grid_4.4.0        fastmap_1.2.0     rprojroot_2.0.4  
[13] jsonlite_1.8.8    processx_3.8.4    whisker_0.4.1     ps_1.7.6         
[17] promises_1.3.0    httr_1.4.7        fansi_1.0.6       scales_1.3.0     
[21] jquerylib_0.1.4   cli_3.6.2         rlang_1.1.3       munsell_0.5.1    
[25] withr_3.0.0       cachem_1.1.0      yaml_2.3.8        tools_4.4.0      
[29] tzdb_0.4.0        colorspace_2.1-0  httpuv_1.6.15     vctrs_0.6.5      
[33] R6_2.5.1          lifecycle_1.0.4   git2r_0.33.0      fs_1.6.4         
[37] pkgconfig_2.0.3   callr_3.7.6       pillar_1.9.0      bslib_0.7.0      
[41] later_1.3.2       gtable_0.3.5      glue_1.7.0        Rcpp_1.0.12      
[45] xfun_0.44         tidyselect_1.2.1  rstudioapi_0.16.0 knitr_1.46       
[49] htmltools_0.5.8.1 rmarkdown_2.27    compiler_4.4.0    getPass_0.2-4