Last updated: 2026-01-21
Checks: 7 passed, 0 failed
Knit directory: muse/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20200712) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version cd7dde9. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rproj.user/
Ignored: data/1M_neurons_filtered_gene_bc_matrices_h5.h5
Ignored: data/293t/
Ignored: data/293t_3t3_filtered_gene_bc_matrices.tar.gz
Ignored: data/293t_filtered_gene_bc_matrices.tar.gz
Ignored: data/5k_Human_Donor1_PBMC_3p_gem-x_5k_Human_Donor1_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
Ignored: data/5k_Human_Donor2_PBMC_3p_gem-x_5k_Human_Donor2_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
Ignored: data/5k_Human_Donor3_PBMC_3p_gem-x_5k_Human_Donor3_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
Ignored: data/5k_Human_Donor4_PBMC_3p_gem-x_5k_Human_Donor4_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
Ignored: data/97516b79-8d08-46a6-b329-5d0a25b0be98.h5ad
Ignored: data/Parent_SC3v3_Human_Glioblastoma_filtered_feature_bc_matrix.tar.gz
Ignored: data/brain_counts/
Ignored: data/cl.obo
Ignored: data/cl.owl
Ignored: data/jurkat/
Ignored: data/jurkat:293t_50:50_filtered_gene_bc_matrices.tar.gz
Ignored: data/jurkat_293t/
Ignored: data/jurkat_filtered_gene_bc_matrices.tar.gz
Ignored: data/pbmc20k/
Ignored: data/pbmc20k_seurat/
Ignored: data/pbmc3k.csv
Ignored: data/pbmc3k.csv.gz
Ignored: data/pbmc3k.h5ad
Ignored: data/pbmc3k/
Ignored: data/pbmc3k_bpcells_mat/
Ignored: data/pbmc3k_export.mtx
Ignored: data/pbmc3k_matrix.mtx
Ignored: data/pbmc3k_seurat.rds
Ignored: data/pbmc4k_filtered_gene_bc_matrices.tar.gz
Ignored: data/pbmc_1k_v3_filtered_feature_bc_matrix.h5
Ignored: data/pbmc_1k_v3_raw_feature_bc_matrix.h5
Ignored: data/refdata-gex-GRCh38-2020-A.tar.gz
Ignored: data/seurat_1m_neuron.rds
Ignored: data/t_3k_filtered_gene_bc_matrices.tar.gz
Ignored: r_packages_4.4.1/
Ignored: r_packages_4.5.0/
Untracked files:
Untracked: analysis/bioc.Rmd
Untracked: analysis/bioc_scrnaseq.Rmd
Untracked: analysis/likelihood.Rmd
Untracked: bpcells_matrix/
Untracked: data/Caenorhabditis_elegans.WBcel235.113.gtf.gz
Untracked: data/GCF_043380555.1-RS_2024_12_gene_ontology.gaf.gz
Untracked: data/arab.rds
Untracked: data/astronomicalunit.csv
Untracked: data/femaleMiceWeights.csv
Untracked: data/lung_bcell.rds
Untracked: m3/
Untracked: women.json
Unstaged changes:
Modified: analysis/icc.Rmd
Modified: analysis/isoform_switch_analyzer.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/information.Rmd) and HTML (docs/information.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | cd7dde9 | Dave Tang | 2026-01-21 | Coin flip example to illustrate mutual information |
| html | bb05a90 | Dave Tang | 2026-01-21 | Build site. |
| Rmd | 661277b | Dave Tang | 2026-01-21 | Mutual information example with coin flips |
| html | 078fd7b | Dave Tang | 2026-01-21 | Build site. |
| Rmd | 7155a31 | Dave Tang | 2026-01-21 | Explaining the unit of measurement in information theory |
| html | 131e2a8 | Dave Tang | 2026-01-20 | Build site. |
| Rmd | d4f7727 | Dave Tang | 2026-01-20 | Explain dmi() results |
| html | 4d49404 | Dave Tang | 2026-01-20 | Build site. |
| Rmd | b96f798 | Dave Tang | 2026-01-20 | Explain the dmi() function from the mpmi package |
| html | 85ef613 | Dave Tang | 2025-09-04 | Build site. |
| Rmd | a67ade5 | Dave Tang | 2025-09-04 | Add background information and working examples |
| html | 06c1935 | Dave Tang | 2025-08-29 | Build site. |
| Rmd | 2e7d525 | Dave Tang | 2025-08-29 | Information theory |
According to Claude Shannon, information is present whenever a signal is transmitted from one place (sender) to another (receiver).
Information theory, founded by Shannon, studies the quantification, transmission, storage, and processing of information. Key concepts include entropy (a measure of uncertainty) and mutual information (a measure of shared information), both of which are covered below.
Information theory quantifies uncertainty using bits (binary digits) as the unit of measurement. One bit represents the amount of information needed to resolve a binary choice, such as answering a single yes/no question. This binary framework serves as the fundamental building block because any complex decision can be decomposed into a series of binary choices.
For example, consider flipping a coin. Before the flip, there are two equally likely outcomes: heads or tails. This uncertainty can be resolved with a single binary question: “Is it heads?” Once you observe the result, this question is answered, and all uncertainty is eliminated. Since resolving this uncertainty required one binary question, the coin flip provides exactly 1 bit of information.
In general, choosing among \(N\) equally likely outcomes requires \(\log_2(N)\) bits of information:
log2(2)
[1] 1
log2(4)
[1] 2
log2(8)
[1] 3
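More generally, when outcomes are not equally likely, Shannon entropy averages the surprise \(-\log_2 p\) over the distribution. A minimal helper sketch (entropy_bits() is introduced here for illustration and reappears in the coin check below):

entropy_bits <- function(p) {
  p <- p[p > 0]              # drop zero-probability outcomes to avoid log(0)
  -sum(p * log2(p))
}
entropy_bits(c(0.5, 0.5))    # fair coin: 1 bit
entropy_bits(c(0.9, 0.1))    # biased coin: ~0.47 bits (less surprising)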
Mutual information between two random variables \(X\) and \(Y\) measures how much knowing one reduces uncertainty about the other.
\[ I(X;Y) = H(X) + H(Y) - H(X, Y) \]
or equivalently,
\[ I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X). \]
where \(H(X)\) is the entropy (uncertainty) of \(X\), and \(H(X|Y)\) is the conditional entropy of \(X\) given \(Y\). If \(I(X;Y) = 0\), \(X\) and \(Y\) are independent (no shared information); larger values indicate stronger statistical dependence.
Consider two coins, \(X\) and \(Y\):
Independent coins: If both coins are fair and flipped independently, knowing the outcome of \(X\) (heads or tails) tells you nothing about \(Y\). Here, \(H(X) = H(Y) = 1\) bit, \(H(X,Y) = 2\) bits, and \(I(X;Y) = 1 + 1 - 2 = 0\) bits.
Identical coins: If \(Y\) always shows the same result as \(X\) (they are linked), then knowing \(X\) completely determines \(Y\). Here, \(H(X) = H(Y) = 1\) bit, but \(H(X,Y) = 1\) bit (only one bit of combined uncertainty since they always match), giving \(I(X;Y) = 1 + 1 - 1 = 1\) bit. Knowing \(X\) eliminates all uncertainty about \(Y\).
Opposite coins: If \(Y\) always shows the opposite of \(X\), mutual information is also \(I(X;Y) = 1\) bit; knowing \(X\) still completely determines \(Y\), just with the opposite value.
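These three scenarios can be checked numerically with the entropy_bits() helper sketched earlier, using \(I(X;Y) = H(X) + H(Y) - H(X,Y)\):

# Independent fair coins: joint distribution is uniform over HH, HT, TH, TT
entropy_bits(c(0.5, 0.5)) * 2 - entropy_bits(rep(0.25, 4))   # 0 bits

# Identical (or opposite) coins: only two joint outcomes ever occur
entropy_bits(c(0.5, 0.5)) * 2 - entropy_bits(c(0.5, 0.5))    # 1 bit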
The Mixed-Pair Mutual Information Estimators {mpmi} package:
Uses a kernel smoothing approach to calculate Mutual Information for comparisons between all types of variables including continuous vs continuous, continuous vs discrete and discrete vs discrete. Uses a nonparametric bias correction giving Bias Corrected Mutual Information (BCMI). Implemented efficiently in Fortran 95 with OpenMP and suited to large genomic datasets.
install.packages('mpmi')
library(mpmi)
The dmi() function calculates MI and BCMI between a set of discrete variables held as columns in a matrix. It also performs jackknife bias correction and provides a z-score for the hypothesis of no association. Also included are the *.pw functions, which calculate MI between two vectors only, and the *njk functions, which do not perform the jackknife and are therefore faster.
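For example (a small sketch assuming, per the naming scheme above, that the discrete variants are dmi.pw() and dminjk()):

x <- rbinom(100, 1, 0.5)
y <- rbinom(100, 1, 0.5)
dmi.pw(x, y)          # pairwise MI with jackknife correction and z-value
dminjk(cbind(x, y))   # MI matrix only; faster because the jackknife is skipped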
MI quantifies the reduction in uncertainty about one variable given knowledge of another. It’s measured in bits (or nats, depending on the logarithm base) and ranges from 0 (variables are independent) to min(H(X), H(Y)) where H is entropy. Unlike correlation, MI captures non-linear relationships and works naturally with categorical data.
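To see the contrast with correlation, consider a deterministic but non-monotonic relationship (a quick sketch):

set.seed(1984)
x <- sample(-2:2, 1000, replace = TRUE)
y <- x^2              # y is fully determined by x, but not linearly
cor(x, y)             # near 0: correlation misses the dependence
dmi(cbind(x, y))$mi   # off-diagonal MI is clearly positive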
MI estimates from finite samples are positively biased; they tend to overestimate the true MI. The jackknife procedure systematically removes each observation, recalculates MI, and uses these values to estimate and subtract the bias. This is particularly important for small sample sizes or sparse contingency tables.
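The idea can be sketched with a plug-in MI estimate and a leave-one-out jackknife; this is a minimal illustration of the principle, not mpmi’s actual implementation:

# Plug-in MI (in nats) between two discrete vectors
mi_plugin <- function(x, y) {
  p_xy <- table(x, y) / length(x)   # joint probabilities
  p_x  <- rowSums(p_xy)             # marginal of x
  p_y  <- colSums(p_xy)             # marginal of y
  nz   <- p_xy > 0                  # skip empty cells to avoid log(0)
  sum(p_xy[nz] * log(p_xy[nz] / outer(p_x, p_y)[nz]))
}

# Jackknife bias correction: leave out each observation in turn
mi_jackknife <- function(x, y) {
  n       <- length(x)
  mi_full <- mi_plugin(x, y)
  mi_loo  <- vapply(seq_len(n), \(i) mi_plugin(x[-i], y[-i]), numeric(1))
  n * mi_full - (n - 1) * mean(mi_loo)   # bias-corrected estimate
}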
The results of dmi() are in many ways similar to a correlation matrix, with each row and column index corresponding to a given variable.
Independent coins.
set.seed(1984)
n <- 1000
X <- rbinom(n, 1, 0.5)
Y <- rbinom(n, 1, 0.5)
mat_independent <- cbind(X, Y)
dmi(mat_independent)
$mi
[,1] [,2]
[1,] 0.6926971130 0.0008345847
[2,] 0.0008345847 0.6923469671
$bcmi
[,1] [,2]
[1,] 0.6931976142 0.0003330715
[2,] 0.0003330715 0.6928474687
$zvalues
[,1] [,2]
[1,] 729.7075031 0.2572701
[2,] 0.2572701 547.0678525
Identical coins.
set.seed(1984)
n <- 1000
X <- rbinom(n, 1, 0.5)
Y <- X
mat_identical <- cbind(X, Y)
dmi(mat_identical)
$mi
[,1] [,2]
[1,] 0.6926971 0.6926971
[2,] 0.6926971 0.6926971
$bcmi
[,1] [,2]
[1,] 0.6931976 0.6931976
[2,] 0.6931976 0.6931976
$zvalues
[,1] [,2]
[1,] 729.7075 729.7075
[2,] 729.7075 729.7075
Opposite coins.
set.seed(1984)
n <- 1000
X <- rbinom(n, 1, 0.5)
Y <- 1 - X
mat_opposite <- cbind(X, Y)
dmi(mat_opposite)
$mi
[,1] [,2]
[1,] 0.6926971 0.6926971
[2,] 0.6926971 0.6926971
$bcmi
[,1] [,2]
[1,] 0.6931976 0.6931976
[2,] 0.6931976 0.6931976
$zvalues
[,1] [,2]
[1,] 729.7075 729.7075
[2,] 729.7075 729.7075
Does {mpmi} use the natural log instead of log2? The diagonal values of about 0.693 match \(\log(2)\), suggesting the results are in nats rather than bits:
log(2)
[1] 0.6931472
Four outcomes; the diagonal should now be \(\log(4)\) nats, i.e. 2 bits.
set.seed(1984)
n <- 10000
X <- sample(0:3, n, replace = TRUE)
Y <- X
mat_test <- cbind(X, Y)
dmi(mat_test)
$mi
[,1] [,2]
[1,] 1.386195 1.386195
[2,] 1.386195 1.386195
$bcmi
[,1] [,2]
[1,] 1.386345 1.386345
[2,] 1.386345 1.386345
$zvalues
[,1] [,2]
[1,] 9858.231 9858.231
[2,] 9858.231 9858.231
log(4)
[1] 1.386294
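Since the output is in nats, dividing by \(\log(2)\) converts it to bits:

res <- dmi(mat_identical)
res$mi / log(2)   # diagonal becomes ~1 bit, as expected for a fair coin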
Exploring a group of categorical variables (from the examples in the documentation of the dmi() function):

- cyl - Number of cylinders
- vs - Engine (0 = V-shaped, 1 = straight)
- am - Transmission (0 = automatic, 1 = manual)
- gear - Number of forward gears
- carb - Number of carburetors (a device used by a gasoline internal combustion engine to control and mix air and fuel entering the engine)

my_vars <- c("cyl","vs","am","gear","carb")
dat <- mtcars[, my_vars]
discresults <- dmi(dat)
add_names <- function(res, names){
purrr::map(res, \(x){
row.names(x) <- names
colnames(x) <- names
x
})
}
add_names(discresults, my_vars)
$mi
cyl vs am gear carb
cyl 1.0612040 0.43120940 0.14523133 0.3634430 0.5097002
vs 0.4312094 0.68531421 0.01417347 0.2036022 0.3123300
am 0.1452313 0.01417347 0.67546458 0.4367718 0.1248672
gear 0.3634430 0.20360224 0.43677177 1.0130227 0.2391776
carb 0.5097002 0.31232996 0.12486719 0.2391776 1.4979575
$bcmi
cyl vs am gear carb
cyl 1.0939730 0.397633050 0.105802510 0.2755075 0.48789448
vs 0.3976330 0.701457431 -0.003241008 0.1510687 0.29175135
am 0.1058025 -0.003241008 0.691622603 0.4355686 0.08710974
gear 0.2755075 0.151068658 0.435568574 1.0460800 0.16759348
carb 0.4878945 0.291751354 0.087109744 0.1675935 1.61116674
$zvalues
cyl vs am gear carb
cyl 21.798246 3.3933783 1.0582216 2.244308 7.474051
vs 3.393378 30.3263950 -0.1011464 1.223818 3.409049
am 1.058222 -0.1011464 19.9920905 5.522984 1.381430
gear 2.244308 1.2238177 5.5229835 14.478527 1.583226
carb 7.474051 3.4090490 1.3814296 1.583226 10.791836
Each matrix is symmetric (5×5), where rows and columns represent the same variables in the same order. The diagonal represents each variable with itself.
$mi is the raw mutual information: the uncorrected MI values, in nats (not bits, given the natural log base established above).
- Diagonal (e.g., cyl with itself = 1.06): the entropy of each variable, i.e. how much uncertainty/information it contains
- Off-diagonal (e.g., cyl-vs = 0.43): how much information two variables share
- cyl-carb (0.51): knowing cylinders tells you a lot about carburetors
- cyl-vs (0.43): cylinders and engine shape are related
- gear-am (0.44): gears and transmission type are connected
- vs-am (0.014): engine shape and transmission are nearly independent
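The diagonal interpretation can be confirmed by computing the entropy of cyl directly (in nats, to match):

p_cyl <- table(mtcars$cyl) / nrow(mtcars)
-sum(p_cyl * log(p_cyl))   # ~1.06, matching the cyl diagonal of $mi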
$bcmi is the bias-corrected MI; after the jackknife correction:

- Values are generally similar to the raw MI
- vs-am = -0.003: the negative value indicates the raw MI was entirely due to sampling bias; these variables are essentially independent
- The correction is more pronounced for weaker associations
$zvalues show the statistical significance of each association. The null hypothesis is that there is no association between the variables. As a rule of thumb, |z| > 3 indicates a highly significant association.

Highly significant associations (|z| > 3) include:

- cyl-carb (z = 7.47): strong evidence cylinders and carburetors are related
- cyl-vs (z = 3.39): cylinders and engine shape are associated
- gear-am (z = 5.52): gears and transmission are related

In contrast:

- vs-am (z = -0.10): confirms independence
- cyl-am (z = 1.06): weak/no association

Cars with more cylinders tend to have more carburetors and V-shaped engines. Manual transmissions are associated with different gear configurations, but engine shape doesn’t predict transmission type.
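If the z-scores are read against a standard normal null (an assumption about their intended scale), a two-sided p-value follows directly:

z <- 5.52            # gear-am z-score from above
2 * pnorm(-abs(z))   # ~3e-8: strong evidence against "no association"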
sessionInfo()
R version 4.5.0 (2025-04-11)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 24.04.3 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so; LAPACK version 3.12.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
time zone: Etc/UTC
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] mpmi_0.43.2.1 KernSmooth_2.23-26 lubridate_1.9.4 forcats_1.0.0
[5] stringr_1.5.1 dplyr_1.1.4 purrr_1.0.4 readr_2.1.5
[9] tidyr_1.3.1 tibble_3.3.0 ggplot2_3.5.2 tidyverse_2.0.0
[13] workflowr_1.7.1
loaded via a namespace (and not attached):
[1] sass_0.4.10 generics_0.1.4 stringi_1.8.7 hms_1.1.3
[5] digest_0.6.37 magrittr_2.0.3 timechange_0.3.0 evaluate_1.0.3
[9] grid_4.5.0 RColorBrewer_1.1-3 fastmap_1.2.0 rprojroot_2.0.4
[13] jsonlite_2.0.0 processx_3.8.6 whisker_0.4.1 ps_1.9.1
[17] promises_1.3.3 httr_1.4.7 scales_1.4.0 jquerylib_0.1.4
[21] cli_3.6.5 rlang_1.1.6 withr_3.0.2 cachem_1.1.0
[25] yaml_2.3.10 tools_4.5.0 tzdb_0.5.0 httpuv_1.6.16
[29] vctrs_0.6.5 R6_2.6.1 lifecycle_1.0.4 git2r_0.36.2
[33] fs_1.6.6 pkgconfig_2.0.3 callr_3.7.6 pillar_1.10.2
[37] bslib_0.9.0 later_1.4.2 gtable_0.3.6 glue_1.8.0
[41] Rcpp_1.0.14 xfun_0.52 tidyselect_1.2.1 rstudioapi_0.17.1
[45] knitr_1.50 farver_2.1.2 htmltools_0.5.8.1 rmarkdown_2.29
[49] compiler_4.5.0 getPass_0.2-4