Last updated: 2023-03-28

Checks: 7 passed, 0 failed

Knit directory: PODFRIDGE/

This reproducible R Markdown analysis was created with workflowr (version 1.7.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20230302) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 7d6708b. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .DS_Store
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    data/.DS_Store

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
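For reference, a minimal sketch of the publishing step mentioned above; the file path and commit messages are placeholders, not commands that were run for this analysis:

# Commit the R Markdown source, rebuild the site, and commit the resulting HTML
wflow_publish("analysis/analyses.Rmd", message = "Describe the change")

# Commit additional scripts or data files that the analysis depends on
wflow_git_commit("data/est-pop-combo.csv", message = "Track a data dependency")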


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/analyses.Rmd) and HTML (docs/analyses.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File  Version  Author       Date        Message
Rmd   7d6708b  Tina Lasisi  2023-03-28  adding captions
html  b44fa2c  Tina Lasisi  2023-03-28  Build site.
Rmd   3c181d1  Tina Lasisi  2023-03-28  updating analyses for all populations
Rmd   f759e40  Tina Lasisi  2023-03-24  Return analyses to original
html  f759e40  Tina Lasisi  2023-03-24  Return analyses to original
Rmd   c9daa86  Tina Lasisi  2023-03-24  Update analyses.Rmd
html  cdd09b6  Tina Lasisi  2023-03-23  Build site.
Rmd   b24ae94  Tina Lasisi  2023-03-23  Generating figures across variable combos
Rmd   6605057  Tina Lasisi  2023-03-20  Update parameters for generations
html  6605057  Tina Lasisi  2023-03-20  Update parameters for generations
html  5f805fe  Tina Lasisi  2023-03-06  Build site.
Rmd   5106bab  Tina Lasisi  2023-03-06  add analyses
html  c3948af  Tina Lasisi  2023-03-05  Build site.
html  f02bc38  Tina Lasisi  2023-03-03  Build site.
html  c9130d5  Tina Lasisi  2023-03-03  wflow_git_commit(all = TRUE)
html  a4a7d45  Tina Lasisi  2023-03-03  Build site.
html  00073fd  Tina Lasisi  2023-03-03  Build site.
html  51ed5a6  Tina Lasisi  2023-03-03  Build site.
Rmd   13ed9ae  Tina Lasisi  2023-03-03  Publishing POPFORGE
html  13ed9ae  Tina Lasisi  2023-03-03  Publishing POPFORGE

# Load necessary packages
library(wesanderson) # for color palettes
library(tidyverse) #data wrangling etc
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.1     ✔ readr     2.1.4
✔ forcats   1.0.0     ✔ stringr   1.5.0
✔ ggplot2   3.4.1     ✔ tibble    3.2.1
✔ lubridate 1.9.2     ✔ tidyr     1.3.0
✔ purrr     1.0.1     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(RColorBrewer)

# Set paths to the data and output directories
path <- file.path(".", "data")
savepath <- file.path(".", "output")

# Vector of cousin degrees (1 = first cousins, ..., 8 = eighth cousins)
p <- 1:8

# Initial population size (placeholder; N is redefined below by calc_final_N())
N <- 76e6
# File paths to data on US population sizes by year
US_pop_fp <- file.path(path, "est-pop-combo.csv")
US_pop_coop_fp <- file.path(path, "US_popsize.csv")

# Read in data on US population sizes by year
pop1 <- read.csv(US_pop_fp) %>% 
  select(Year, Black, White, Total)

pop2 <- read.csv(US_pop_coop_fp) %>%
  rename(Coop = Population)

pop <- left_join(pop1, pop2)
Joining with `by = join_by(Year)`
# Set up vector of database sizes to test
DB.sizes <- c(1e6, 5e6, 10e6)

# Set color palette for graphs
my.cols <- wes_palette("Darjeeling1")

# set starting generation
start_gen <- 1950
# Calculate the years of the shared-ancestor ("grandparent") generations for each
# cousin degree, assuming 30 years per generation
calc_grandparent_gen <- function(year, p = c(1:8)) {
  yrs_grandparent_gen <- year - 30 * (p + 1)
  return(yrs_grandparent_gen)
}
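# Illustrative check (not part of the analysis): calc_grandparent_gen(1950)
# returns 1890 1860 1830 1800 1770 1740 1710 1680, i.e. the years whose
# population sizes are used for cousin degrees 1 through 8.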


# Define a function to calculate final population sizes from the population table
# and the starting generation
calc_final_N <- function(pop, start_gen) {

  # Years of the shared-ancestor generations for each cousin degree
  yrs_grandpar_gen <- calc_grandparent_gen(start_gen)

  # Get population sizes by year for those generations
  gp_Ns <- pop %>% 
    filter(Year %in% yrs_grandpar_gen)

  # Scale each population down: 50% are assumed to be potential parents and 90% of
  # those are assumed to have children; enforce a floor of 1e6 for small populations
  scaled_N <- gp_Ns %>%
    mutate(across(!Year, ~ case_when(. * 0.5 * 0.9 < 1e6 ~ 1e6,
                                     TRUE ~ . * 0.5 * 0.9)))

  return(scaled_N)
}
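# Illustrative check with hypothetical inputs (not values from the data):
# a population of 80e6 scales to 80e6 * 0.5 * 0.9 = 36e6, while a population
# of 2e6 would scale to 0.9e6 and is therefore floored at 1e6.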
N <- calc_final_N(pop, start_gen) %>% 
  arrange(desc(Year))

# N2 <- calc_final_N(pop2, start_gen)
# Define the population sizes and names
populations <- N %>% 
  pivot_longer(-Year, names_to = "Population", values_to = "N") %>% 
  mutate(Population = factor(Population,
                             levels = c("Black", "White", "Total", "Coop")))

# Create a ggplot object to visualize population sizes
pop_size_plot <- ggplot(populations, aes(x = Population, y = N)) +
  geom_bar(stat = "identity", fill = "steelblue") +
  labs(x = "Population", y = "Population size") +
  theme_minimal() +
  facet_wrap(~Year, scales = "free")

# Display the plot
pop_size_plot

Version Author Date
b44fa2c Tina Lasisi 2023-03-28
cdd09b6 Tina Lasisi 2023-03-23

Probability of p-th degree cousin

calc_prob <- function(p, DB.sizes, N) {
  pop_columns <- N[, -1, drop = FALSE]

  results_df <- tibble(Population = character(), p.degree = numeric(), DB.size = numeric(), Prob = numeric())

  for (pop_name in colnames(pop_columns)) {
    pop_vector <- pop_columns[[pop_name]]

    for (db_size in DB.sizes) {
      # Poisson approximation: treat 2^(2p - 2) * db_size / pop_vector as the
      # expected number of p-th cousins in the database, so the probability of
      # having none is exp(-expected)
      prob.no.rellys <- exp(-2^(2 * p - 2) * db_size / pop_vector)
      prob.rel <- 1 - prob.no.rellys

      results_df <- rbind(results_df, tibble(Population = pop_name, p.degree = p, DB.size = db_size, Prob = prob.rel))
    }
  }

  return(results_df)
}
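# Illustrative check with hypothetical inputs (not values from the data):
# for a database of 1e6 people drawn from a population of 100e6, the probability
# of at least one third cousin (p = 3) is
# 1 - exp(-2^(2 * 3 - 2) * 1e6 / 100e6) = 1 - exp(-0.16), roughly 0.15.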




plt_df <- calc_prob(p, DB.sizes, N) %>% 
  mutate(Population = factor(Population,
                             levels = c("Black", "White", "Total", "Coop")))
ggplot_prob <- function(data, my.cols) {
  # Group data by Population and DB.size
  data_grouped <- data %>%
    group_by(Population, DB.size)

  plot <- ggplot(data_grouped, aes(x = p.degree, y = Prob, color = factor(DB.size), group = factor(DB.size))) +
    geom_point() +
    geom_line() +
    scale_color_manual(values = my.cols, name = "Database size\n(Millions)",
                       labels = format(unique(data$DB.size) / 1e6, dig = 1)) +
    labs(x = "p (degree of cousin)", y = "Probability of at least one p-th cousin in database") +
    theme_minimal() +
    facet_wrap(~ Population, labeller = labeller(Population = as_labeller(function(x) x))) +
    scale_x_continuous(breaks = 1:8) + # Show every value from 1 to 8 on the x-axis
    scale_y_continuous(limits = c(0.0, 1.0)) # Set y-axis limits to 0.0 and 1.0

  return(plot)
}


ggplot_prob(plt_df, my.cols)
Probability of finding at least one p-th degree cousin in a database of varying sizes for different populations. Each panel represents a population, and the lines within each panel correspond to different database sizes.

Version Author Date
b44fa2c Tina Lasisi 2023-03-28
# generate ggplot object with populations in different colors


ggplot_prob_combined <- function(data){
  palette <- brewer.pal(n = length(unique(data$Population)), "Dark2")
  
  
  data <- data %>%
    mutate(DB.size.formatted = factor(scales::label_number(scale = 1/1e6, accuracy = 1, suffix = " Million")(DB.size)))
  
  plot <- ggplot(data, aes(x = p.degree, y = Prob, color = Population, group = interaction(Population, DB.size.formatted))) +
    geom_point() +
    geom_line() +
    scale_color_manual(values = palette, name = "Population") +
    labs(x = "p (degree of cousin)", y = "Probability of at least one p-th cousin in database") +
    theme_minimal() +
    facet_wrap(~DB.size.formatted)
  
  return(plot)
}

ggplot_prob_combined(plt_df)
Differences between populations in the probability of finding at least one p-th degree cousin in databases of varying sizes. Each panel represents a database size, and the lines within each panel correspond to different populations.

Version Author Date
b44fa2c Tina Lasisi 2023-03-28
cdd09b6 Tina Lasisi 2023-03-23

Number of p-th degree cousins

calc_cousins <- function(p, DB.sizes, N) {
  pop_columns <- N[, -1, drop = FALSE]

  results_df <- tibble(Population = character(), p.degree = numeric(), DB.size = numeric(), Num_Cousins = numeric())

  for (pop_name in colnames(pop_columns)) {
    pop_vector <- pop_columns[[pop_name]]

    for (db_size in DB.sizes) {
      num_cousins <- 4^(p) * db_size / (pop_vector / 2)

      results_df <- rbind(results_df, tibble(Population = pop_name, p.degree = p, DB.size = db_size, Num_Cousins = num_cousins))
    }
  }

  return(results_df)
}
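# Illustrative check with hypothetical inputs (not values from the data):
# for a database of 1e6 people drawn from a population of 100e6, the expected
# number of third cousins (p = 3) is 4^3 * 1e6 / (100e6 / 2) = 1.28.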

plt_numcousins_df <- calc_cousins(p, DB.sizes, N) %>% 
  mutate(Population = factor(Population,
                             levels = c("Black", "White", "Total", "Coop")))
ggplot_cousins <- function(data, my.cols) {
  # Group data by Population and DB.size
  data_grouped <- data %>%
    group_by(Population, DB.size)

  plot <- ggplot(data_grouped, aes(x = p.degree, y = Num_Cousins, color = factor(DB.size), group = factor(DB.size))) +
    geom_point() +
    geom_line() +
    scale_color_manual(values = my.cols, name = "Database size\n(Millions)",
                       labels = format(unique(data$DB.size) / 1e6, dig = 1)) +
    labs(x = "p (degree of cousin)", y = "Number of p-th degree cousins in database") +
    theme_minimal() +
    facet_wrap(~ Population, labeller = labeller(Population = as_labeller(function(x) x))) +
    scale_x_continuous(breaks = 1:8) + # Show every value from 1 to 8 on the x-axis
    scale_y_continuous(limits = c(0, max(data$Num_Cousins))) # Set y-axis limits to 0 and maximum number of cousins

  return(plot)
}

ggplot_cousins(plt_numcousins_df, my.cols)

Version Author Date
b44fa2c Tina Lasisi 2023-03-28

Probability of a genetically detectable cousin

Below, we calculate the expected number of shared blocks of genetic material between cousins of varying degrees of relatedness. This matters because the probability of detecting shared genetic material between two individuals decreases as their degree of relatedness decreases. Using the expected number of shared blocks, which is based on a formula from previous research, and a Poisson assumption, the code estimates the probability that two cousins share at least one, two, or three blocks of genetic material.

# The variable 'meiosis' represents the number of meiosis events between cousins, where 'p' is the degree of relatedness (i.e. p = 1 for first cousins, p = 2 for second cousins, etc.)
meiosis <- p + 1

## Expected number of blocks shared between cousins
# 'E.num.blocks' is the expected number of blocks of genetic material shared between cousins, given the number of meiosis events separating them. The formula and its constants come from previous research and are not derived in this code.
E.num.blocks <- 2 * (33.8 * (2 * meiosis) + 22) / (2^(2 * meiosis - 1))

## Use Poisson assumption
# 'Prob.genetic' is the probability of two cousins sharing at least one block of genetic material based on the expected number of shared blocks calculated in the previous step. The calculation uses a Poisson distribution assumption.
Prob.genetic <- 1 - exp(-E.num.blocks)

# 'prob.g.e.2.blocks' is the probability of two cousins sharing at least two blocks of genetic material based on the expected number of shared blocks calculated in the previous step. The calculation uses a Poisson distribution assumption.
prob.g.e.2.blocks <- 1 - sapply(E.num.blocks, function(expected.num) {sum(dpois(0:1, expected.num))})

# 'prob.g.e.3.blocks' is the probability of two cousins sharing at least three blocks of genetic material based on the expected number of shared blocks calculated in the previous step. The calculation uses a Poisson distribution assumption.
prob.g.e.3.blocks <- 1 - sapply(E.num.blocks, function(expected.num) {sum(dpois(0:2, expected.num))})
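# Illustrative check (following directly from the formula above): for first
# cousins (p = 1, meiosis = 2), E.num.blocks = 2 * (33.8 * 4 + 22) / 2^3 = 39.3,
# so Prob.genetic is essentially 1; for eighth cousins (p = 8, meiosis = 9),
# E.num.blocks = 2 * (33.8 * 18 + 22) / 2^17 is about 0.0096, so Prob.genetic
# is also about 0.0096.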

General

# Create a data frame for the plot
genetic_blocks_df <- data.frame(p = p,
                                Prob.genetic = Prob.genetic,
                                prob.g.e.2.blocks = prob.g.e.2.blocks,
                                prob.g.e.3.blocks = prob.g.e.3.blocks)

# Pivot the data frame into a long format
genetic_blocks_long <- genetic_blocks_df %>%
  pivot_longer(-p, names_to = "GeneticBlocks", values_to = "Probability")

# Set color palette for plot
my.cols2 <- wes_palette("FantasticFox1")[3:5]

# Create the ggplot2 plot
ggplot_genetic_blocks <- ggplot(genetic_blocks_long, aes(x = p, y = Probability, color = GeneticBlocks, group = GeneticBlocks)) +
  geom_point() +
  geom_line(linewidth = 2) +
  scale_color_manual(values = my.cols2,
                     labels = c("Cousins (w. >0 genomic blocks)",
                                "Cousins (w. >1 genomic blocks)",
                                "Cousins (w. >2 genomic blocks)"),
                     name = "Type of Cousin") +
  labs(x = "p (degree of cousin)", y = "Probability p-th cousin \"detectable\"") +
  theme_minimal() +
  scale_x_continuous(breaks = 1:8) + # Show every value from 1 to 8 on the x-axis
  scale_y_continuous(limits = c(0, 1)) # Set y-axis limits to 0 and 1
# Display the plot
ggplot_genetic_blocks
Probabilities of detecting a genetic cousin in a database based on shared genomic blocks.

Version Author Date
b44fa2c Tina Lasisi 2023-03-28
5f805fe Tina Lasisi 2023-03-06

Relative to database size

calc_numgen_cousins_df <- function(p, DB.sizes, N, prob) {
  pop_columns <- N[, -1, drop = FALSE]
  results_df <- tibble(Population = character(), p.degree = numeric(), DB.size = numeric(), Num_Cousins = numeric())

  for (pop_name in colnames(pop_columns)) {
    pop_vector <- pop_columns[[pop_name]]

    for (db_size in DB.sizes) {
      num_cousins <- 4^(p) * db_size / (pop_vector / 2)
      gen_related_cousins <- num_cousins * prob

      results_df <- rbind(results_df, tibble(Population = pop_name, p.degree = p, DB.size = db_size, Num_Cousins = gen_related_cousins))
    }
  }

  return(results_df)
}
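# Illustrative check with hypothetical inputs (not values from the data):
# for a database of 1e6 people drawn from a population of 100e6, an individual
# has 4^1 * 1e6 / (100e6 / 2) = 0.08 expected first cousins in the database;
# multiplying by prob.g.e.3.blocks (essentially 1 for first cousins) leaves
# roughly 0.08 expected genetically detectable first cousins.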


plt_numgen_cousins_df <- calc_numgen_cousins_df(p, DB.sizes, N, prob.g.e.3.blocks) %>% 
  mutate(Population = factor(Population,
                             levels = c("Black", "White", "Total", "Coop")))
plt_numgen_cousins <- function(data, my.cols) {
  plot <- ggplot(data, aes(x = p.degree, y = Num_Cousins, color = factor(DB.size), group = factor(DB.size))) +
    geom_point() +
    geom_line() +
    scale_color_manual(values = my.cols, name = "Database size\n(Millions)",
                       labels = format(unique(data$DB.size) / 1e6, dig = 1)) +
    labs(x = "p (degree of cousin)", y = "Expected number of genetic p-th cousins in database") +
    theme_minimal() +
    facet_wrap(~ Population, labeller = labeller(Population = as_labeller(function(x) x))) +
    scale_x_continuous(breaks = 1:8) # Show every value from 1 to 8 on the x-axis

  return(plot)
}


plt_numgen_cousins(plt_numgen_cousins_df, my.cols)
Expected number of genetic p-th cousins in databases of varying sizes for different populations. Each panel represents a population, and the lines within each panel correspond to different database sizes.

Version Author Date
b44fa2c Tina Lasisi 2023-03-28

sessionInfo()
R version 4.2.2 (2022-10-31)
Platform: aarch64-apple-darwin20 (64-bit)
Running under: macOS Ventura 13.2.1

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] RColorBrewer_1.1-3 lubridate_1.9.2    forcats_1.0.0      stringr_1.5.0     
 [5] dplyr_1.1.1        purrr_1.0.1        readr_2.1.4        tidyr_1.3.0       
 [9] tibble_3.2.1       ggplot2_3.4.1      tidyverse_2.0.0    wesanderson_0.3.6 
[13] workflowr_1.7.0   

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.0 xfun_0.35        bslib_0.4.1      colorspace_2.0-3
 [5] vctrs_0.6.1      generics_0.1.3   htmltools_0.5.3  yaml_2.3.6      
 [9] utf8_1.2.2       rlang_1.1.0      jquerylib_0.1.4  later_1.3.0     
[13] pillar_1.8.1     glue_1.6.2       withr_2.5.0      lifecycle_1.0.3 
[17] munsell_0.5.0    gtable_0.3.1     evaluate_0.18    labeling_0.4.2  
[21] knitr_1.41       tzdb_0.3.0       callr_3.7.3      fastmap_1.1.0   
[25] httpuv_1.6.6     ps_1.7.2         fansi_1.0.3      highr_0.9       
[29] Rcpp_1.0.9       promises_1.2.0.1 scales_1.2.1     cachem_1.0.6    
[33] jsonlite_1.8.4   farver_2.1.1     fs_1.5.2         hms_1.1.2       
[37] digest_0.6.30    stringi_1.7.8    processx_3.8.0   getPass_0.2-2   
[41] rprojroot_2.0.3  grid_4.2.2       cli_3.6.1        tools_4.2.2     
[45] magrittr_2.0.3   sass_0.4.4       crayon_1.5.2     whisker_0.4     
[49] pkgconfig_2.0.3  ellipsis_0.3.2   timechange_0.1.1 rmarkdown_2.18  
[53] httr_1.4.4       rstudioapi_0.14  R6_2.5.1         git2r_0.30.1    
[57] compiler_4.2.2