Last updated: 2025-03-04
Checks: 6 passed, 1 warning
Knit directory: QUAIL-Mex/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
The R Markdown file has unstaged changes. To know which version of the R Markdown file created these results, you'll want to first commit it to the Git repo. If you're still working on the analysis, you can ignore this warning. When you're finished, you can run `wflow_publish` to commit the R Markdown file and build the HTML.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it's best to always run the code in an empty environment.
The command `set.seed(20241009)` was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 2284e7f. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use `wflow_publish` or `wflow_git_commit`). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rproj.user/
Unstaged changes:
Modified: QUAIL-Mex.Rproj
Modified: analysis/HBA2025_cleaning.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (`analysis/HBA2025_cleaning.Rmd`) and HTML (`docs/HBA2025_cleaning.html`) files. If you've configured a remote Git repository (see `?wflow_git_remote`), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 1b8f52e | Paloma | 2025-03-03 | cleaning |
---
title: "HBA2025_results"
author: "Paloma"
date: "2025-01-27"
output: workflowr::wflow_html
editor_options:
  chunk_output_type: console
---
This page contains the code used to obtain the results presented at the 2025 annual meeting of the HBA.
Abstract:
Coping with water insecurity: women’s strategies and emotional responses in Iztapalapa, Mexico City
Water insecurity in urban areas presents distinctive challenges, particularly in marginalized communities. While past studies have documented how households adapt to poor water services, many of these coping strategies come at a significant personal cost. Here we examine the coping strategies and emotional impacts of unreliable water services among 400 women in Iztapalapa, Mexico City. Data were collected through surveys in Fall 2022 and Spring 2023. We assessed household water access, water management practices, and emotional responses to local water services.
Results indicate that during acute water shortages, women can spend extended periods (several hours, or sometimes days) waiting for water trucks. Additionally, 57% of respondents reported feeling frustrated or angry about their water situation, while around 20% experienced family conflicts over water use or community-level conflicts around water management, often involving water vendors or government services.
This study offers one of the first in-depth examinations of how water insecurity specifically affects women in Iztapalapa, a densely populated region of Mexico City with severe water access challenges. The findings highlight the urgent need for policy interventions that address water insecurity with a gender-sensitive approach, recognizing the disproportionate burden placed on women as primary water managers in their households.
# Load necessary libraries
library(dplyr)
Attaching package: 'dplyr'
The following objects are masked from 'package:stats':
filter, lag
The following objects are masked from 'package:base':
intersect, setdiff, setequal, union
library(tidyr)
# Load the dataset
file_path <- "./data/01.SCREENING.csv"
df <- read.csv(file_path, stringsAsFactors = FALSE, na.strings = c("", "N/A", "NA", "pending"))
# Convert all character columns to NA where values match missing patterns
df <- df %>%
mutate(across(where(is.character), ~ ifelse(. %in% c("", "N/A", "NA", "pending"), NA, .)))
# Remove duplicate IDs and keep rows with at least 10 non-NA values
df_filtered <- df %>%
mutate(Non_NA_Count = rowSums(!is.na(.))) %>%
filter(!duplicated(ID) & Non_NA_Count >= 10) %>%
select(-Non_NA_Count) # Remove helper column
# Display the filtered dataset
print(dim(df_filtered)) # Show dimensions for confirmation
[1] 385 35
write.csv(df_filtered, "./analysis/Filtered_Screening.csv", row.names = FALSE)
d <- df_filtered
# Confirm total number of unique participants
cat("Total unique participants:", length(unique(d$ID)), "\n")
Total unique participants: 385
# Identify missing ID numbers in the first and second trip ranges
missing_first_trip <- setdiff(1:204, d$ID)
cat("Missing IDs in the first trip:", missing_first_trip, "\n")
Missing IDs in the first trip: 71 164
missing_second_trip <- setdiff(301:497, d$ID)
cat("Missing IDs in the second trip:", missing_second_trip, "\n")
Missing IDs in the second trip: 346 347 348 349 350 351 352 354 355 356 357 358 360 375
# Count participants within ID ranges
cat("Num of Participants first trip ≤ 250:", sum(d$ID <= 250, na.rm = TRUE), "\n")
Num of Participants first trip ≤ 250: 202
cat("Num of Participants second trip ≥ 250:", sum(d$ID >= 250, na.rm = TRUE), "\n")
Num of Participants second trip ≥ 250: 183
# Confirm total number of rows
cat("Total number of rows:", nrow(d), "\n")
Total number of rows: 385
# Set ID as row names (ensuring IDs are unique and non-NA)
if (!any(duplicated(d$ID)) && !any(is.na(d$ID))) {
rownames(d) <- as.character(d$ID)
} else {
warning("Some IDs are duplicated or NA, row names were not set.")
}
# Count and report rows containing NAs
num_na_rows <- sum(!complete.cases(d))
cat("Rows with missing values:", num_na_rows, "\n")
Rows with missing values: 385
# Identify columns that start with "SES" and end with "SC"
ses_sc_columns <- grep("^SES.*SC$", names(d), value = TRUE)
# Count total number of NA values in each SES_SC column
na_counts <- colSums(is.na(d[ses_sc_columns]))
# Count total NA values across all SES_SC columns
total_na_count <- sum(na_counts)
# Print results
cat("Number of NA values per SES_SC column:\n")
Number of NA values per SES_SC column:
print(na_counts)
SES_EDU_SC SES_BTHR_SC SES_CAR_SC SES_INT_SC SES_WRK_SC SES_BEDR_SC
14 16 5 2 2 4
cat("\nTotal NA values across all SES_SC columns:", total_na_count, "\n")
Total NA values across all SES_SC columns: 43
# Check if SES_SC columns exist before proceeding
if (length(ses_sc_columns) > 0) {
# Convert SES_SC columns to numeric (handles factors and character types)
d[ses_sc_columns] <- lapply(d[ses_sc_columns], function(x) as.numeric(as.character(x)))
# Sum SES_SC columns row-wise, keeping NA if any SES_SC value is missing
d <- d %>%
rowwise() %>%
mutate(SES_SC_Total = if_else(any(is.na(c_across(all_of(ses_sc_columns)))),
NA_real_,
sum(c_across(all_of(ses_sc_columns)), na.rm = TRUE))) %>%
ungroup()
# Display confirmation message
cat("Added SES_SC_Total column, maintaining NA where any SES_SC column has NA.\n")
} else {
cat("No SES_SC columns found in the dataset.\n")
}
Warning in FUN(X[[i]], ...): NAs introduced by coercion
Warning in FUN(X[[i]], ...): NAs introduced by coercion
Added SES_SC_Total column, maintaining NA where any SES_SC column has NA.
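The `rowwise()`/`c_across()` pattern above gives the right answer but loops over every row. A vectorized alternative is base `rowSums()`, which already returns `NA` whenever any addend is `NA` (the default `na.rm = FALSE`). A minimal sketch on toy values (the numbers below are hypothetical, not the real SES scores):

```r
library(dplyr)

# Toy stand-in for two of the SES_*_SC columns (hypothetical values)
toy <- tibble(
  SES_EDU_SC = c(31, NA, 35),
  SES_CAR_SC = c(18, 0, 18)
)
ses_cols <- c("SES_EDU_SC", "SES_CAR_SC")

# rowSums() propagates NA automatically, so it matches the
# rowwise()/if_else() logic without the per-row loop
toy <- toy %>%
  mutate(SES_SC_Total = rowSums(across(all_of(ses_cols))))
print(toy$SES_SC_Total)  # 49 NA 53
```

On a few hundred rows the difference is negligible, but the vectorized form is also easier to read.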
# Count the number of NAs in the SES_SC_Total column
num_na_ses_sc_total <- sum(is.na(d$SES_SC_Total))
# Print the count
cat("Number of NA values in SES_SC_Total:", num_na_ses_sc_total, "\n")
Number of NA values in SES_SC_Total: 36
print(d[is.na(d$SES_SC_Total),c("ID",ses_sc_columns)], n=50)
# A tibble: 36 × 7
ID SES_EDU_SC SES_BTHR_SC SES_CAR_SC SES_INT_SC SES_WRK_SC SES_BEDR_SC
<int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 8 31 NA 18 31 15 12
2 11 35 NA 0 31 46 23
3 12 35 47 18 31 61 NA
4 18 NA NA NA NA NA NA
5 22 31 NA NA NA NA NA
6 57 31 NA 0 31 15 6
7 58 43 NA 0 31 31 23
8 81 NA 47 0 0 31 23
9 85 31 47 18 31 61 NA
10 95 NA 24 18 0 61 12
11 101 NA 47 0 31 46 17
12 103 NA 24 18 31 31 17
13 107 NA 47 0 31 31 12
14 108 NA 24 18 31 31 12
15 109 35 NA 0 31 46 23
16 125 43 NA 0 31 31 23
17 127 73 NA 0 31 61 23
18 156 NA 0 0 0 31 6
19 182 31 NA 18 31 15 17
20 194 23 NA 0 0 15 6
21 198 NA 47 0 31 46 23
22 302 31 24 NA 0 31 6
23 312 31 0 NA 31 61 6
24 325 NA 0 0 0 15 6
25 362 43 NA 0 0 15 12
26 371 73 NA 0 31 31 12
27 387 NA 24 0 31 46 12
28 395 31 24 NA 31 31 12
29 397 23 NA 0 0 15 6
30 411 43 NA 0 31 15 6
31 417 35 NA 0 31 31 12
32 427 NA 0 0 0 15 6
33 461 NA 24 0 31 15 6
34 470 31 NA 0 31 46 12
35 486 NA 24 0 31 15 12
36 497 NA 47 0 31 61 17
# Calculate the percentage of missing values per column
missing_percentage <- colMeans(is.na(d)) * 100
# Get columns with more than 10% missing data
columns_to_remove <- names(missing_percentage[missing_percentage > 10])
length(columns_to_remove)
[1] 11
dim(d)
[1] 385 36
# Print the selected columns
cat("Columns with more than 10% missing values:\n")
Columns with more than 10% missing values:
print(columns_to_remove)
[1] "D_LOC" "D_LOC_CODE" "D_LOC_INFO" "HLTH_CPAIN_CAT"
[5] "HLTH_CDIS_CAT" "D_HH_SIZE_INFO" "D_EMP_DAYS" "W_HH_SIZE"
[9] "W_STORAGE" "D_INFO" "SES_INFO"
d <- d %>% select(-all_of(columns_to_remove))
dim(d)
[1] 385 25
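The two-step "compute missing percentage, then drop by name" logic above can also be written as a single `select(where(...))` call. A sketch with a hypothetical helper (`keep_under` is not part of the analysis code) and toy data:

```r
library(dplyr)

# Toy data: column a is 75% missing, column b is 25% missing
toy <- tibble(
  a = c(1, NA, NA, NA),
  b = c(1, 2, 3, NA)
)

# Hypothetical helper: keep only columns whose NA share is at or
# below the threshold
keep_under <- function(df, threshold = 0.10) {
  df %>% select(where(~ mean(is.na(.)) <= threshold))
}

cleaned <- keep_under(toy, threshold = 0.50)
print(names(cleaned))  # "b"
```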
head(d)
# A tibble: 6 × 25
ID D_YRBR D_AGE D_LOC_TIME HLTH_SMK HLTH_TTM HLTH_MED HLTH_CPAIN HLTH_CDIS
<int> <chr> <int> <chr> <chr> <chr> <chr> <chr> <chr>
1 1 1987 35 35 "No" No "No" No "Tuve co…
2 2 1990 32 32 "Yes" No "Lorata… No "No"
3 3 1992 30 8 "No" No "No" No "No"
4 4 1982 40 32 "No" No "No" No "No"
5 5 1976 46 45 "No" No "Yes,om… Yes "Dolor c…
6 6 1990 32 8 "a veces… no "no" No "no"
# ℹ 16 more variables: D_HH_SIZE <chr>, D_CHLD <chr>, D_EMP_TYPE <chr>,
# SES_EDU_CAT <chr>, SES_EDU_SC <dbl>, SES_BTHR_CAT <chr>, SES_BTHR_SC <dbl>,
# SES_CAR_CAT <chr>, SES_CAR_SC <dbl>, SES_INT_CAT <chr>, SES_INT_SC <dbl>,
# SES_WRK_CAT <chr>, SES_WRK_SC <dbl>, SES_BEDR_CAT <chr>, SES_BEDR_SC <dbl>,
# SES_SC_Total <dbl>
# Function to convert non-numeric values to "other"
convert_to_other <- function(x) {
ifelse(grepl("^[0-9]+$", x) | x == "other" | is.na(x), x, "other")
}
# Convert non-numeric entries to "other" in the children and household-size columns
selected_columns <- c("D_CHLD", "D_HH_SIZE")
d <- d %>%
mutate(across(all_of(selected_columns), ~ convert_to_other(as.character(.))))
# Specify the yes/no columns to recode
columns_to_transform <- c("HLTH_SMK", "HLTH_TTM", "HLTH_CPAIN")
# Convert "Yes" to "1", "No" to "0", and all other non-missing values to "other"
d <- d %>%
mutate(across(all_of(columns_to_transform), ~ case_when(
. == "Yes" ~ "1",
. == "No" ~ "0",
!is.na(.) ~ "other", # Assign "other" to all non-missing values that are not "yes" or "no"
TRUE ~ NA_character_
)))
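As a quick sanity check, the recode above can be exercised on a small hypothetical vector (the entries below are illustrative, not taken from the data); note that `case_when()` is strictly case-sensitive, so lowercase variants like "a veces" fall through to "other":

```r
library(dplyr)

# Hypothetical inputs covering each branch of the recode
x <- c("Yes", "No", "a veces", NA)
recoded <- case_when(
  x == "Yes" ~ "1",
  x == "No"  ~ "0",
  !is.na(x)  ~ "other",  # any other non-missing value
  TRUE       ~ NA_character_
)
print(recoded)  # "1" "0" "other" NA
```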
# Ensure the recoded 0/1/"other" columns are stored consistently as character
d <- d %>%
  mutate(across(where(~ all(. %in% c("0", "1", "other", NA_character_))), as.character))
print(d)
# A tibble: 385 × 25
ID D_YRBR D_AGE D_LOC_TIME HLTH_SMK HLTH_TTM HLTH_MED HLTH_CPAIN HLTH_CDIS
<int> <chr> <int> <chr> <chr> <chr> <chr> <chr> <chr>
1 1 1987 35 35 0 0 "No" 0 "Tuve co…
2 2 1990 32 32 1 0 "Lorata… 0 "No"
3 3 1992 30 8 0 0 "No" 0 "No"
4 4 1982 40 32 0 0 "No" 0 "No"
5 5 1976 46 45 0 0 "Yes,om… 1 "Dolor c…
6 6 1990 32 8 other other "no" 0 "no"
7 7 1997 25 <NA> other other "no" 1 "dolor c…
8 8 1985 37 10 0 other "no" 0 "No diab…
9 9 1982 40 40 0 0 "No" other "No"
10 10 1979 49 43 0 0 "No" 0 "No"
# ℹ 375 more rows
# ℹ 16 more variables: D_HH_SIZE <chr>, D_CHLD <chr>, D_EMP_TYPE <chr>,
# SES_EDU_CAT <chr>, SES_EDU_SC <dbl>, SES_BTHR_CAT <chr>, SES_BTHR_SC <dbl>,
# SES_CAR_CAT <chr>, SES_CAR_SC <dbl>, SES_INT_CAT <chr>, SES_INT_SC <dbl>,
# SES_WRK_CAT <chr>, SES_WRK_SC <dbl>, SES_BEDR_CAT <chr>, SES_BEDR_SC <dbl>,
# SES_SC_Total <dbl>
# Keep only the relevant, cleaned columns
columns_to_keep2 <- c("ID", "D_YRBR", "D_AGE", "D_HH_SIZE", "D_CHLD", "HLTH_SMK", "SES_SC_Total")
d <- d %>% select(all_of(columns_to_keep2))
write.csv(d, "./data/00.SCREENING_V2.csv")
file2 <- "./data/02.HWISE_PSS.csv"
file3 <- "./data/Chronic_pain_illness.csv"
df1 <- d
df2 <- read.csv(file2, stringsAsFactors = FALSE, na.strings = c("", "N/A", "NA", "pending"))
df3 <- read.csv(file3, stringsAsFactors = FALSE, na.strings = c("", "N/A", "NA", "pending"))
# Identify the common key column for merging
common_column <- "ID"
# Ensure the common column is of the same type in all datasets
df1[[common_column]] <- as.character(df1[[common_column]])
df2[[common_column]] <- as.character(df2[[common_column]])
df3[[common_column]] <- as.character(df3[[common_column]])
# Merge datasets sequentially by the common column
merged_df <- df1 %>%
full_join(df2, by = common_column) %>%
full_join(df3, by = common_column)
cols_to_remove <- c("HW_INFO", "PSS_INFO", "PSS_TOTAL", "HW_TOTAL")
merged_df <- merged_df %>% select(-all_of(cols_to_remove))
##### Calculate final HW score
# Identify columns that start with "HW_"
hw_columns <- grep("^HW_", names(merged_df), value = TRUE)
# Sum up all HW_ columns row-wise, keeping NA if any HW_ column has NA
merged_df <- merged_df %>%
rowwise() %>%
mutate(HW_TOTAL = if_else(any(is.na(c_across(all_of(hw_columns)))),
NA_real_,
sum(c_across(all_of(hw_columns)),
na.rm = TRUE))) %>%
ungroup()
# Print merged dataset dimensions
cat("Merged dataset dimensions:", dim(merged_df), "\n")
Merged dataset dimensions: 400 38
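Because `full_join()` keeps unmatched rows from both sides (which is why the merged data grows from 385 to 400 rows), it can be worth auditing which IDs appear in only one instrument. A sketch using `anti_join()` on toy IDs (the values are hypothetical, not the real participant IDs):

```r
library(dplyr)

# Toy join-key audit
screening <- tibble(ID = c("1", "2", "3"))
hwise     <- tibble(ID = c("2", "3", "4"))

# anti_join() keeps rows of x with no match in y -- useful for spotting
# participants present in one dataset but absent from the other
only_in_hwise <- anti_join(hwise, screening, by = "ID")
print(only_in_hwise$ID)  # "4"
```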
print(merged_df, n=20)
# A tibble: 400 × 38
ID D_YRBR D_AGE D_HH_SIZE D_CHLD HLTH_SMK SES_SC_Total SEASON W_WS_LOC
<chr> <chr> <int> <chr> <chr> <chr> <dbl> <chr> <chr>
1 1 1987 35 4 0 0 149 Oct-Dec WI
2 2 1990 32 12 2 1 196 Oct-Dec WI
3 3 1992 30 7 2 0 52 Oct-Dec WI
4 4 1982 40 4 1 0 214 Oct-Dec WI
5 5 1976 46 4 other 0 117 Oct-Dec WI
6 6 1990 32 6 1 other 220 Oct-Dec WI
7 7 1997 25 4 2 other 130 Oct-Dec WI
8 8 1985 37 4 2 0 NA Oct-Dec WI
9 9 1982 40 6 4 0 71 Oct-Dec WI
10 10 1979 49 3 1 0 117 Oct-Dec WI
11 11 1992 30 10 2 0 NA Oct-Dec WI
12 12 1996 26 17 2 0 NA Oct-Dec WI
13 13 1998 24 6 2 0 159 Oct-Dec WI
14 14 2000 22 6 0 0 132 Oct-Dec WI
15 15 1997 25 5 1 1 113 Oct-Dec WI
16 16 1990 32 5 other 0 104 Oct-Dec WI
17 17 1988 34 6 3 0 127 Oct-Dec WI
18 18 1978 44 <NA> <NA> <NA> NA Oct-Dec WI
19 19 1991 31 4 2 0 110 Oct-Dec WI
20 20 1992 30 10 2 0 199 Oct-Dec WI
# ℹ 380 more rows
# ℹ 29 more variables: HW_WORRY <int>, HW_INTERR <int>, HW_CLOTHES <int>,
# HW_PLANS <int>, HW_FOOD <int>, HW_HANDS <int>, HW_BODY <int>,
# HW_DRINK <int>, HW_ANGRY <int>, HW_SLEEP <int>, HW_NONE <int>,
# HW_SHAME <int>, PSS1 <int>, PSS2 <int>, PSS3 <int>, PSS4 <int>, PSS5 <int>,
# PSS6 <int>, PSS7 <int>, PSS8 <int>, PSS9 <int>, PSS10 <int>, PSS11 <int>,
# PSS12 <int>, PSS13 <int>, PSS14 <int>, HLTH_CPAIN_CAT <int>, …
# Save the merged dataset
write.csv(merged_df, "./data/Cleaned_Dataset_Screening_HWISE_PSS.csv", row.names = FALSE)
cat("Merged dataset saved as 'Cleaned_Dataset_Screening_HWISE_PSS.csv'.\n")
Merged dataset saved as 'Cleaned_Dataset_Screening_HWISE_PSS.csv'.
sessionInfo()
R version 4.4.2 (2024-10-31)
Platform: aarch64-apple-darwin20
Running under: macOS Sequoia 15.3.1
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.0
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
time zone: America/Detroit
tzcode source: internal
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] tidyr_1.3.1 dplyr_1.1.4 workflowr_1.7.1
loaded via a namespace (and not attached):
[1] jsonlite_1.8.9 compiler_4.4.2 promises_1.3.2 tidyselect_1.2.1
[5] Rcpp_1.0.14 stringr_1.5.1 git2r_0.35.0 callr_3.7.6
[9] later_1.4.1 jquerylib_0.1.4 yaml_2.3.10 fastmap_1.2.0
[13] R6_2.5.1 generics_0.1.3 knitr_1.49 tibble_3.2.1
[17] rprojroot_2.0.4 bslib_0.9.0 pillar_1.10.1 rlang_1.1.5
[21] utf8_1.2.4 cachem_1.1.0 stringi_1.8.4 httpuv_1.6.15
[25] xfun_0.50 getPass_0.2-4 fs_1.6.5 sass_0.4.9
[29] cli_3.6.3 withr_3.0.2 magrittr_2.0.3 ps_1.8.1
[33] digest_0.6.37 processx_3.8.5 rstudioapi_0.17.1 lifecycle_1.0.4
[37] vctrs_0.6.5 evaluate_1.0.3 glue_1.8.0 whisker_0.4.1
[41] purrr_1.0.4 rmarkdown_2.29 httr_1.4.7 tools_4.4.2
[45] pkgconfig_2.0.3 htmltools_0.5.8.1