class: center, middle, inverse, title-slide .title[ # Data Literacy: Introduction to R ] .subtitle[ ## Data Wrangling - Part 2 ] .author[ ### Veronika Batzdorfer ] .date[ ### 2025-05-24 ] --- layout: true --- ## Data wrangling continued 🤠 Before, we focused on the structure of our data by **selecting**, **renaming**, and **relocating** columns and **filtering** and **arranging** rows. In this part, we focus on: - creating and computing new variables (in various ways) - recoding the values of a variable - dealing with missing values --- ## Creating & transforming variables We can transform a variable by changing its data type. ``` r stackoverflow_survey_single_response$country <- as.factor(stackoverflow_survey_single_response$country) typeof(stackoverflow_survey_single_response$country) ``` ``` ## [1] "integer" ``` *Note*: `typeof()` still reports `"integer"` because factors are stored internally as integers; `class()` would show that the variable is now a factor. --- ## Creating & transforming variables The `dplyr` package provides `mutate()`, which you can use to create a new variable that is a constant, ... ``` r library(conflicted) library(tidyverse) # Specify which version of a function to use when there's a conflict conflict_prefer("filter", "dplyr") conflict_prefer("mutate", "dplyr") #-----------mutate--------------------------------------- tuesdata_2024 <- stackoverflow_survey_single_response %>% dplyr::mutate(year = 2024) tuesdata_2024 %>% dplyr::select(year) %>% head() ``` ``` ## # A tibble: 6 × 1 ## year ## <dbl> ## 1 2024 ## 2 2024 ## 3 2024 ## 4 2024 ## 5 2024 ## 6 2024 ``` --- ## Creating & transforming variables ... applies a simple transformation to an existing variable, ... ``` r tuesdata_2024 <- stackoverflow_survey_single_response %>% dplyr::mutate(freq_new = so_part_freq - 1) tuesdata_2024 %>% dplyr::select(starts_with("freq")) %>% head() ``` ``` ## # A tibble: 6 × 1 ## freq_new ## <dbl> ## 1 NA ## 2 5 ## 3 5 ## 4 NA ## 5 5 ## 6 5 ``` --- ## Creating & transforming variables ... or changes the data type of an existing variable.
``` r tuesdata_2024 <- tuesdata_2024 %>% dplyr::mutate(age_fac = as.factor(age)) tuesdata_2024 %>% dplyr::select(age, age_fac) %>% glimpse() ``` ``` ## Rows: 65,437 ## Columns: 2 ## $ age <dbl> 8, 3, 4, 1, 1, 8, 3, 1, 4, 3, 3, 4, 3, 3, 2, 4, … ## $ age_fac <fct> 8, 3, 4, 1, 1, 8, 3, 1, 4, 3, 3, 4, 3, 3, 2, 4, … ``` --- ## Recoding values For example, we want to recode the item on organisation size (`org_size`) so that higher values represent a larger number of employees. For that purpose, we can combine the two `dplyr` functions `mutate()` and `recode()`. See `qname_levels_single_response_crosswalk`. ``` r stackoverflow_survey_single_response <- stackoverflow_survey_single_response %>% dplyr::mutate(org_size_R = dplyr::recode(org_size, `5` = 1, # `old value` = new value `2` = 2, `6` = 3, `4` = 4, `8` = 5, `1` = 6, `7` = 7, `3` = 8, `9` = 99)) ``` --- ## Wrangling missing values When we prepare our data for analysis, there are generally two things we might want/have to do with regard to missing values: - define specific values as missings (i.e., set them to `NA`) - recode `NA` values into something else --- ## The missings of `naniar` 🦁 The `naniar` package is useful for handling missing data in `R`. We can use the function `replace_with_na_all()` to code every value in our data set that is < 0 as `NA`. ``` r library(naniar) tuesdata <- stackoverflow_survey_single_response %>% replace_with_na_all(condition = ~.x < 0) ``` Using the functions `replace_with_na_at()` and `replace_with_na_if()`, we can also recode values as `NA` for a selection or specific type of variables (e.g., all numeric variables). --- ## Dealing with missing values in `R` As with everything in `R`, there are also many online resources on dealing with missing data. A fairly new and interesting one is the [chapter on missing values in the work-in-progress 2nd edition of *R for Data Science*](https://r4ds.hadley.nz/missing-values.html). There are also various packages for different imputation techniques.
A popular one is the [`mice` package](https://amices.org/mice/). However, we won't cover the topic of imputation in this course. --- ## Excluding cases with missing values You can use `!is.na(variable_name)` with your filtering method of choice. However, there are also methods for only keeping complete cases (i.e., cases without missing data). The `base R` function for that is `na.omit()`. ``` r tuesdata_complete <- na.omit(stackoverflow_survey_single_response) nrow(tuesdata_complete) ``` ``` ## [1] 10166 ``` *NB*: Of course, the number of excluded/included cases depends on how you have defined your missing values before. --- ## Excluding cases with missing values The `tidyverse` equivalent of `na.omit()` is `drop_na()` from the `tidyr` package. You can use this function to remove cases that have missing values on any variable in a data set or only on specific variables. ``` r stackoverflow_survey_single_response %>% drop_na() %>% nrow() ``` ``` ## [1] 10166 ``` ``` r stackoverflow_survey_single_response %>% drop_na(ai_threat) %>% nrow() ``` ``` ## [1] 44689 ``` *NB*: Of course, the number of excluded/included cases depends on how you have defined your missing values before. --- ## Recode `NA` into something else An easy option for replacing `NA` with another value for a single variable is the `replace_na()` function from the `tidyr` package in combination with `mutate()`. ``` r tuesdata <- stackoverflow_survey_single_response %>% mutate(ai_threat = replace_na(ai_threat, -99)) ``` --- ## Conditional variable transformation Sometimes we need to make the values of a new variable conditional on the values of (multiple) other variables. Such cases require conditional transformations. --- ## Simple conditional transformation The simplest version of a conditional variable transformation is using an `ifelse()` statement.
``` r stackoverflow_survey_single_response <- stackoverflow_survey_single_response %>% dplyr::mutate(ed_char = ifelse(ed_level == 1, "professional", "beginner")) stackoverflow_survey_single_response %>% dplyr::select(ed_level, ed_char) %>% dplyr::sample_n(5) # randomly sample 5 cases from the df ``` ``` ## # A tibble: 5 × 2 ## ed_level ed_char ## <dbl> <chr> ## 1 3 beginner ## 2 3 beginner ## 3 2 beginner ## 4 6 beginner ## 5 2 beginner ``` .small[ *Note*: A more versatile option for creating dummy variables is the [`fastDummies` package](https://jacobkap.github.io/fastDummies/). ] --- ## Advanced conditional transformation For more flexible (or complex) conditional transformations, the `case_when()` function from `dplyr` is a powerful tool. ``` r stackoverflow_survey_single_response <- stackoverflow_survey_single_response %>% dplyr::mutate(ed_level_cat = dplyr::case_when( dplyr::between(ed_level, 2, 4) ~ "beginner", dplyr::between(ed_level, 0, 1) ~ "expert", ed_level > 5 ~ "other" )) stackoverflow_survey_single_response %>% dplyr::select(ed_level, ed_level_cat) %>% dplyr::sample_n(5) ``` ``` ## # A tibble: 5 × 2 ## ed_level ed_level_cat ## <dbl> <chr> ## 1 NA <NA> ## 2 2 beginner ## 3 5 <NA> ## 4 2 beginner ## 5 1 expert ``` --- ## `dplyr::case_when()` A few things to note about `case_when()`: - you can have multiple conditions per value - conditions are evaluated *consecutively*, and the first condition that matches determines the value - when none of the specified conditions are met for an observation, by default, the new variable will have a missing value **`NA`** for that case - if you want some other value in the new variable when the specified conditions are not met, you need to add .highlight[`TRUE ~ value`] as the last argument of the `case_when()` call - to explore the full range of options for `case_when()` check out its [online documentation](https://dplyr.tidyverse.org/reference/case_when.html) --- ## Recode values across variables with `across()` We can also use .highlight[`across()`] to recode multiple
variables. Here, we want to recode the items measuring trust so that they reflect distrust instead. In this case, we probably want to create new variables. We can do so by using the `.names` argument of the `across()` function. ``` r stackoverflow_survey_single_response <- stackoverflow_survey_single_response %>% dplyr::mutate( #-- apply the same operation across multiple columns-- across( ai_acc:ai_complex, #-- recode values within selected columns (reverse likert scale)-- ~dplyr::recode( .x, # .x is current column value `5` = 1, # old value 5 becomes 1 `4` = 2, `3` = 3, `2` = 4, `1` = 5), #-- name the new columns by appending _R to the original name-- .names = "{.col}_R")) ``` --- ## Other options for using `across()` It can be used with selection conditions (such as `where(is.numeric)`) or the `dplyr` selection helpers we encountered in the previous session (such as `starts_with()`) to apply transformations to variables of a specific type or meeting some other criteria (as well as to all variables in a data set). To explore more options, you can check the [documentation for the `across()` function](https://dplyr.tidyverse.org/reference/across.html). --- ## Aggregate variables What is important to keep in mind here is that `dplyr` operations are applied **per column**. This is a common source of confusion and errors, as creating aggregate variables requires transformations to be applied per row (respondent). --- ## Aggregate variables The most common types of aggregate variables are sum and mean scores.<sup>1</sup> An easy way to create those is combining the `base R` functions `rowSums()` and `rowMeans()` with `across()` from `dplyr`. .small[ .footnote[ [1] Of course, `R` offers many other options for dimension reduction, such as PCA, factor analysis, etc. However, we won't cover those in this course. ] ] --- ## Mean score In this example, we create a mean score for trust in AI in the workflow.
``` r stackoverflow_survey_single_response <- stackoverflow_survey_single_response %>% dplyr::mutate(mean_ai_trust = rowMeans(dplyr::across( ai_acc:ai_threat), na.rm = TRUE)) ``` --- ## More options for aggregate variables If you want to use other functions than just `mean()` or `sum()` for creating aggregate variables, you need to use the [`rowwise()` function from `dplyr`](https://dplyr.tidyverse.org/articles/rowwise.html) in combination with [`c_across()`](https://dplyr.tidyverse.org/reference/c_across.html), which is a special variant of the `dplyr` function `across()` for row-wise operations/aggregations. --- ## Outlook: Other variable types In the examples in this session, we almost exclusively worked with numeric variables. There are, however, other variable types that occur frequently in data sets in the social sciences: - factors - strings - times and dates --- ## Factors Factors are a special type of variable in `R` that represent categorical data. Before `R` version `4.0.0`, the default in `base R` was that all character variables were imported as factors (`stringsAsFactors = TRUE`). Internally, factors are stored as integers, but they have (character) labels (so-called *levels*) associated with them. Notably, as factors are a native data type in `R`, they do not cause the issues that labelled variables often do (as labels represent an additional attribute, making them a special class that many functions cannot work with). --- ## Factors Factors in `R` can be **unordered**, in which case they are similar to **nominal** level variables, or **ordered**, in which case they are similar to **ordinal** level variables. Using factors can be necessary for certain statistical analyses and plots (e.g., if you want to compare groups). Working with factors in `R` is a big topic, and we will only briefly touch upon it in this workshop.
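The distinction can be illustrated with a minimal sketch (the example vectors here are made up for demonstration):

``` r
# An unordered factor (nominal): levels have no ranking
animals <- factor(c("cat", "dog", "cat", "bird"))
levels(animals)  # levels are sorted alphabetically by default

# An ordered factor (ordinal): levels have an explicit ranking
sizes <- factor(c("small", "large", "medium"),
                levels = c("small", "medium", "large"),
                ordered = TRUE)
sizes[1] < sizes[2]  # comparisons like < and > work for ordered factors
```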
For a more in-depth discussion of factors in `R` you can, e.g., have a look at the [chapter on factors](https://r4ds.had.co.nz/factors.html) in *R for Data Science*. --- ## Factors 4 🐱s There are many functions for working with factors in `base R`, such as `factor()` or `as.factor()`. However, a generally more versatile and easier-to-use option is the [`forcats` package](https://forcats.tidyverse.org/) from the `tidyverse`. <img src="data:image/png;base64,#https://forcats.tidyverse.org/logo.png" width="25%" style="display: block; margin: auto;" /> *Note*: There is a good [introduction to working with factors using `forcats` by Vebash Naidoo](https://sciencificity-blog.netlify.app/posts/2021-01-30-control-your-factors-with-forcats/) and *RStudio* also offers a [`forcats` cheatsheet](https://raw.githubusercontent.com/rstudio/cheatsheets/master/factors.pdf). --- ## Unordered factor Using the `recode_factor()` function (together with `mutate()`) from `dplyr`, we can create a factor from a numeric (or a character) variable. ``` r tuesdata_ai_threat <- stackoverflow_survey_single_response %>% dplyr::mutate(ai_threat_fac = dplyr::recode_factor(ai_threat, `1` = "I'm not sure", `2` = "No", `3` = "Yes")) tuesdata_ai_threat %>% dplyr::select(ai_threat, ai_threat_fac) %>% dplyr::filter(!is.na(ai_threat)) %>% dplyr::sample_n(5) ``` ``` ## # A tibble: 5 × 2 ## ai_threat ai_threat_fac ## <dbl> <fct> ## 1 1 I'm not sure ## 2 2 No ## 3 1 I'm not sure ## 4 1 I'm not sure ## 5 2 No ``` --- ## Outlook: Working with strings in `R` As stated before, we won't be able to cover the specifics of working with strings in `R` in this course. However, it may be good to know that the `tidyverse` package [`stringr`](https://stringr.tidyverse.org/index.html) offers a collection of convenient functions for working with strings. 
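As a small teaser (the example strings below are invented for illustration), three commonly used `stringr` verbs:

``` r
library(stringr)

langs <- c("R", "Python", "Rust")
str_detect(langs, "^R")             # which elements start with "R"?
str_replace(langs, "^R$", "GNU R")  # replace exact matches only
str_to_lower(langs)                 # convert to lower case
```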
<img src="data:image/png;base64,#https://stringr.tidyverse.org/logo.png" width="25%" style="display: block; margin: auto;" /> The `stringr` package provides a good [introduction vignette](https://cran.r-project.org/web/packages/stringr/vignettes/stringr.html), and the book *R for Data Science* has a whole section on [strings with `stringr`](https://r4ds.had.co.nz/strings.html). --- ## Sidenote: Regular expressions If you want (or have) to work with [regular expressions](https://en.wikipedia.org/wiki/Regular_expression), there are several packages that can facilitate this process by allowing you to create regular expressions in a (more) human-readable way: e.g., [`rex`](https://github.com/r-lib/rex), [`RVerbalExpressions`](https://rverbalexpressions.netlify.app/index.html), or [`rebus`](https://github.com/richierocks/rebus). --- ## Outlook: Times and dates [Working with times and dates can be quite a pain in programming](https://www.youtube.com/watch?v=-5wpm-gesOY) (as well as in data analysis). Luckily, there are a couple of neat options for working with times and dates in `R` that can reduce the headache. --- ## Outlook: Times and dates If you want/need to work with times and dates in `R`, you may want to look into the [`lubridate` package](https://lubridate.tidyverse.org/), which is part of the `tidyverse`, and for which *RStudio* also provides a [cheatsheet](https://raw.githubusercontent.com/rstudio/cheatsheets/master/lubridate.pdf). <img src="data:image/png;base64,#https://lubridate.tidyverse.org/logo.png" width="25%" style="display: block; margin: auto;" /> *Note*: If you work with time series data, it is also worth checking out the [`tsibble` package](https://tsibble.tidyverts.org/) for your wrangling tasks.
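To give a first impression (the date strings below are invented), `lubridate`'s parsing helpers encode the expected input format in the function name:

``` r
library(lubridate)

ymd("2025-05-24")         # year-month-day
dmy("24.05.2025")         # day-month-year; parses to the same Date
month(ymd("2025-05-24"))  # extract components, e.g. the month: 5
year(ymd("2025-05-24"))   # ... or the year: 2025
```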
--- ## Dates ``` r library(lubridate) library(nycflights13) flights_df <- flights %>% dplyr::select(year, month, day, hour, minute, dep_delay) %>% dplyr::mutate(depart_time = make_datetime(year, month, day, hour, minute)) flights_df %>% head() ``` ``` ## # A tibble: 6 × 7 ## year month day hour minute dep_delay depart_time ## <int> <int> <int> <dbl> <dbl> <dbl> <dttm> ## 1 2013 1 1 5 15 2 2013-01-01 05:15:00 ## 2 2013 1 1 5 29 4 2013-01-01 05:29:00 ## 3 2013 1 1 5 40 2 2013-01-01 05:40:00 ## 4 2013 1 1 5 45 -1 2013-01-01 05:45:00 ## 5 2013 1 1 6 0 -6 2013-01-01 06:00:00 ## 6 2013 1 1 5 58 -4 2013-01-01 05:58:00 ``` --- ## Rounding ``` r library(dplyr) flights_df %>% dplyr::count(week = floor_date(depart_time, "week")) %>% ggplot(aes(week, n)) + geom_line() #-- categorise flights by departure delay: dep_delay -- flights_df %>% mutate( delay_type = ifelse(dep_delay < -10, "early", "delay"), # Categorize delay week = floor_date(depart_time, "week") # Extract week ) %>% count(week, delay_type) ``` ``` ## # A tibble: 159 × 3 ## week delay_type n ## <dttm> <chr> <int> ## 1 2012-12-30 00:00:00 delay 4261 ## 2 2012-12-30 00:00:00 early 42 ## 3 2012-12-30 00:00:00 <NA> 31 ## 4 2013-01-06 00:00:00 delay 5918 ## 5 2013-01-06 00:00:00 early 167 ## 6 2013-01-06 00:00:00 <NA> 33 ## 7 2013-01-13 00:00:00 delay 5852 ## 8 2013-01-13 00:00:00 early 127 ## 9 2013-01-13 00:00:00 <NA> 97 ## 10 2013-01-20 00:00:00 delay 5800 ## # ℹ 149 more rows ```
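Besides `floor_date()`, `lubridate` also provides `ceiling_date()` and `round_date()`. A minimal sketch with an invented timestamp:

``` r
library(lubridate)

x <- ymd_hms("2013-01-01 05:45:30")
floor_date(x, "hour")    # rounds down to 05:00:00
ceiling_date(x, "hour")  # rounds up to 06:00:00
round_date(x, "hour")    # rounds to the nearest hour: 06:00:00
```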