Last updated: 2021-04-02
Checks: 6 passed, 1 warning
Knit directory: fa_sim_cal/
This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
The R Markdown is untracked by Git. To know which version of the R Markdown file created these results, you’ll want to first commit it to the Git repo. If you’re still working on the analysis, you can ignore this warning. When you’re finished, you can run wflow_publish to commit the R Markdown file and build the HTML.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20201104) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version ec5d588. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: .tresorit/
Ignored: _targets/
Ignored: data/VR_20051125.txt.xz
Ignored: output/blk_char.fst
Ignored: output/ent_blk.fst
Ignored: output/ent_cln.fst
Ignored: output/ent_raw.fst
Ignored: renv/library/
Ignored: renv/local/
Ignored: renv/staging/
Untracked files:
Untracked: analysis/m_01_6_check_resid.Rmd
Unstaged changes:
Deleted: R/file_paths.R
Modified: R/functions.R
Deleted: R/setup_01.R
Deleted: R/setup_project.R
Deleted: analysis/01-1_get_data.Rmd
Deleted: analysis/01-2_check_admin.Rmd
Modified: analysis/index.Rmd
Deleted: analysis/m_02_check_entity_data.Rmd
Modified: renv.lock
Modified: renv/activate.R
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
There are no past versions. Publish this analysis with wflow_publish() to start tracking its development.
# NOTE this notebook can be run manually or automatically by {targets}
# So load the packages required by this notebook here
# rather than relying on _targets.R to load them.
# Set up the project environment, because {workflowr} knits each Rmd file
# in a new R session, and doesn't execute the project .Rprofile
library(targets) # access data from the targets cache
library(tictoc) # capture execution time
library(here) # construct file paths relative to project root
library(fs) # file system operations
library(vroom) # fast reading of delimited text files
library(tibble) # enhanced data frames
library(stringr) # string matching
library(skimr) # compact summary of each variable
library(lubridate) # date parsing
Attaching package: 'lubridate'
The following objects are masked from 'package:base':
date, intersect, setdiff, union
library(forcats) # manipulation of factors
library(ggplot2) # graphics
# start the execution time clock
tictoc::tic("Computation time (excl. render)")
# Get the path to the raw entity data file
# This is a target managed by {targets}
f_entity_raw_tsv <- tar_read(c_raw_entity_data_file)
The aim of this set of meta notebooks is to work out how to read the raw entity data and get it sufficiently neatened so that we can construct standardised names and modelling features without needing any further neatening. To be clear, the target (c_raw_entity_data) corresponding to the objective of this set of notebooks is the neatened raw data, before constructing any modelling features.
This notebook documents the checking of the “residential” variables for any issues that need fixing. These are the residential address and the phone number (which is tied to the address if the telephone is a land-line). The subsequent notebooks in this set will check the other variables for any issues that need fixing.
Regardless of whether there are any issues that need to be fixed, the analyses here may inform our use of these variables in later analyses.
We have no intention of using the residence variables as predictors for entity resolution. However, they may be of use for manually checking the results of entity resolution. Consequently, the checking done here is minimal.
Define the residential variables:
unit_num - Residential address unit number
house_num - Residential address street number
half_code - Residential address street number half code
street_dir - Residential address street direction (N, S, E, W, NE, SW, etc.)
street_name - Residential address street name
street_type_cd - Residential address street type (RD, ST, DR, BLVD, etc.)
street_sufx_cd - Residential address street suffix (BUS, EXT, and directional)
res_city_desc - Residential address city name
state_cd - Residential address state code
zip_code - Residential address zip code
area_cd - Area code for phone number
phone_num - Telephone number
vars_resid <- c(
  "unit_num", "house_num",
  "half_code", "street_dir", "street_name", "street_type_cd", "street_sufx_cd",
  "res_city_desc", "state_cd", "zip_code",
  "area_cd", "phone_num"
)
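As a sketch of how these variables could be profiled together (the tiny data frame here is a stand-in for the real entity data, and only a subset of vars_resid is used for brevity), the fill rate of each residential variable can be computed directly:

```r
# Sketch: compute the fill rate (skimr's complete_rate) for each residential
# variable. The toy data frame is a hypothetical stand-in for the real data.
vars_resid <- c("unit_num", "house_num", "half_code")  # subset for illustration

d_toy <- data.frame(
  unit_num  = c("A", NA, NA, NA),          # mostly missing, like the real data
  house_num = c("105", "0", "1200", "12"), # always filled
  half_code = c(NA, NA, "B", NA),          # almost never filled
  stringsAsFactors = FALSE
)

# Proportion of non-missing values per variable
fill_rate <- colMeans(!is.na(d_toy[vars_resid]))
print(fill_rate)
```

On the real data this reproduces the complete_rate column of the skim() output below.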
Read the raw entity data file using the previously defined functions raw_entity_data_read(), raw_entity_data_excl_status(), raw_entity_data_excl_test(), raw_entity_data_drop_novar(), raw_entity_data_parse_dates(), and raw_entity_data_drop_cancel_dt().
# Show the data file name
fs::path_file(f_entity_raw_tsv)
[1] "VR_20051125.txt.xz"
d <- raw_entity_data_read(f_entity_raw_tsv) %>%
raw_entity_data_excl_status() %>%
raw_entity_data_excl_test() %>%
raw_entity_data_drop_novar() %>%
raw_entity_data_parse_dates() %>%
raw_entity_data_drop_cancel_dt()
dim(d)
[1] 4099699 24
unit_num - Residential address unit number
house_num - Residential address street number
half_code - Residential address street number half code
d %>%
  dplyr::select(unit_num, house_num, half_code) %>%
  skimr::skim()
Name | Piped data |
Number of rows | 4099699 |
Number of columns | 3 |
_______________________ | |
Column type frequency: | |
character | 3 |
________________________ | |
Group variables | None |
Variable type: character
skim_variable | n_missing | complete_rate | min | max | empty | n_unique | whitespace |
---|---|---|---|---|---|---|---|
unit_num | 3755239 | 0.08 | 1 | 7 | 0 | 16116 | 0 |
house_num | 0 | 1.00 | 1 | 6 | 0 | 27534 | 0 |
half_code | 4088996 | 0.00 | 1 | 1 | 0 | 41 | 0 |
We are mostly interested in how much these fields are used, so concentrate on complete_rate. All these variables are character variables, so min and max refer to the minimum and maximum lengths of the values as character strings. The number of unique values, n_unique, is also of interest.
unit_num - 8% filled
house_num - 100% filled
half_code - 0.3% filled
unit_num
Look at some examples grouped by length
d %>%
dplyr::select(unit_num) %>%
dplyr::filter(!is.na(unit_num)) %>%
dplyr::mutate(length = stringr::str_length(unit_num)) %>%
dplyr::group_by(length) %>%
dplyr::count(unit_num) %>% # count occurrences of each unique value
dplyr::slice_max(order_by = n, n = 5) %>%
knitr::kable()
length | unit_num | n |
---|---|---|
1 | A | 28214 |
1 | B | 26535 |
1 | C | 14240 |
1 | D | 12452 |
1 | E | 7956 |
2 | 10 | 2090 |
2 | 11 | 1878 |
2 | 12 | 1844 |
2 | 14 | 1390 |
2 | 13 | 1378 |
3 | 102 | 2579 |
3 | 101 | 2499 |
3 | 103 | 2296 |
3 | 201 | 2205 |
3 | 104 | 2201 |
4 | APTB | 194 |
4 | APTA | 185 |
4 | APTC | 106 |
4 | APT2 | 73 |
4 | APT4 | 73 |
5 | APT-A | 813 |
5 | APT-B | 680 |
5 | APT-C | 165 |
5 | APT-D | 119 |
5 | APT-1 | 109 |
6 | APT-1B | 30 |
6 | APT-2B | 24 |
6 | APT-1A | 22 |
6 | APT-4A | 20 |
6 | APT-4B | 19 |
7 | APT 205 | 6 |
7 | APT-204 | 6 |
7 | CONOVER | 6 |
7 | APT-106 | 5 |
7 | APT-203 | 5 |
7 | APT-302 | 5 |
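The table above shows the same kind of unit recorded inconsistently ("A", "APTA", "APT-A"). Although there is no intention of using these variables as predictors, a hypothetical normalisation for manual checking could strip the APT prefix (separator optional):

```r
# Sketch: strip a leading "APT", optionally followed by "-" or a space, from
# unit numbers. Values without the prefix pass through unchanged.
unit_raw <- c("A", "APTA", "APT-A", "APT 205", "102")
unit_norm <- sub("^APT[- ]?", "", unit_raw)
print(unit_norm)  # "A" "A" "A" "205" "102"
```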
house_num
Look at some examples grouped by length
d %>%
dplyr::select(house_num) %>%
dplyr::filter(!is.na(house_num)) %>%
dplyr::mutate(length = stringr::str_length(house_num)) %>%
dplyr::group_by(length) %>%
dplyr::count(house_num) %>% # count occurrences of each unique value
dplyr::slice_max(order_by = n, n = 5) %>%
knitr::kable()
length | house_num | n |
---|---|---|
1 | 0 | 36335 |
1 | 1 | 8601 |
1 | 5 | 5488 |
1 | 6 | 5356 |
1 | 4 | 5143 |
2 | 10 | 5649 |
2 | 15 | 5332 |
2 | 11 | 4730 |
2 | 20 | 4243 |
2 | 12 | 4115 |
3 | 105 | 18195 |
3 | 104 | 17159 |
3 | 100 | 15605 |
3 | 102 | 15147 |
3 | 103 | 15070 |
4 | 1000 | 3826 |
4 | 1200 | 3674 |
4 | 1005 | 3238 |
4 | 1001 | 3158 |
4 | 1801 | 3064 |
5 | 10400 | 238 |
5 | 10000 | 230 |
5 | 30005 | 229 |
5 | 10001 | 188 |
5 | 10301 | 183 |
6 | 100000 | 9 |
6 | 100001 | 1 |
6 | 102099 | 1 |
6 | 103580 | 1 |
6 | 601708 | 1 |
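In the table above, "0" is by far the most common single-character house number (36335 rows), which may be a missing-data placeholder rather than a real street number. A hypothetical check could count such values:

```r
# Sketch: count house numbers recorded as the literal string "0", a value that
# may well be a placeholder for "unknown". Toy vector stands in for the data.
house_num <- c("105", "0", "0", "1200", NA)
n_zero <- sum(house_num == "0", na.rm = TRUE)
print(n_zero)  # 2
```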
half_code
d %>%
dplyr::select(half_code) %>%
dplyr::filter(!is.na(half_code)) %>%
dplyr::count(half_code) %>% # count occurrences of each unique value
dplyr::arrange(desc(n)) %>%
# knitr::kable() # strange multibyte character kills kable()
print(n = Inf)
# A tibble: 41 x 2
half_code n
<chr> <int>
1 "A" 3313
2 "B" 2725
3 "\xbd" 1730
4 "C" 948
5 "D" 569
6 "E" 273
7 "F" 214
8 "H" 174
9 "G" 154
10 "J" 78
11 "K" 58
12 "L" 48
13 "M" 48
14 "1" 44
15 "S" 38
16 "I" 36
17 "2" 35
18 "N" 33
19 "+" 32
20 "W" 24
21 "P" 21
22 "R" 13
23 "T" 13
24 "4" 10
25 "/" 8
26 "Q" 7
27 "5" 6
28 "6" 6
29 "O" 6
30 "V" 6
31 "`" 5
32 "3" 5
33 "7" 5
34 "8" 4
35 "X" 4
36 "U" 3
37 "\xab" 3
38 "-" 1
39 "0" 1
40 "9" 1
41 "Y" 1
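The "\xbd" value above is probably a latin1-encoded "½", and it is presumably the "strange multibyte character" that kills kable(). As a sketch, values containing bytes outside printable ASCII can be flagged before rendering (the toy vector mimics the half_code values):

```r
# Sketch: flag values containing bytes outside the printable ASCII range
# (0x20-0x7e). useBytes = TRUE avoids encoding errors on invalid bytes.
half_code <- c("A", "B", "\xbd", "+")
non_ascii <- grepl("[^ -~]", half_code, useBytes = TRUE)
print(which(non_ascii))  # 3
```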
half_code appears to indicate where there are multiple dwellings on one street-numbered block. Typical values would be A, B, …
d %>%
dplyr::select(starts_with("street_")) %>%
skimr::skim()
Name | Piped data |
Number of rows | 4099699 |
Number of columns | 4 |
_______________________ | |
Column type frequency: | |
character | 4 |
________________________ | |
Group variables | None |
Variable type: character
skim_variable | n_missing | complete_rate | min | max | empty | n_unique | whitespace |
---|---|---|---|---|---|---|---|
street_dir | 3812561 | 0.07 | 1 | 2 | 0 | 8 | 0 |
street_name | 7 | 1.00 | 1 | 30 | 0 | 83244 | 0 |
street_type_cd | 154594 | 0.96 | 2 | 4 | 0 | 119 | 0 |
street_sufx_cd | 3941004 | 0.04 | 1 | 3 | 0 | 11 | 0 |
We are mostly interested in how much these fields are used, so concentrate on complete_rate. All these variables are character variables, so min and max refer to the minimum and maximum lengths of the values as character strings. The number of unique values, n_unique, is also of interest.
street_dir
street_name
street_type_cd
street_sufx_cd
knitr::knit_exit()