Last updated: 2024-07-17
Checks: 7 passed, 0 failed
Knit directory: muse/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20200712) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 282646f. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rproj.user/
Ignored: r_packages_4.3.0/
Ignored: r_packages_4.3.2/
Ignored: r_packages_4.3.3/
Ignored: r_packages_4.4.0/
Untracked files:
Untracked: analysis/breast_cancer.Rmd
Untracked: data/293t/
Untracked: data/dataset.h5ad
Untracked: data/jurkat/
Untracked: data/jurkat_293t/
Untracked: data/lung_bcell.rds
Untracked: data/pbmc3k.csv
Untracked: data/pbmc3k.csv.gz
Untracked: data/pbmc3k/
Untracked: data/seattle-library-checkouts.csv
Untracked: data/seattle-library-checkouts/
Note that any generated files, e.g. HTML, PNG, CSS, are not included in this status report because it is OK for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/arrow.Rmd) and HTML (docs/arrow.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 282646f | Dave Tang | 2024-07-17 | Using parquet files |
html | 1d3b697 | Dave Tang | 2024-07-17 | Build site. |
Rmd | 1b02e91 | Dave Tang | 2024-07-17 | Getting started with the R arrow package |
The {arrow} package provides an interface to the Arrow C++ library.
Apache Arrow is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware.
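As a quick illustration of the columnar in-memory format, an Arrow Table can be built directly from R vectors (a minimal sketch; the column names are arbitrary):

# build an in-memory Arrow Table; each column is stored
# in Arrow's columnar format
tbl <- arrow::arrow_table(id = 1:3, label = c("a", "b", "c"))
tbl$schema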
An example file was downloaded using curl.
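The download itself might look something like the sketch below; the URL is an assumption (it matches the copy used in the R for Data Science arrow chapter) and is not recorded on this page.

# hypothetical URL; adjust if the data has moved
curl::multi_download(
  "https://r4ds.s3.us-west-2.amazonaws.com/seattle-library-checkouts.csv",
  "data/seattle-library-checkouts.csv",
  resume = TRUE
)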
outdir <- 'data'
library_file <- "seattle-library-checkouts.csv"
outfile <- paste0(outdir, '/', library_file)
stopifnot(file.exists(outfile))
File size.
# format.object_size() is not exported from utils, hence the triple colon
file.size(outfile) |> utils:::format.object_size(units = 'Gb')
[1] "8.6 Gb"
arrow::open_dataset() will scan the input file and figure out the structure of the dataset; it will only read further rows if specified. The code below is from a GitHub issue.
library(arrow)

# read the ISBN column as a string instead of relying on type inference
opts <- CsvConvertOptions$create(col_types = schema(ISBN = string()))

seattle_csv <- open_dataset(
  sources = "data/seattle-library-checkouts.csv",
  format = "csv",
  convert_options = opts
)
seattle_csv
FileSystemDataset with 1 csv file
12 columns
UsageClass: string
CheckoutType: string
MaterialType: string
CheckoutYear: int64
CheckoutMonth: int64
Checkouts: int64
Title: string
ISBN: string
Creator: string
Subjects: string
Publisher: string
PublicationYear: string
Get a glimpse of the data.
seattle_csv |> dplyr::glimpse()
FileSystemDataset with 1 csv file
41,389,465 rows x 12 columns
$ UsageClass <string> "Physical", "Physical", "Digital", "Physical", "Physi…
$ CheckoutType <string> "Horizon", "Horizon", "OverDrive", "Horizon", "Horizo…
$ MaterialType <string> "BOOK", "BOOK", "EBOOK", "BOOK", "SOUNDDISC", "BOOK",…
$ CheckoutYear <int64> 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016,…
$ CheckoutMonth <int64> 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,…
$ Checkouts <int64> 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 2, 3, 2, 1, 3, 2, 3,…
$ Title <string> "Super rich : a guide to having it all / Russell Simm…
$ ISBN <string> "", "", "", "", "", "", "", "", "", "", "", "", "", "…
$ Creator <string> "Simmons, Russell", "Barclay, James, 1965-", "Tim Par…
$ Subjects <string> "Self realization, Conduct of life, Attitude Psycholo…
$ Publisher <string> "Gotham Books,", "Pyr,", "Random House, Inc.", "Dial …
$ PublicationYear <string> "c2011.", "2010.", "2015", "2005.", "c2004.", "c2005.…
Use collect() to force arrow to perform a computation and return the data.
seattle_csv |>
dplyr::count(CheckoutYear, wt = Checkouts) |>
dplyr::arrange(CheckoutYear) |>
dplyr::collect()
# A tibble: 18 × 2
CheckoutYear n
<int> <int>
1 2005 3798685
2 2006 6599318
3 2007 7126627
4 2008 8438486
5 2009 9135167
6 2010 8608966
7 2011 8321732
8 2012 8163046
9 2013 9057096
10 2014 9136081
11 2015 9084179
12 2016 9021051
13 2017 9231648
14 2018 9149176
15 2019 9199083
16 2020 6053717
17 2021 7361031
18 2022 7001989
The parquet format is a custom binary format for rectangular data, designed specifically for large datasets.
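A single data frame can also be written to and read from a parquet file directly (a minimal sketch using the built-in mtcars data):

# write one data frame to parquet and read it back
arrow::write_parquet(mtcars, "data/mtcars.parquet")
arrow::read_parquet("data/mtcars.parquet") |> head()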
Partition the Seattle library data by CheckoutYear, since it is likely that some analyses will want to look only at recent data, and partitioning by year yields 18 chunks of a reasonable size.
pq_path <- 'data/seattle-library-checkouts'
seattle_csv |>
dplyr::group_by(CheckoutYear) |>
arrow::write_dataset(path = pq_path, format = "parquet")
Examine files.
tibble::tibble(
files = list.files(pq_path, recursive = TRUE),
size_MB = file.size(file.path(pq_path, files)) / 1024^2
)
# A tibble: 18 × 2
files size_MB
<chr> <dbl>
1 CheckoutYear=2005/part-0.parquet 109.
2 CheckoutYear=2006/part-0.parquet 164.
3 CheckoutYear=2007/part-0.parquet 178.
4 CheckoutYear=2008/part-0.parquet 195.
5 CheckoutYear=2009/part-0.parquet 214.
6 CheckoutYear=2010/part-0.parquet 222.
7 CheckoutYear=2011/part-0.parquet 239.
8 CheckoutYear=2012/part-0.parquet 249.
9 CheckoutYear=2013/part-0.parquet 269.
10 CheckoutYear=2014/part-0.parquet 282.
11 CheckoutYear=2015/part-0.parquet 294.
12 CheckoutYear=2016/part-0.parquet 300.
13 CheckoutYear=2017/part-0.parquet 304.
14 CheckoutYear=2018/part-0.parquet 292.
15 CheckoutYear=2019/part-0.parquet 288.
16 CheckoutYear=2020/part-0.parquet 151.
17 CheckoutYear=2021/part-0.parquet 229.
18 CheckoutYear=2022/part-0.parquet 241.
Open parquet files.
seattle_pq <- open_dataset(pq_path)
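Because the dataset was partitioned with Hive-style directory names (CheckoutYear=2005, and so on), open_dataset() recovers CheckoutYear as a column, and filters on it prune whole files. A minimal sketch (the MaterialType breakdown is an arbitrary example):

# only the CheckoutYear=2022 partition needs to be read here
seattle_pq |>
  dplyr::filter(CheckoutYear == 2022) |>
  dplyr::count(MaterialType, wt = Checkouts) |>
  dplyr::collect()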
Write a dplyr query.
seattle_pq |>
dplyr::count(CheckoutYear, wt = Checkouts) |>
dplyr::arrange(CheckoutYear) -> query
Collect.
query |> dplyr::collect() |> system.time()
user system elapsed
1.233 0.139 0.389
Compare the runtime with the same query run directly on the CSV file; the parquet version is roughly 40 times faster (0.389 versus 14.668 seconds elapsed).
seattle_csv |>
dplyr::count(CheckoutYear, wt = Checkouts) |>
dplyr::arrange(CheckoutYear) |>
dplyr::collect() |>
system.time()
user system elapsed
15.260 1.793 14.668
sessionInfo()
R version 4.4.0 (2024-04-24)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 22.04.4 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so; LAPACK version 3.10.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
time zone: Etc/UTC
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] arrow_16.1.0 workflowr_1.7.1
loaded via a namespace (and not attached):
[1] bit_4.0.5 jsonlite_1.8.8 dplyr_1.1.4 compiler_4.4.0
[5] promises_1.3.0 tidyselect_1.2.1 Rcpp_1.0.12 stringr_1.5.1
[9] git2r_0.33.0 assertthat_0.2.1 callr_3.7.6 later_1.3.2
[13] jquerylib_0.1.4 yaml_2.3.8 fastmap_1.2.0 R6_2.5.1
[17] generics_0.1.3 knitr_1.46 tibble_3.2.1 rprojroot_2.0.4
[21] bslib_0.7.0 pillar_1.9.0 rlang_1.1.3 utf8_1.2.4
[25] cachem_1.1.0 stringi_1.8.4 httpuv_1.6.15 xfun_0.44
[29] getPass_0.2-4 fs_1.6.4 sass_0.4.9 bit64_4.0.5
[33] cli_3.6.2 withr_3.0.0 magrittr_2.0.3 ps_1.7.6
[37] digest_0.6.35 processx_3.8.4 rstudioapi_0.16.0 lifecycle_1.0.4
[41] vctrs_0.6.5 evaluate_0.23 glue_1.7.0 whisker_0.4.1
[45] fansi_1.0.6 purrr_1.0.2 rmarkdown_2.27 httr_1.4.7
[49] tools_4.4.0 pkgconfig_2.0.3 htmltools_0.5.8.1