Last updated: 2023-06-19
Knit directory: muse/
This reproducible R Markdown analysis was created with workflowr (version 1.7.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20200712) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 032e058. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: r_packages_4.1.2/
Ignored: r_packages_4.2.0/
Ignored: r_packages_4.2.2/
Ignored: r_packages_4.3.0/
Untracked files:
Untracked: analysis/.json_vs_yaml.Rmd.swp
Untracked: analysis/cell_ranger.Rmd
Untracked: analysis/tss_xgboost.Rmd
Untracked: code/multiz100way/
Untracked: data/HG00702_SH089_CHSTrio.chr1.vcf.gz
Untracked: data/HG00702_SH089_CHSTrio.chr1.vcf.gz.tbi
Untracked: data/ncrna_NONCODE[v3.0].fasta.tar.gz
Untracked: data/ncrna_noncode_v3.fa
Untracked: data/netmhciipan.out.gz
Untracked: women.json
Unstaged changes:
Modified: analysis/graph.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/json_vs_yaml.Rmd) and HTML (docs/json_vs_yaml.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 032e058 | Dave Tang | 2023-06-19 | More details on JSON |
html | 643530f | Dave Tang | 2023-06-14 | Build site. |
Rmd | c7dcdf2 | Dave Tang | 2023-06-14 | Parsing JSON |
html | ca51b55 | Dave Tang | 2023-05-24 | Build site. |
Rmd | b72b5b3 | Dave Tang | 2023-05-24 | Check out additional packages |
html | 3bb3245 | Dave Tang | 2023-05-19 | Build site. |
Rmd | 50c92d7 | Dave Tang | 2023-05-19 | JSON and YAML formats |
JSON and YAML are popular serialisation formats.
In computing, serialization (or serialisation) is the process of translating a data structure or object state into a format that can be stored (e.g. files in secondary storage devices, data buffers in primary storage devices) or transmitted (e.g. data streams over computer networks) and reconstructed later (possibly in a different computer environment).
Install the following packages:
install.packages(c("jsonlite", "yaml", "tidyjson", "rjson"))
Installing packages into '/packages'
(as 'lib' is unspecified)
Load libraries.
library(jsonlite)
library(yaml)
library(tidyjson)
Attaching package: 'tidyjson'
The following object is masked from 'package:jsonlite':
read_json
The following object is masked from 'package:stats':
filter
library(rjson)
Attaching package: 'rjson'
The following objects are masked from 'package:jsonlite':
fromJSON, toJSON
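Because rjson masks jsonlite's fromJSON() and toJSON(), the examples below use explicit namespaces (package::function) wherever the choice of package matters. As a quick sketch of why the distinction matters (the input string here is just a made-up example):

# rjson returns a nested list, whereas jsonlite simplifies an array of
# objects into a data frame by default.
str(rjson::fromJSON('[{"a": 1}, {"a": 2}]'))
str(jsonlite::fromJSON('[{"a": 1}, {"a": 2}]'))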
As a first example, we will convert the women data set, which is a small data set with 15 observations for 2 variables.
women
height weight
1 58 115
2 59 117
3 60 120
4 61 123
5 62 126
6 63 129
7 64 132
8 65 135
9 66 139
10 67 142
11 68 146
12 69 150
13 70 154
14 71 159
15 72 164
Convert women to JSON using jsonlite.
women_json <- jsonlite::toJSON(women, pretty = TRUE)
women_json
[
{
"height": 58,
"weight": 115
},
{
"height": 59,
"weight": 117
},
{
"height": 60,
"weight": 120
},
{
"height": 61,
"weight": 123
},
{
"height": 62,
"weight": 126
},
{
"height": 63,
"weight": 129
},
{
"height": 64,
"weight": 132
},
{
"height": 65,
"weight": 135
},
{
"height": 66,
"weight": 139
},
{
"height": 67,
"weight": 142
},
{
"height": 68,
"weight": 146
},
{
"height": 69,
"weight": 150
},
{
"height": 70,
"weight": 154
},
{
"height": 71,
"weight": 159
},
{
"height": 72,
"weight": 164
}
]
tidyjson::read_json does not parse the output of toJSON as you might expect here: women_json is already a JSON string, so write_json() encodes it a second time and the file ends up containing a quoted string rather than the original array.
jsonlite::write_json(x = women_json, path = "women.json")
tidyjson::read_json(path = "women.json")
# A tbl_json: 1 x 2 tibble with a "JSON" attribute
..JSON document.id
<chr> <int>
1 "[\"[\\n {\\n \\..." 1
rjson::fromJSON converts the JSON into a nested list.
str(rjson::fromJSON(women_json))
List of 15
$ :List of 2
..$ height: num 58
..$ weight: num 115
$ :List of 2
..$ height: num 59
..$ weight: num 117
$ :List of 2
..$ height: num 60
..$ weight: num 120
$ :List of 2
..$ height: num 61
..$ weight: num 123
$ :List of 2
..$ height: num 62
..$ weight: num 126
$ :List of 2
..$ height: num 63
..$ weight: num 129
$ :List of 2
..$ height: num 64
..$ weight: num 132
$ :List of 2
..$ height: num 65
..$ weight: num 135
$ :List of 2
..$ height: num 66
..$ weight: num 139
$ :List of 2
..$ height: num 67
..$ weight: num 142
$ :List of 2
..$ height: num 68
..$ weight: num 146
$ :List of 2
..$ height: num 69
..$ weight: num 150
$ :List of 2
..$ height: num 70
..$ weight: num 154
$ :List of 2
..$ height: num 71
..$ weight: num 159
$ :List of 2
..$ height: num 72
..$ weight: num 164
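If a data frame is what you are after, the nested list can be collapsed back into one (a minimal sketch; it assumes every element has the same two names, height and weight):

# Convert each two-element list to a one-row data frame and stack them.
women_from_rjson <- do.call(rbind, lapply(rjson::fromJSON(women_json), as.data.frame))
str(women_from_rjson)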
Convert women to YAML.
women_yaml <- as.yaml(women, indent = 3)
writeLines(women_yaml)
height:
- 58.0
- 59.0
- 60.0
- 61.0
- 62.0
- 63.0
- 64.0
- 65.0
- 66.0
- 67.0
- 68.0
- 69.0
- 70.0
- 71.0
- 72.0
weight:
- 115.0
- 117.0
- 120.0
- 123.0
- 126.0
- 129.0
- 132.0
- 135.0
- 139.0
- 142.0
- 146.0
- 150.0
- 154.0
- 159.0
- 164.0
JSON to data frame.
jsonlite::fromJSON(women_json)
height weight
1 58 115
2 59 117
3 60 120
4 61 123
5 62 126
6 63 129
7 64 132
8 65 135
9 66 139
10 67 142
11 68 146
12 69 150
13 70 154
14 71 159
15 72 164
YAML to data frame, using a map handler. This approach does not work for more complex data structures (see below).
yaml.load(women_yaml, handlers = list(map = function(x) as.data.frame(x) ))
height weight
1 58 115
2 59 117
3 60 120
4 61 123
5 62 126
6 63 129
7 64 132
8 65 135
9 66 139
10 67 142
11 68 146
12 69 150
13 70 154
14 71 159
15 72 164
A data frame containing lists.
my_df <- data.frame(
id = 1:3,
title = letters[1:3]
)
my_df$keywords = list(
c('aa', 'aaa', 'aaaa'),
c('bb', 'bbb'),
c('cc', 'ccc', 'cccc', 'ccccc')
)
my_df
id title keywords
1 1 a aa, aaa, aaaa
2 2 b bb, bbb
3 3 c cc, ccc, cccc, ccccc
Convert my_df to JSON.
my_df_json <- jsonlite::toJSON(my_df, pretty = TRUE)
my_df_json
[
{
"id": 1,
"title": "a",
"keywords": ["aa", "aaa", "aaaa"]
},
{
"id": 2,
"title": "b",
"keywords": ["bb", "bbb"]
},
{
"id": 3,
"title": "c",
"keywords": ["cc", "ccc", "cccc", "ccccc"]
}
]
Convert my_df to YAML.
my_df_yaml <- as.yaml(my_df, indent = 3)
writeLines(my_df_yaml)
id:
- 1
- 2
- 3
title:
- a
- b
- c
keywords:
- - aa
- aaa
- aaaa
- - bb
- bbb
- - cc
- ccc
- cccc
- ccccc
Converting from JSON to YAML is easy.
identical(writeLines(as.yaml(jsonlite::fromJSON(my_df_json))), writeLines(my_df_yaml))
id:
- 1
- 2
- 3
title:
- a
- b
- c
keywords:
- - aa
- aaa
- aaaa
- - bb
- bbb
- - cc
- ccc
- cccc
- ccccc
id:
- 1
- 2
- 3
title:
- a
- b
- c
keywords:
- - aa
- aaa
- aaaa
- - bb
- bbb
- - cc
- ccc
- cccc
- ccccc
[1] TRUE
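Note that writeLines() returns NULL invisibly, so the identical() call above compares NULL with NULL; it prints both YAML dumps, but the TRUE does not actually test their contents. A stricter check (a sketch, assuming indent = 3 to match how my_df_yaml was generated) compares the strings directly:

identical(as.yaml(jsonlite::fromJSON(my_df_json), indent = 3), my_df_yaml)
# expected to be TRUE if fromJSON() reconstructs the same column types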
Converting from YAML to JSON for my_df is not as straightforward because of the different number of keywords.
my_df_list <- yaml.load(my_df_yaml)
my_df_list
$id
[1] 1 2 3
$title
[1] "a" "b" "c"
$keywords
$keywords[[1]]
[1] "aa" "aaa" "aaaa"
$keywords[[2]]
[1] "bb" "bbb"
$keywords[[3]]
[1] "cc" "ccc" "cccc" "ccccc"
This conversion is different from the original data frame to JSON conversion because it creates a single object, whereas the original conversion creates an array with three objects.
jsonlite::toJSON(my_df_list, pretty = TRUE)
{
"id": [1, 2, 3],
"title": ["a", "b", "c"],
"keywords": [
["aa", "aaa", "aaaa"],
["bb", "bbb"],
["cc", "ccc", "cccc", "ccccc"]
]
}
my_df_json
[
{
"id": 1,
"title": "a",
"keywords": ["aa", "aaa", "aaaa"]
},
{
"id": 2,
"title": "b",
"keywords": ["bb", "bbb"]
},
{
"id": 3,
"title": "c",
"keywords": ["cc", "ccc", "cccc", "ccccc"]
}
]
I could probably write a hacky function to make the conversion but I won’t.
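That said, a rough sketch of such a conversion could look like this: turn the column-oriented list into a row-oriented list of records, so that toJSON() again produces an array of objects (auto_unbox = TRUE stops scalars from becoming length-one arrays):

records <- lapply(seq_along(my_df_list$id), function(i) {
  lapply(my_df_list, function(col) col[[i]])
})
jsonlite::toJSON(records, pretty = TRUE, auto_unbox = TRUE)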
The ffq tool generates metadata in JSON:
ffq SRX079566 > data/SRX079566.json
ffq_json <- jsonlite::read_json(path = "data/SRX079566.json", simplifyVector = TRUE)
str(ffq_json)
List of 1
$ SRX079566:List of 5
..$ accession : chr "SRX079566"
..$ title : chr "Illumina Genome Analyzer IIx paired end sequencing; RNA-Seq (polyA+) analysis of DLBCL cell line HS0798"
..$ platform : chr "ILLUMINA"
..$ instrument: chr "Illumina Genome Analyzer IIx"
..$ runs :List of 2
.. ..$ SRR292241:List of 7
.. .. ..$ accession : chr "SRR292241"
.. .. ..$ experiment: chr "SRX079566"
.. .. ..$ study : chr "SRP020237"
.. .. ..$ sample : chr "SRS212581"
.. .. ..$ title : chr "Illumina Genome Analyzer IIx paired end sequencing; RNA-Seq (polyA+) analysis of DLBCL cell line HS0798"
.. .. ..$ attributes:List of 6
.. .. .. ..$ RUN : chr "94367"
.. .. .. ..$ instrument_model: chr "Illumina Genome Analyzer II"
.. .. .. ..$ ENA-SPOT-COUNT : int 9721384
.. .. .. ..$ ENA-BASE-COUNT : int 699939648
.. .. .. ..$ ENA-FIRST-PUBLIC: chr "2011-07-05"
.. .. .. ..$ ENA-LAST-UPDATE : chr "2019-10-07"
.. .. ..$ files :List of 4
.. .. .. ..$ ftp :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR292241" "SRR292241"
.. .. .. .. ..$ filename : chr [1:2] "SRR292241_1.fastq.gz" "SRR292241_2.fastq.gz"
.. .. .. .. ..$ filetype : chr [1:2] "fastq" "fastq"
.. .. .. .. ..$ filesize : int [1:2] 387227151 395115704
.. .. .. .. ..$ filenumber: int [1:2] 1 2
.. .. .. .. ..$ md5 : chr [1:2] "a5e0d2d51550127ea9ce3a0219deb375" "e9ce7abd3bce9d5ff194d6e045a36c1c"
.. .. .. .. ..$ urltype : chr [1:2] "ftp" "ftp"
.. .. .. .. ..$ url : chr [1:2] "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_1.fastq.gz" "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_2.fastq.gz"
.. .. .. ..$ aws :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR292241" "SRR292241"
.. .. .. .. ..$ filename : chr [1:2] "Run94367Lane6.srf" "SRR292241"
.. .. .. .. ..$ filetype : chr [1:2] "sra" "sra"
.. .. .. .. ..$ filesize : logi [1:2] NA NA
.. .. .. .. ..$ filenumber: int [1:2] 1 1
.. .. .. .. ..$ md5 : logi [1:2] NA NA
.. .. .. .. ..$ urltype : chr [1:2] "aws" "aws"
.. .. .. .. ..$ url : chr [1:2] "s3://sra-pub-src-13/SRR292241/Run94367Lane6.srf" "https://sra-pub-run-odp.s3.amazonaws.com/sra/SRR292241/SRR292241"
.. .. .. ..$ gcp :'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR292241"
.. .. .. .. ..$ filename : chr "SRR292241.3"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "gcp"
.. .. .. .. ..$ url : chr "gs://sra-pub-crun-3/SRR292241/SRR292241.3"
.. .. .. ..$ ncbi:'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR292241"
.. .. .. .. ..$ filename : chr "SRR292241.3"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "ncbi"
.. .. .. .. ..$ url : chr "https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos5/sra-pub-run-32/SRR000/292/SRR292241/SRR292241.3"
.. ..$ SRR390728:List of 7
.. .. ..$ accession : chr "SRR390728"
.. .. ..$ experiment: chr "SRX079566"
.. .. ..$ study : chr "SRP020237"
.. .. ..$ sample : chr "SRS212581"
.. .. ..$ title : chr "Illumina Genome Analyzer IIx paired end sequencing; RNA-Seq (polyA+) analysis of DLBCL cell line HS0798"
.. .. ..$ attributes:List of 6
.. .. .. ..$ RUN : chr "94367"
.. .. .. ..$ assembly : chr "NCBI36_BCCAGSC_variant"
.. .. .. ..$ ENA-SPOT-COUNT : int 7178576
.. .. .. ..$ ENA-BASE-COUNT : int 516857472
.. .. .. ..$ ENA-FIRST-PUBLIC: chr "2011-12-23"
.. .. .. ..$ ENA-LAST-UPDATE : chr "2016-06-28"
.. .. ..$ files :List of 4
.. .. .. ..$ ftp :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR390728" "SRR390728"
.. .. .. .. ..$ filename : chr [1:2] "SRR390728_1.fastq.gz" "SRR390728_2.fastq.gz"
.. .. .. .. ..$ filetype : chr [1:2] "fastq" "fastq"
.. .. .. .. ..$ filesize : int [1:2] 170346275 168836179
.. .. .. .. ..$ filenumber: int [1:2] 1 2
.. .. .. .. ..$ md5 : chr [1:2] "9a3d37cbb3e47cf8930ed2ba6c8d2cef" "bc4e6304170876186522f4175ee39a8f"
.. .. .. .. ..$ urltype : chr [1:2] "ftp" "ftp"
.. .. .. .. ..$ url : chr [1:2] "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_1.fastq.gz" "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_2.fastq.gz"
.. .. .. ..$ aws :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR390728" "SRR390728"
.. .. .. .. ..$ filename : chr [1:2] "30KWMAAXX_6.sorted_withJunctionsOnGenome_dupsFlagged.bam.1" "SRR390728"
.. .. .. .. ..$ filetype : chr [1:2] "bam" "sra"
.. .. .. .. ..$ filesize : logi [1:2] NA NA
.. .. .. .. ..$ filenumber: int [1:2] 1 1
.. .. .. .. ..$ md5 : logi [1:2] NA NA
.. .. .. .. ..$ urltype : chr [1:2] "aws" "aws"
.. .. .. .. ..$ url : chr [1:2] "s3://sra-pub-src-15/SRR390728/30KWMAAXX_6.sorted_withJunctionsOnGenome_dupsFlagged.bam.1" "https://sra-pub-run-odp.s3.amazonaws.com/sra/SRR390728/SRR390728"
.. .. .. ..$ gcp :'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR390728"
.. .. .. .. ..$ filename : chr "SRR390728.lite.2"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "gcp"
.. .. .. .. ..$ url : chr "gs://sra-pub-zq-5/SRR390728/SRR390728.lite.2"
.. .. .. ..$ ncbi:'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR390728"
.. .. .. .. ..$ filename : chr "SRR390728.lite.2"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "ncbi"
.. .. .. .. ..$ url : chr "https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-zq-20/SRR000/390/SRR390728/SRR390728.lite.2"
Use a recursive apply to create a named character vector, which is convenient for plucking values.
test <- rapply(object = ffq_json, f = function(x) x)
class(test)
[1] "character"
Subset the FTP links.
test[grepl("ftp.url\\d+$", names(test))]
SRX079566.runs.SRR292241.files.ftp.url1
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_1.fastq.gz"
SRX079566.runs.SRR292241.files.ftp.url2
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_2.fastq.gz"
SRX079566.runs.SRR390728.files.ftp.url1
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_1.fastq.gz"
SRX079566.runs.SRR390728.files.ftp.url2
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_2.fastq.gz"
Notes from JSON Defined.
JavaScript Object Notation (JSON) is a data-exchange format that makes it possible to transfer populated data structures between different languages/tools.
JSON can be used in JavaScript programs without any need for parsing or serialising. It is a text-based way of representing JavaScript object literals, arrays, and scalar data.
JSON is relatively easy to read and write, while also easy for software to parse and generate. It is often used for serialising structured data and exchanging it over a network, typically between a server and web applications.
At the granular level, JSON consists of data types.
String - composed of Unicode characters, with backslash (\) escaping.
{ "name" : "Bob" }
Number - a JSON number follows JavaScript’s double-precision floating-point format.
{
"number_1" : 210,
"number_2" : 215,
"number_3" : 21.05,
"number_4" : 10.05
}
Boolean - either true or false, written without quotes; quoted values are treated as strings instead.
{ "AllowPartialShipment" : false }
Null - an empty value that can be used when there is no value assigned to a key.
{ "Special Instructions" : null }
Object - a set of name/value pairs enclosed in curly braces ({}). The keys must be strings, should be unique, and the pairs are separated by commas.
{
  "Influencer" : { "name" : "Jaxon", "age" : "42", "city" : "New York" }
}
Array - an ordered collection of values enclosed in square brackets ([]).
{
  "Influencers" : [
    {
      "name" : "Jaxon",
      "age" : 42,
      "Works At" : "Tech News"
    },
    {
      "name" : "Miller",
      "age" : 35,
      "Works At" : "IT Day"
    }
  ]
}
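To see how these JSON types come across in R, here is a quick sketch that parses the influencers example above with jsonlite (the string is just the example pasted into R):

influencers <- '{
  "Influencers" : [
    { "name" : "Jaxon",  "age" : 42, "Works At" : "Tech News" },
    { "name" : "Miller", "age" : 35, "Works At" : "IT Day" }
  ]
}'
# Objects become named lists, an array of objects is simplified to a data
# frame, numbers become numeric/integer, and strings become character.
str(jsonlite::fromJSON(influencers))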
JSON is perfect for storing temporary data. For example, temporary data can be user-generated data, such as a submitted form on a website. JSON can also be used as a data format for any programming language to provide a high level of interoperability.
A website database has a customer’s mailing address, but the address needs to be verified via an API to make sure it is valid. Send the address data in JSON format to the address validation service API.
When developing applications, each application needs the credentials to connect to a database as well as a log file path. The credentials and the file path can be specified in a JSON file.
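A minimal sketch of that pattern in R (the file name config.json and its fields are hypothetical):

# config.json (hypothetical contents):
# { "db": { "host": "localhost", "user": "me", "password": "secret" },
#   "log_path": "app.log" }
config <- jsonlite::read_json("config.json")
config$db$host
config$log_path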
JSON simplifies complex documents down to the components that have been identified as meaningful, turning data extraction into a predictable and human-readable JSON file.
JSON has gained momentum in API code programming and web services because it helps in faster data interchange and web service results.
It is text-based, lightweight, and has an easy-to-parse data format requiring no additional code for parsing. For web services, the need to return and display a lot of data makes JSON the ideal choice.
A document database is a type of nonrelational database designed to store, retrieve, and manage document-oriented information. Rather than having a schema defined upfront, document databases allow for storing data in collections consisting of documents. NoSQL databases and JSON databases are types of document databases.
Document databases are often popular among developers because they store data in a document-model format (semi-structured) rather than relational (structured).
Document databases offer more flexibility, because developers do not have to plan out the schemas ahead of time and they can use the same format they are using in their application code. This means the careful planning of a SQL database is not as necessary, which makes document databases useful for rapidly evolving schemas, which can be common in software development. However, this can come at the cost of speed, size, and specificity.
Applications that use different JSON data types and JSON-oriented query language can interact with data stored in a JSON document database. The JSON document database also provides native support for JSON.
Characteristics that define a JSON document database:
BLOB, VARCHAR2, CLOB, or binary JSON in 21c.
Storing JSON data in a JSON document database makes use of columns whose data types are VARCHAR2, CLOB, BLOB, or binary JSON in 21c. The choice of which to use is usually determined by the size of the JSON documents. Storing JSON data in the database using standard SQL data types means that JSON data can be manipulated like any other data type.
JSON data can be managed and manipulated with tables in a JSON document database, regardless of the data type. The choice of which table to use is typically motivated by the size of the JSON documents.
sessionInfo()
R version 4.3.0 (2023-04-21)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04.2 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so; LAPACK version 3.10.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
time zone: Etc/UTC
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] rjson_0.2.21 tidyjson_0.3.2 yaml_2.3.7 jsonlite_1.8.5
[5] workflowr_1.7.0
loaded via a namespace (and not attached):
[1] dplyr_1.1.2 compiler_4.3.0 promises_1.2.0.1 tidyselect_1.2.0
[5] Rcpp_1.0.10 stringr_1.5.0 git2r_0.32.0 assertthat_0.2.1
[9] tidyr_1.3.0 callr_3.7.3 later_1.3.0 jquerylib_0.1.4
[13] fastmap_1.1.1 R6_2.5.1 generics_0.1.3 knitr_1.42
[17] tibble_3.2.1 rprojroot_2.0.3 bslib_0.4.2 pillar_1.9.0
[21] rlang_1.1.0 utf8_1.2.3 cachem_1.0.7 stringi_1.7.12
[25] httpuv_1.6.9 xfun_0.39 getPass_0.2-2 fs_1.6.2
[29] sass_0.4.5 cli_3.6.1 magrittr_2.0.3 ps_1.7.5
[33] digest_0.6.31 processx_3.8.1 rstudioapi_0.14 lifecycle_1.0.3
[37] vctrs_0.6.2 evaluate_0.20 glue_1.6.2 whisker_0.4.1
[41] fansi_1.0.4 purrr_1.0.1 rmarkdown_2.21 httr_1.4.5
[45] tools_4.3.0 pkgconfig_2.0.3 htmltools_0.5.5