Last updated: 2024-06-10
Checks: 7 passed, 0 failed
Knit directory: muse/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20200712) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version eaa2aca. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: r_packages_4.3.3/
Ignored: r_packages_4.4.0/
Untracked files:
Untracked: women.json
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/json_vs_yaml.Rmd) and HTML (docs/json_vs_yaml.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | eaa2aca | Dave Tang | 2024-06-10 | Load nested lists |
html | 097bb55 | Dave Tang | 2024-06-10 | Build site. |
Rmd | 38bfda1 | Dave Tang | 2024-06-10 | YAML defined |
html | 97e60dd | Dave Tang | 2023-06-19 | Build site. |
Rmd | 032e058 | Dave Tang | 2023-06-19 | More details on JSON |
html | 643530f | Dave Tang | 2023-06-14 | Build site. |
Rmd | c7dcdf2 | Dave Tang | 2023-06-14 | Parsing JSON |
html | ca51b55 | Dave Tang | 2023-05-24 | Build site. |
Rmd | b72b5b3 | Dave Tang | 2023-05-24 | Check out additional packages |
html | 3bb3245 | Dave Tang | 2023-05-19 | Build site. |
Rmd | 50c92d7 | Dave Tang | 2023-05-19 | JSON and YAML formats |
JSON and YAML are popular serialisation formats.
In computing, serialization (or serialisation) is the process of translating a data structure or object state into a format that can be stored (e.g. files in secondary storage devices, data buffers in primary storage devices) or transmitted (e.g. data streams over computer networks) and reconstructed later (possibly in a different computer environment).
Install the following packages:
install.packages(c("jsonlite", "yaml", "tidyjson", "rjson"))
Load libraries.
library(jsonlite)
library(yaml)
library(tidyjson)
Attaching package: 'tidyjson'
The following object is masked from 'package:jsonlite':
read_json
The following object is masked from 'package:stats':
filter
library(rjson)
Attaching package: 'rjson'
The following objects are masked from 'package:jsonlite':
fromJSON, toJSON
As a first example, we will convert the women data set, which is a small data set with 15 observations of 2 variables.
women
height weight
1 58 115
2 59 117
3 60 120
4 61 123
5 62 126
6 63 129
7 64 132
8 65 135
9 66 139
10 67 142
11 68 146
12 69 150
13 70 154
14 71 159
15 72 164
Convert women to JSON using jsonlite.
women_json <- jsonlite::toJSON(women, pretty = TRUE)
women_json
[
{
"height": 58,
"weight": 115
},
{
"height": 59,
"weight": 117
},
{
"height": 60,
"weight": 120
},
{
"height": 61,
"weight": 123
},
{
"height": 62,
"weight": 126
},
{
"height": 63,
"weight": 129
},
{
"height": 64,
"weight": 132
},
{
"height": 65,
"weight": 135
},
{
"height": 66,
"weight": 139
},
{
"height": 67,
"weight": 142
},
{
"height": 68,
"weight": 146
},
{
"height": 69,
"weight": 150
},
{
"height": 70,
"weight": 154
},
{
"height": 71,
"weight": 159
},
{
"height": 72,
"weight": 164
}
]
tidyjson::read_json() does not parse the output of toJSON() here: women_json is already a serialised JSON string, so write_json() encodes it a second time, and read_json() returns a single escaped string rather than the individual records.
jsonlite::write_json(x = women_json, path = "women.json")
tidyjson::read_json(path = "women.json")
# A tbl_json: 1 x 2 tibble with a "JSON" attribute
..JSON document.id
<chr> <int>
1 "[\"[\\n {\\n \\..." 1
rjson::fromJSON() converts the JSON into a nested list.
str(rjson::fromJSON(women_json))
List of 15
$ :List of 2
..$ height: num 58
..$ weight: num 115
$ :List of 2
..$ height: num 59
..$ weight: num 117
$ :List of 2
..$ height: num 60
..$ weight: num 120
$ :List of 2
..$ height: num 61
..$ weight: num 123
$ :List of 2
..$ height: num 62
..$ weight: num 126
$ :List of 2
..$ height: num 63
..$ weight: num 129
$ :List of 2
..$ height: num 64
..$ weight: num 132
$ :List of 2
..$ height: num 65
..$ weight: num 135
$ :List of 2
..$ height: num 66
..$ weight: num 139
$ :List of 2
..$ height: num 67
..$ weight: num 142
$ :List of 2
..$ height: num 68
..$ weight: num 146
$ :List of 2
..$ height: num 69
..$ weight: num 150
$ :List of 2
..$ height: num 70
..$ weight: num 154
$ :List of 2
..$ height: num 71
..$ weight: num 159
$ :List of 2
..$ height: num 72
..$ weight: num 164
Convert women to YAML.
women_yaml <- as.yaml(women, indent = 3)
writeLines(women_yaml)
height:
- 58.0
- 59.0
- 60.0
- 61.0
- 62.0
- 63.0
- 64.0
- 65.0
- 66.0
- 67.0
- 68.0
- 69.0
- 70.0
- 71.0
- 72.0
weight:
- 115.0
- 117.0
- 120.0
- 123.0
- 126.0
- 129.0
- 132.0
- 135.0
- 139.0
- 142.0
- 146.0
- 150.0
- 154.0
- 159.0
- 164.0
JSON to data frame.
jsonlite::fromJSON(women_json)
height weight
1 58 115
2 59 117
3 60 120
4 61 123
5 62 126
6 63 129
7 64 132
8 65 135
9 66 139
10 67 142
11 68 146
12 69 150
13 70 154
14 71 159
15 72 164
YAML to data frame. This does not work for more complex data structures (see below).
yaml.load(women_yaml, handlers = list(map = function(x) as.data.frame(x) ))
height weight
1 58 115
2 59 117
3 60 120
4 61 123
5 62 126
6 63 129
7 64 132
8 65 135
9 66 139
10 67 142
11 68 146
12 69 150
13 70 154
14 71 159
15 72 164
A data frame containing a list column.
my_df <- data.frame(
id = 1:3,
title = letters[1:3]
)
my_df$keywords = list(
c('aa', 'aaa', 'aaaa'),
c('bb', 'bbb'),
c('cc', 'ccc', 'cccc', 'ccccc')
)
my_df
id title keywords
1 1 a aa, aaa, aaaa
2 2 b bb, bbb
3 3 c cc, ccc, cccc, ccccc
Convert my_df to JSON.
my_df_json <- jsonlite::toJSON(my_df, pretty = TRUE)
my_df_json
[
{
"id": 1,
"title": "a",
"keywords": ["aa", "aaa", "aaaa"]
},
{
"id": 2,
"title": "b",
"keywords": ["bb", "bbb"]
},
{
"id": 3,
"title": "c",
"keywords": ["cc", "ccc", "cccc", "ccccc"]
}
]
Convert my_df to YAML.
my_df_yaml <- as.yaml(my_df, indent = 3)
writeLines(my_df_yaml)
id:
- 1
- 2
- 3
title:
- a
- b
- c
keywords:
- - aa
- aaa
- aaaa
- - bb
- bbb
- - cc
- ccc
- cccc
- ccccc
Converting from JSON to YAML is easy. Note, however, that the identical() call below always returns TRUE because writeLines() returns NULL invisibly; the printed output is what lets us eyeball the two YAML documents. A direct string comparison is sketched after the output.
identical(writeLines(as.yaml(jsonlite::fromJSON(my_df_json))), writeLines(my_df_yaml))
id:
- 1
- 2
- 3
title:
- a
- b
- c
keywords:
- - aa
- aaa
- aaaa
- - bb
- bbb
- - cc
- ccc
- cccc
- ccccc
id:
- 1
- 2
- 3
title:
- a
- b
- c
keywords:
- - aa
- aaa
- aaaa
- - bb
- bbb
- - cc
- ccc
- cccc
- ccccc
[1] TRUE
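As a more direct check (a sketch; indent = 3 is set so the indentation matches how my_df_yaml was created, since as.yaml() defaults to two spaces), compare the YAML strings themselves:

```r
# Compare the YAML strings rather than the invisible NULLs returned by writeLines();
# indent = 3 matches the indentation used when my_df_yaml was created above.
identical(as.yaml(jsonlite::fromJSON(my_df_json), indent = 3), my_df_yaml)
```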
Converting from YAML to JSON for my_df is not as straightforward because of the different number of keywords.
my_df_list <- yaml.load(my_df_yaml)
my_df_list
$id
[1] 1 2 3
$title
[1] "a" "b" "c"
$keywords
$keywords[[1]]
[1] "aa" "aaa" "aaaa"
$keywords[[2]]
[1] "bb" "bbb"
$keywords[[3]]
[1] "cc" "ccc" "cccc" "ccccc"
This conversion is different from the original data frame to JSON conversion because it creates a single object, whereas the original conversion creates an array with three objects.
jsonlite::toJSON(my_df_list, pretty = TRUE)
{
"id": [1, 2, 3],
"title": ["a", "b", "c"],
"keywords": [
["aa", "aaa", "aaaa"],
["bb", "bbb"],
["cc", "ccc", "cccc", "ccccc"]
]
}
my_df_json
[
{
"id": 1,
"title": "a",
"keywords": ["aa", "aaa", "aaaa"]
},
{
"id": 2,
"title": "b",
"keywords": ["bb", "bbb"]
},
{
"id": 3,
"title": "c",
"keywords": ["cc", "ccc", "cccc", "ccccc"]
}
]
I could probably write a hacky function to make the conversion but I won’t.
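For reference, a minimal sketch of one such conversion (my_df_rebuilt is an illustrative name, and this assumes the elements of my_df_list are still in their original column order): rebuild a data frame with a list column and call toJSON() again.

```r
# Rebuild a data frame from the column-wise list; the keywords list is assigned
# after construction so data.frame() does not try to recycle it, and toJSON()
# then produces an array of row objects again.
my_df_rebuilt <- data.frame(id = my_df_list$id, title = my_df_list$title)
my_df_rebuilt$keywords <- my_df_list$keywords
jsonlite::toJSON(my_df_rebuilt, pretty = TRUE)
```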
The ffq tool generates metadata in JSON:
ffq SRX079566 > data/SRX079566.json
ffq_json <- jsonlite::read_json(path = "data/SRX079566.json", simplifyVector = TRUE)
str(ffq_json)
List of 1
$ SRX079566:List of 5
..$ accession : chr "SRX079566"
..$ title : chr "Illumina Genome Analyzer IIx paired end sequencing; RNA-Seq (polyA+) analysis of DLBCL cell line HS0798"
..$ platform : chr "ILLUMINA"
..$ instrument: chr "Illumina Genome Analyzer IIx"
..$ runs :List of 2
.. ..$ SRR292241:List of 7
.. .. ..$ accession : chr "SRR292241"
.. .. ..$ experiment: chr "SRX079566"
.. .. ..$ study : chr "SRP020237"
.. .. ..$ sample : chr "SRS212581"
.. .. ..$ title : chr "Illumina Genome Analyzer IIx paired end sequencing; RNA-Seq (polyA+) analysis of DLBCL cell line HS0798"
.. .. ..$ attributes:List of 6
.. .. .. ..$ RUN : chr "94367"
.. .. .. ..$ instrument_model: chr "Illumina Genome Analyzer II"
.. .. .. ..$ ENA-SPOT-COUNT : int 9721384
.. .. .. ..$ ENA-BASE-COUNT : int 699939648
.. .. .. ..$ ENA-FIRST-PUBLIC: chr "2011-07-05"
.. .. .. ..$ ENA-LAST-UPDATE : chr "2019-10-07"
.. .. ..$ files :List of 4
.. .. .. ..$ ftp :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR292241" "SRR292241"
.. .. .. .. ..$ filename : chr [1:2] "SRR292241_1.fastq.gz" "SRR292241_2.fastq.gz"
.. .. .. .. ..$ filetype : chr [1:2] "fastq" "fastq"
.. .. .. .. ..$ filesize : int [1:2] 387227151 395115704
.. .. .. .. ..$ filenumber: int [1:2] 1 2
.. .. .. .. ..$ md5 : chr [1:2] "a5e0d2d51550127ea9ce3a0219deb375" "e9ce7abd3bce9d5ff194d6e045a36c1c"
.. .. .. .. ..$ urltype : chr [1:2] "ftp" "ftp"
.. .. .. .. ..$ url : chr [1:2] "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_1.fastq.gz" "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_2.fastq.gz"
.. .. .. ..$ aws :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR292241" "SRR292241"
.. .. .. .. ..$ filename : chr [1:2] "Run94367Lane6.srf" "SRR292241"
.. .. .. .. ..$ filetype : chr [1:2] "sra" "sra"
.. .. .. .. ..$ filesize : logi [1:2] NA NA
.. .. .. .. ..$ filenumber: int [1:2] 1 1
.. .. .. .. ..$ md5 : logi [1:2] NA NA
.. .. .. .. ..$ urltype : chr [1:2] "aws" "aws"
.. .. .. .. ..$ url : chr [1:2] "s3://sra-pub-src-13/SRR292241/Run94367Lane6.srf" "https://sra-pub-run-odp.s3.amazonaws.com/sra/SRR292241/SRR292241"
.. .. .. ..$ gcp :'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR292241"
.. .. .. .. ..$ filename : chr "SRR292241.3"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "gcp"
.. .. .. .. ..$ url : chr "gs://sra-pub-crun-3/SRR292241/SRR292241.3"
.. .. .. ..$ ncbi:'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR292241"
.. .. .. .. ..$ filename : chr "SRR292241.3"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "ncbi"
.. .. .. .. ..$ url : chr "https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos5/sra-pub-run-32/SRR000/292/SRR292241/SRR292241.3"
.. ..$ SRR390728:List of 7
.. .. ..$ accession : chr "SRR390728"
.. .. ..$ experiment: chr "SRX079566"
.. .. ..$ study : chr "SRP020237"
.. .. ..$ sample : chr "SRS212581"
.. .. ..$ title : chr "Illumina Genome Analyzer IIx paired end sequencing; RNA-Seq (polyA+) analysis of DLBCL cell line HS0798"
.. .. ..$ attributes:List of 6
.. .. .. ..$ RUN : chr "94367"
.. .. .. ..$ assembly : chr "NCBI36_BCCAGSC_variant"
.. .. .. ..$ ENA-SPOT-COUNT : int 7178576
.. .. .. ..$ ENA-BASE-COUNT : int 516857472
.. .. .. ..$ ENA-FIRST-PUBLIC: chr "2011-12-23"
.. .. .. ..$ ENA-LAST-UPDATE : chr "2016-06-28"
.. .. ..$ files :List of 4
.. .. .. ..$ ftp :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR390728" "SRR390728"
.. .. .. .. ..$ filename : chr [1:2] "SRR390728_1.fastq.gz" "SRR390728_2.fastq.gz"
.. .. .. .. ..$ filetype : chr [1:2] "fastq" "fastq"
.. .. .. .. ..$ filesize : int [1:2] 170346275 168836179
.. .. .. .. ..$ filenumber: int [1:2] 1 2
.. .. .. .. ..$ md5 : chr [1:2] "9a3d37cbb3e47cf8930ed2ba6c8d2cef" "bc4e6304170876186522f4175ee39a8f"
.. .. .. .. ..$ urltype : chr [1:2] "ftp" "ftp"
.. .. .. .. ..$ url : chr [1:2] "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_1.fastq.gz" "ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_2.fastq.gz"
.. .. .. ..$ aws :'data.frame': 2 obs. of 8 variables:
.. .. .. .. ..$ accession : chr [1:2] "SRR390728" "SRR390728"
.. .. .. .. ..$ filename : chr [1:2] "30KWMAAXX_6.sorted_withJunctionsOnGenome_dupsFlagged.bam.1" "SRR390728"
.. .. .. .. ..$ filetype : chr [1:2] "bam" "sra"
.. .. .. .. ..$ filesize : logi [1:2] NA NA
.. .. .. .. ..$ filenumber: int [1:2] 1 1
.. .. .. .. ..$ md5 : logi [1:2] NA NA
.. .. .. .. ..$ urltype : chr [1:2] "aws" "aws"
.. .. .. .. ..$ url : chr [1:2] "s3://sra-pub-src-15/SRR390728/30KWMAAXX_6.sorted_withJunctionsOnGenome_dupsFlagged.bam.1" "https://sra-pub-run-odp.s3.amazonaws.com/sra/SRR390728/SRR390728"
.. .. .. ..$ gcp :'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR390728"
.. .. .. .. ..$ filename : chr "SRR390728.lite.2"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "gcp"
.. .. .. .. ..$ url : chr "gs://sra-pub-zq-5/SRR390728/SRR390728.lite.2"
.. .. .. ..$ ncbi:'data.frame': 1 obs. of 8 variables:
.. .. .. .. ..$ accession : chr "SRR390728"
.. .. .. .. ..$ filename : chr "SRR390728.lite.2"
.. .. .. .. ..$ filetype : chr "sra"
.. .. .. .. ..$ filesize : logi NA
.. .. .. .. ..$ filenumber: int 1
.. .. .. .. ..$ md5 : logi NA
.. .. .. .. ..$ urltype : chr "ncbi"
.. .. .. .. ..$ url : chr "https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-zq-20/SRR000/390/SRR390728/SRR390728.lite.2"
Use a recursive apply to create a named character vector, which is convenient for plucking values.
test <- rapply(object = ffq_json, f = function(x) x)
class(test)
[1] "character"
Subset the FTP links.
test[grepl("ftp.url\\d+$", names(test))]
SRX079566.runs.SRR292241.files.ftp.url1
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_1.fastq.gz"
SRX079566.runs.SRR292241.files.ftp.url2
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR292/SRR292241/SRR292241_2.fastq.gz"
SRX079566.runs.SRR390728.files.ftp.url1
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_1.fastq.gz"
SRX079566.runs.SRR390728.files.ftp.url2
"ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR390/SRR390728/SRR390728_2.fastq.gz"
Notes from What is YAML? The YML File Format.
To create a YAML file, use either the .yaml or .yml file extension. Before writing any YAML code, you can add three dashes (---) at the start of the file to allow having multiple YAML documents in a single YAML file, making file organisation much easier; separate each document with three dashes (---). You can use three dots (...) to mark the end of a document.
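As a quick sketch of what that looks like from R (multi_doc is an illustrative name), yaml.load_all() parses a string containing several ----separated documents into one parsed object per document:

```r
# Two YAML documents in one string, separated by ---; the trailing ... marks the
# end of the final document. yaml.load_all() returns a list of parsed documents.
multi_doc <- "
---
title: first document
---
title: second document
...
"
yaml::yaml.load_all(multi_doc)
```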
Comments in YAML start with the # character. Although YAML auto-detects the data types in a file, you can specify the type of data you want to use. To explicitly specify the type of data, use the !! symbol and the name of the data type before the value:
# parse this value as a string
date: !!str 2022-11-11
## parse this value as a float (it will be 1.0 instead of 1)
fave_number: !!float 1
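A minimal sketch of how the yaml package resolves these tags, assuming the standard tag behaviour (!!str forces a character value and !!float a double):

```r
# The explicit tags override auto-detection: date stays a character string and
# fave_number is parsed as a double rather than an integer.
str(yaml::yaml.load("
date: !!str 2022-11-11
fave_number: !!float 1
"))
```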
In YAML, strings in some cases can be left unquoted, but you can also wrap them in single (') or double (") quotation marks.
If you want to write a string that spans multiple lines and you want to preserve the line breaks, use the pipe symbol (|) and make sure that the message is indented!
|
  I am a message that spans multiple lines
  I go on and on across lines
  and lines
  and more lines
If you have a string in a YAML file that spans multiple lines for readability, but you want the parser to interpret it as a single-line string, you can use the > character instead of |, which will replace each line break with a space.
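A short sketch contrasting the two block styles (literal and folded are illustrative variable names), assuming yaml.load()'s standard handling of block scalars:

```r
# The literal style (|) keeps the line breaks; the folded style (>) joins the
# lines with spaces.
literal <- yaml::yaml.load("msg: |\n  line one\n  line two\n")
folded  <- yaml::yaml.load("msg: >\n  line one\n  line two\n")
cat(literal$msg)
cat(folded$msg)
```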
Numbers express numerical data, and in YAML, these include integers (whole numbers), floats (numbers with a decimal point), exponentials, octals, and hexadecimals.
# integer
19
# float
8.7
# exponential
4.5e+13
# octal
0o23
# hexadecimal
0xFF
Booleans in YAML, as in other programming languages, have one of two states and are expressed with either true or false. Words like true and false are keywords in YAML, so don’t surround them with quotation marks if you want them interpreted as Booleans. Null values are expressed with the keyword null or the tilde character, ~.
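A small sketch of how these parse in R, assuming the yaml package's default handlers:

```r
# Unquoted true becomes a logical, the quoted 'false' stays a character string,
# and ~ becomes NULL.
str(yaml::yaml.load("a: true\nb: 'false'\nc: ~"))
```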
Collections in YAML can be sequences (lists/arrays) or mappings (dictionaries/hashes). To write a sequence, use a dash (-) followed by a space:
- HTML
- CSS
- JavaScript
Each item in the sequence (list) is placed on a separate line, with a dash in front of the value and each item in the list is on the same level.
You can create a nested sequence (remember, use spaces - not tabs - to create the levels of indentation):
- HTML
- CSS
- JavaScript
  - React
  - Angular
  - Vue
Mappings allow you to list keys with values. Key/value pairs are the building blocks of YAML documents. Use a colon (:) followed by a space to create key/value pairs:
Employees:
  name: John Doe
  age: 23
  country: USA
Combining the two to create a list of objects:
Employees:
  - name: John Doe
    department: Engineering
    country: USA
  - name: Kate Kateson
    department: IT support
    country: United Kingdom
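Reading that mapping of sequences back into R gives a list of named lists (a quick sketch using yaml.load(); emp is an illustrative name):

```r
# Each "- name: ..." block becomes one named list under Employees.
emp <- yaml::yaml.load("
Employees:
  - name: John Doe
    department: Engineering
    country: USA
  - name: Kate Kateson
    department: IT support
    country: United Kingdom
")
str(emp)
```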
Read a test YAML file.
dat <- yaml::read_yaml("data/nested_list.yml")
dat$root
$simple_list
[1] "one" "two" "three"
$simple_list_2
[1] "four" "five" "six"
$nested_list
$nested_list$one
[1] "one" "two" "three"
$nested_list$two
[1] "four" "five" "six"
$nested_list_2
$nested_list_2$one
$nested_list_2$one$two
[1] "one" "two" "three"
$nested_list_2$one$three
[1] "four" "five" "six"
Return only a specific “level” by using names().
names(dat$root)
[1] "simple_list" "simple_list_2" "nested_list" "nested_list_2"
names(dat$root$nested_list)
[1] "one" "two"
Notes from JSON Defined.
JavaScript Object Notation (JSON) is a data-exchange format that makes it possible to transfer populated data structures between different languages/tools.
JSON can be used in JavaScript programs without any need for parsing or serialising. It is a text-based way of representing JavaScript object literals, arrays, and scalar data.
JSON is relatively easy to read and write, while also easy for software to parse and generate. It is often used for serialising structured data and exchanging it over a network, typically between a server and web applications.
At the granular level, JSON consists of data types.
String - composed of Unicode characters, with backslash (\) escaping.
{ "name" : "Bob" }
Number - a JSON number follows JavaScript’s double-precision floating-point format.
{
"number_1" : 210,
"number_2" : 215,
"number_3" : 21.05,
"number_4" : 10.05
}
Boolean - either true or false, not surrounded by quotes; if quoted, they are treated as string values.
{ "AllowPartialShipment" : false }
Null - empty value and can be used when there is no value assigned to a key.
{ "Special Instructions" : null }
Object - a set of name/value pairs inserted between curly braces ({}). The keys must be strings and should be unique, and the pairs are separated by commas.
{
  "Influencer" : { "name" : "Jaxon", "age" : "42", "city" : "New York" }
}
Array - an ordered collection of values, enclosed in square brackets ([]).
{
  "Influencers" : [
    {
      "name" : "Jaxon",
      "age" : 42,
      "Works At" : "Tech News"
    },
    {
      "name" : "Miller",
      "age" : 35,
      "Works At" : "IT Day"
    }
  ]
}
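Parsing the array example with jsonlite turns the array of objects into a data frame (a sketch; influencers is an illustrative name and simplification is fromJSON's default behaviour):

```r
# With simplifyVector = TRUE (the default for fromJSON), the array of objects
# becomes a data frame with one row per influencer.
influencers <- '{
  "Influencers" : [
    { "name" : "Jaxon", "age" : 42, "Works At" : "Tech News" },
    { "name" : "Miller", "age" : 35, "Works At" : "IT Day" }
  ]
}'
jsonlite::fromJSON(influencers)
```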
JSON is perfect for storing temporary data. For example, temporary data can be user-generated data, such as a submitted form on a website. JSON can also be used as a data format for any programming language to provide a high level of interoperability.
A website database has a customer’s mailing address, but the address needs to be verified via an API to make sure it is valid. Send the address data in JSON format to the address validation service API.
When developing applications, each application needs the credentials to connect to a database as well as a log file path. The credentials and the file path can be specified in a JSON file.
JSON simplifies complex documents down to the components that have been identified as meaningful, converting the process of data extraction into a predictable and human-readable JSON file.
JSON has gained momentum in API code programming and web services because it helps in faster data interchange and web service results.
It is text-based, lightweight, and has an easy-to-parse data format requiring no additional code for parsing. For web services, the need to return and display a lot of data makes JSON the ideal choice.
A document database is a type of nonrelational database designed to store, retrieve, and manage document-oriented information. Rather than having a schema defined upfront, document databases allow for storing data in collections consisting of documents. NoSQL databases and JSON databases are types of document databases.
Document databases are often popular among developers because they store data in a document-model format (semi-structured) rather than relational (structured).
Document databases offer more flexibility, because developers do not have to plan out the schemas ahead of time and they can use the same format they are using in their application code. This means the careful planning of a SQL database is not as necessary, which makes document databases useful for rapidly evolving schemas, which can be common in software development. However, this can come at the cost of speed, size, and specificity.
Applications that use different JSON data types and JSON-oriented query language can interact with data stored in a JSON document database. The JSON document database also provides native support for JSON.
Characteristics that define a JSON document database include native support for JSON and storage of JSON in standard SQL column types (BLOB, VARCHAR2, CLOB, or binary JSON in 21c).

Storing JSON data in a JSON document database makes use of columns whose data types are VARCHAR2, CLOB, BLOB, or binary JSON in 21c. The choice of which to use is usually determined by the size of the JSON documents. Storing JSON data in the database using standard SQL data types means that JSON data can be manipulated like any other data type.
JSON data can be managed and manipulated with tables in a JSON document database, regardless of the data type. The choice of which table to use is typically motivated by the size of the JSON documents.
sessionInfo()
R version 4.4.0 (2024-04-24)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 22.04.4 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so; LAPACK version 3.10.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
time zone: Etc/UTC
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] rjson_0.2.21 tidyjson_0.3.2 yaml_2.3.8 jsonlite_1.8.8
[5] workflowr_1.7.1
loaded via a namespace (and not attached):
[1] dplyr_1.1.4 compiler_4.4.0 promises_1.3.0 tidyselect_1.2.1
[5] Rcpp_1.0.12 stringr_1.5.1 git2r_0.33.0 assertthat_0.2.1
[9] tidyr_1.3.1 callr_3.7.6 later_1.3.2 jquerylib_0.1.4
[13] fastmap_1.2.0 R6_2.5.1 generics_0.1.3 knitr_1.46
[17] tibble_3.2.1 rprojroot_2.0.4 bslib_0.7.0 pillar_1.9.0
[21] rlang_1.1.3 utf8_1.2.4 cachem_1.1.0 stringi_1.8.4
[25] httpuv_1.6.15 xfun_0.44 getPass_0.2-4 fs_1.6.4
[29] sass_0.4.9 cli_3.6.2 magrittr_2.0.3 ps_1.7.6
[33] digest_0.6.35 processx_3.8.4 rstudioapi_0.16.0 lifecycle_1.0.4
[37] vctrs_0.6.5 evaluate_0.23 glue_1.7.0 whisker_0.4.1
[41] fansi_1.0.6 purrr_1.0.2 rmarkdown_2.27 httr_1.4.7
[45] tools_4.4.0 pkgconfig_2.0.3 htmltools_0.5.8.1