Last updated: 2023-11-02
Checks: 7 passed, 0 failed
Knit directory: muse/ 
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20200712) was run prior to running
the code in the R Markdown file. Setting a seed ensures that any results
that rely on randomness, e.g. subsampling or permutations, are
reproducible.
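As a quick illustration (not part of the workflowr report), re-seeding before a random draw reproduces it exactly:

set.seed(20200712)
sample(100, 5)  # draw five numbers
set.seed(20200712)
sample(100, 5)  # identical draw after re-seeding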
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version e8b8d76. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for
the analysis have been committed to Git prior to generating the results
(you can use wflow_publish or
wflow_git_commit). workflowr only checks the R Markdown
file, but you know if there are other scripts or data files that it
depends on. Below is the status of the Git repository when the results
were generated:
Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    analysis/cbioportal_cache/
    Ignored:    r_packages_4.3.2/
Untracked files:
    Untracked:  analysis/cell_ranger.Rmd
    Untracked:  analysis/sleuth.Rmd
    Untracked:  analysis/tss_xgboost.Rmd
    Untracked:  code/multiz100way/
    Untracked:  data/HG00702_SH089_CHSTrio.chr1.vcf.gz
    Untracked:  data/HG00702_SH089_CHSTrio.chr1.vcf.gz.tbi
    Untracked:  data/ncrna_NONCODE[v3.0].fasta.tar.gz
    Untracked:  data/ncrna_noncode_v3.fa
    Untracked:  data/netmhciipan.out.gz
    Untracked:  export/davetang039sblog.WordPress.2023-06-30.xml
    Untracked:  export/output/
    Untracked:  women.json
Unstaged changes:
    Modified:   analysis/graph.Rmd
    Modified:   analysis/gsva.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were
made to the R Markdown (analysis/tm.Rmd) and HTML
(docs/tm.html) files. If you’ve configured a remote Git
repository (see ?wflow_git_remote), click on the hyperlinks
in the table below to view the files as they were in that past version.
| File | Version | Author | Date | Message | 
|---|---|---|---|---|
| Rmd | e8b8d76 | Dave Tang | 2023-11-02 | Vignette | 
| html | 4011b46 | Dave Tang | 2023-11-02 | Build site. | 
| Rmd | 5acae29 | Dave Tang | 2023-11-02 | Text mining using the tm package | 
The tm package is a framework for text mining applications within R.
library(tm)
Loading required package: NLP
packageVersion("tm")
[1] '0.7.11'

The crude corpus:
This data set holds 20 news articles with additional metadata from the Reuters-21578 data set. All documents belong to the topic "crude", dealing with crude oil.
data(crude)
class(crude)
[1] "VCorpus" "Corpus" 

inspect can be used to display detailed information on a corpus, a term-document matrix, or a text document.
inspect(crude[1:3])
<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 3
$`reut-00001.xml`
<<PlainTextDocument>>
Metadata:  15
Content:  chars: 527
$`reut-00002.xml`
<<PlainTextDocument>>
Metadata:  15
Content:  chars: 2634
$`reut-00004.xml`
<<PlainTextDocument>>
Metadata:  15
Content:  chars: 330

Create a Term Document Matrix.
tdm <- TermDocumentMatrix(crude)
tdm
<<TermDocumentMatrix (terms: 1266, documents: 20)>>
Non-/sparse entries: 2255/23065
Sparsity           : 91%
Maximal term length: 17
Weighting          : term frequency (tf)
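The default weighting is plain term frequency. As a hedged aside (not in the original analysis), a different weighting such as tf-idf can be requested via the control argument:

tdm_tfidf <- TermDocumentMatrix(crude, control = list(weighting = weightTfIdf))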
Convert to matrix.

crude_matrix <- as.matrix(tdm)
dim(crude_matrix)
[1] 1266   20

Check out the matrix. We need to remove the symbols!
crude_matrix[1:6, 1:6]
            Docs
Terms        127 144 191 194 211 236
  ...          0   0   0   0   0   0
  "(it)        0   0   0   0   0   0
  "demand      0   1   0   0   0   0
  "expansion   0   0   0   0   0   0
  "for         0   0   0   0   0   0
  "growth      0   0   0   0   0   0Sparsity is the number of zeros, i.e., words that are not present in documents.
prop.table(table(crude_matrix == 0))
     FALSE       TRUE 
0.08906003 0.91093997 

Functions that can be used on a TermDocumentMatrix.
methods(class = "TermDocumentMatrix")
 [1] [                     as.DocumentTermMatrix as.TermDocumentMatrix
 [4] c                     dimnames<-            Docs                 
 [7] findAssocs            findMostFreqTerms     inspect              
[10] nDocs                 nTerms                plot                 
[13] print                 t                     Terms                
[16] tm_term_score        
see '?methods' for accessing help and source code
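For example, findMostFreqTerms() returns the most frequent terms per document (a quick sketch; output not shown):

findMostFreqTerms(tdm, n = 3)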
Clean corpus.

clean_corpus <- function(x){
  x |>
    tm_map(removePunctuation) |>
    tm_map(stripWhitespace) |>
    tm_map(content_transformer(function(x) iconv(x, to = 'UTF-8', sub = 'byte'))) |>
    tm_map(removeNumbers) |>
    tm_map(removeWords, stopwords("en")) |>
    tm_map(content_transformer(tolower)) |>
    # remove stopwords again after lowercasing to catch capitalised forms
    tm_map(removeWords, c("etc", "ie", "eg", stopwords("english")))
}
tdm <- TermDocumentMatrix(clean_corpus(crude))
crude_matrix <- as.matrix(tdm)
crude_matrix[1:6, 1:6]
           Docs
Terms       127 144 191 194 211 236
  abdulaziz   0   0   0   0   0   0
  ability     0   2   0   0   0   3
  able        0   0   0   0   0   0
  abroad      0   0   0   0   0   1
  accept      0   0   0   0   0   0
  accord      0   0   0   0   0   0

findFreqTerms finds frequent terms in a document-term or term-document matrix.
findFreqTerms(x = tdm, lowfreq = 10)
 [1] "barrel"     "barrels"    "bpd"        "crude"      "dlrs"      
 [6] "government" "industry"   "kuwait"     "last"       "market"    
[11] "meeting"    "minister"   "mln"        "new"        "official"  
[16] "oil"        "one"        "opec"       "pct"        "price"     
[21] "prices"     "production" "reuter"     "said"       "saudi"     
[26] "sheikh"     "will"       "world"     Limit matrix to specific words.
inspect(
  x = DocumentTermMatrix(
    x = crude,
    control = list(dictionary = c("government", "market", "official"))
  )
)
<<DocumentTermMatrix (documents: 20, terms: 3)>>
Non-/sparse entries: 15/45
Sparsity           : 75%
Maximal term length: 10
Weighting          : term frequency (tf)
Sample             :
     Terms
Docs  government market official
  144          0      3        0
  236          0      0        5
  237          5      0        0
  242          0      1        1
  246          6      0        0
  248          0      4        1
  273          0      1        4
  349          0      1        2
  352          0      1        1
  704          0      1      0

findAssocs finds associations in a document-term or term-document matrix.
findAssocs(x = tdm, terms = 'government', corlimit = 0.8)
$government
agriculture       early    positive         say       years       since 
       1.00        1.00        1.00        0.91        0.83        0.82 

Simple analysis on the matrix.
Most common words.
head(sort(rowSums(crude_matrix), decreasing = TRUE))
   oil   said prices   opec    mln   last 
    85     73     48     42     31     24 

Correlation.
cor(crude_matrix[,1], crude_matrix[,2])
[1] 0.351976

Clustering.
set.seed(31)
my_cluster <- kmeans(x = t(crude_matrix), centers = 4)
my_cluster$cluster
127 144 191 194 211 236 237 242 246 248 273 349 352 353 368 489 502 543 704 708 
  4   3   4   4   4   3   2   4   1   3   3   4   4   4   4   4   4   4   4   4 

Check headings of cluster 3.
meta(crude, 'heading')[my_cluster$cluster == 3]
$`144`
[1] "OPEC MAY HAVE TO MEET TO FIRM PRICES - ANALYSTS"
$`236`
[1] "KUWAIT SAYS NO PLANS FOR EMERGENCY OPEC TALKS"
$`248`
[1] "SAUDI ARABIA REITERATES COMMITMENT TO OPEC PACT"
$`273`
[1] "SAUDI FEBRUARY CRUDE OUTPUT PUT AT 3.5 MLN BPD"Following the vignette.
The main structure for managing documents in the tm
package is a Corpus, which represents a collection of text
documents. A corpus is an abstract concept and there can exist several
implementations in parallel. The default implementation is the
VCorpus, which is short for Volatile Corpus. The
PCorpus implements a Permanent Corpus and the documents are
physically stored outside of R.
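A minimal PCorpus sketch (a hedged example, not from the original analysis: it assumes the filehash package is installed, and pcorpus.db is a hypothetical database file name):

pcorp <- PCorpus(
  VectorSource(c("first document", "second document")),
  dbControl = list(dbName = "pcorpus.db", dbType = "DB1")
)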
Within the corpus constructor, x must be a Source object which
abstracts the input location. tm provides a set of
predefined sources, e.g., DirSource,
VectorSource, or DataframeSource, which handle a directory, a vector (interpreting each component as a document), or data-frame-like structures (such as CSV files), respectively.
Below is an example of reading in the plain text files in the directory txt, which contains Latin (lat) texts by the Roman poet Ovid.
txt <- system.file("texts", "txt", package = "tm")
(ovid <- VCorpus(DirSource(txt, encoding = "UTF-8"), readerControl = list(language = "lat")))
<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 5

For simple examples VectorSource is quite useful, as it
can create a corpus from character vectors.
docs <- c("This is a text.", "This another one.")
VCorpus(VectorSource(docs))
<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 2

The tm package ships with several readers (e.g.,
readPlain(), readPDF(), and
readDOC()). See ?getReaders() for an
up-to-date list of available readers.
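For instance, the Ovid corpus above could name its reader explicitly (a sketch; readPlain is already the default for plain text files, so this mirrors the earlier call):

VCorpus(DirSource(txt, encoding = "UTF-8"),
        readerControl = list(reader = readPlain, language = "lat"))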
Create a data frame for creating a corpus.
my_df <- data.frame(
  doc_id = c("doc 1" , "doc 2" , "doc 3" ),
  text = c("this is a sentence", "this is another sentence", "who what how"),
  title = c("title 1" , "title 2" , "title 3" ),
  authors = c("author 1" , "author 2" , "author 3" ),
  topics = c("topic 1" , "topic 2" , "topic 3" ),
  stringsAsFactors = FALSE
)
my_df
  doc_id                     text   title  authors  topics
1  doc 1       this is a sentence title 1 author 1 topic 1
2  doc 2 this is another sentence title 2 author 2 topic 2
3  doc 3             who what how title 3 author 3 topic 3

A data frame source interprets each row of the data frame as a
document. The first column must be named doc_id and contain
a unique string identifier for each document. The second column must be
named text and contain a UTF-8 encoded string representing
the document’s content. Optional additional columns are used as document
level metadata.
(my_corpus <- Corpus(DataframeSource(my_df)))
<<SimpleCorpus>>
Metadata:  corpus specific: 1, document level (indexed): 3
Content:  documents: 3

Create a TermDocumentMatrix.
my_tdm <- TermDocumentMatrix(my_corpus)
my_tdm
<<TermDocumentMatrix (terms: 6, documents: 3)>>
Non-/sparse entries: 8/10
Sparsity           : 56%
Maximal term length: 8
Weighting          : term frequency (tf)

Check out the matrix.
as.matrix(my_tdm)
          Docs
Terms      doc 1 doc 2 doc 3
  sentence     1     1     0
  this         1     1     0
  another      0     1     0
  how          0     0     1
  what         0     0     1
  who          0     0     1

Once we have a corpus we typically want to modify the documents in
it, e.g., stemming, stop word removal, etc. In tm, all this
functionality is performed via the tm_map() function which
applies (maps) a function to all elements of the corpus; this is called
a transformation. All transformations work on single text documents and
tm_map() just applies them to all documents in a
corpus.
reut21578 <- system.file("texts", "crude", package = "tm")
reuters <- VCorpus(DirSource(reut21578, mode = "binary"), readerControl = list(reader = readReut21578XMLasPlain))
reuters
<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 20

Remove whitespace.

reuters <- tm_map(reuters, stripWhitespace)

We can use arbitrary character processing functions as
transformations as long as the function returns a text document. In this
case we use content_transformer() which provides a
convenience wrapper to access and set the content of a document.
Consequently most text manipulation functions from base R can directly
be used with this wrapper. This works for tolower() as used
here but also with gsub() which comes quite handy for a
broad range of text manipulation tasks.
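For example (an illustrative sketch, not part of the original analysis; reuters_nohyphens is a hypothetical name), gsub() can be wrapped the same way, here replacing hyphens with spaces:

reuters_nohyphens <- tm_map(reuters, content_transformer(gsub), pattern = "-", replacement = " ")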
reuters <- tm_map(reuters, content_transformer(tolower))

Remove stopwords.
reuters <- tm_map(reuters, removeWords, stopwords("english"))

From https://www.geeksforgeeks.org/introduction-to-stemming/:
Stemming is the process of producing morphological variants of a root/base word. Stemming programs are commonly referred to as stemming algorithms or stemmers. A stemming algorithm reduces the words “chocolates”, “chocolatey”, “choco” to the root word, “chocolate” and “retrieval”, “retrieved”, “retrieves” reduce to the stem “retrieve”. Stemming is an important part of the pipelining process in Natural language processing. The input to the stemmer is tokenized words. How do we get these tokenized words? Well, tokenization involves breaking down the document into different words.
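To see what a stemmer does to individual words, here is a hedged sketch using SnowballC::wordStem() (output not shown):

SnowballC::wordStem(c("chocolates", "chocolatey", "retrieval", "retrieved"), language = "english")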
This requires the SnowballC package.
library(SnowballC)
tm_map(reuters, stemDocument)
<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 20

Often it is of special interest to filter out documents satisfying
given properties. For this purpose the function tm_filter
is designed. It is possible to write custom filter functions which get
applied to each document in the corpus. Alternatively, we can create
indices based on selections and subset the corpus with them. E.g., the
following statement filters out those documents having an ID equal to
“237” and the string “INDONESIA SEEN AT CROSSROADS OVER ECONOMIC CHANGE”
as their heading.
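First, a hedged sketch of the custom-filter route (a hypothetical predicate, not from the original analysis), keeping documents whose content mentions "saudi":

tm_filter(reuters, function(doc) any(grepl("saudi", content(doc))))

And the index-based statement itself: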
idx <- meta(reuters, "id") == '237' & meta(reuters, "heading") == 'INDONESIA SEEN AT CROSSROADS OVER ECONOMIC CHANGE'
reuters[idx]
<<VCorpus>>
Metadata:  corpus specific: 0, document level (indexed): 0
Content:  documents: 1

A common approach in text mining is to create a term-document matrix
from a corpus. In the tm package the classes
TermDocumentMatrix and DocumentTermMatrix
(depending on whether you want terms as rows and documents as columns,
or vice versa) employ sparse matrices for corpora. Inspecting a
term-document matrix displays a sample, whereas as.matrix()
yields the full matrix in dense format (which can consume a lot of memory for large matrices).
dtm <- DocumentTermMatrix(reuters)
inspect(dtm)
<<DocumentTermMatrix (documents: 20, terms: 1183)>>
Non-/sparse entries: 1908/21752
Sparsity           : 92%
Maximal term length: 17
Weighting          : term frequency (tf)
Sample             :
     Terms
Docs  crude dlrs last mln oil opec prices reuter said saudi
  144     0    0    1   4  11   10      3      1    9     0
  236     1    2    4   4   7    6      2      1    6     0
  237     0    1    3   1   3    1      0      1    0     0
  242     0    0    0   0   3    2      1      1    3     1
  246     0    0    2   0   4    1      0      1    4     0
  248     0    3    1   3   9    6      7      1    5     5
  273     5    2    7   9   5    5      4      1    5     7
  489     0    1    0   2   4    0      2      1    2     0
  502     0    1    0   2   4    0      2      1    2     0
  704     0    0    0   0   3    0      2      1    3     0

Find terms that occur at least five times using the
findFreqTerms() function.
findFreqTerms(dtm, 5)
 [1] "15.8"          "abdul-aziz"    "ability"       "accord"       
 [5] "agency"        "agreement"     "ali"           "also"         
 [9] "analysts"      "arab"          "arabia"        "barrel."      
[13] "barrels"       "billion"       "bpd"           "budget"       
[17] "company"       "crude"         "daily"         "demand"       
[21] "dlrs"          "economic"      "emergency"     "energy"       
[25] "exchange"      "expected"      "exports"       "futures"      
[29] "government"    "group"         "gulf"          "help"         
[33] "hold"          "industry"      "international" "january"      
[37] "kuwait"        "last"          "market"        "may"          
[41] "meeting"       "minister"      "mln"           "month"        
[45] "nazer"         "new"           "now"           "nymex"        
[49] "official"      "oil"           "one"           "opec"         
[53] "output"        "pct"           "petroleum"     "plans"        
[57] "posted"        "present"       "price"         "prices"       
[61] "prices,"       "prices."       "production"    "quota"        
[65] "quoted"        "recent"        "report"        "research"     
[69] "reserve"       "reuter"        "said"          "said."        
[73] "saudi"         "sell"          "sheikh"        "sources"      
[77] "study"         "traders"       "u.s."          "united"       
[81] "west"          "will"          "world"        Find associations (i.e., terms which correlate) with at least 0.8
correlation for the term “opec” using the findAssocs()
function.
findAssocs(dtm, "opec", 0.8)$opec
  meeting emergency       oil      15.8  analysts    buyers      said   ability 
     0.88      0.87      0.87      0.85      0.85      0.83      0.82      0.80 

Term-document matrices tend to get very big even for normal-sized data sets, so the tm package provides a method to remove sparse terms, i.e., terms occurring in only very few documents. Normally, this reduces the matrix dramatically without losing significant relations inherent to the matrix:
inspect(removeSparseTerms(dtm, 0.4))
<<DocumentTermMatrix (documents: 20, terms: 3)>>
Non-/sparse entries: 58/2
Sparsity           : 3%
Maximal term length: 6
Weighting          : term frequency (tf)
Sample             :
     Terms
Docs  oil reuter said
  127   5      1    1
  144  11      1    9
  236   7      1    6
  242   3      1    3
  246   4      1    4
  248   9      1    5
  273   5      1    5
  352   5      1    1
  489   4      1    2
  502   4      1    2

A dictionary is a (multi-)set of strings. It is often used to denote
relevant terms in text mining. We represent a dictionary with a
character vector which may be passed to the
DocumentTermMatrix() constructor as a control argument.
Then the created matrix is tabulated against the dictionary, i.e., only
terms from the dictionary appear in the matrix. This makes it possible to restrict the dimensions of the matrix a priori and to focus on specific terms for distinct text mining contexts.
inspect(DocumentTermMatrix(reuters, list(dictionary = c("prices", "crude", "oil"))))
<<DocumentTermMatrix (documents: 20, terms: 3)>>
Non-/sparse entries: 41/19
Sparsity           : 32%
Maximal term length: 6
Weighting          : term frequency (tf)
Sample             :
     Terms
Docs  crude oil prices
  127     2   5      3
  144     0  11      3
  236     1   7      2
  248     0   9      7
  273     5   5      4
  352     0   5      4
  353     2   4      1
  489     0   4      2
  502     0   4      2
  543     2   2      2
sessionInfo()
R version 4.3.2 (2023-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04.3 LTS
Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so;  LAPACK version 3.10.0
locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
time zone: Etc/UTC
tzcode source: system (glibc)
attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     
other attached packages:
[1] SnowballC_0.7.1 tm_0.7-11       NLP_0.2-1       workflowr_1.7.1
loaded via a namespace (and not attached):
 [1] jsonlite_1.8.7    compiler_4.3.2    promises_1.2.1    Rcpp_1.0.11      
 [5] slam_0.1-50       xml2_1.3.5        stringr_1.5.0     git2r_0.32.0     
 [9] parallel_4.3.2    callr_3.7.3       later_1.3.1       jquerylib_0.1.4  
[13] yaml_2.3.7        fastmap_1.1.1     R6_2.5.1          knitr_1.45       
[17] tibble_3.2.1      rprojroot_2.0.3   bslib_0.5.1       pillar_1.9.0     
[21] rlang_1.1.1       utf8_1.2.4        cachem_1.0.8      stringi_1.7.12   
[25] httpuv_1.6.12     xfun_0.40         getPass_0.2-2     fs_1.6.3         
[29] sass_0.4.7        cli_3.6.1         magrittr_2.0.3    ps_1.7.5         
[33] digest_0.6.33     processx_3.8.2    rstudioapi_0.15.0 lifecycle_1.0.3  
[37] vctrs_0.6.4       evaluate_0.22     glue_1.6.2        whisker_0.4.1    
[41] fansi_1.0.5       rmarkdown_2.25    httr_1.4.7        tools_4.3.2      
[45] pkgconfig_2.0.3   htmltools_0.5.6.1