Last updated: 2021-04-28

Checks: 7 passed, 0 failed

Knit directory: STUtility_web_site/

This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20191031) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version f91ff8c. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .DS_Store
    Ignored:    analysis/.DS_Store
    Ignored:    analysis/manual_annotation.png
    Ignored:    analysis/visualization_3D.Rmd
    Ignored:    pre_data/

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/getting_started.Rmd) and HTML (docs/getting_started.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
Rmd f91ff8c Ludvig Larsson 2021-04-28 wflow_publish(files = c(“analysis/index.Rmd”, “analysis/about.Rmd”,
html 1e9c615 Ludvig Larsson 2021-04-28 Build site.
html 34884bd Ludvig Larsson 2020-10-06 Build site.
Rmd 36493b2 Ludvig Larsson 2020-10-06 wflow_publish(files = paste0(“analysis/”, c(“index.Rmd”, “about.Rmd”,
html d13dabd Ludvig Larsson 2020-06-05 Build site.
html 60407bb Ludvig Larsson 2020-06-05 Build site.
html e339ebc Ludvig Larsson 2020-06-05 Build site.
html 6606127 Ludvig Larsson 2020-06-05 Build site.
html f0ea9c1 Ludvig Larsson 2020-06-05 Build site.
html 651f315 Ludvig Larsson 2020-06-05 Build site.
html a6a0c51 Ludvig Larsson 2020-06-05 Build site.
html f3c5cb4 Ludvig Larsson 2020-06-05 Build site.
html f89df7e Ludvig Larsson 2020-06-05 Build site.
html 1e67a7c Ludvig Larsson 2020-06-04 Build site.
html f71b0c0 Ludvig Larsson 2020-06-04 Build site.
html 8a54a4d Ludvig Larsson 2020-06-04 Build site.
Rmd 3c0ea68 Ludvig Larsson 2020-06-04 update website
html 5e466eb Ludvig Larsson 2020-06-04 Build site.
Rmd 4819d2b Ludvig Larsson 2020-06-04 update website
html 377408d Ludvig Larsson 2020-06-04 Build site.
html ed54ffb Ludvig Larsson 2020-06-04 Build site.
html f14518c Ludvig Larsson 2020-06-04 Build site.
html efd885b Ludvig Larsson 2020-06-04 Build site.
html 4f42429 Ludvig Larsson 2020-06-04 Build site.
Rmd 3660612 Ludvig Larsson 2020-06-04 update website
html cd4bf7d jbergenstrahle 2020-01-17 Build site.
Rmd 74b6860 jbergenstrahle 2020-01-17 wflow_publish(“analysis/getting_started.Rmd”)
html 7d3885a jbergenstrahle 2020-01-12 Build site.
Rmd 2eb9ead jbergenstrahle 2020-01-12 Updated InputFromTable info
html fb06450 jbergenstrahle 2020-01-11 Build site.
Rmd baff98e jbergenstrahle 2019-12-16 adding
Rmd 5cb8ab1 jbergenstrahle 2019-12-02 update2
Rmd 8f9876e jbergenstrahle 2019-11-29 Update
Rmd 03c9b7c jbergenstrahle 2019-11-19 Added JB
html 03c9b7c jbergenstrahle 2019-11-19 Added JB
html be8be1d Ludvig Larsson 2019-10-31 Build site.
html f42af97 Ludvig Larsson 2019-10-31 Build site.
html 0754921 Ludvig Larsson 2019-10-31 Build site.
html 908fe2c Ludvig Larsson 2019-10-31 Build site.
html 54787b5 Ludvig Larsson 2019-10-31 Build site.
html d96da86 Ludvig Larsson 2019-10-31 Build site.
Rmd e4e84dc Ludvig Larsson 2019-10-31 Added theme
html 0c79353 Ludvig Larsson 2019-10-31 Build site.
Rmd 6257541 Ludvig Larsson 2019-10-31 Publish the initial files for myproject
html 2241363 Ludvig Larsson 2019-10-31 Build site.
Rmd 8bae4fa Ludvig Larsson 2019-10-31 Publish the initial files for myproject
html a53305c Ludvig Larsson 2019-10-31 Build site.
html 6f61b95 Ludvig Larsson 2019-10-31 Build site.
Rmd 429c12c Ludvig Larsson 2019-10-31 Publish the initial files for myproject

First you need to load the library into your R session.

library(STutility)

10X Visium platform


Input files


10X Visium data output is produced with the spaceranger command line tool from raw fastq files. The output includes a number of files, and the ones that need to be imported into R for STUtility are the following:

  1. “filtered_feature_bc_matrix.h5” or “raw_feature_bc_matrix.h5” : Gene expression matrices in .h5 format containing the raw UMI counts for each spot and each gene. The “filtered_feature_bc_matrix.h5” matrix contains only the spots located under the tissue, while the “raw_feature_bc_matrix.h5” matrix contains all spots from the entire capture area. NOTE: if you want to include spots outside of the tissue, you need to set disable.subset = TRUE when running InputFromTable.
  2. “tissue_positions_list.csv” : contains the capture-spot barcodes and capture-spot coordinates.
  3. “tissue_hires_image.png” : H&E image in PNG format with a width of 2000 pixels.
  4. “scalefactors_json.json” : contains the scale factors relating the spot coordinates to the H&E images of different resolutions. For example, “tissue_hires_scalef”: 0.1 means that the pixel coordinates in the “tissue_positions_list.csv” table should be multiplied by 0.1 to match the size of the “tissue_hires_image.png” file.

To use the full range of functions within STUtility, all four files are needed for each sample. However, all data analysis steps that do not involve the H&E image can be performed with only the count file as input. To read in the 10x Visium .h5 files, the package hdf5r needs to be installed: BiocManager::install("hdf5r").
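Before reading anything in, a quick sanity check can save time; the sketch below assumes a hypothetical sample directory laid out with the four files listed above directly inside it.

# Install hdf5r if it is not already available (requires the BiocManager package)
if (!requireNamespace("hdf5r", quietly = TRUE)) BiocManager::install("hdf5r")

# Check that the four required files exist for a sample (hypothetical path)
sample_dir <- "path/to/sample_1"
file.exists(file.path(sample_dir, c("filtered_feature_bc_matrix.h5",
                                    "tissue_positions_list.csv",
                                    "tissue_hires_image.png",
                                    "scalefactors_json.json")))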


To follow along with this tutorial using a test data set, go to the 10x Dataset repo and download the following two files:

  • Feature / cell matrix HDF5 (filtered)
  • Spatial imaging data (.zip)
    • tissue_hires_image
    • tissue_positions_list
    • scalefactors_json

The .zip file contains the H&E image (in two resolutions: “tissue_lowres_image” and “tissue_hires_image”), the “tissue_positions_list” with pixel coordinates for the original TIF image, and the “scalefactors_json.json” that contains the scale factors used to derive the pixel coordinates for the hires image. There are three alternatives for handling the scaling of the pixel coordinates. Either you manually open the JSON file, note the scale factor and state these numbers in a column of the infoTable named “scaleVisium” (see below); or you add a column named “json” with paths to the “scalefactors_json.json” files; or, as a third option, you manually pass the values to the function InputFromTable (see ?InputFromTable). If the scale factors are incorrect, you will end up with misaligned coordinates. This could for example happen if you try to use the original H&E image instead of the “tissue_hires_image.png” when running InputFromTable.
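As a sketch of the first alternative, the scale factor can be read directly from the JSON file; this assumes the jsonlite package is installed, a hypothetical file path, and that the infoTable described below has already been created.

library(jsonlite)

# Read the scale factors for one sample and pick out the hires factor
sf <- fromJSON("path/to/sample_1/scalefactors_json.json")

# Store it in a "scaleVisium" column of the infoTable
# (use one value per sample if the factors differ between samples)
infoTable$scaleVisium <- sf$tissue_hires_scalef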

In this vignette we have used the Mouse Brain Serial Section 1 and 2 (Sagittal-Posterior) datasets.

Prepare data

The recommended method to read the files into R is via the creation of a data.frame that we will call the infoTable. There are four columns in this table that are required for Visium data: “samples”, “spotfiles”, “imgs” and “json”. These columns specify the paths to the required input files.

                            samples                                  spotfiles
1 path/to/sample_1/count_file_1.h5  path/to/sample_1/tissue_positions_list.csv
2  path/to/sample_2/count_file_2.h5 path/to/sample_2/tissue_positions_list.csv
                                     imgs
1 path/to/sample_1/tissue_hires_image.png
2 path/to/sample_2/tissue_hires_image.png
                                     json
1 path/to/sample_1/scalefactors_json.json
2 path/to/sample_2/scalefactors_json.json

Any number of extra columns can be added to the infoTable data.frame to include meta data in your Seurat object, e.g. “gender”, “age”, “slide_id” etc. These columns can be named as you like, but they must not be called “samples”, “spotfiles”, “imgs” or “json”. A sketch of how such a table could be assembled is shown below.
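A minimal sketch of how such a table could be put together, assuming two samples stored under the hypothetical directories below; the “slide_id” column is just an example of optional meta data.

sample_dirs <- c("path/to/sample_1", "path/to/sample_2")

infoTable <- data.frame(
  samples   = file.path(sample_dirs, "filtered_feature_bc_matrix.h5"),
  spotfiles = file.path(sample_dirs, "tissue_positions_list.csv"),
  imgs      = file.path(sample_dirs, "tissue_hires_image.png"),
  json      = file.path(sample_dirs, "scalefactors_json.json"),
  slide_id  = c("slide_1", "slide_2"),  # optional meta data column (example)
  stringsAsFactors = FALSE
)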

We are now ready to load our samples and create a “Seurat” object using our infoTable data.frame.

Here, we demonstrate the creation of the Seurat object, while also including some filtering (see section “Quality Control” for more information on filtering):

  • Keeping the genes that are found in at least 5 capture spots and have a total count value >= 100.
  • Keeping the capture-spots that contain >= 500 total transcripts.

Note that you have to specify which platform the data comes from. The default platform is 10X Visium, but if you wish to run data from the older ST platforms, there is support for “1k” and “2k” arrays. You can also mix datasets from different platforms by specifying one of “Visium”, “1k” or “2k” in a separate column of the infoTable named “platform” (sketched below). You just have to make sure that the datasets use gene symbols that follow the same nomenclature so that the count matrices can be merged.
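If you do mix platforms, the array type can be stated per sample in a “platform” column of the infoTable; a minimal sketch, assuming the infoTable from above and that the second sample comes from a “2k” array:

infoTable$platform <- c("Visium", "2k")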

se <- InputFromTable(infotable = infoTable, 
                      min.gene.count = 100, 
                      min.gene.spots = 5,
                      min.spot.count = 500,
                      platform =  "Visium")


Once you have created a Seurat object you can process and visualize your data just like in a scRNA-seq experiment and make use of the plethora of functions provided in the Seurat package. There are many vignettes available at the Seurat web site to help you get started.
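As an illustration, a typical downstream Seurat workflow could look like the following; this is only a sketch, and the normalization method and parameter choices are illustrative rather than specific to this dataset.

se <- SCTransform(se)
se <- RunPCA(se)
se <- FindNeighbors(se, reduction = "pca", dims = 1:30)
se <- FindClusters(se)
se <- RunUMAP(se, reduction = "pca", dims = 1:30)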

For example, if you wish to explore the spatial distribution of various features on the array coordinates you can do this using the ST.FeaturePlot() function. Features include any column stored in the “meta.data” slot, dimensionality reduction objects or gene expression vectors.


ST.FeaturePlot(se, features = c("nFeature_RNA"), dark.theme = T, cols = c("black", "darkblue", "cyan", "yellow", "red", "darkred"), ncol = 2)

[Figure: ST.FeaturePlot output showing the spatial distribution of nFeature_RNA across the array coordinates.]
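The same function can also be used to color the spots by gene expression. The gene below (“Nrgn”) is only an example and assumes that it is present in the filtered count matrix.

ST.FeaturePlot(se, features = "Nrgn", dark.theme = TRUE, ncol = 2)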


Older ST platforms

In general, using STUtility for the old ST platform data follows the same workflow as for the 10X Visium arrays. The only difference is when loading the data into R.

Input files

The original ST workflow produces the following three output files:

  1. Count file in TSV format (raw, UMI-filtered counts for each gene and capture spot)
  2. Spot detector output (File with spatial pixel coordinate information produced using the Spot Detector webtool)
  3. H&E image (same as the one used for spot detection)

Prepare data

The recommended method to read the files into R is via the creation of an “infoTable”, which is a table with at least three columns: “samples”, “spotfiles” and “imgs”.

Test data is provided:

infoTable <- read.table("~/STUtility/inst/extdata/metaData_mmBrain.csv", sep=";", header=T, stringsAsFactors = F)[c(1, 5, 6, 7), ]

Load data and convert from Ensembl IDs to gene symbols

The provided count matrices use Ensembl IDs (with version id) as gene identifiers. Gene symbols are often preferred for easier reading, so we have included an option to convert the gene IDs directly when creating the Seurat object. The data.frame object required for the conversion should have one column called “gene_id” matching the original gene IDs and a column called “gene_name” with the desired symbols. You also need to make sure that the values in these columns are unique, otherwise the conversion will not work (a quick check is sketched below). We have provided such a table that you can use to convert between Ensembl IDs and MGI symbols (Mus musculus gene nomenclature).


# Transformation table for gene IDs
ensids <- read.table(file = list.files(system.file("extdata", package = "STutility"), 
                                       full.names = T, pattern = "mouse_genes"), 
                     header = T, sep = "\t", stringsAsFactors = F)
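Since the conversion relies on unique identifiers, a quick sanity check of the table could look like this (sketch; both calls should return FALSE before the table is used):

any(duplicated(ensids$gene_id))
any(duplicated(ensids$gene_name))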

We are now ready to load our samples and create a “Seurat” object.

Here, we demonstrate the creation of the Seurat object, while also including some filtering:

  • Keeping the genes that are found in at least 5 capture spots and have a total count value >= 100.
  • Keeping the capture-spots that contain >= 500 total transcripts.

Note that we specify the “2k” array platform and, since the genes are in the columns of these count matrices, we set transpose = TRUE.

se <- InputFromTable(infotable = infoTable, 
                      min.gene.count = 100, 
                      min.gene.spots = 5,
                      min.spot.count = 500, 
                      annotation = ensids, 
                      platform = "2k",
                      transpose = TRUE)


As before, once the Seurat object has been created, you can process and visualize the data just like in a scRNA-seq experiment using the functions provided in the Seurat package.

Some of the functionalities provided in the Seurat package are not yet supported by STUtility, such as dataset integration and multimodal analysis. These methods should in principle work if you treat the data like a scRNA-seq experiment, but you will not be able to make use of the image related data or the spatial visualization functions.

For example, if you wish to explore the spatial distribution of various features on the array coordinates you can do this using the ST.FeaturePlot() function.

ST.FeaturePlot(se, features = c("nFeature_RNA"), dark.theme = T, cols = c("darkblue", "cyan", "yellow", "red", "darkred"), ncol = 2)

[Figure: ST.FeaturePlot output showing the spatial distribution of nFeature_RNA across the array coordinates of the “2k” array sections.]