Last updated: 2024-11-01
Checks: 7 passed, 0 failed
Knit directory:
locust-comparative-genomics/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20221025) was run prior to running
the code in the R Markdown file. Setting a seed ensures that any results
that rely on randomness, e.g. subsampling or permutations, are
reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 1aaa476. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for
the analysis have been committed to Git prior to generating the results
(you can use wflow_publish or
wflow_git_commit). workflowr only checks the R Markdown
file, but you know if there are other scripts or data files that it
depends on. Below is the status of the Git repository when the results
were generated:
Ignored files:
Ignored: .DS_Store
Ignored: .RData
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/.DS_Store
Ignored: analysis/.Rhistory
Ignored: data/.DS_Store
Ignored: data/.Rhistory
Ignored: data/OLD/.DS_Store
Ignored: data/OLD/DEseq2_SCUBE_SCUBE_THORAX_STARnew_features/.DS_Store
Ignored: data/OLD/DEseq2_SGREG_SGREG_HEAD_STARnew_features/.DS_Store
Ignored: data/OLD/DEseq2_SGREG_SGREG_THORAX_STARnew_features/.DS_Store
Ignored: data/OLD/americana/.DS_Store
Ignored: data/OLD/americana/deg_counts/.DS_Store
Ignored: data/OLD/americana/deg_counts/STAR_newparams/.DS_Store
Ignored: data/OLD/cubense/deg_counts/STAR/cubense/featurecounts/
Ignored: data/OLD/cubense/deg_counts/STAR/gregaria/
Ignored: data/OLD/gregaria/.DS_Store
Ignored: data/OLD/gregaria/deg_counts/.DS_Store
Ignored: data/OLD/gregaria/deg_counts/STAR/.DS_Store
Ignored: data/OLD/gregaria/deg_counts/STAR/gregaria/.DS_Store
Ignored: data/OLD/gregaria/deg_counts/STAR_newparams/.DS_Store
Ignored: data/OLD/piceifrons/.DS_Store
Ignored: data/annotation/
Ignored: data/list/.DS_Store
Ignored: figures/
Ignored: tables/
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were
made to the R Markdown (analysis/3_rna-mapping.Rmd) and
HTML (docs/3_rna-mapping.html) files. If you’ve configured
a remote Git repository (see ?wflow_git_remote), click on
the hyperlinks in the table below to view the files as they were in that
past version.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | f01f1cf | Maeva TECHER | 2024-11-01 | Adding new files and docs |
| html | f01f1cf | Maeva TECHER | 2024-11-01 | Adding new files and docs |
| html | ba35b82 | Maeva A. TECHER | 2024-06-20 | Build site. |
| html | 3a3cc33 | Maeva A. TECHER | 2024-05-15 | Build site. |
| Rmd | 15b0b8b | Maeva A. TECHER | 2024-05-15 | wflow_publish("analysis/3_rna-mapping.Rmd") |
| html | db5ed7e | Maeva A. TECHER | 2024-05-15 | Build site. |
| Rmd | 9fb3455 | Maeva A. TECHER | 2024-05-15 | wflow_publish("analysis/3_rna-mapping.Rmd") |
| html | 4901b2b | Maeva A. TECHER | 2024-05-14 | Build site. |
| Rmd | b71a166 | Maeva A. TECHER | 2024-05-14 | wflow_publish("analysis/3_rna-mapping.Rmd") |
| html | 0c0bf91 | Maeva A. TECHER | 2024-05-14 | Build site. |
| Rmd | be09a11 | Maeva A. TECHER | 2024-05-14 | update markdown |
| html | be09a11 | Maeva A. TECHER | 2024-05-14 | update markdown |
| html | 0638be7 | Maeva A. TECHER | 2024-03-05 | Build site. |
| Rmd | f2184f2 | Maeva A. TECHER | 2024-03-05 | wflow_publish("analysis/3_rna-mapping.Rmd") |
| html | df94db2 | Maeva A. TECHER | 2024-02-20 | adding markdown qc |
| html | e39d280 | Maeva A. TECHER | 2024-01-30 | Build site. |
| html | f701a01 | Maeva A. TECHER | 2024-01-30 | reupdate |
| html | 5ea7338 | Maeva A. TECHER | 2024-01-30 | Build site. |
| Rmd | 7f93ae2 | Maeva A. TECHER | 2024-01-30 | wflow_publish("analysis/3_rna-mapping.Rmd") |
| html | c11436d | Maeva A. TECHER | 2024-01-24 | Build site. |
| html | 1b09cbe | Maeva A. TECHER | 2024-01-24 | remove |
| html | 71b33ec | Maeva A. TECHER | 2023-12-18 | Build site. |
| Rmd | 53877fa | Maeva A. TECHER | 2023-12-18 | add pages |
library("knitr")
library("rmdformats")
library("tidyverse")
library("DT") # for making interactive search table
library("plotly") # for interactive plots
library("ggthemes") # for theme_calc
library("reshape2")
## Global options
options(max.print="10000")
knitr::opts_chunk$set(
echo = TRUE,
message = FALSE,
warning = FALSE,
cache = FALSE,
comment = "",
prompt = FALSE,
tidy = TRUE
)
opts_knit$set(width=75)
The following workflow results from standardization and several independent computational tests by Maeva Techer (Texas A&M University) and David Bellini (BCM). We will analyse transcriptomes from six Schistocerca sister species, which may not have 1:1 orthologs for every gene. We therefore propose two analysis strategies to find shared evolutionary patterns in gene expression changes associated with conspecific density.
The first method consists of mapping
all transcript reads to a common RefSeq genome, which is
also the closest to the LCA: the swarming desert locust S.
gregaria.
Pros: All gene LOC IDs are shared, which eases downstream
overlap/Venn diagram analyses for cross-species comparison.
Cons: Rare species-specific transcripts might be disregarded
and we lose resolution.
The second method consists of mapping
all transcript reads to their respective RefSeq genomes
and then matching genes across species using ortholog predictions made with
OrthoFinder.
Pros: Rare species-specific transcripts are well represented
and fine resolution is preserved. Highly conserved gene families are
well represented.
Cons: More time is spent before achieving the cross-species
comparison, without a guarantee that the orthologous relationships are
exact in the absence of manual curation.
For both methods we will use the same pipeline except that we will change the reference genome.
We used STAR to map reads either to the common
S. gregaria reference genome (hereafter strategy 1) or to each
species' own reference genome (strategy 2). For both strategies, we
first index the genomes with parameters that rely on observations made
during curation of the S. americana genome annotation.
We index each genome with the following Snakemake rule, which lets us run the step for multiple species:
########################################
# Snakefile rule
########################################
rule STAR_index:
input:
ref_genome = lambda wildcards: REFDir + "/" + species_to_genome[wildcards.species] + "_genomic.fna",
annotation = lambda wildcards: REFDir + "/" + species_to_genome[wildcards.species] + "_genomic.gtf"
output:
index_dir = directory(REFDir + "/index/{species}/STAR")
params:
genome_dir = REFDir + "/index/{species}/STAR"
shell:
"""
module purge
ml GCC/12.2.0 STAR/2.7.10b
STAR --runMode genomeGenerate \\
--runThreadN 20 \\
--genomeDir {params.genome_dir} \\
--genomeFastaFiles {input.ref_genome} \\
--sjdbGTFfile {input.annotation} \\
--alignIntronMax 2500000 \\
--sjdbOverhang 149
"""
The parameters were chosen for the following reasons:
--runMode genomeGenerate indicates that we are
building a genome index.
--alignIntronMax 2500000: David found evidence that
introns can be as long as ~2 million bp.
--sjdbGTFfile $ANNOTATION: we use this parameter to
indicate that the annotation file should already be accounted
for at indexing time.
--sjdbOverhang 149: this should ideally be
(mate_length - 1), and our reads are PE150.
We kept the rest at default values.
How did we find that most species' introns were large?
For each genome, we calculated intron sizes with the following awk/BEDTools scripts; the example below uses the S. nitens genome:
ml GCC/12.3.0 BEDTools/2.31.0
# Step 1: Extract exons and the corresponding transcript (mRNA) IDs
awk '$3 == "exon" {split($9, a, ";"); gene_id=""; for (i in a) {gsub(/"| /, "", a[i]); if (a[i] ~ /Parent=/) {split(a[i], b, "="); gene_id=b[2]; }} print $1"\t"$4-1"\t"$5"\t"gene_id"\t.\t"$7}' GCF_023898315.1_iqSchNite1.1_genomic.gff > exons.bed
# Step 2: Sort the BED file by chromosome and start position
sort -k1,1 -k2,2n exons.bed > exons_sorted.bed
# Step 3: Calculate intron sizes and find the largest intron
echo "Largest Intron:"
bedtools cluster -i exons_sorted.bed | bedtools groupby -g 4 -c 2,3 -o min,max | \
awk '{intron_size = $3 - $2; if (intron_size > 0) print $1"\t"$2"\t"$3"\t"intron_size}' | \
sort -k4,4nr | head -n 1
# Step 4: Calculate intron sizes and find the smallest intron
echo "Smallest Intron:"
bedtools cluster -i exons_sorted.bed | bedtools groupby -g 4 -c 2,3 -o min,max | \
awk '{intron_size = $3 - $2; if (intron_size > 0) print $1"\t"$2"\t"$3"\t"intron_size}' | \
sort -k4,4n | head -n 1
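The cluster/groupby pipeline above summarises exon ranges per transcript. As a cross-check, intron lengths can also be measured directly as the gaps between consecutive exons of the same transcript. Below is a minimal awk-only sketch on fabricated data (file name, coordinates, and transcript IDs are invented):

```shell
# Fabricated sorted exon BED (chrom, start, end, transcript ID),
# same layout as the exons_sorted.bed produced above.
printf 'chr1\t100\t200\trna-T1\n'   >  toy_exons.bed
printf 'chr1\t500\t600\trna-T1\n'   >> toy_exons.bed
printf 'chr1\t2000\t2100\trna-T1\n' >> toy_exons.bed
printf 'chr2\t50\t80\trna-T2\n'     >> toy_exons.bed
printf 'chr2\t90\t120\trna-T2\n'    >> toy_exons.bed

# An intron is the gap between the end of one exon and the start of the
# next exon of the same transcript; report the largest gap seen.
awk -F'\t' '
  $4 == prev_tx && $1 == prev_chr { gap = $2 - prev_end; if (gap > max) max = gap }
  { prev_tx = $4; prev_chr = $1; prev_end = $3 }
  END { print max }
' toy_exons.bed
```

Here the largest gap is 1400 bp, between the second and third exons of rna-T1.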
STAR
Here we ran some tests to check whether the choice of certain mapping parameters influences the discovery of differentially expressed genes (DEGs) in the downstream analysis (see next section). We were interested, in particular, in the effects of the specified maximum intron size and splicing modes. Our choices here rely on preliminary runs on head tissue from all species.
########################################
# Snakefile rule
########################################
# The STAR index is built independently ahead of the alignment
rule STAR_align:
input:
index = lambda wildcards: REFDir + "/index/" + wildcards.species + "/STAR",
annotation = lambda wildcards: REFDir + "/" + species_to_genome[wildcards.species] + "_genomic.gtf",
trimmed_read1 = WORKDir + "/01-{species}-trimmed-tgalore/{locust}_1_val_1.fq.gz",
trimmed_read2 = WORKDir + "/01-{species}-trimmed-tgalore/{locust}_2_val_2.fq.gz"
output:
bam = WORKDir + "/02-{species}-star/{locust}_Aligned.sortedByCoord.out.bam",
readtable = WORKDir + "/02-{species}-star/{locust}_ReadsPerGene.out.tab",
counts = WORKDir + "/03-{species}-DESeq2/{locust}_counts.txt"
params:
prefix = WORKDir + "/02-{species}-star/{locust}_"
shell:
"""
module purge
ml GCC/12.2.0 STAR/2.7.10b
STAR --runThreadN 16 \\
--genomeDir {input.index} \\
--genomeLoad NoSharedMemory \\
--limitBAMsortRAM 32000000000 \\
--outSAMtype BAM SortedByCoordinate \\
--outSAMattrRGline ID:{wildcards.locust} SM:{wildcards.locust} LB:Stranded_Total_RNA_RiboZero PL:Illumina PU:NovaSeq6000 \\
--quantMode TranscriptomeSAM GeneCounts \\
--twopassMode Basic \\
--sjdbGTFfile {input.annotation} \\
--sjdbOverhang 149 \\
--outSAMattributes NH HI AS NM MD \\
--alignIntronMax 2500000 \\
--outSAMunmapped Within \\
--readFilesCommand zcat \\
--readFilesIn {input.trimmed_read1} {input.trimmed_read2} \\
--outFileNamePrefix {params.prefix}
module purge
ml GCC/13.2.0 SAMtools/1.20
samtools index -c {output.bam}
cut -f1,4 {output.readtable} | grep -v "_" > {output.counts}
"""
########################################
# Parameters in the cluster.json file
########################################
"STAR_align":
{
"cpus-per-task" : 10,
"partition" : "medium",
"ntasks": 1,
"mem" : "100G",
"time": "0-08:00:00"
}
The parameters were chosen as follows:
--genomeDir {input.index}: Defines the directory
containing the pre-built genome index files, allowing STAR to access the
indexed reference genome for alignment.
--genomeLoad NoSharedMemory: Sets the genome loading
mode to “NoSharedMemory,” meaning the genome index is not loaded into
shared memory. This option can be used when multiple instances of STAR
will not access the same genome index in parallel.
--limitBAMsortRAM 32000000000: Limits the amount of
memory used for sorting the BAM file to 32 GB. This setting helps avoid
out-of-memory errors by controlling the RAM allocation for the sorting
steps.
--outSAMtype BAM SortedByCoordinate Specifies the
output format and sorting method for the alignment file. Here, the
output is in BAM format, sorted by genomic coordinates.
--outSAMattrRGline: Adds read group information to
the BAM header. Here:
ID:{wildcards.locust} is the read group
identifier,
SM:{wildcards.locust} specifies the sample
name,
LB:Stranded_Total_RNA_RiboZero labels the library
type,
PL:Illumina defines the platform, and
PU:NovaSeq6000 specifies the platform unit.
--quantMode TranscriptomeSAM GeneCounts Activates
two modes of quantification:
TranscriptomeSAM outputs alignments against the
transcriptome, useful for quantification tools,
GeneCounts generates a read count per gene, which is
essential for differential expression analysis.
--twopassMode Basic : Enables a two-pass alignment
mode. In the first pass, STAR discovers splice junctions, and in the
second pass, it re-aligns reads using these discovered junctions. This
mode enhances accuracy for novel splice junctions, especially in de novo
transcript discovery.
--sjdbGTFfile {input.annotation} Specifies the path
to the annotation GTF file, which provides STAR with known exon-exon
junctions to improve alignment accuracy.
--sjdbOverhang 149 Sets the overhang length to 149,
which should match the read length minus 1. This parameter defines the
length of the read portion that overlaps with known splice
junctions.
--outSAMattributes NH HI AS NM MD: Specifies
additional attributes to include in the SAM/BAM file:
NH = number of alignments for the read,
HI = index of the alignment,
AS = alignment score,
NM = edit distance to the reference,
MD = mismatch information.
--alignIntronMax 2500000: same value as used for the genome index.
--outSAMunmapped Within : Includes unmapped reads in
the output BAM file, placing them alongside mapped reads, which can be
useful for quality control and downstream analysis.
--readFilesCommand zcat : Instructs STAR to use
zcat to decompress the input files on the fly, allowing it
to read .fastq.gz files directly without manual
decompression.
--readFilesIn {input.trimmed_read1} {input.trimmed_read2}: paths to
the paired-end reads.
--outFileNamePrefix {params.prefix}: the prefix of
our output files.
After mapping, we obtained alignment statistics from the
*_Log.final.out file and filled out the metadata table with
it.
grep 'Number of input reads' *_Log.final.out
grep 'Average input read length' *_Log.final.out
grep 'Uniquely mapped reads number' *_Log.final.out
grep 'Number of reads mapped to multiple loci' *_Log.final.out
grep 'Number of reads mapped to too many loci' *_Log.final.out
grep 'Number of reads unmapped: too many mismatches' *_Log.final.out
grep 'Number of reads unmapped: too short' *_Log.final.out
grep 'Number of reads unmapped: other' *_Log.final.out
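These per-metric greps can also be collected into a single tab-separated table for the metadata sheet. A minimal sketch on a fabricated *_Log.final.out file (the "label | value" layout follows STAR's log format; the sample name and numbers here are invented):

```shell
# Fabricated STAR log (numbers invented, layout as in real *_Log.final.out files).
printf '   Number of input reads |\t41000000\n'        >  SAMPLE1_Log.final.out
printf '   Average input read length |\t298\n'         >> SAMPLE1_Log.final.out
printf '   Uniquely mapped reads number |\t35000000\n' >> SAMPLE1_Log.final.out

# One TSV row per sample: sample name plus the metrics of interest.
for log in *_Log.final.out; do
  sample=${log%_Log.final.out}
  input=$(awk -F'|' '/Number of input reads/        {gsub(/[ \t]/, "", $2); print $2}' "$log")
  mapped=$(awk -F'|' '/Uniquely mapped reads number/ {gsub(/[ \t]/, "", $2); print $2}' "$log")
  printf '%s\t%s\t%s\n' "$sample" "$input" "$mapped"
done > mapping_stats.tsv
cat mapping_stats.tsv
```

Additional columns (multi-mapping, unmapped categories) can be added the same way, one awk extraction per metric.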
Alternatively, we can use HISAT2 or pseudo-alignment with
Salmon. Both rules are included in our Snakemake workflow, but I will
focus on STAR from here on.
GeneCount
We can note that the option --quantMode GeneCounts from
STAR produces the same output as the htseq-count tool when we
use the --sjdbGTFfile option.
In the output file {locust}_ReadsPerGene.out.tab we can
obtain the read counts per gene, depending on whether our data are
unstranded (column 2) or stranded
(columns 3 and 4).
column 1: gene ID
column 2: counts for unstranded RNA-seq
column 3: counts for the 1st read strand aligned with RNA
column 4: counts for the 2nd read strand aligned with RNA (the most
common protocol nowadays)
For our pilot S. gregaria project, we know we used the Illumina Stranded Total RNA Prep kit, but we can verify this with the following code:
grep -v "N_" {locust}_ReadsPerGene.out.tab | awk '{unst+=$2;forw+=$3;rev+=$4}END{print unst,forw,rev}'
#or as a loop
for i in *_ReadsPerGene.out.tab; do echo $i; grep -v "N_" $i | awk '{unst+=$2;forw+=$3;rev+=$4}END{print unst,forw,rev}'; done
In a stranded library preparation protocol, there should be a strong imbalance between the number of reads mapped to known genes on the forward versus reverse strand.
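That rule of thumb can be checked programmatically. Below is a minimal sketch on a fabricated ReadsPerGene.out.tab (the gene names and counts are invented, and the 5x imbalance cutoff is an illustrative assumption, not a STAR default):

```shell
# Fabricated ReadsPerGene.out.tab: gene ID, unstranded, forward- and reverse-strand counts.
printf 'N_unmapped\t100\t100\t100\n' >  DEMO_ReadsPerGene.out.tab
printf 'N_noFeature\t50\t500\t40\n'  >> DEMO_ReadsPerGene.out.tab
printf 'gene1\t1000\t20\t980\n'      >> DEMO_ReadsPerGene.out.tab
printf 'gene2\t2000\t60\t1940\n'     >> DEMO_ReadsPerGene.out.tab

# Sum per-strand counts over real genes, then call the protocol from the imbalance.
grep -v "N_" DEMO_ReadsPerGene.out.tab | \
awk '{unst += $2; forw += $3; rev += $4}
     END {
       print unst, forw, rev
       # A >5x imbalance between strands (illustrative cutoff) suggests a stranded library.
       if (forw > 5 * rev || rev > 5 * forw) print "stranded"; else print "unstranded"
     }'
```

With these numbers it prints `3000 80 2920` followed by `stranded`: almost all gene-assigned reads fall on the reverse strand, as expected for a stranded Total RNA protocol.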
########################################
# Snakefile rule
########################################
#either run the following rule
rule reads_count:
input:
readtable = OUTdir + "/alignment/STAR2/{locust}_ReadsPerGene.out.tab",
output:
OUTdir + "/DESeq2/counts_4thcol/{locust}_counts.txt"
shell:
"""
cut -f1,4 {input.readtable} | grep -v "_" > {output}
"""
# or simply this loop for less core usage
# for i in $SCRATCH/locust_phase/data/alignment/STAR/*ReadsPerGene.out.tab; do echo $i; cut -f1,4 $i | grep -v "_" > $SCRATCH/locust_phase/data/DESeq2/counts_4thcol/`basename $i ReadsPerGene.out.tab`counts.txt; done
ALTERNATIVE OPTION: We can also build a single expression matrix with all targeted individuals. Below is the example for S. piceifrons:
paste SPICE_*_ReadsPerGene.out.tab | grep -v "_" | awk '{printf "%s\t", $1}{for (i=4;i<=NF;i+=4) printf "%s\t", $i; printf "\n" }' > tmp
sed -e "1igene_name\t$(ls SPICE_*ReadsPerGene.out.tab | tr '\n' '\t' | sed 's/_ReadsPerGene.out.tab//g')" tmp > raw_counts_piceifrons_matrix.txt
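On toy data, the paste/awk step behaves as follows (two fabricated two-gene samples; only the 4th column of each file, the reverse-strand counts, is kept):

```shell
# Two fabricated ReadsPerGene.out.tab files: gene ID plus three count columns each.
printf 'gene1\t5\t1\t10\ngene2\t6\t2\t20\n' > SPICE_A_ReadsPerGene.out.tab
printf 'gene1\t5\t1\t30\ngene2\t6\t2\t40\n' > SPICE_B_ReadsPerGene.out.tab

# Paste side by side, then keep the gene ID and every 4th field (the column-4 counts).
paste SPICE_*_ReadsPerGene.out.tab | grep -v "N_" | \
awk '{printf "%s\t", $1}{for (i = 4; i <= NF; i += 4) printf "%s\t", $i; printf "\n"}'
```

This yields one row per gene with the per-sample counts side by side (10 and 30 for gene1, 20 and 40 for gene2), ready for the sed step above to prepend a header of sample names.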
featureCounts
While GeneCounts is very easy to use, another option is
to refine the read counts using featureCounts from the
Subread module, which can summarise and parse read
counts over genomic features such as genes, exons, promoters, rRNA…
To run featureCounts on the Grace cluster, we adapted
the script as follows (it will later be implemented in Snakemake).
########################################
# Snakefile rule
########################################
rule featurecounts:
input:
bamfile = WORKDir + "/02-{species}-star/{locust}_Aligned.sortedByCoord.out.bam",
annotation = lambda wildcards: REFDir + "/" + species_to_genome[wildcards.species] + "_genomic.gtf"
output:
counts_txt = WORKDir + "/04-{species}-featurecounts/{locust}_counts.txt",
final_counts = WORKDir + "/03-{species}-DESeq2/{locust}_featurecounts.txt"
shell:
"""
module purge
ml GCC/11.2.0 Subread/2.0.3
featureCounts -p \\
--countReadPairs \\
-t transcript,exon \\
-g gene_id \\
--extraAttributes gene_name \\
-a {input.annotation} \\
-R BAM {input.bamfile} \\
-T 12 \\
-o {output.counts_txt}
cut -f 1,8 {output.counts_txt} | tail -n +3 > {output.final_counts}
"""
After creating the files, run the cleanup loop further below, because the output still contained the header lines.
-p: Specifies paired-end read
counting mode. This parameter instructs featureCounts to count paired
reads as a single fragment, which is essential for accurate
quantification in paired-end data.
--countReadPairs: Ensures that
featureCounts counts read pairs instead of individual reads, aligning
with the paired-end data provided.
-t transcript,exon: Specifies the
feature types to count, in this case, both transcript and exon. The
command will count reads aligning to these feature types based on the
annotation provided.
-g gene_id: Specifies the attribute
to be used as the gene identifier. In this case, gene_id indicates that
each read will be assigned based on the gene ID, typically provided in
the annotation file.
--extraAttributes gene_name: Adds
extra information, such as gene_name, to the output. This is useful for
associating gene symbols or names with the read counts.
-a {input.annotation}: Specifies
the path to the annotation file (in GTF or GFF format). This file
provides gene and exon locations to guide featureCounts in assigning
reads to specific features.
-R BAM {input.bamfile}: Generates a
BAM file with reads annotated by feature, which can be useful for
visualization or further analysis. {input.bamfile} is the path to the
input BAM file generated from alignment.
-o {output.counts_txt}: Specifies
the output file to store the feature counts in tabular format. This file
will contain read counts for each feature (gene or transcript), with
additional attributes if specified.
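To illustrate the final cut/tail step, here is a sketch on a fabricated featureCounts-style output table (gene IDs, coordinates, and counts are invented; with --extraAttributes gene_name, the counts land in column 8, after the six standard annotation columns plus gene_name):

```shell
# Fabricated featureCounts output: one comment line, one header line, then data.
{
  printf '# Program:featureCounts v2.0.3; Command: ...\n'
  printf 'Geneid\tChr\tStart\tEnd\tStrand\tLength\tgene_name\tsample.bam\n'
  printf 'LOC001\tchr1\t1\t500\t+\t500\tgeneA\t12\n'
  printf 'LOC002\tchr1\t900\t1400\t-\t500\tgeneB\t34\n'
} > fc_demo.txt

# Keep the gene ID and count columns, dropping the two header lines.
cut -f 1,8 fc_demo.txt | tail -n +3
```

The result is a two-column gene/count table, the format expected by the DESeq2 import step.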
If you prefer not to use Snakemake, you can run this rule as a shell script as follows:
#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=feature
#SBATCH --time=22:00:00
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=12
#SBATCH --mem=40G
ml GCC/11.2.0 Subread/2.0.3
# Specify the input folder containing .bam files
input_folder="/scratch/group/songlab/maeva/headthor-locusts-rna/gregaria-rna/data/alignment/STAR/GCF_023897955.1_iqSchGreg1.2_genomic"
# Specify the output folder
output_folder="/scratch/group/songlab/maeva/headthor-locusts-rna/gregaria-rna/data/deg_counts/STAR/gregaria/featurecounts"
# Ensure the input folder exists
if [ ! -d "$input_folder" ]; then
echo "Error: Input folder not found."
exit 1
fi
# Ensure the output folder exists
if [ ! -d "$output_folder" ]; then
mkdir -p "$output_folder" || exit 1
fi
# Create finalcounts folder if it doesn't exist
finalcounts_folder="$output_folder/finalcounts"
mkdir -p "$finalcounts_folder" || exit 1
# Specify the path of gtf
annot="/scratch/group/songlab/maeva/refgenomes/locusts_complete/GCF_023897955.1_iqSchGreg1.2_genomic.gtf"
for bamfile in "$input_folder"/*Aligned.sortedByCoord.out*.bam; do
featureCounts -p --countReadPairs -t transcript --extraAttributes gene_name -a "$annot" -R BAM "$bamfile" -T 20 -o "$output_folder/$(basename "${bamfile%.bam}.txt")"
cut -f 1,8 "$output_folder/$(basename "${bamfile%.bam}.txt")" | tail -n +2 > "$output_folder/finalcounts/$(basename "${bamfile%.bam}.txt")"
echo "Processing completed for $bamfile."
done
Then to make the output compatible with our R scripts in the future,
we want to also run the following renaming code in the
finalcounts folder:
for file in *_Aligned.sortedByCoord.out.txt; do
# Extract the filename without the extension
filename="${file%_Aligned.sortedByCoord.out.txt}"
# Remove the first two lines and save to a new file
tail -n +3 "$file" > "${filename}_counts.txt"
echo "Processing completed for $file."
done
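The ${file%suffix} expansion used in the loop above strips a known suffix from the file name; a minimal demonstration (the sample name is invented):

```shell
# Strip the known STAR suffix to recover the sample ID, then build the new name.
file="SGREG_01_Aligned.sortedByCoord.out.txt"
filename="${file%_Aligned.sortedByCoord.out.txt}"
echo "${filename}_counts.txt"
```

This prints SGREG_01_counts.txt, so each renamed counts file keeps only the sample identifier.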
#### Load our SRA metadata table
metaseq <- read_table("data/metadata/RNAseq_modified_METADATA2022.txt", col_names = TRUE, guess_max = 5000)
#### Create an interactive search table
metaseq %>%
datatable(extensions = "Buttons", options = list(dom = "Blfrtip", buttons = c("copy",
"csv", "excel"), lengthMenu = list(c(10, 20, 50, 100, 200, -1), c(10, 20,
50, 100, 200, "All"))))
mycol_species <- c("green", "deeppink", "orange", "orange2", "blue2", "red2", "yellow2")
#### READS AVERAGE colored by STATUS
eren <- ggplot(metaseq, aes(x=Map_SUM, y = Inputtrim_reads, color = Species, label = SampleID))
eren <- eren + geom_point(size=2, alpha =0.7)
eren <- eren + scale_color_manual(values = mycol_species)
eren <- eren + theme_calc()
eren <- eren + geom_hline(yintercept=30000000, linetype="dotted", color = "green3")
eren <- eren + geom_hline(yintercept=50000000, linetype="dotted", color = "green3")
eren <- eren + geom_vline(xintercept=80, linetype="dotted", color = "blue2")
eren <- eren + xlim(0,100)
options(scipen=20) #to remove the scientific annotation of the axis
#### make an interactive version of the scatter plot
attacktitan <- ggplotly(eren)
attacktitan
We added the green thresholds to indicate how many reads are recommended by Illumina (lower end and optimal). The blue line demonstrates where the mapping ratio could be considered not contaminated.
STAR reads mapping
This follows the same code as for STRATEGY 1, except that we point the reference to each species' own genome path.
sessionInfo()
R version 4.4.1 (2024-06-14)
Platform: aarch64-apple-darwin20
Running under: macOS Sonoma 14.7

Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.0

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: Asia/Tokyo
tzcode source: internal

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
 [1] reshape2_1.4.4 ggthemes_5.1.0 plotly_4.10.4 DT_0.33
 [5] lubridate_1.9.3 forcats_1.0.0 stringr_1.5.1 dplyr_1.1.4
 [9] purrr_1.0.2 readr_2.1.5 tidyr_1.3.1 tibble_3.2.1
[13] ggplot2_3.5.1 tidyverse_2.0.0 rmdformats_1.0.4 knitr_1.48

loaded via a namespace (and not attached):
 [1] sass_0.4.9 utf8_1.2.4 generics_0.1.3 stringi_1.8.4
 [5] hms_1.1.3 digest_0.6.37 magrittr_2.0.3 timechange_0.3.0
 [9] evaluate_1.0.1 grid_4.4.1 bookdown_0.41 fastmap_1.2.0
[13] plyr_1.8.9 rprojroot_2.0.4 workflowr_1.7.1 jsonlite_1.8.9
[17] whisker_0.4.1 formatR_1.14 promises_1.3.0 httr_1.4.7
[21] fansi_1.0.6 viridisLite_0.4.2 scales_1.3.0 lazyeval_0.2.2
[25] jquerylib_0.1.4 cli_3.6.3 rlang_1.1.4 munsell_0.5.1
[29] withr_3.0.2 cachem_1.1.0 yaml_2.3.10 tools_4.4.1
[33] tzdb_0.4.0 colorspace_2.1-1 httpuv_1.6.15 vctrs_0.6.5
[37] R6_2.5.1 lifecycle_1.0.4 git2r_0.35.0 htmlwidgets_1.6.4
[41] fs_1.6.5 pkgconfig_2.0.3 pillar_1.9.0 bslib_0.8.0
[45] later_1.3.2 gtable_0.3.6 data.table_1.16.2 glue_1.8.0
[49] Rcpp_1.0.13 xfun_0.49 tidyselect_1.2.1 rstudioapi_0.17.1
[53] htmltools_0.5.8.1 rmarkdown_2.28 compiler_4.4.1