Last updated: 2018-08-02
workflowr checks:

✔ R Markdown file: up-to-date. The R Markdown file has been committed to the Git repository, so you know the exact version of the code that produced these results.

✔ Environment: empty. The global environment was empty when the code was run. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways; for reproducibility it's best to always run the code in an empty environment.

✔ Seed: set.seed(12345). This command was run prior to the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

✔ Session information: recorded. Recording the operating system, R version, and package versions is critical for reproducibility.

✔ Repository version: 0f79304. This was the version of the Git repository when these results were generated (committed with wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file; it is up to you to know whether other scripts or data files it depends on have changed. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .DS_Store
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: output/.DS_Store
Untracked files:
Untracked: analysis/snake.config.notes.Rmd
Untracked: data/18486.genecov.txt
Untracked: data/APApeaksYL.total.inbrain.bed
Untracked: data/Totalpeaks_filtered_clean.bed
Untracked: data/YL-SP-18486-T_S9_R1_001-genecov.txt
Untracked: data/bedgraph_peaks/
Untracked: data/bin200.5.T.nuccov.bed
Untracked: data/bin200.Anuccov.bed
Untracked: data/bin200.nuccov.bed
Untracked: data/clean_peaks/
Untracked: data/combined_reads_mapped_three_prime_seq.csv
Untracked: data/gencov.test.csv
Untracked: data/gencov.test.txt
Untracked: data/gencov_zero.test.csv
Untracked: data/gencov_zero.test.txt
Untracked: data/gene_cov/
Untracked: data/joined
Untracked: data/leafcutter/
Untracked: data/merged_combined_YL-SP-threeprimeseq.bg
Untracked: data/nuc6up/
Untracked: data/reads_mapped_three_prime_seq.csv
Untracked: data/smash.cov.results.bed
Untracked: data/smash.cov.results.csv
Untracked: data/smash.cov.results.txt
Untracked: data/smash_testregion/
Untracked: data/ssFC200.cov.bed
Untracked: data/temp.file1
Untracked: data/temp.file2
Untracked: data/temp.gencov.test.txt
Untracked: data/temp.gencov_zero.test.txt
Untracked: output/picard/
Untracked: output/plots/
Untracked: output/qual.fig2.pdf
Unstaged changes:
Modified: analysis/cleanupdtseq.internalpriming.Rmd
Modified: analysis/dif.iso.usage.leafcutter.Rmd
Modified: analysis/explore.filters.Rmd
Modified: analysis/test.max2.Rmd
Modified: code/Snakefile
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | 0f79304 | brimittleman | 2018-08-02 | fix cov to peak file problem |
| html | efad657 | Briana Mittleman | 2018-07-31 | Build site. |
| Rmd | 7c203e4 | Briana Mittleman | 2018-07-31 | format files for yangs peak script |
| html | 7fc2ce7 | Briana Mittleman | 2018-07-30 | Build site. |
| Rmd | 782320d | Briana Mittleman | 2018-07-30 | look at coverage in merged bw |
| html | e5a8da6 | Briana Mittleman | 2018-07-30 | Build site. |
| Rmd | 422a428 | Briana Mittleman | 2018-07-30 | add peak cove pipeline and combined lane qc |
I need to create a processing pipeline that I can run each time I get more individuals. It will do the following:

1. Combine all total and nuclear libraries (as a bigwig/genome coverage)
2. Call peaks with Yang's script
3. Filter peaks with Yang's script
4. Clean peaks
5. Run featureCounts on these peaks for all of the individuals
I can do this step in my Snakefile. First, I added the necessary tools to my environment.
I want to create a bedgraph for each file. I will add a rule to my Snakefile that does this and puts the files in the bedgraph directory.
```python
#add to directory
dir_bedgraph = dir_data + "bedgraph/"

#add to rule_all
expand(dir_bedgraph + "{samples}.bg", samples=samples)

#rule
rule bedgraph:
    input:
        bam = dir_sort + "{samples}-sort.bam"
    output: dir_bedgraph + "{samples}.bg"
    shell: "bedtools genomecov -ibam {input.bam} -bg -5 > {output}"
```
I want to add more memory for this rule in the cluster.json:

```json
"bedgraph":
{
    "mem": 16000
}
```
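For context, the cluster.json entry is consumed at submission time; the exact launch command depends on how the Snakefile is normally run, so treat this invocation as an assumption:

```bash
# --cluster-config exposes the per-rule "mem" value as {cluster.mem}
snakemake \
    --cluster-config cluster.json \
    --cluster "sbatch --partition=broadwl --mem={cluster.mem}" \
    --jobs 50
```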
To convert the bedgraphs to bigwigs I will use the bedGraphToBigWig tool. The bedgraphs must be coordinate-sorted first.
```python
#add to directory
dir_bigwig = dir_data + "bigwig/"
dir_sortbg = dir_data + "bedgraph_sort/"

#add to rule_all
expand(dir_sortbg + "{samples}.sort.bg", samples=samples)
expand(dir_bigwig + "{samples}.bw", samples=samples)

rule sort_bg:
    input: dir_bedgraph + "{samples}.bg"
    output: dir_sortbg + "{samples}.sort.bg"
    shell: "sort -k1,1 -k2,2n {input} > {output}"

rule bg_to_bw:
    input:
        bg = dir_sortbg + "{samples}.sort.bg",
        len = chrom_length
    output: dir_bigwig + "{samples}.bw"
    shell: "bedGraphToBigWig {input.bg} {input.len} {output}"
```
This next step will take all of the files in the bigwig directory and merge them. To do this I will create a script that creates a list of all of the files then uses this list in the merge script.
mergeBW.sh:

```bash
#!/bin/bash
#SBATCH --job-name=mergeBW
#SBATCH --account=pi-yangili1
#SBATCH --time=24:00:00
#SBATCH --output=mergeBW.out
#SBATCH --error=mergeBW.err
#SBATCH --partition=broadwl
#SBATCH --mem=40G
#SBATCH --mail-type=END

module load Anaconda3
source activate three-prime-env

# write the bigwig paths (from the second directory entry on) to a list file
ls -d -1 /project2/gilad/briana/threeprimeseq/data/bigwig/* | tail -n +2 > /project2/gilad/briana/threeprimeseq/data/list_bw/list_of_bigwig.txt

# merge every bigwig in the list into a single bedgraph
bigWigMerge -inList /project2/gilad/briana/threeprimeseq/data/list_bw/list_of_bigwig.txt /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.bg
```
The result of this script will be a merged bedgraph of all of the files.
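A couple of quick sanity checks on the merged bedgraph before moving on (same path as in the script):

```bash
MERGED=/project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.bg
wc -l $MERGED                 # number of covered intervals
cut -f1 $MERGED | sort -u     # chromosomes represented in the merge
```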
```r
library(workflowr)  # This is workflowr version 1.1.1
library(ggplot2)
library(dplyr)      # dplyr masks stats::filter, stats::lag and
                    # base::intersect, base::setdiff, base::setequal, base::union
```
bg_to_cov.py:

```python
#!/usr/bin/env python
import sys

def main(inFile, outFile):
    fout = open(outFile, 'w')
    for ind, ln in enumerate(open(inFile)):
        print(ind)  # progress: index of the current bedgraph line
        chrom, start, end, count = ln.split()
        # expand the bedgraph interval into one line per base (1-based positions)
        i2 = int(start)
        while i2 < int(end):
            fout.write("%s\t%d\t%s\n" % (chrom, i2 + 1, count))
            fout.flush()
            i2 += 1
    fout.close()

if __name__ == "__main__":
    inFile = sys.argv[1]
    outFile = sys.argv[2]
    main(inFile, outFile)
```
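A toy run shows what the script does to a single bedgraph interval (file names here are made up):

```bash
printf 'chr1\t10\t13\t5\n' > toy.bg
python bg_to_cov.py toy.bg toy.cov.txt
cat toy.cov.txt
# chr1    11    5
# chr1    12    5
# chr1    13    5
```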
Create a bash script to run this; the input and output files are passed as arguments to the Python script.
```bash
#!/bin/bash
#SBATCH --job-name=run_bgtocov
#SBATCH --account=pi-yangili1
#SBATCH --time=24:00:00
#SBATCH --output=run_bgtocov.out
#SBATCH --error=run_bgtocov.err
#SBATCH --partition=broadwl
#SBATCH --mem=12G
#SBATCH --mail-type=END

module load Anaconda3
source activate three-prime-env

python bg_to_cov.py "/project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.bg" "/project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.txt"
```
Add zeros to the bedgraph to make it a genome coverage file:

```bash
awk '{print $1 "\t" $2 "\t" "0"}' /project2/gilad/briana/threeprimeseq/data/bedgraph_comb/NuclearBamFiles.split.genomecov.bed > /project2/gilad/briana/threeprimeseq/data/mergedBW/genomecov_zero.txt
```
Try this with bash:

```bash
#!/bin/bash
#SBATCH --job-name=addzero_bash
#SBATCH --account=pi-yangili1
#SBATCH --time=24:00:00
#SBATCH --output=addzerobash.out
#SBATCH --error=addzerobash.err
#SBATCH --partition=broadwl
#SBATCH --mem=12G
#SBATCH --mail-type=END

module load Anaconda3
source activate three-prime-env

# sort the per-base coverage
sort -k1,1 -k2,2n /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.txt > /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.sort.txt

# key both files on chrom^position so join can match them on a single field
less /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.sort.txt | awk '{print($1"^"$2"\t"$3)}' > /project2/gilad/briana/threeprimeseq/data/mergedBW/temp1.txt
less /project2/gilad/briana/threeprimeseq/data/mergedBW/genomecov_zero.txt | awk '{print($1"^"$2"\t"$3)}' > /project2/gilad/briana/threeprimeseq/data/mergedBW/temp2.txt

# full outer join on the key, filling missing counts with 0, then restore tab-separated columns
join -a1 -a2 -o '0,1.2' -e 0 /project2/gilad/briana/threeprimeseq/data/mergedBW/temp1.txt /project2/gilad/briana/threeprimeseq/data/mergedBW/temp2.txt | tr '^' '\t' | tr ' ' '\t' | cut -f1-4 > /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.sort.with0.bash.txt
```
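The join is effectively a full outer join on the chrom^position key, filling counts missing from the observed file with 0. A toy example (hypothetical files) makes the behavior concrete:

```bash
printf 'chr1^100\t5\nchr1^101\t7\n' > t1.txt               # observed coverage
printf 'chr1^100\t0\nchr1^101\t0\nchr1^102\t0\n' > t2.txt  # every position, zeroed
join -a1 -a2 -o '0,1.2' -e 0 t1.txt t2.txt | tr '^' '\t' | tr ' ' '\t'
# chr1    100    5
# chr1    101    7
# chr1    102    0
```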
sort_gencov.sh:

```bash
#!/bin/bash
#SBATCH --job-name=sort_gencov
#SBATCH --account=pi-yangili1
#SBATCH --time=24:00:00
#SBATCH --output=sortgencov.out
#SBATCH --error=sortgencov.err
#SBATCH --partition=bigmem2
#SBATCH --mem=200G
#SBATCH --mail-type=END

sort -k1,1 -k2,2n /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.sort.with0.bash.txt > /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.sort.with0.bash.sort.txt
```
Run Yang's script on /project2/gilad/briana/threeprimeseq/data/mergedBW/merged_combined_YL-SP-threeprimeseq.coverage.sort.with0.bash.sort.txt by making this the input file in callPeaksYL_GEN.py.
```bash
#!/bin/bash
#SBATCH --job-name=w_getpeakYLgen
#SBATCH --account=pi-yangili1
#SBATCH --time=24:00:00
#SBATCH --output=w_getpeakYLgen.out
#SBATCH --error=w_getpeakYLgen.err
#SBATCH --partition=broadwl
#SBATCH --mem=12G
#SBATCH --mail-type=END

module load Anaconda3
source activate three-prime-env

# call peaks one autosome at a time
for i in $(seq 1 22); do
    python callPeaksYL_GEN.py $i
done
```
Run the file with: sbatch w_getpeakYLGEN.sh
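As an aside, a SLURM job array could run the 22 chromosomes in parallel instead of the serial loop; this is a sketch of that alternative, not what the script above does:

```bash
#!/bin/bash
#SBATCH --job-name=getpeak_array
#SBATCH --account=pi-yangili1
#SBATCH --partition=broadwl
#SBATCH --mem=12G
#SBATCH --array=1-22            # one array task per autosome

module load Anaconda3
source activate three-prime-env

# each task handles the chromosome matching its array index
python callPeaksYL_GEN.py $SLURM_ARRAY_TASK_ID
```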
After I have the peaks I will need to use Yang's filter-peak function.
```bash
#!/bin/bash
#SBATCH --job-name=comb_gencov
#SBATCH --account=pi-yangili1
#SBATCH --time=24:00:00
#SBATCH --output=comb_gencov.out
#SBATCH --error=comb_gencov.err
#SBATCH --partition=bigmem2
#SBATCH --mem=100G
#SBATCH --mail-type=END

module load Anaconda3
source activate three-prime-env

# merge every sorted bam (total and nuclear) into one file
samtools merge /project2/gilad/briana/threeprimeseq/data/comb_bam/all_total.nuc_comb.bam /project2/gilad/briana/threeprimeseq/data/sort/*.bam

# per-base coverage over the merged bam
bedtools genomecov -ibam /project2/gilad/briana/threeprimeseq/data/comb_bam/all_total.nuc_comb.bam -d -split > /project2/gilad/briana/threeprimeseq/data/comb_bam/all_total.nuc_comb.split.genomecov.bed
```
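For reference, -d output has one row per position (1-based) with depth in the third column, zeros included, which is why this route doesn't need the add-zeros join above. A quick peek (the values shown are purely illustrative):

```bash
head -3 /project2/gilad/briana/threeprimeseq/data/comb_bam/all_total.nuc_comb.split.genomecov.bed
# chr1    1    0
# chr1    2    0
# chr1    3    0
```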
Update each of the following scripts (a hedged featureCounts sketch follows this list):

- cat /project2/gilad/briana/threeprimeseq/data/mergedPeaks/*.bed >
- bed2saf.py
- run_feature.sh
- filter_peaks.py
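For the run_feature.sh update, this is a hedged sketch of the featureCounts call it presumably wraps; the SAF file comes from bed2saf.py, and the output path and bam glob below are assumptions:

```bash
# peaks.saf is a placeholder for the SAF file produced by bed2saf.py
featureCounts -a /project2/gilad/briana/threeprimeseq/data/mergedPeaks/peaks.saf -F SAF \
    -o /project2/gilad/briana/threeprimeseq/data/mergedPeaks/all_peak_counts.txt \
    /project2/gilad/briana/threeprimeseq/data/sort/*-sort.bam
```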
Add a rule to the Snakefile that creates the 5’ bp-resolution coverage, and another that uses the whole read. I will use the whole-read method for peak calling.
```python
#add to directory
dir_bedgraph = dir_data + "bedgraph/"
dir_bigwig = dir_data + "bigwig/"
dir_sortbg = dir_data + "bedgraph_sort/"
dir_bedgraph_5 = dir_data + "bedgraph_5prime/"

#add to rule_all
expand(dir_bedgraph + "{samples}.split.bg", samples=samples)
expand(dir_sortbg + "{samples}.sort.bg", samples=samples)
expand(dir_bigwig + "{samples}.bw", samples=samples)
expand(dir_bedgraph_5 + "{samples}.5.bg", samples=samples)

#rule
rule bedgraph_5:
    input:
        bam = dir_sort + "{samples}-sort.bam"
    output: dir_bedgraph_5 + "{samples}.5.bg"
    shell: "bedtools genomecov -ibam {input.bam} -bg -5 > {output}"

rule bedgraph:
    input:
        bam = dir_sort + "{samples}-sort.bam"
    output: dir_bedgraph + "{samples}.split.bg"
    shell: "bedtools genomecov -ibam {input.bam} -bg -split > {output}"

rule sort_bg:
    input: dir_bedgraph + "{samples}.split.bg"
    output: dir_sortbg + "{samples}.sort.bg"
    shell: "sort -k1,1 -k2,2n {input} > {output}"

rule bg_to_bw:
    input:
        bg = dir_sortbg + "{samples}.sort.bg",
        len = chrom_length
    output: dir_bigwig + "{samples}.bw"
    shell: "bedGraphToBigWig {input.bg} {input.len} {output}"
```
I will need to run mergeBW.sh and run_bgtocov.sh, then call peaks with the updated peak-calling script from Yang (get_APA_peaks.py), which I run with w_getpeakYLGEN.sh.
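To summarize the run order once the Snakefile has produced the per-sample files (script names as defined above; this assumes each job finishes before the next starts):

```bash
sbatch mergeBW.sh          # merge per-sample bigwigs into one bedgraph
sbatch run_bgtocov.sh      # expand the merged bedgraph to per-base coverage
sbatch w_getpeakYLGEN.sh   # call peaks per chromosome with Yang's script
```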
```r
sessionInfo()
```

```
R version 3.5.1 (2018-07-02)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.6

Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] dplyr_0.7.6     ggplot2_3.0.0   workflowr_1.1.1

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.18      compiler_3.5.1    pillar_1.3.0
 [4] git2r_0.23.0      plyr_1.8.4        bindr_0.1.1
 [7] R.methodsS3_1.7.1 R.utils_2.6.0     tools_3.5.1
[10] digest_0.6.15     evaluate_0.11     tibble_1.4.2
[13] gtable_0.2.0      pkgconfig_2.0.1   rlang_0.2.1
[16] rstudioapi_0.7    yaml_2.1.19       bindrcpp_0.2.2
[19] withr_2.1.2       stringr_1.3.1     knitr_1.20
[22] rprojroot_1.3-2   grid_3.5.1        tidyselect_0.2.4
[25] glue_1.3.0        R6_2.2.2          rmarkdown_1.10
[28] purrr_0.2.5       magrittr_1.5      whisker_0.3-2
[31] backports_1.1.2   scales_0.5.0      htmltools_0.3.6
[34] assertthat_0.2.0  colorspace_1.3-2  stringi_1.2.4
[37] lazyeval_0.2.1    munsell_0.5.0     crayon_1.3.4
[40] R.oo_1.22.0
```
This reproducible R Markdown analysis was created with workflowr 1.1.1