Last updated: 2025-11-11
Checks: 7 passed, 0 failed
Knit directory: misc/analysis/
This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(1) was run prior to running the
code in the R Markdown file. Setting a seed ensures that any results
that rely on randomness, e.g. subsampling or permutations, are
reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 1eba385. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for
the analysis have been committed to Git prior to generating the results
(you can use wflow_publish or
wflow_git_commit). workflowr only checks the R Markdown
file, but you know if there are other scripts or data files that it
depends on. Below is the status of the Git repository when the results
were generated:
Ignored files:
Ignored: .DS_Store
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/.RData
Ignored: analysis/.Rhistory
Ignored: analysis/ALStruct_cache/
Ignored: data/.Rhistory
Ignored: data/methylation-data-for-matthew.rds
Ignored: data/pbmc/
Ignored: data/pbmc_purified.RData
Untracked files:
Untracked: .dropbox
Untracked: Icon
Untracked: analysis/GHstan.Rmd
Untracked: analysis/GTEX-cogaps.Rmd
Untracked: analysis/PACS.Rmd
Untracked: analysis/Rplot.png
Untracked: analysis/SPCAvRP.rmd
Untracked: analysis/abf_comparisons.Rmd
Untracked: analysis/admm_02.Rmd
Untracked: analysis/admm_03.Rmd
Untracked: analysis/bispca.Rmd
Untracked: analysis/cache/
Untracked: analysis/cholesky.Rmd
Untracked: analysis/compare-transformed-models.Rmd
Untracked: analysis/cormotif.Rmd
Untracked: analysis/cp_ash.Rmd
Untracked: analysis/eQTL.perm.rand.pdf
Untracked: analysis/eb_power2.Rmd
Untracked: analysis/eb_prepilot.Rmd
Untracked: analysis/eb_var.Rmd
Untracked: analysis/ebpmf1.Rmd
Untracked: analysis/ebpmf_sla_text.Rmd
Untracked: analysis/ebspca_sims.Rmd
Untracked: analysis/explore_psvd.Rmd
Untracked: analysis/fa_check_identify.Rmd
Untracked: analysis/fa_iterative.Rmd
Untracked: analysis/flash_cov_overlapping_groups_init.Rmd
Untracked: analysis/flash_test_tree.Rmd
Untracked: analysis/flashier_newgroups.Rmd
Untracked: analysis/flashier_nmf_triples.Rmd
Untracked: analysis/flashier_pbmc.Rmd
Untracked: analysis/flashier_snn_shifted_prior.Rmd
Untracked: analysis/greedy_ebpmf_exploration_00.Rmd
Untracked: analysis/ieQTL.perm.rand.pdf
Untracked: analysis/lasso_em_03.Rmd
Untracked: analysis/m6amash.Rmd
Untracked: analysis/mash_bhat_z.Rmd
Untracked: analysis/mash_ieqtl_permutations.Rmd
Untracked: analysis/methylation_example.Rmd
Untracked: analysis/mixsqp.Rmd
Untracked: analysis/mr.ash_lasso_init.Rmd
Untracked: analysis/mr.mash.test.Rmd
Untracked: analysis/mr_ash_modular.Rmd
Untracked: analysis/mr_ash_parameterization.Rmd
Untracked: analysis/mr_ash_ridge.Rmd
Untracked: analysis/mv_gaussian_message_passing.Rmd
Untracked: analysis/nejm.Rmd
Untracked: analysis/nmf_bg.Rmd
Untracked: analysis/nonneg_underapprox.Rmd
Untracked: analysis/normal_conditional_on_r2.Rmd
Untracked: analysis/normalize.Rmd
Untracked: analysis/pbmc.Rmd
Untracked: analysis/pca_binary_weighted.Rmd
Untracked: analysis/pca_l1.Rmd
Untracked: analysis/poisson_nmf_approx.Rmd
Untracked: analysis/poisson_shrink.Rmd
Untracked: analysis/poisson_transform.Rmd
Untracked: analysis/qrnotes.txt
Untracked: analysis/ridge_iterative_02.Rmd
Untracked: analysis/ridge_iterative_splitting.Rmd
Untracked: analysis/samps/
Untracked: analysis/sc_bimodal.Rmd
Untracked: analysis/shrinkage_comparisons_changepoints.Rmd
Untracked: analysis/susie_cov.Rmd
Untracked: analysis/susie_en.Rmd
Untracked: analysis/susie_z_investigate.Rmd
Untracked: analysis/svd-timing.Rmd
Untracked: analysis/temp.RDS
Untracked: analysis/temp.Rmd
Untracked: analysis/test-figure/
Untracked: analysis/test.Rmd
Untracked: analysis/test.Rpres
Untracked: analysis/test.md
Untracked: analysis/test_qr.R
Untracked: analysis/test_sparse.Rmd
Untracked: analysis/tree_dist_top_eigenvector.Rmd
Untracked: analysis/z.txt
Untracked: code/coordinate_descent_symNMF.R
Untracked: code/multivariate_testfuncs.R
Untracked: code/rqb.hacked.R
Untracked: data/4matthew/
Untracked: data/4matthew2/
Untracked: data/E-MTAB-2805.processed.1/
Untracked: data/ENSG00000156738.Sim_Y2.RDS
Untracked: data/GDS5363_full.soft.gz
Untracked: data/GSE41265_allGenesTPM.txt
Untracked: data/Muscle_Skeletal.ACTN3.pm1Mb.RDS
Untracked: data/P.rds
Untracked: data/Thyroid.FMO2.pm1Mb.RDS
Untracked: data/bmass.HaemgenRBC2016.MAF01.Vs2.MergedDataSources.200kRanSubset.ChrBPMAFMarkerZScores.vs1.txt.gz
Untracked: data/bmass.HaemgenRBC2016.Vs2.NewSNPs.ZScores.hclust.vs1.txt
Untracked: data/bmass.HaemgenRBC2016.Vs2.PreviousSNPs.ZScores.hclust.vs1.txt
Untracked: data/eb_prepilot/
Untracked: data/finemap_data/fmo2.sim/b.txt
Untracked: data/finemap_data/fmo2.sim/dap_out.txt
Untracked: data/finemap_data/fmo2.sim/dap_out2.txt
Untracked: data/finemap_data/fmo2.sim/dap_out2_snp.txt
Untracked: data/finemap_data/fmo2.sim/dap_out_snp.txt
Untracked: data/finemap_data/fmo2.sim/data
Untracked: data/finemap_data/fmo2.sim/fmo2.sim.config
Untracked: data/finemap_data/fmo2.sim/fmo2.sim.k
Untracked: data/finemap_data/fmo2.sim/fmo2.sim.k4.config
Untracked: data/finemap_data/fmo2.sim/fmo2.sim.k4.snp
Untracked: data/finemap_data/fmo2.sim/fmo2.sim.ld
Untracked: data/finemap_data/fmo2.sim/fmo2.sim.snp
Untracked: data/finemap_data/fmo2.sim/fmo2.sim.z
Untracked: data/finemap_data/fmo2.sim/pos.txt
Untracked: data/logm.csv
Untracked: data/m.cd.RDS
Untracked: data/m.cdu.old.RDS
Untracked: data/m.new.cd.RDS
Untracked: data/m.old.cd.RDS
Untracked: data/mainbib.bib.old
Untracked: data/mat.csv
Untracked: data/mat.txt
Untracked: data/mat_new.csv
Untracked: data/matrix_lik.rds
Untracked: data/paintor_data/
Untracked: data/running_data_chris.csv
Untracked: data/running_data_matthew.csv
Untracked: data/temp.txt
Untracked: data/y.txt
Untracked: data/y_f.txt
Untracked: data/zscore_jointLCLs_m6AQTLs_susie_eQTLpruned.rds
Untracked: data/zscore_jointLCLs_random.rds
Untracked: explore_udi.R
Untracked: output/fit.k10.rds
Untracked: output/fit.nn.pbmc.purified.rds
Untracked: output/fit.nn.rds
Untracked: output/fit.nn.s.001.rds
Untracked: output/fit.nn.s.01.rds
Untracked: output/fit.nn.s.1.rds
Untracked: output/fit.nn.s.10.rds
Untracked: output/fit.snn.s.001.rds
Untracked: output/fit.snn.s.01.nninit.rds
Untracked: output/fit.snn.s.01.rds
Untracked: output/fit.varbvs.RDS
Untracked: output/fit2.nn.pbmc.purified.rds
Untracked: output/glmnet.fit.RDS
Untracked: output/snn07.txt
Untracked: output/snn34.txt
Untracked: output/test.bv.txt
Untracked: output/test.gamma.txt
Untracked: output/test.hyp.txt
Untracked: output/test.log.txt
Untracked: output/test.param.txt
Untracked: output/test2.bv.txt
Untracked: output/test2.gamma.txt
Untracked: output/test2.hyp.txt
Untracked: output/test2.log.txt
Untracked: output/test2.param.txt
Untracked: output/test3.bv.txt
Untracked: output/test3.gamma.txt
Untracked: output/test3.hyp.txt
Untracked: output/test3.log.txt
Untracked: output/test3.param.txt
Untracked: output/test4.bv.txt
Untracked: output/test4.gamma.txt
Untracked: output/test4.hyp.txt
Untracked: output/test4.log.txt
Untracked: output/test4.param.txt
Untracked: output/test5.bv.txt
Untracked: output/test5.gamma.txt
Untracked: output/test5.hyp.txt
Untracked: output/test5.log.txt
Untracked: output/test5.param.txt
Unstaged changes:
Modified: .gitignore
Modified: analysis/eb_snmu.Rmd
Modified: analysis/ebnm_binormal.Rmd
Modified: analysis/ebpower.Rmd
Modified: analysis/flashier_log1p.Rmd
Modified: analysis/flashier_sla_text.Rmd
Modified: analysis/index.Rmd
Modified: analysis/logistic_z_scores.Rmd
Modified: analysis/mr_ash_pen.Rmd
Modified: analysis/nmu_em.Rmd
Modified: analysis/susie_flash.Rmd
Modified: analysis/tap_free_energy.Rmd
Modified: misc.Rproj
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were
made to the R Markdown (analysis/flash_pca.Rmd) and HTML
(docs/flash_pca.html) files. If you’ve configured a remote
Git repository (see ?wflow_git_remote), click on the
hyperlinks in the table below to view the files as they were in that
past version.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | 1eba385 | Matthew Stephens | 2025-11-11 | workflowr::wflow_publish("flash_pca.rmd") |
library(flashier)
Loading required package: ebnm
library(fastICA)
library(PMA)
Motivated by some results from ICA, I want to try fitting \(X = M + lf' + E\), where \(M\) is low rank.
Given \(l\) and \(f\), the (rank-constrained) maximum likelihood estimate of \(M\) is given by the truncated SVD of \(X - lf'\), so I can iterate between estimating \(l, f\) and estimating \(M\). This approach should work with either flashier or PMD to estimate \(l, f\).
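Schematically, the scheme looks like the following (a sketch only, not the code used below; rank1_fit stands in for any routine that returns a rank-one fit with numeric vectors $l and $f, e.g. a wrapper around flash or PMD):
# Sketch of the alternating scheme described above (illustrative only).
fit_lowrank_plus_rank1 = function(X, rank1_fit, K = 9, niter = 5) {
  M = matrix(0, nrow(X), ncol(X))                 # current estimate of the low-rank mean
  for (i in seq_len(niter)) {
    fit = rank1_fit(X - M)                        # update (l, f) given M
    R = X - tcrossprod(fit$l, fit$f)              # residual after removing l f'
    R.svd = svd(R, nu = K, nv = K)
    M = R.svd$u %*% (R.svd$d[1:K] * t(R.svd$v))   # rank-K truncated SVD update of M
  }
  list(fit = fit, M = M)
}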
These are the same simulations as in fastica_01.html.
M <- 10000 # Number of variants/samples (rows)
L <- 10 # True number of latent factors
T <- 100 # Number of traits/phenotypes (columns)
s_1 <- 1 # Standard Deviation 1 (Spike component)
s_2 <- 5 # Standard Deviation 2 (Slab component)
eps <- 1e-2 # Standard Deviation for observation noise
# Set seed for reproducibility
set.seed(42)
# Data Simulation (G = X %*% Y + noise)
# Generating standard deviation matrices (a and b)
# Elements are sampled from {s_1, s_2}.
sd_choices <- c(s_1, s_2)
# Matrix 'a' (M x L): standard deviations for X, sampled with probabilities c(0.7, 0.3)
p_a <- c(0.7, 0.3)
a_vector <- sample(sd_choices, size = M * L, replace = TRUE, prob = p_a)
a <- matrix(a_vector, nrow = M, ncol = L)
# Matrix 'b' (L x T): standard deviations for Y, sampled with probabilities c(0.8, 0.2)
p_b <- c(0.8, 0.2)
b_vector <- sample(sd_choices, size = L * T, replace = TRUE, prob = p_b)
b <- matrix(b_vector, nrow = L, ncol = T)
# Generating Latent Factors (X and Y)
# X is drawn from Normal(0, a)
X <- matrix(rnorm(M * L, mean = 0, sd = a), nrow = M, ncol = L)
# Y is drawn from Normal(0, b)
Y <- matrix(rnorm(L * T, mean = 0, sd = b), nrow = L, ncol = T)
# Generating Noise and Final Data Matrix G
# Noise is generated from Normal(0, eps)
noise <- matrix(rnorm(M * T, mean = 0, sd = eps), nrow = M, ncol = T)
# Calculate the final data matrix G = X @ Y + noise
G <- X %*% Y + noise
Here I try the iterative approach with the rank of \(M\) fixed at 9. I find that it chooses a Laplace prior with all its weight on the non-null component, and does not find a great solution.
fit.fl = flash(G, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
for(i in 1:5){
  # re-estimate the low-rank mean M as the rank-9 truncated SVD of the residual
  M.svd = svd(G - fitted(fit.fl) )
  M = M.svd$u[,1:9] %*% (M.svd$d[1:9] * t(M.svd$v[,1:9]))
  # re-fit the single factor to the M-adjusted data
  #fit.fl = flash(G-M, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
  fit.fl = flash_update_data(fit.fl,G-M)
  fit.fl = flash_backfit(fit.fl)
  print(fit.fl$elbo)
}
Backfitting 1 factors (tolerance: 1.49e-02)...
Difference between iterations is within 1.0e+06...
Difference between iterations is within 1.0e+05...
Difference between iterations is within 1.0e+04...
Difference between iterations is within 1.0e+03...
Difference between iterations is within 1.0e+02...
Difference between iterations is within 1.0e+01...
Wrapping up...
Done.
[1] 3111599
Backfitting 1 factors (tolerance: 1.49e-02)...
Difference between iterations is within 1.0e+01...
Wrapping up...
Done.
[1] 3141269
Backfitting 1 factors (tolerance: 1.49e-02)...
Wrapping up...
Done.
[1] 3141269
Backfitting 1 factors (tolerance: 1.49e-02)...
Wrapping up...
Done.
[1] 3141269
Backfitting 1 factors (tolerance: 1.49e-02)...
Wrapping up...
Done.
[1] 3141269
cor(X, fit.fl$L_pm)
[,1]
[1,] -0.38113764
[2,] 0.18419586
[3,] -0.08593986
[4,] 0.13029310
[5,] -0.17712154
[6,] 0.56054887
[7,] -0.11792697
[8,] -0.61798516
[9,] 0.25148413
[10,] 0.05945271
fit.fl$L_ghat
[[1]]
$pi
[1] 1.806438e-07 9.999998e-01
$mean
[1] 0 0
$scale
[1] 0.0000000 0.7150248
attr(,"class")
[1] "laplacemix"
attr(,"row.names")
[1] 1 2
fit.fl$elbo
[1] 3141269
Try a normal mixture prior instead:
fit.fl = flash(G, ebnm_fn = ebnm_normal_scale_mixture, greedy_Kmax = 1)
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
cor(X, fit.fl$L_pm)
[,1]
[1,] -0.38140081
[2,] 0.18573512
[3,] -0.08625343
[4,] 0.13224256
[5,] -0.17841814
[6,] 0.55913558
[7,] -0.11852873
[8,] -0.61631534
[9,] 0.25340468
[10,] 0.06051061
for(i in 1:5){
M.svd = svd(G - fitted(fit.fl) )
M = M.svd$u[,1:9] %*% (M.svd$d[1:9] * t(M.svd$v[,1:9]))
#fit.fl = flash(G-M, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
fit.fl = flash_update_data(fit.fl,G-M)
fit.fl = flash_backfit(fit.fl)
print(fit.fl$elbo)
}
Backfitting 1 factors (tolerance: 1.49e-02)...
Difference between iterations is within 1.0e+06...
Difference between iterations is within 1.0e+05...
Difference between iterations is within 1.0e+04...
Difference between iterations is within 1.0e+03...
Difference between iterations is within 1.0e+02...
Difference between iterations is within 1.0e+01...
Wrapping up...
Done.
[1] 3131196
Backfitting 1 factors (tolerance: 1.49e-02)...
Difference between iterations is within 1.0e+00...
Wrapping up...
Done.
[1] 3141348
Backfitting 1 factors (tolerance: 1.49e-02)...
Wrapping up...
Done.
[1] 3141348
Backfitting 1 factors (tolerance: 1.49e-02)...
Wrapping up...
Done.
[1] 3141348
Backfitting 1 factors (tolerance: 1.49e-02)...
Wrapping up...
Done.
[1] 3141348
cor(X, fit.fl$L_pm)
[,1]
[1,] -0.3815060
[2,] 0.1857550
[3,] -0.0862240
[4,] 0.1322642
[5,] -0.1784584
[6,] 0.5592986
[7,] -0.1185320
[8,] -0.6164988
[9,] 0.2534772
[10,] 0.0605242
fit.fl$L_ghat
[[1]]
$pi
[1] 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.18094444
[7] 0.03758443 0.00000000 0.21533611 0.02070509 0.00000000 0.41790590
[13] 0.07271325 0.05481079 0.00000000 0.00000000 0.00000000 0.00000000
[19] 0.00000000 0.00000000 0.00000000 0.00000000
$mean
[1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
$sd
[1] 0.0000000 0.1229333 0.1891227 0.2529762 0.3202798 0.3940946 0.4768439
[8] 0.5708588 0.6786031 0.8028042 0.9465530 1.1133963 1.3074330 1.5334192
[15] 1.7968879 2.1042857 2.4631315 2.8822015 3.3717434 3.9437279 4.6121413
[22] 5.3933273
attr(,"class")
[1] "normalmix"
attr(,"row.names")
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
fit.fl$elbo
[1] 3141348
Now try initializing at an ICA solution. It still uses a Laplace prior, but it essentially converges to the ICA solution (i.e., it does not move much), and the ELBO is better.
fit.ica = fastICA(G, n.comp = 10)
s = fit.ica$S[,1]
a = fit.ica$A[1,]
M.svd = svd(G - s %*% t(a) )
M = M.svd$u[,1:9] %*% (M.svd$d[1:9] * t(M.svd$v[,1:9]))
fit.fl2 = flash(G-M, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
fit.fl2$L_ghat
[[1]]
$pi
[1] 1.940483e-06 9.999981e-01
$mean
[1] 0 0
$scale
[1] 0.0000000 0.4334429
attr(,"class")
[1] "laplacemix"
attr(,"row.names")
[1] 1 2
fit.fl2$elbo
[1] 3148073
for(i in 1:5){
  M.svd = svd(G - fitted(fit.fl2) )
  M = M.svd$u[,1:9] %*% (M.svd$d[1:9] * t(M.svd$v[,1:9]))
  fit.fl2 = flash(G-M, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
  print(fit.fl$elbo) # note: this prints the ELBO of the earlier fit.fl, not fit.fl2; see fit.fl2$elbo below
}
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 3141348
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 3141348
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 3141348
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 3141348
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 3141348
cor(X, fit.fl2$L_pm)
[,1]
[1,] -2.211661e-03
[2,] -2.436869e-03
[3,] 2.869316e-05
[4,] 7.598077e-03
[5,] 1.470651e-03
[6,] -3.292375e-03
[7,] -9.997270e-01
[8,] -3.886822e-03
[9,] 8.330709e-03
[10,] -4.525288e-03
fit.fl2$L_ghat
[[1]]
$pi
[1] 1.685282e-06 9.999983e-01
$mean
[1] 0 0
$scale
[1] 0.0000000 0.4334428
attr(,"class")
[1] "laplacemix"
attr(,"row.names")
[1] 1 2
fit.fl2$F_ghat
[[1]]
$pi
[1] 6.48661e-11 1.00000e+00
$mean
[1] 0 0
$scale
[1] 0.000000 4.767295
attr(,"class")
[1] "laplacemix"
attr(,"row.names")
[1] 1 2
fit.fl2$elbo
[1] 3149452
fit.fl$elbo
[1] 3141348
Here I try the same thing, but using just the first 8 PCs, in an attempt to give the optimization a bit more “wiggle room” to move. But I found that convergence is very slow here.
fit.fl = flash(G, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
for(i in 1:50){
M.svd = svd(G - fitted(fit.fl) )
M = M.svd$u[,1:8] %*% (M.svd$d[1:8] * t(M.svd$v[,1:8]))
fit.fl = flash(G-M, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
print(fit.fl$elbo)
}
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740099
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740098
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740098
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740097
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740097
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740096
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740096
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740096
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740095
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740095
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740094
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740094
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740093
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740093
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740093
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740092
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740092
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740092
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740091
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740091
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740090
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740090
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740090
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740089
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740089
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740089
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740088
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740088
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740088
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740087
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740087
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740087
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740086
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740086
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740086
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740085
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740085
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740085
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740085
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740084
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740084
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740084
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740083
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740083
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740083
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740082
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740082
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740082
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740081
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] -2740081
cor(X, fit.fl$L_pm)
[,1]
[1,] -0.37800842
[2,] 0.17910674
[3,] -0.08737775
[4,] 0.12771926
[5,] -0.17455302
[6,] 0.56250455
[7,] -0.11533349
[8,] -0.62305161
[9,] 0.24789657
[10,] 0.05560268
fit.fl$L_ghat
[[1]]
$pi
[1] 1.131635e-06 9.999989e-01
$mean
[1] 0 0
$scale
[1] 0.0000000 0.7134562
attr(,"class")
[1] "laplacemix"
attr(,"row.names")
[1] 1 2
fit.fl$elbo
[1] -2740081
Try nuclear norm regularization on \(M\) rather than a hard rank constraint.
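For reference, soft-thresholding the singular values as in the loop below, \(d_i \mapsto \max(d_i - \lambda, 0)\) with \(\lambda = 10\), is the closed-form minimizer of \(\frac{1}{2}\|Z - M\|_F^2 + \lambda \|M\|_*\) over \(M\), where \(Z = G - lf'\) is the current residual; so this swaps the hard rank-9 truncation for a nuclear-norm proximal step.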
fit.fl = flash(G, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
for(i in 1:50){
M.svd = svd(G - fitted(fit.fl) )
M.svd$d = pmax(0, M.svd$d - 10)
M = M.svd$u %*% (M.svd$d * t(M.svd$v))
fit.fl = flash(G-M, ebnm_fn = ebnm_point_laplace, greedy_Kmax = 1)
print(fit.fl$elbo)
}
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1924275
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1923715
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1936740
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1946657
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1951180
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1953225
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954144
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954551
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954727
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954799
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954824
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954827
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954821
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954810
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954797
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954783
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954769
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954755
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954740
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954726
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954712
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954697
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954683
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954669
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954655
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954640
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954626
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954612
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954598
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954584
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954569
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954555
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954541
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954527
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954513
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954499
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954485
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954471
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954457
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954443
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954429
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954415
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954401
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954387
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954373
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954359
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954345
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954330
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954315
Adding factor 1 to flash object...
Wrapping up...
Done.
Nullchecking 1 factors...
Done.
[1] 1954306
cor(X, fit.fl$L_pm)
[,1]
[1,] -0.38122318
[2,] 0.18435429
[3,] -0.08616472
[4,] 0.13039632
[5,] -0.17723444
[6,] 0.56045909
[7,] -0.11810855
[8,] -0.61778328
[9,] 0.25160371
[10,] 0.05952674
fit.fl$L_ghat
[[1]]
$pi
[1] 2.814814e-07 9.999997e-01
$mean
[1] 0 0
$scale
[1] 0.000000 0.715645
attr(,"class")
[1] "laplacemix"
attr(,"row.names")
[1] 1 2
fit.fl$elbo
[1] 1954306
Here I try the same thing with PMD.
library(PMA)
fit.pmd = PMD(G,"standard",sumabs=0.5,center=F)
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1234567891011121314151617181920
cor(X, fit.pmd$u)
[,1]
[1,] 0.26840307
[2,] -0.08054940
[3,] 0.06713861
[4,] -0.01299302
[5,] 0.10758749
[6,] -0.36451223
[7,] 0.05598098
[8,] 0.81813865
[9,] -0.12268001
[10,] -0.07062926
for(i in 1:10){
  # rank-9 truncated SVD of the residual after removing the current PMD rank-1 fit
  M.svd = svd(G - fit.pmd$u %*% t(fit.pmd$d * fit.pmd$v) )
  M = M.svd$u[,1:9] %*% (M.svd$d[1:9] * t(M.svd$v[,1:9]))
  # warm-start PMD from the previous v and take a single iteration on the M-adjusted data
  fit.pmd = PMD(G-M, v = fit.pmd$v, type="standard", sumabs= 0.5, center=F, niter=1)
}
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
cor(X, fit.pmd$u)
[,1]
[1,] 0.011377423
[2,] -0.011759704
[3,] -0.008862186
[4,] 0.011258508
[5,] 0.016953914
[6,] -0.055848148
[7,] 0.011011700
[8,] 0.986368084
[9,] -0.014366883
[10,] 0.011987536
# fit term
sum((G-M- fit.pmd$u %*% (fit.pmd$d * t(fit.pmd$v)))^2)
[1] 2029164
# penalty term (seems to be fixed)
sum(abs(fit.pmd$u)) * fit.pmd$sumabsu + sum(abs(fit.pmd$v)) * fit.pmd$sumabsv
[1] 2525
This worked well. Now I’ll try a lesser penalty.
fit.pmd = PMD(G,"standard",sumabs=0.9,center=F)
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
cor(X, fit.pmd$u)
[,1]
[1,] 0.38293438
[2,] -0.19070578
[3,] 0.08766478
[4,] -0.13750981
[5,] 0.18229298
[6,] -0.55618217
[7,] 0.12076554
[8,] 0.61146048
[9,] -0.25850008
[10,] -0.06340065
for(i in 1:10){
M.svd = svd(G - fit.pmd$u %*% t(fit.pmd$d * fit.pmd$v) )
M = M.svd$u[,1:9] %*% (M.svd$d[1:9] * t(M.svd$v[,1:9]))
fit.pmd = PMD(G-M, v = fit.pmd$v, type="standard", sumabs= 0.9, center=F,niter=1)
print(sum((G-M- fit.pmd$u %*% (fit.pmd$d * t(fit.pmd$v)))^2) )
}
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
Warning in PMDL1L1(x, sumabs = sumabs, sumabsu = sumabsu, sumabsv = sumabsv, :
PMDL1L1 was run without first subtracting out the mean of x.
1
[1] 89.97461
cor(X, fit.pmd$u)
[,1]
[1,] 0.38293438
[2,] -0.19070578
[3,] 0.08766478
[4,] -0.13750981
[5,] 0.18229298
[6,] -0.55618217
[7,] 0.12076554
[8,] 0.61146048
[9,] -0.25850008
[10,] -0.06340065
# fit term
sum((G-M- fit.pmd$u %*% (fit.pmd$d * t(fit.pmd$v)))^2)
[1] 89.97461
# penalty term (seems to be fixed)
sum(abs(fit.pmd$u)) * fit.pmd$sumabsu + sum(abs(fit.pmd$v)) * fit.pmd$sumabsv
[1] 6784.095
Here I compare the fit and penalty at the fastICA solution to see if this is a convergence issue. It was a bit of a surprise to see that the ICA solution does not get close to the PMD fit term, presumably because ICA does not optimize for MSE. We also see that M.svd$d has 9 large singular values plus one that is not large but is still bigger than background; this is the one that ICA is not fitting. That is, removing the ICA component does not actually leave a rank-9 matrix but effectively a rank-10 one. Presumably this is because ICA fits the top 10 PCs rather than the actual data matrix, so it leaves some of the “noise” unfitted; but maybe PMD is actually fitting more noise? (Is this an advantage of an initial PCA step? It might be. Maybe we should try PMD on the top PCs rather than the full data matrix; a rough sketch of that experiment appears after the output below.)
M.svd = svd(G - s %*% t(a) )
M = M.svd$u[,1:9] %*% (M.svd$d[1:9] * t(M.svd$v[,1:9]))
sum((G-M- s %*% t(a))^2)
[1] 93.62801
M.svd$d
[1] 9176.2905200 8113.6850689 7919.9268198 7220.2153718 7120.6590558
[6] 6467.8700988 6108.6111561 5314.9885635 3673.8448242 1.9145568
[11] 1.0879917 1.0818923 1.0781103 1.0769687 1.0749752
[16] 1.0737024 1.0705980 1.0673045 1.0639946 1.0618220
[21] 1.0600228 1.0576188 1.0561401 1.0532369 1.0514490
[26] 1.0486938 1.0472644 1.0449246 1.0417742 1.0408329
[31] 1.0397812 1.0379476 1.0359387 1.0348096 1.0338034
[36] 1.0318856 1.0307806 1.0301591 1.0260115 1.0235496
[41] 1.0227009 1.0217301 1.0191786 1.0173757 1.0149219
[46] 1.0140178 1.0121285 1.0109634 1.0095153 1.0065901
[51] 1.0061486 1.0041182 1.0010232 0.9995755 0.9977064
[56] 0.9975979 0.9957359 0.9946129 0.9941258 0.9918242
[61] 0.9886653 0.9882669 0.9858695 0.9823833 0.9814767
[66] 0.9807947 0.9802120 0.9787143 0.9766344 0.9738211
[71] 0.9730331 0.9720059 0.9702359 0.9665484 0.9661569
[76] 0.9645165 0.9629077 0.9604952 0.9594310 0.9568659
[81] 0.9551573 0.9545256 0.9520256 0.9492639 0.9462706
[86] 0.9448512 0.9440892 0.9428766 0.9397537 0.9378132
[91] 0.9356203 0.9344767 0.9306737 0.9276352 0.9267261
[96] 0.9249556 0.9239296 0.9199207 0.9158973 0.9126204
svd((G-fit.pmd$u %*% (fit.pmd$d * t(fit.pmd$v))))$d
[1] 8.117499e+03 7.920111e+03 7.256598e+03 7.134314e+03 6.478365e+03
[6] 6.148451e+03 5.338734e+03 4.748096e+03 3.608830e+03 1.088411e+00
[11] 1.081894e+00 1.078265e+00 1.076970e+00 1.075021e+00 1.073817e+00
[16] 1.070731e+00 1.067332e+00 1.064023e+00 1.061822e+00 1.060127e+00
[21] 1.057620e+00 1.056216e+00 1.053259e+00 1.051613e+00 1.048720e+00
[26] 1.047286e+00 1.044964e+00 1.042030e+00 1.040842e+00 1.039811e+00
[31] 1.037961e+00 1.035977e+00 1.034845e+00 1.033894e+00 1.032034e+00
[36] 1.030914e+00 1.030176e+00 1.026102e+00 1.023608e+00 1.022779e+00
[41] 1.021776e+00 1.019187e+00 1.017391e+00 1.014994e+00 1.014120e+00
[46] 1.012301e+00 1.010972e+00 1.009856e+00 1.006717e+00 1.006161e+00
[51] 1.004120e+00 1.001031e+00 9.995807e-01 9.977976e-01 9.976091e-01
[56] 9.957384e-01 9.946141e-01 9.942168e-01 9.921946e-01 9.888689e-01
[61] 9.883603e-01 9.859798e-01 9.824715e-01 9.814767e-01 9.810739e-01
[66] 9.803592e-01 9.787418e-01 9.766685e-01 9.738279e-01 9.730336e-01
[71] 9.720156e-01 9.702887e-01 9.665486e-01 9.661805e-01 9.646655e-01
[76] 9.629099e-01 9.605022e-01 9.594346e-01 9.568751e-01 9.551861e-01
[81] 9.545383e-01 9.521384e-01 9.493240e-01 9.463089e-01 9.449444e-01
[86] 9.440933e-01 9.430927e-01 9.397773e-01 9.378201e-01 9.356860e-01
[91] 9.344981e-01 9.307157e-01 9.276357e-01 9.267522e-01 9.249570e-01
[96] 9.240263e-01 9.199398e-01 9.159090e-01 9.126318e-01 1.617511e-11
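As a rough sketch of that last idea (not run here; the sumabs setting and object names are just placeholders), one could apply PMD to a rank-10 truncated-SVD reconstruction of G rather than to G itself:
# Sketch only: run PMD on the top-10-PC reconstruction of G (assumed settings)
G.svd = svd(G, nu = 10, nv = 10)
G10 = G.svd$u %*% (G.svd$d[1:10] * t(G.svd$v))   # keep only the top 10 PCs of G
fit.pmd.pc = PMD(G10, type = "standard", sumabs = 0.9, center = FALSE)
cor(X, fit.pmd.pc$u)
This would prevent PMD from absorbing background noise into u and v, at the cost of committing to the PCA step up front.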
sessionInfo()
R version 4.4.2 (2024-10-31)
Platform: aarch64-apple-darwin20
Running under: macOS Sequoia 15.6.1
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.0
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
time zone: America/Chicago
tzcode source: internal
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] PMA_1.2-4 fastICA_1.2-7 flashier_1.0.56 ebnm_1.1-34
loaded via a namespace (and not attached):
[1] softImpute_1.4-3 gtable_0.3.6 xfun_0.52
[4] bslib_0.9.0 ggplot2_3.5.2 htmlwidgets_1.6.4
[7] ggrepel_0.9.6 lattice_0.22-6 quadprog_1.5-8
[10] vctrs_0.6.5 tools_4.4.2 generics_0.1.4
[13] parallel_4.4.2 Polychrome_1.5.4 tibble_3.3.0
[16] pkgconfig_2.0.3 Matrix_1.7-2 data.table_1.17.6
[19] SQUAREM_2021.1 RColorBrewer_1.1-3 RcppParallel_5.1.10
[22] scatterplot3d_0.3-44 lifecycle_1.0.4 truncnorm_1.0-9
[25] compiler_4.4.2 farver_2.1.2 stringr_1.5.1
[28] git2r_0.35.0 progress_1.2.3 RhpcBLASctl_0.23-42
[31] httpuv_1.6.15 htmltools_0.5.8.1 sass_0.4.10
[34] lazyeval_0.2.2 yaml_2.3.10 plotly_4.11.0
[37] crayon_1.5.3 tidyr_1.3.1 later_1.4.2
[40] pillar_1.10.2 jquerylib_0.1.4 whisker_0.4.1
[43] uwot_0.2.3 cachem_1.1.0 trust_0.1-8
[46] gtools_3.9.5 tidyselect_1.2.1 digest_0.6.37
[49] Rtsne_0.17 stringi_1.8.7 purrr_1.0.4
[52] dplyr_1.1.4 ashr_2.2-66 splines_4.4.2
[55] cowplot_1.1.3 rprojroot_2.0.4 fastmap_1.2.0
[58] grid_4.4.2 colorspace_2.1-1 cli_3.6.5
[61] invgamma_1.1 magrittr_2.0.3 prettyunits_1.2.0
[64] scales_1.4.0 promises_1.3.3 horseshoe_0.2.0
[67] httr_1.4.7 rmarkdown_2.29 fastTopics_0.7-07
[70] deconvolveR_1.2-1 workflowr_1.7.1 hms_1.1.3
[73] pbapply_1.7-2 evaluate_1.0.4 knitr_1.50
[76] viridisLite_0.4.2 irlba_2.3.5.1 rlang_1.1.6
[79] Rcpp_1.0.14 mixsqp_0.3-54 glue_1.8.0
[82] rstudioapi_0.17.1 jsonlite_2.0.0 R6_2.6.1
[85] fs_1.6.6