Last updated: 2018-05-14

workflowr checks:
  • R Markdown file: up-to-date

    Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

  • Environment: empty

    Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it's best to always run the code in an empty environment.

  • Seed: set.seed(20180411)

    The command set.seed(20180411) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

  • Session information: recorded

    Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

  • Repository version: 24e0151

    Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

    Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
    
    Ignored files:
        Ignored:    .DS_Store
        Ignored:    .Rhistory
        Ignored:    .Rproj.user/
        Ignored:    .sos/
        Ignored:    exams/
        Ignored:    temp/
    
    Untracked files:
        Untracked:  analysis/pca_cell_cycle.Rmd
        Untracked:  analysis/ridge_mle.Rmd
        Untracked:  docs/figure/pca_cell_cycle.Rmd/
        Untracked:  homework/fdr.aux
        Untracked:  homework/fdr.log
        Untracked:  tempETA_1_parBayesC.dat
        Untracked:  temp_ETA_1_parBayesC.dat
        Untracked:  temp_mu.dat
        Untracked:  temp_varE.dat
        Untracked:  tempmu.dat
        Untracked:  tempvarE.dat
    
    Unstaged changes:
        Modified:   analysis/cell_cycle.Rmd
        Modified:   analysis/density_est_cell_cycle.Rmd
        Modified:   analysis/eb_vs_soft.Rmd
        Modified:   analysis/eight_schools.Rmd
        Modified:   analysis/glmnet_intro.Rmd
    
    
    Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
    Past versions:
        File  Version  Author       Date        Message
        Rmd   24e0151  stephens999  2018-05-14  wflow_publish("var_select_regression.Rmd")


Simulation

We will simulate a very simple example in which there are two very similar variables that are hard to choose between, as well as a number of other “null” variables.

set.seed(1)
n = 100
p = 10
x = matrix(rnorm(n*p), nrow=n)
x = cbind(x[,1] + rnorm(n, 0, 0.0001), x) # prepend a near-copy of column 1, so columns 1 and 2 are almost identical
y = x[,1] + rnorm(n)
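
As a quick sanity check (an extra step, not in the original analysis), one can verify that the first two columns are nearly perfectly correlated:

cor(x[,1], x[,2]) # should be extremely close to 1 by construction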

Try different versions of penalized regression (glmnet). First, the LASSO. Notice that there is some variation in the CV results because the fold assignment is random, so I ran it 5 times.

# fit glmnet for a given alpha and return the coefficients at the CV-chosen lambda
glmnet_fit = function(x, y, alpha){
  y.fit = glmnet::glmnet(x, y, alpha=alpha)
  y.cv  = glmnet::cv.glmnet(x, y, alpha=alpha, lambda=y.fit$lambda)
  return(list(bhat=coef(y.fit, s=y.cv$lambda.min)[,1], y.fit=y.fit, y.cv=y.cv))
}
for(i in 1:5){
  print(glmnet_fit(x, y, 1)$bhat) # alpha = 1 is the LASSO
}
 (Intercept)           V1           V2           V3           V4 
-0.127408729  0.775727994  0.000000000  0.000000000  0.000000000 
          V5           V6           V7           V8           V9 
 0.000000000  0.000000000  0.000000000  0.000000000  0.000000000 
         V10          V11 
-0.009882609  0.000000000 
(Intercept)          V1          V2          V3          V4          V5 
-0.13115963  0.81302446  0.00000000  0.00000000  0.00000000  0.00000000 
         V6          V7          V8          V9         V10         V11 
 0.00000000  0.00000000  0.00000000  0.00000000 -0.04066203  0.00000000 
(Intercept)          V1          V2          V3          V4          V5 
 -0.1259043   0.7609970   0.0000000   0.0000000   0.0000000   0.0000000 
         V6          V7          V8          V9         V10         V11 
  0.0000000   0.0000000   0.0000000   0.0000000   0.0000000   0.0000000 
(Intercept)          V1          V2          V3          V4          V5 
-0.12877698  0.78933299  0.00000000  0.00000000  0.00000000  0.00000000 
         V6          V7          V8          V9         V10         V11 
 0.00000000  0.00000000  0.00000000  0.00000000 -0.02111032  0.00000000 
(Intercept)          V1          V2          V3          V4          V5 
-0.13313775  0.83269353  0.00000000  0.00000000  0.00000000  0.00000000 
         V6          V7          V8          V9         V10         V11 
 0.00000000  0.00000000  0.00000000  0.00000000 -0.05689421  0.00000000 
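
As an aside, the run-to-run variation above comes entirely from the random assignment of observations to folds. If reproducible CV results are wanted, the fold assignments can be fixed via the foldid argument of cv.glmnet; a minimal sketch (not run in the original analysis):

set.seed(2)
foldid = sample(rep(1:10, length.out = n)) # fix a 10-fold assignment
y.cv = glmnet::cv.glmnet(x, y, alpha=1, foldid=foldid)
# repeated calls with the same foldid now give identical CV results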

I was puzzled here because I had set things up so that the first two columns were almost indistinguishable, so I was surprised that it always chose the first column. It turns out this is a numerical issue with the glmnet implementation: when two columns are very highly correlated, it tends to favor the first. We can see this by swapping the first two columns of x:

temp = x[,2]
x[,2] = x[,1]
x[,1] = temp

for(i in 1:5){
  print(glmnet_fit(x, y, 1)$bhat)
}
 (Intercept)           V1           V2           V3           V4 
-0.133996469  0.840055684  0.001169731  0.000000000  0.000000000 
          V5           V6           V7           V8           V9 
 0.000000000  0.000000000  0.000000000  0.000000000  0.000000000 
         V10          V11 
-0.063954927  0.000000000 
 (Intercept)           V1           V2           V3           V4 
-0.131159068  0.811995641  0.001016784  0.000000000  0.000000000 
          V5           V6           V7           V8           V9 
 0.000000000  0.000000000  0.000000000  0.000000000  0.000000000 
         V10          V11 
-0.040671109  0.000000000 
 (Intercept)           V1           V2           V3           V4 
-0.127408191  0.774857822  0.000858692  0.000000000  0.000000000 
          V5           V6           V7           V8           V9 
 0.000000000  0.000000000  0.000000000  0.000000000  0.000000000 
         V10          V11 
-0.009891271  0.000000000 
 (Intercept)           V1           V2           V3           V4 
-0.133137172  0.831561972  0.001119236  0.000000000  0.000000000 
          V5           V6           V7           V8           V9 
 0.000000000  0.000000000  0.000000000  0.000000000  0.000000000 
         V10          V11 
-0.056903503  0.000000000 
 (Intercept)           V1           V2           V3           V4 
-0.131159068  0.811995641  0.001016784  0.000000000  0.000000000 
          V5           V6           V7           V8           V9 
 0.000000000  0.000000000  0.000000000  0.000000000  0.000000000 
         V10          V11 
-0.040671109  0.000000000 
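
Before moving on, note that the glmnet_fit wrapper above also covers ridge and elastic net via its alpha argument. With two nearly identical columns, one might expect the elastic net to spread weight across both rather than pick one; a sketch worth trying (not run in the original analysis):

print(glmnet_fit(x, y, 0.5)$bhat) # elastic net: mix of L1 and L2 penalties
print(glmnet_fit(x, y, 0)$bhat)   # ridge: shrinks but does not select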

Now try a Bayesian MCMC-based approach: BGLR fits this model by MCMC. Unfortunately it does not seem to output samples from the posterior distribution of \(b\), only point estimates (posterior means), which makes it hard to see some of the things I would like to look at. (Note that the coefficient vector returned here does not include an intercept, unlike the glmnet output above.)

fit = BGLR::BGLR(y=y, ETA=list(list(X=x, model='BayesC')),
                 saveAt="temp_", nIter=10000, verbose=FALSE)
bhat = fit$ETA[[1]]$b # posterior mean of each coefficient
bhat
 [1]  0.557289436  0.318978917  0.015231508 -0.007506149  0.009931342
 [6] -0.003992154  0.005388826 -0.005622043 -0.011497332 -0.042000776
[11] -0.005627523

Notice that most of the weight is on the first two variables, which makes sense given that they are nearly identical and either one can explain y.

My guess is that the posterior samples will mostly include one or the other of the first two variables, and that the other variables will not often be included. However, I can't show that yet: I am still looking for a simple R package that will help me illustrate this better.
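
One possible partial workaround, if I am reading the BGLR output correctly (an assumption worth checking against ?BGLR before relying on it): for model='BayesC' the fitted object appears to store estimated posterior inclusion probabilities for each variable in fit$ETA[[1]]$d. If so, these would at least show whether the posterior mass concentrates on the first two variables, even without full posterior samples:

# assumed BGLR field: $d = posterior probability that each effect is non-zero
pip = fit$ETA[[1]]$d
round(pip, 3)
# if the guess above is right, pip[1] and pip[2] should together be large
# (with neither necessarily close to 1), and pip[3:11] should be small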

Session information

sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X El Capitan 10.11.6

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
 [1] workflowr_1.0.1   Rcpp_0.12.16      codetools_0.2-15 
 [4] lattice_0.20-35   foreach_1.4.4     glmnet_2.0-16    
 [7] digest_0.6.15     rprojroot_1.3-2   R.methodsS3_1.7.1
[10] grid_3.3.2        backports_1.1.2   git2r_0.21.0     
[13] magrittr_1.5      evaluate_0.10.1   stringi_1.1.7    
[16] whisker_0.3-2     R.oo_1.22.0       R.utils_2.6.0    
[19] Matrix_1.2-14     rmarkdown_1.9     iterators_1.0.9  
[22] tools_3.3.2       stringr_1.3.0     yaml_2.1.18      
[25] BGLR_1.0.5        htmltools_0.3.6   knitr_1.20       

This reproducible R Markdown analysis was created with workflowr 1.0.1