You should be familiar with Bayesian inference for a normal mean.
The “Normal means” problem is as follows: assume we have data $X_j \sim N(\theta_j, s_j^2)$ for $j = 1, \dots, n$, where the standard deviations $s_j$ are known and the means $\theta_j$ are to be estimated.
It is easy to show that the maximum likelihood estimate of $\theta_j$ is $X_j$.
The idea here is that we can do better than the maximum likelihood estimates by combining information across $j = 1, \dots, n$.
The Empirical Bayes (EB) approach to this problem assumes that the $\theta_j$ come from some underlying distribution $g \in G$, where $G$ is some appropriate family of distributions. Here, for simplicity, we will assume $G$ is the set of all normal distributions; that is, we assume $\theta_j \sim N(\mu, V)$ for some mean $\mu$ and variance $V$. Of course this assumption is somewhat inflexible, and more flexible assumptions are possible, but it is a starting point, and we will stick with the simple normal assumption for now.
If we knew (or were willing to specify) $\mu, V$ then Bayesian inference for $\theta_j \mid X_j, \mu, V$ would be a straightforward conjugate normal calculation. The idea behind the EB approach is to instead estimate $\mu, V$ from the data, specifically by maximum likelihood estimation. It is called “Empirical Bayes” because you can think of estimating $\mu, V$ as “estimating the prior” on $\theta_j$ from the data.
Notice that we can write $X_j = \theta_j + N(0, s_j^2)$ with $\theta_j \mid \mu, V \sim N(\mu, V)$. So, using the fact that the sum of two independent normal random variables is normal, we have $X_j \mid \mu, V \sim N(\mu, V + s_j^2)$.
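As a quick sanity check, here is a small simulation (not part of the original analysis) suggesting that the marginal distribution of $X_j$ has this form; the particular values of $\mu$, $V$, and $s$ below are arbitrary choices for the check.

# Illustrative check of the marginal distribution X_j | mu, V ~ N(mu, V + s^2).
# The values mu = 1, V = 7, s = 2 are arbitrary choices for this check.
set.seed(123)
mu = 1; V = 7; s = 2
theta = rnorm(1e5, mu, sqrt(V))   # theta_j ~ N(mu, V)
x = rnorm(1e5, theta, s)          # X_j | theta_j ~ N(theta_j, s^2)
c(mean(x), var(x))                # should be close to mu = 1 and V + s^2 = 11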
Assuming that the $X_j$ are independent, we can compute the log-likelihood using the following function. Notice that we parameterize in terms of $\log(V)$ rather than $V$; this makes the numerical optimization easier later, because the optimization over $\log(V)$ is unconstrained, which is usually easier than the constrained optimization ($V > 0$).
#' @title the loglikelihood for the EB normal means problem
#' @param par a vector of parameters (mu, log(V))
#' @param x the data vector
#' @param s the vector of standard deviations
nm_loglik = function(par, x, s){
  mu = par[1]
  V = exp(par[2])
  sum(dnorm(x, mu, sqrt(s^2 + V), log = TRUE))
}
We use the R function optim to optimize this log-likelihood. (By default optim performs a minimization; here we set fnscale=-1 in the control list so that it maximizes the log-likelihood instead.) To make the optimization more reliable we could also supply the gradient of the log-likelihood, but for now we simply provide the function itself.
ebnm_normal = function(x, s){
  par_init = c(0, 0)
  res = optim(par = par_init, fn = nm_loglik, method = "BFGS",
              control = list(fnscale = -1), x = x, s = s)
  return(res$par)
}
Here, to illustrate, we run this on a simulated example with $\mu = 1, V = 7$.
set.seed(1)
mu = 1                     # true prior mean
V = 7                      # true prior variance
n = 1000
t = rnorm(n, mu, sqrt(V))  # simulate the true means theta_j
s = rep(1, n)              # known standard deviations
x = rnorm(n, t, s)         # simulate the observed data X_j
res = ebnm_normal(x, s)
c(res[1], exp(res[2]))     # estimates of (mu, V)
## [1] 0.952920 7.606758
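The estimates (about 0.95 for $\mu$ and 7.6 for $V$) are close to the true values $\mu = 1, V = 7$ used to simulate the data.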
TODO: complete this by computing the posterior distributions $\theta_j \mid X_j, \hat{\mu}, \hat{V}$.
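One way this might be completed: by standard conjugate normal results, $\theta_j \mid X_j, \hat{\mu}, \hat{V} \sim N\left( \frac{\hat{\mu}/\hat{V} + X_j/s_j^2}{1/\hat{V} + 1/s_j^2}, \; \frac{1}{1/\hat{V} + 1/s_j^2} \right)$. The helper posterior_normal below is a minimal sketch of this computation; the function name and interface are illustrative, not from the original.

# Sketch: posterior means and variances for theta_j given the EB estimates.
# (posterior_normal is an illustrative helper, not part of the original code.)
posterior_normal = function(x, s, mu, V){
  post_var = 1/(1/V + 1/s^2)           # posterior variance
  post_mean = post_var*(mu/V + x/s^2)  # posterior mean (shrinks X_j toward mu)
  data.frame(mean = post_mean, var = post_var)
}
mu_hat = res[1]
V_hat = exp(res[2])
post = posterior_normal(x, s, mu_hat, V_hat)
head(post)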