Neuronized Priors for Bayesian Sparse Linear Regression

2018 
Although Bayesian variable selection procedures have been widely adopted in many scientific research fields, their routine use in practice has not caught up with that of their non-Bayesian counterparts, such as the Lasso, due to difficulties both in Bayesian computation and in assessing the effects of different prior distributions. To ease these challenges, we propose neuronized priors, which unify and extend existing shrinkage priors such as one-group continuous shrinkage priors, continuous spike-and-slab priors, and discrete spike-and-slab priors with point-mass mixtures. A neuronized prior is formulated as the product of a weight variable and a scale variable: the weight is a Gaussian random variable, while the scale is obtained by passing another Gaussian variable through an activation function. By altering the activation function, practitioners can easily implement a large class of Bayesian variable selection procedures. Compared with classic spike-and-slab priors, neuronized priors achieve the same explicit variable selection without employing any latent indicator variables, which leads to more efficient MCMC algorithms and more effective posterior modal estimates obtained from a simple coordinate-ascent algorithm. We examine a wide range of simulated and real data examples and show that the "neuronization" representation is computationally as efficient as, or more efficient than, its standard counterpart in all well-known cases.
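As a minimal sketch of the construction described above, the snippet below draws samples from a prior of the form theta = T(alpha - alpha0) * w, where w and alpha are independent Gaussians and T is the activation function; the offset alpha0, the default hyperparameter values, and the function names are illustrative assumptions, not the paper's implementation. A ReLU activation places positive probability exactly at zero, which illustrates how explicit variable selection can arise without a latent indicator variable.

```python
import numpy as np

rng = np.random.default_rng(0)

def neuronized_draws(n, activation, alpha0=0.0, tau_w=1.0):
    """Sample theta = T(alpha - alpha0) * w (illustrative construction).

    alpha and w are independent Gaussians; the choice of activation T
    determines the induced marginal prior on theta.
    """
    alpha = rng.standard_normal(n)       # Gaussian scale variable
    w = tau_w * rng.standard_normal(n)   # Gaussian weight variable
    return activation(alpha - alpha0) * w

# ReLU activation: T(t) = max(t, 0). Whenever alpha <= alpha0 the scale
# is exactly zero, so theta has a point mass at zero -- spike-and-slab
# behavior with no latent indicator variable.
theta_ss = neuronized_draws(100_000, lambda t: np.maximum(t, 0.0), alpha0=1.0)
print("P(theta == 0) =", (theta_ss == 0.0).mean())  # about Phi(1.0), i.e. ~0.84

# Identity activation: T(t) = t gives a continuous shrinkage prior
# (a product of two Gaussians: sharp peak at zero, heavy tails).
theta_cs = neuronized_draws(100_000, lambda t: t)
```

In this sketch, moving between a spike-and-slab-type prior and a continuous shrinkage prior requires changing only the activation argument, which mirrors the abstract's claim that altering the activation function lets practitioners implement a large class of variable selection procedures.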