Article ID: 6863321
Journal: Neural Networks
Published Year: 2015
Pages: 10 Pages
File Type: PDF
Abstract
In Bayesian variable selection, indicator model selection (IMS) is a well-known class of sampling algorithms that has been applied to a variety of models. IMS methods rely on pseudo-priors; the class includes Gibbs variable selection (GVS) and Kuo and Mallick's (KM) method. However, the efficiency of IMS depends strongly on the parameters of the proposal distribution and the pseudo-priors. Specifically, GVS determines these parameters from a pilot run of the full model, while the KM method sets them equal to the prior parameters, which often leads to slow mixing. In this paper, we propose an algorithm that adapts the IMS parameters during the run. The parameters obtained on the fly provide an appropriate proposal distribution and pseudo-priors, which improve the mixing of the algorithm. We also prove a convergence theorem for the proposed algorithm, and confirm through Bayesian variable selection experiments that it is more efficient than the conventional algorithms.
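To make the IMS setup concrete, the following is a minimal sketch of a Kuo–Mallick-style Gibbs sampler for variable selection in a toy linear regression. It is not the adaptive algorithm proposed in the paper: the pseudo-prior is simply set equal to the prior (the KM choice the abstract describes), and all data, names, and hyperparameters (`tau2`, the Bernoulli(0.5) inclusion prior) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first two of five predictors are active (assumed setup).
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
sigma = 1.0
y = X @ beta_true + rng.normal(scale=sigma, size=n)

tau2 = 10.0      # prior variance of each coefficient (illustrative)
n_iter = 2000

beta = np.zeros(p)
gamma = np.ones(p, dtype=int)
gamma_trace = np.zeros((n_iter, p))

def loglik(y, X, beta, gamma, sigma):
    """Gaussian log-likelihood of the model y = X (gamma * beta) + noise."""
    mu = X @ (gamma * beta)
    return -0.5 * np.sum((y - mu) ** 2) / sigma**2

for t in range(n_iter):
    for j in range(p):
        if gamma[j] == 1:
            # Included: sample beta_j from its conditional posterior.
            r = y - X @ (gamma * beta) + beta[j] * X[:, j]
            prec = X[:, j] @ X[:, j] / sigma**2 + 1.0 / tau2
            mean = (X[:, j] @ r / sigma**2) / prec
            beta[j] = rng.normal(mean, np.sqrt(1.0 / prec))
        else:
            # Excluded: sample beta_j from the pseudo-prior, which the
            # KM method sets equal to the prior N(0, tau2).
            beta[j] = rng.normal(0.0, np.sqrt(tau2))
        # Sample the indicator gamma_j given beta (prior P(gamma_j=1) = 0.5).
        g1, g0 = gamma.copy(), gamma.copy()
        g1[j], g0[j] = 1, 0
        d = loglik(y, X, beta, g1, sigma) - loglik(y, X, beta, g0, sigma)
        p1 = np.exp(d - np.logaddexp(0.0, d))  # numerically stable sigmoid
        gamma[j] = rng.binomial(1, p1)
    gamma_trace[t] = gamma

# Posterior inclusion probabilities after burn-in: high for the two
# active predictors, low for the rest.
pip = gamma_trace[500:].mean(axis=0)
print(np.round(pip, 2))
```

Because the pseudo-prior is diffuse, an excluded coefficient is usually drawn far from its posterior mode, so the indicator rarely flips back on; this is exactly the slow-mixing behavior that motivates adapting the pseudo-prior parameters on the fly.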
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors
, , ,