Article ID: 411012
Journal: Neurocomputing
Published Year: 2006
Pages: 5
File Type: PDF
Abstract

In this paper, the Laplacian distribution, a sparse distribution, is employed in place of the usual Gaussian prior as the weight prior in the relevance vector machine (RVM), a method for learning sparse regression and classification models. To derive a closed-form expectation–maximization (EM) algorithm for learning the weights, a strict lower bound on this sparse prior is employed. Because the bound takes Gaussian form, it conveniently yields a strict lower bound in Gaussian form on the weight posterior, and thus naturally leads to a closed-form EM algorithm for learning the weights and the hyperparameters.
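The approach the abstract describes can be illustrated with a minimal sketch. A standard Gaussian lower bound on the Laplacian prior follows from the identity |w| = min_η (w²/η + η)/2 for η > 0 (tight at η = |w|), so exp(-λ|w|) ≥ exp(-λ(w²/η + η)/2), which is Gaussian in w. The E-step tightens the bound (η = |w|) and the M-step is then an ordinary Gaussian posterior-mean computation with per-weight precisions λ/η. The sketch below applies this idea to linear regression; the function name, the fixed noise variance, and the fixed λ are illustrative assumptions, not the paper's exact algorithm (which also updates the hyperparameters).

```python
import numpy as np

def laplace_em_regression(X, y, lam=1.0, noise_var=0.1, n_iter=50, eps=1e-8):
    """EM for linear regression with a Laplacian weight prior,
    via the Gaussian lower bound exp(-lam|w|) >= exp(-lam*(w^2/eta + eta)/2).
    (Illustrative sketch; hyperparameters lam and noise_var are held fixed.)"""
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares initialization
    for _ in range(n_iter):
        # E-step: the bound is tight at eta_j = |w_j|
        eta = np.abs(w) + eps
        # M-step: Gaussian posterior mean under per-weight prior precision lam/eta_j
        A = X.T @ X / noise_var + np.diag(lam / eta)
        w = np.linalg.solve(A, X.T @ y / noise_var)
    return w
```

Weights that the data do not support are driven toward zero, because a shrinking η_j makes the corresponding Gaussian prior precision λ/η_j grow, which is the mechanism behind the sparsity of the resulting model.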

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors