Article ID: 408014
Journal: Neurocomputing
Published Year: 2011
Pages: 13
File Type: PDF
Abstract

Most current approaches to mixture modeling consider mixture components from only a few families of probability distributions, in particular the Gaussian family. The reasons for this preference can be traced to the training algorithms, typically versions of the Expectation-Maximization (EM) method: the re-estimation equations this method requires become very complex as the mixture components depart from the simplest cases. Here we propose a stochastic approximation method for probabilistic mixture learning. Under this method it is straightforward to train mixtures composed of a wide range of components from different families, making it a flexible alternative for mixture learning. Experimental results are presented to show the probability density and missing value estimation capabilities of our proposal.
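The abstract does not reproduce the update equations, so the following is only a minimal sketch of how stochastic approximation can replace EM batch re-estimation: each incoming sample triggers a small Robbins-Monro style step on the mixture parameters. Gaussian components are used purely for concreteness (the point of the paper is that other component families plug in by swapping the density function); the names pdf_gauss and sa_mixture_step and the specific update forms are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def pdf_gauss(x, mu, var):
    # Isotropic Gaussian density of sample x under each of the K components.
    # x: (d,) sample; mu: (K, d) means; var: (K,) variances.
    d = x.shape[-1]
    sq = np.sum((x - mu) ** 2, axis=-1)
    return np.exp(-0.5 * sq / var) / (2.0 * np.pi * var) ** (d / 2.0)

def sa_mixture_step(x, pi, mu, var, eps):
    # One stochastic-approximation update on a single sample x, step size eps.
    # Updates are applied sequentially in place, as is typical for such schemes.
    resp = pi * pdf_gauss(x, mu, var)   # unnormalized responsibilities
    resp /= resp.sum()                  # posterior over components
    pi += eps * (resp - pi)             # proportions stay normalized (both sum to 1)
    mu += eps * resp[:, None] * (x - mu)
    var += eps * resp * (np.sum((x - mu) ** 2, axis=1) / x.shape[-1] - var)
    return pi, mu, var

# Usage: stream samples with a decaying step size, e.g. eps_t = 1 / (t + 10).
rng = np.random.default_rng(0)
pi, mu, var = np.full(2, 0.5), rng.normal(size=(2, 3)), np.ones(2)
for t, x in enumerate(rng.normal(size=(1000, 3))):
    pi, mu, var = sa_mixture_step(x, pi, mu, var, 1.0 / (t + 10))
```

Swapping the component family amounts to replacing pdf_gauss (and the corresponding parameter updates), which is precisely the flexibility argued for in the abstract.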

Research highlights
► Stochastic approximation is proposed as an alternative to EM for probabilistic mixture learning.
► Non-standard probability density functions are easily managed.
► The multivariate triangular family of probability density functions is presented.
► Multivariate triangular pdfs feature finite support and a linearly decaying density.
► Differences among probability density function families are studied.
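The multivariate triangular family named in the highlights is not defined in this excerpt. For reference, the standard univariate triangular density with support [a, b] and mode c, which the multivariate family presumably generalizes (e.g. coordinate-wise; that construction is an assumption here, not a quote from the paper), is

\[
f(x; a, c, b) =
\begin{cases}
\dfrac{2(x - a)}{(b - a)(c - a)}, & a \le x \le c,\\[4pt]
\dfrac{2(b - x)}{(b - a)(b - c)}, & c < x \le b,\\[4pt]
0, & \text{otherwise},
\end{cases}
\]

which exhibits exactly the two properties the highlights mention: finite support and a density that decays linearly away from the mode.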

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence