Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6865671 | Neurocomputing | 2015 | 6 | |
Abstract
Laplacian mixture models have been used to deal with heavy-tailed distributions in data modeling problems. We consider an extension of Laplacian mixture models whose components are ε-insensitive distributions. An EM-type learning algorithm is derived for maximum likelihood estimation of the proposed mixture model. The E-step is formulated in the usual way, while the M-step is formulated as a dual optimization problem rather than the primal one. Additionally, a convergence proof for the case ε=0 is given. In analogy with the k-means algorithm, a certain limit of the learning algorithm yields what we call the ei-means algorithm. The derived algorithm is applied to the approximate computation of rate-distortion functions associated with the ε-insensitive loss function. Finally, experiments on synthetic data and the real-world Spambase dataset demonstrate that, with an appropriate choice of ε, the model tolerates a small percentage of noisy data.
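The abstract does not state the component density explicitly. One natural reading, consistent with the ε-insensitive loss familiar from support vector regression, is p(x | μ, b) ∝ exp(−max(0, |x − μ| − ε)/b), with normalizing constant 2(b + ε) (a flat region of width 2ε plus two exponential tails). The sketch below is a minimal one-dimensional illustration under that assumption only; the names (eps_ins, em_fit) are hypothetical, and the generic numerical minimizer in the M-step merely stands in for the dual formulation derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def eps_ins(r, eps):
    # epsilon-insensitive absolute deviation: zero inside the [-eps, eps] tube
    return np.maximum(0.0, np.abs(r) - eps)

def log_density(x, mu, b, eps):
    # log of the ASSUMED eps-insensitive Laplace density; the normalizer
    # 2*(b + eps) covers the flat tube plus two exponential tails
    return -eps_ins(x - mu, eps) / b - np.log(2.0 * (b + eps))

def em_fit(x, k, eps, n_iter=100, seed=0):
    # EM for a k-component eps-insensitive Laplacian mixture (1-D sketch)
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False).astype(float)
    b = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step ("formulated in the usual way"): posterior responsibilities
        logp = np.log(pi) + log_density(x[:, None], mu, b, eps)
        logp -= logp.max(axis=1, keepdims=True)  # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: the paper solves a dual optimization problem; here a
        # generic 1-D minimizer stands in for the location update
        for j in range(k):
            w = r[:, j]
            obj = lambda m: float(np.sum(w * eps_ins(x - m, eps)))
            mu[j] = minimize_scalar(obj, bounds=(x.min(), x.max()),
                                    method="bounded").x
            # scale MLE under the assumed density: solves S/b^2 = W/(b+eps),
            # which reduces to the usual Laplace update b = S/W when eps = 0
            S = float(np.sum(w * eps_ins(x - mu[j], eps)))
            W = float(w.sum())
            b[j] = max((S + np.sqrt(S**2 + 4.0 * W * S * eps)) / (2.0 * W),
                       1e-8)
        pi = r.mean(axis=0)
    return pi, mu, b
```

Setting eps=0 recovers ordinary Laplacian mixture EM (weighted-median-style location and mean-absolute-deviation scale updates), which matches the regime in which the abstract states convergence is proved.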
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors
Kazuho Watanabe