Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
416264 | Computational Statistics & Data Analysis | 2006 | 24 |
Abstract
A crucial step in all gradient-based algorithms for computing the nonparametric maximum likelihood estimator in mixture models is the global maximization of the gradient function. In mixtures of exponentials, for example, the methods usually proposed for this step fail. A method for maximizing the gradient function is suggested, based on a discretization that is adapted to the data points. The method is implemented in several gradient-based algorithms; a comparison shows that, on mixtures of exponentials, the ISDM algorithm introduced by Lesperance and Kalbfleisch is much faster than its competitors.
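In this setting the gradient function is the directional derivative of the log-likelihood with respect to the mixing distribution, and the parameter value where it attains its global maximum is the candidate support point added in each iteration of vertex-direction-type algorithms such as ISDM. The sketch below illustrates this step for an exponential mixture. It is not the authors' implementation: the rate parameterization, the grid choice (reciprocals of the observations and of midpoints between consecutive ordered observations), and all function names are assumptions made purely for illustration.

```python
import numpy as np

def exp_density(x, rate):
    """Exponential density f(x; rate) = rate * exp(-rate * x)."""
    return rate * np.exp(-rate * x)

def gradient_function(rate, x, mixture_density):
    """Gradient (directional derivative) D_Q(rate) = sum_i f(x_i; rate) / f(x_i; Q) - n,
    where f(.; Q) is the current mixture density evaluated at the data points."""
    return np.sum(exp_density(x, rate) / mixture_density) - x.size

def maximize_gradient(x, support, weights):
    """Maximize D_Q over a data-adapted grid of candidate rates
    (reciprocals of the observations and of midpoints between consecutive
    ordered observations -- an illustrative choice, not the paper's grid)."""
    mixture_density = np.array([np.sum(weights * exp_density(xi, support)) for xi in x])
    xs = np.sort(x)
    grid = np.unique(np.concatenate([1.0 / xs, 2.0 / (xs[:-1] + xs[1:])]))
    values = np.array([gradient_function(r, x, mixture_density) for r in grid])
    best = int(np.argmax(values))
    return grid[best], values[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated sample from a two-component exponential mixture.
    x = np.where(rng.random(200) < 0.5,
                 rng.exponential(scale=1.0, size=200),
                 rng.exponential(scale=5.0, size=200))
    # Current mixing distribution Q: a single support point at the overall rate.
    support, weights = np.array([1.0 / x.mean()]), np.array([1.0])
    rate, value = maximize_gradient(x, support, weights)
    print(f"best candidate rate: {rate:.4f}  gradient value: {value:.4f}")
```

A positive maximum of the gradient function indicates that the current mixing distribution is not yet the NPMLE; the maximizing rate is then the natural candidate for a new support point.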
Authors
Wilfried Seidel, Krunoslav Sever, Hana Ševčíková