Article ID | Journal | Published Year | Pages
---|---|---|---
4946771 | Neural Networks | 2016 | 11
Abstract
Most machine learning approaches stem from the principle of minimizing the mean squared distance, which relies on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals exhibit many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent machine learning applications exploit the properties of non-quadratic error functionals based on the L1 norm, or even sub-linear potentials corresponding to the Lp quasinorms (0 < p < 1).
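The abstract's contrast between quadratic, L1, and sub-linear Lp error functionals can be illustrated with a small numeric sketch (not taken from the paper): for residuals containing one large contaminating value, the potential E_p(r) = Σ|r_i|^p is dominated by the outlier when p = 2, while smaller p reduces its influence. The data values below are invented for illustration.

```python
import numpy as np

def potential(r, p):
    """Sum of |r_i|**p: p=2 is the quadratic functional, p=1 the L1 norm,
    and p < 1 a sub-linear potential corresponding to an Lp quasinorm."""
    return np.sum(np.abs(r) ** p)

# Hypothetical residuals; the last value (5.0) plays the role of an outlier.
residuals = np.array([0.1, -0.2, 0.15, 5.0])

for p in (2.0, 1.0, 0.5):
    total = potential(residuals, p)
    outlier_share = np.abs(residuals[-1]) ** p / total
    print(f"p={p}: error={total:.3f}, outlier share={outlier_share:.2%}")
```

For this toy data the outlier's share of the total error drops as p decreases (roughly 99.7% at p = 2, 91.7% at p = 1, 66.0% at p = 0.5), which is the robustness property motivating non-quadratic functionals.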
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors
A.N. Gorban, E.M. Mirkes, A. Zinovyev