Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
410702 | 679160 | 2011 | 6-page PDF | Free download |

Support vector regression (SVR) is a state-of-the-art method for regression which uses the ε-insensitive loss and produces sparse models. However, non-linear SVRs are difficult to tune because of the additional kernel parameter. In this paper, a new parameter-insensitive kernel inspired by extreme learning is used for non-linear SVR, so the practitioner has only two meta-parameters to optimise. The proposed approach significantly reduces the computational complexity of tuning, yet experiments show that it achieves performance very close to the state of the art. Unlike previous works, which rely on a Monte-Carlo approximation to estimate the kernel, this work also shows that the proposed kernel has an analytic form which is computationally easier to evaluate.
Journal: Neurocomputing - Volume 74, Issue 16, September 2011, Pages 2526–2531
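To make the idea concrete, the sketch below illustrates the Monte-Carlo route that the abstract contrasts with the analytic kernel: a parameter-free kernel is approximated by averaging over random hidden units (here sigmoid units, an assumption), and the resulting Gram matrix is fed to a standard ε-SVR, leaving only C and ε to tune. This is not the paper's exact kernel or derivation, only an illustration of the setup under those assumptions.

```python
# Minimal sketch (not the paper's exact kernel): approximate an
# extreme-learning-style kernel by Monte-Carlo averaging over random
# hidden units, then train a standard epsilon-SVR on the precomputed
# Gram matrix. The sigmoid hidden layer and scaling are assumptions.
import numpy as np
from sklearn.svm import SVR

def random_hidden_features(X, n_hidden=1000, seed=0):
    """Project inputs through n_hidden random sigmoid hidden units."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid activations

def monte_carlo_kernel(XA, XB, n_hidden=1000, seed=0):
    """K(x, x') ~= (1/M) * sum_m g(w_m, x) * g(w_m, x')."""
    HA = random_hidden_features(XA, n_hidden, seed)
    HB = random_hidden_features(XB, n_hidden, seed)   # same random units
    return HA @ HB.T / n_hidden

# Toy regression data
rng = np.random.default_rng(42)
X_train = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(200)
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)

# The kernel itself is parameter-free (apart from the Monte-Carlo sample
# size), so only two meta-parameters remain to tune: C and epsilon.
K_train = monte_carlo_kernel(X_train, X_train)
K_test = monte_carlo_kernel(X_test, X_train)

svr = SVR(kernel="precomputed", C=10.0, epsilon=0.1)
svr.fit(K_train, y_train)
y_pred = svr.predict(K_test)
```

The advantage claimed in the abstract is that the analytic form of the kernel replaces the Monte-Carlo averaging step above, removing the sampling cost while keeping the kernel free of tuning parameters.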