Article ID | Journal ID | Publication Year | English Article | Full Text |
---|---|---|---|---|
477000 | 1446097 | 2011 | 10-page PDF | Free Download |
![First-page preview: Optimization problems in statistical learning: Duality and optimality conditions](/preview/png/477000.png)
Regularization methods are techniques for learning functions from given data. We consider regularization problems whose objective function consists of a cost function and a regularization term, with the aim of selecting a prediction function f with a finite representation f(·) = ∑i=1n ci k(·,Xi) which minimizes the prediction error. Here the role of the regularizer is to avoid overfitting. In general these are convex optimization problems with not necessarily differentiable objective functions. Thus, in order to provide optimality conditions for this class of problems, one needs to appeal to some specific techniques from convex analysis. In this paper we provide a general approach for deriving necessary and sufficient optimality conditions for the regularized problem via the so-called conjugate duality theory. Afterwards we apply the obtained results to the Support Vector Machines problem and the Support Vector Regression problem formulated for different cost functions.
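As an illustrative sketch (not from the paper itself), the finite representation f(·) = ∑i ci k(·,Xi) can be made concrete for the special case of a squared-error cost with an RKHS-norm regularizer, where the coefficients c admit a closed-form solution; the Gaussian kernel and all parameter values below are assumptions chosen for the example:

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # k(x, x') = exp(-gamma * ||x - x'||^2); the kernel choice is an assumption
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def fit_coefficients(X, y, lam=0.1, gamma=1.0):
    # Minimize (1/n) * sum_i (f(X_i) - y_i)^2 + lam * ||f||_H^2 over the
    # finite expansion f(.) = sum_i c_i k(., X_i); the optimality condition
    # reduces to the linear system (K + lam * n * I) c = y.
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, c, X_new, gamma=1.0):
    # Evaluate f at new points via the kernel expansion
    return gaussian_kernel(X_new, X_train, gamma) @ c

# Toy regression data: noisy sine curve
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

c = fit_coefficients(X, y, lam=0.01)
y_hat = predict(X, c, X)
```

For non-differentiable costs such as the hinge loss used by Support Vector Machines, no such closed form exists, which is exactly where the conjugate duality machinery of the paper is needed.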
► We consider regularization problems with the aim of selecting a prediction function with a finite representation.
► The role of the regularizer is to avoid overfitting.
► In order to provide optimality conditions we appeal to some specific techniques from convex analysis.
► We derive optimality conditions for the regularized problem via the conjugate duality theory.
► We apply the obtained results to the Support Vector Machines problem and the Support Vector Regression problem formulated for different cost functions.
Journal: European Journal of Operational Research - Volume 213, Issue 2, 1 September 2011, Pages 395–404