Article ID: 395533
Journal: Information Sciences
Published Year: 2011
Pages: 17
File Type: PDF
Abstract

Not only different databases but also different classes of data within the same database can have different data structures. The SVM and LS-SVM typically minimize the empirical ϕ-risk; their regularized versions with a fixed penalty (L2 or L1) are non-adaptive because the penalty form is pre-determined, so they often perform well only in certain situations. For example, the LS-SVM with an L2 penalty is not preferred when the underlying model is sparse. This paper proposes an adaptive penalty learning procedure, the evolution strategies (ES) based adaptive Lp least squares support vector machine (ES-based Lp LS-SVM), to address this issue. By introducing multiple kernels, an Lp penalty based nonlinear objective function is derived. The iterative re-weighted minimal solver (IRMS) algorithm is used to solve this nonlinear function, and evolution strategies (ES) are then used to solve the multi-parameter optimization problem. The penalty parameter p, the kernel parameters and the regularization parameter are selected adaptively by the proposed ES-based algorithm during training, which makes it easier to reach the optimal solution. Numerical experiments are conducted on two artificial data sets and six real-world data sets. The experimental results show that the proposed procedure offers better generalization performance than the standard SVM, the LS-SVM and other improved algorithms.
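As a rough illustration only (the abstract does not give the exact formulation), the Lp penalty based multiple-kernel LS-SVM objective could take a form such as the one below, where the kernels K_m, coefficients α_{m,j}, bias b, regularization parameter λ and penalty exponent p are assumed notation rather than the paper's own:

\min_{\alpha,\, b} \;\; \sum_{i=1}^{n} \Big( y_i - \sum_{m=1}^{M} \sum_{j=1}^{n} \alpha_{m,j}\, K_m(x_i, x_j) - b \Big)^{2} \; + \; \lambda \sum_{m=1}^{M} \sum_{j=1}^{n} \lvert \alpha_{m,j} \rvert^{p}

Under this reading, an IRMS-style inner loop would repeatedly replace each |α_{m,j}|^p term by a quadratic surrogate w_{m,j}\,\alpha_{m,j}^{2} with weights w_{m,j} = (p/2)\,\lvert \alpha_{m,j}^{(k)} \rvert^{p-2} taken from the previous iterate, so that each inner step reduces to a weighted least squares (LS-SVM-like) linear system, while the outer ES loop searches over p, λ and the kernel parameters; the paper's actual solver may differ in detail.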

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence