Article ID: 406404
Journal: Neural Networks
Published Year: 2013
Pages: 15
File Type: PDF
Abstract

• We provide a general framework for developing fully corrective boosting methods.
• Boosting methods with arbitrary convex loss and regularization are made possible; a sketch of such a primal problem appears below.
• We show that it is much faster to solve boosting's primal problem than its dual.
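
As a sketch only (our notation; none of these symbols are taken from the paper): with training pairs (x_i, y_i), candidate weak learners h_1, …, h_n, combination weights w, any convex loss L, and a regularization parameter ν, the kind of ℓp-regularized primal the highlights refer to can be written as

    \min_{\mathbf{w} \ge 0} \; \sum_{i=1}^{m} L\!\left( y_i,\ \sum_{j=1}^{n} w_j\, h_j(\mathbf{x}_i) \right) \;+\; \nu\, \lVert \mathbf{w} \rVert_p, \qquad p \ge 1.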

We propose a general framework for analyzing and developing fully corrective boosting-based classifiers. The framework accepts any convex objective function, and allows any convex (for example, ℓp-norm, p ≥ 1) regularization term. By placing the wide variety of existing fully corrective boosting-based classifiers on a common footing, and considering the primal and dual problems together, the framework allows a direct comparison between apparently disparate methods. By solving the primal rather than the dual, the framework is capable of generating efficient fully corrective boosting algorithms without recourse to sophisticated convex optimization processes. We show that a range of additional boosting-based algorithms can be incorporated into the framework despite not being fully corrective. Finally, we provide an empirical analysis of the performance of a variety of the most significant boosting-based classifiers on a few machine learning benchmark datasets.
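As a minimal, hedged sketch of what a fully corrective boosting loop in the primal can look like: each round adds one weak learner and then re-optimizes all weights at once by solving the regularized convex primal directly. The logistic loss, the ℓ1 (p = 1) penalty, depth-1 trees as weak learners, and the names fully_corrective_boost, rounds, and nu are all illustrative assumptions, not the paper's actual algorithm.

    # Hedged sketch of fully corrective boosting solved in the primal.
    # All modeling choices here (logistic loss, l1 penalty, stumps) are
    # illustrative assumptions, not taken from the paper.
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.tree import DecisionTreeClassifier

    def fully_corrective_boost(X, y, rounds=10, nu=0.1):
        """X: (m, d) features; y: (m,) labels in {-1, +1}."""
        m = len(y)
        learners = []
        H = np.empty((m, 0))       # column j holds h_j(x_i) for all i
        margins = np.zeros(m)
        w = np.zeros(0)
        for _ in range(rounds):
            # 1) Add one weak learner, trained on the current loss gradients
            #    (for the logistic loss, |dL/dmargin| = 1 / (1 + exp(y * margin))).
            grad = 1.0 / (1.0 + np.exp(y * margins))
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=grad)
            learners.append(stump)
            H = np.column_stack([H, stump.predict(X)])

            # 2) Fully corrective step: re-optimize ALL weights at once by
            #    solving the regularized convex primal. With w >= 0 the l1
            #    penalty is just nu * sum(w), so the objective is smooth on
            #    the feasible set and L-BFGS-B suffices.
            def primal(v):
                z = y * (H @ v)
                return np.logaddexp(0.0, -z).sum() + nu * v.sum()

            res = minimize(primal, x0=np.append(w, 0.0),
                           method="L-BFGS-B",
                           bounds=[(0.0, None)] * H.shape[1])
            w = res.x
            margins = H @ w
        return learners, w

Swapping in a different convex loss or an ℓp penalty with p > 1 only changes the primal function; the fully corrective structure of the loop stays the same, which is the point of the framework as the abstract describes it.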

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence