Article ID: 406367
Journal: Neurocomputing
Published Year: 2015
Pages: 13
File Type: PDF
Abstract

Non-convex regularization has attracted much attention in machine learning, since it is unbiased and improves performance on many applications compared with its convex counterparts. Optimizing non-convex regularized problems, however, is important and difficult. In this paper, we propose Damping Proximal Coordinate Descent (DPCD) algorithms that address the optimization of a general family of non-convex regularized problems. DPCD is guaranteed to be globally convergent, and the computational complexity of obtaining an approximately stationary solution to a desired precision is only linear in the data size. Our experiments on many machine learning benchmark datasets also show that DPCD converges quickly and reduces model training time without significant loss of prediction accuracy.
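The abstract does not spell out the DPCD updates, so the following is only a rough illustrative sketch of a generic damped proximal coordinate descent loop for least squares with an MCP penalty (one common non-convex regularizer). The penalty choice, the damping rule, and all names and parameters (prox_mcp, coordinate_descent_mcp, lam, gamma, damping) are assumptions made for illustration, not the authors' algorithm.

import math
import numpy as np

def prox_mcp(v, lam, gamma):
    # Proximal operator of the MCP penalty (assumes gamma > 1).
    # Large coefficients are left unshrunk, which is the "unbiasedness"
    # property of non-convex regularizers mentioned in the abstract.
    if abs(v) > gamma * lam:
        return v
    if abs(v) <= lam:
        return 0.0
    return math.copysign((abs(v) - lam) / (1.0 - 1.0 / gamma), v)

def coordinate_descent_mcp(X, y, lam=0.1, gamma=3.0, damping=1e-3, n_epochs=50):
    # Cyclic proximal coordinate descent for 0.5*||y - X w||^2 + sum_j MCP(w_j).
    # `damping` inflates the per-coordinate curvature to stabilize the
    # non-convex updates; it is not the paper's exact damping scheme.
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)          # per-coordinate curvature ||X_j||^2
    r = y - X @ w                          # running residual, updated in place
    for _ in range(n_epochs):
        for j in range(d):
            a = col_sq[j] + damping        # damped curvature of the surrogate
            grad_j = -X[:, j] @ r          # gradient of the smooth part at w[j]
            z = w[j] - grad_j / a          # unpenalized coordinate minimizer
            # Scaled prox: argmin_w (a/2)(w - z)^2 + MCP(w; lam, gamma),
            # valid when gamma * a > 1 (e.g., roughly unit-norm columns).
            w_new = prox_mcp(z, lam / a, gamma * a)
            r += X[:, j] * (w[j] - w_new)  # keep the residual consistent
            w[j] = w_new
    return w

# Example usage on synthetic data (hypothetical shapes and noise level):
# X = np.random.randn(200, 50) / np.sqrt(200)
# y = X[:, :5] @ np.ones(5) + 0.01 * np.random.randn(200)
# w_hat = coordinate_descent_mcp(X, y)

In this sketch each coordinate update costs O(n) and a full pass costs O(nd), i.e., linear in the data size per sweep, which is consistent with (though not identical to) the complexity claim in the abstract.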

Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence