Article ID: 405695
Journal: Neurocomputing
Published Year: 2016
Pages: 7
File Type: PDF
Abstract

Matrices are well suited to representing data with complex structure, such as images and electroencephalogram (EEG) recordings. When learning a classifier for such matrix data, the structural information of the feature matrix is useful. In this paper, we focus on regularized matrix classifiers whose input samples and weight parameters are both matrices. Some existing approaches assume that the weight matrix has a low-rank structure and therefore use the popular nuclear norm of the weight matrix as a regularization term. However, the optimization methods for these matrix classifiers often involve many expensive singular value decomposition (SVD) operations, which prevents them from scaling beyond moderate matrix sizes. To reduce the time complexity, we propose a novel learning algorithm called Atom Decomposition Based Subgradient Descent (ADBSD), which solves the optimization problem for a matrix classifier whose objective function combines the hinge loss with the Frobenius norm and the nuclear norm of the weight matrix. ADBSD is an iterative scheme that, in each iteration, selects the most informative rank-one matrices from the subgradient of the objective function. We adopt atom-decomposition-based methods to minimize the nuclear norm because they rely mainly on computing the top singular vector pair, which yields a substantial efficiency advantage. We empirically evaluate ADBSD on both synthetic and real-world datasets. The results show that our approach is more efficient and more robust than state-of-the-art methods.
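To make the flavor of such atom-decomposition methods concrete, the sketch below implements a generic rank-one-atom, conditional-gradient-style scheme for a matrix classifier with hinge loss and Frobenius regularization, handling the nuclear norm as a ball constraint of radius tau. This is not the authors' ADBSD update: the function names, the constraint form, the power-iteration subroutine, and the step-size rule are all illustrative assumptions. It does, however, show the key efficiency point from the abstract: each iteration needs only the top singular vector pair of the subgradient, never a full SVD.

```python
import numpy as np

def top_singular_pair(M, n_iter=30):
    """Power iteration for the leading singular vector pair of M."""
    v = np.random.randn(M.shape[1])
    v /= np.linalg.norm(v) + 1e-12
    u = M @ v
    for _ in range(n_iter):
        u = M @ v
        u /= np.linalg.norm(u) + 1e-12
        v = M.T @ u
        v /= np.linalg.norm(v) + 1e-12
    return u, v

def rank_one_atom_descent(X, y, lam_f=0.1, tau=10.0, n_iter=200):
    """Illustrative rank-one-atom scheme (NOT the paper's ADBSD) for
        min_W  (1/n) sum_i max(0, 1 - y_i <W, X_i>) + (lam_f/2) ||W||_F^2
        s.t.   ||W||_* <= tau   (nuclear norm as a constraint, an assumption).

    X: (n, p, q) stack of matrix samples; y: (n,) labels in {-1, +1}.
    """
    n, p, q = X.shape
    W = np.zeros((p, q))
    for t in range(n_iter):
        # Subgradient of hinge loss + Frobenius term at the current W.
        margins = y * np.einsum('pq,npq->n', W, X)
        active = (margins < 1.0).astype(float)
        G = -np.einsum('n,npq->pq', y * active, X) / n + lam_f * W
        # Only the top singular pair of -G is needed: it gives the
        # "most informative" rank-one atom in nuclear-norm geometry.
        u, v = top_singular_pair(-G)
        atom = tau * np.outer(u, v)
        gamma = 2.0 / (t + 2.0)  # standard conditional-gradient step size
        W = (1.0 - gamma) * W + gamma * atom
    return W
```

The design point this illustrates is the one the abstract emphasizes: a proximal method for the nuclear-norm penalty would shrink all singular values and thus require a full SVD per iteration (roughly cubic in the matrix dimension), whereas the greedy rank-one update touches only one singular pair, which power iteration recovers with a handful of matrix-vector products.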

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors