Article ID | Journal | Published Year | Pages
---|---|---|---
11021148 | Neurocomputing | 2018 | 26
Abstract
Owing to its wide applications in imbalanced learning, directly optimizing AUC has gained increasing interest in recent years. Compared with traditional batch learning methods, which often suffer from poor scalability, designing an efficient AUC maximization algorithm for large-scale data sets is even more challenging, especially when the data dimension is also high. To address this issue, this paper proposes an adaptive stochastic gradient method for AUC maximization, termed AMAUC. Specifically, the algorithm adopts a mini-batch framework and uses the projected gradient method for the inner optimization. To further improve performance, an adaptive learning-rate updating strategy is also suggested, in which second-order gradient information is utilized to provide feature-wise updates. Empirical studies on benchmark and large-scale, high-dimensional data sets demonstrate the efficiency and effectiveness of the proposed AMAUC.
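The abstract describes the general recipe: sample mini-batches of positive/negative pairs, take a stochastic gradient step on a pairwise AUC surrogate, scale the step feature-wise using accumulated (second-order) gradient statistics, and project back onto a bounded set. The sketch below illustrates that recipe only; the pairwise squared-hinge surrogate, the AdaGrad-style per-feature scaling, the L2-ball projection, and all names (`amauc_sketch`, `radius`, `eta`) are assumptions for illustration, not the paper's actual AMAUC algorithm.

```python
import numpy as np

def amauc_sketch(X, y, n_epochs=5, batch_size=32, eta=0.5, radius=10.0, eps=1e-8):
    """Illustrative mini-batch AUC maximization with a feature-wise adaptive
    learning rate (AdaGrad-style; an assumption, not the paper's exact rule)."""
    rng = np.random.default_rng(0)
    d = X.shape[1]
    w = np.zeros(d)
    g2 = np.zeros(d)                      # accumulated squared gradients, per feature
    pos, neg = X[y == 1], X[y != 1]
    for _ in range(n_epochs):
        for _ in range(max(1, len(X) // batch_size)):
            p = pos[rng.integers(len(pos), size=batch_size)]
            n = neg[rng.integers(len(neg), size=batch_size)]
            diff = p - n                  # positive-negative pair differences
            margin = diff @ w
            # pairwise squared-hinge surrogate: mean(max(0, 1 - margin)^2)
            act = np.maximum(0.0, 1.0 - margin)
            grad = -2.0 * (act[:, None] * diff).mean(axis=0)
            g2 += grad ** 2
            w -= eta / np.sqrt(g2 + eps) * grad   # feature-wise adaptive step
            nrm = np.linalg.norm(w)
            if nrm > radius:              # projection step onto the L2 ball
                w *= radius / nrm
    return w

def auc(w, X, y):
    """Empirical AUC: fraction of correctly ranked positive/negative pairs."""
    s = X @ w
    sp, sn = s[y == 1], s[y != 1]
    return (sp[:, None] > sn[None, :]).mean()
```

The per-feature division by `sqrt(g2)` is what makes the step size adaptive: frequently updated coordinates take smaller steps, which is the usual motivation for second-order-informed, feature-wise updates on high-dimensional data.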
Related Topics: Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors
Fan Cheng, Xia Zhang, Chuang Zhang, Jianfeng Qiu, Lei Zhang