Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6863030 | Neural Networks | 2018 | 14 Pages |
Abstract
Parallel incremental learning is an effective approach for rapidly processing large-scale data streams, yet parallel and incremental learning are often treated as two separate problems and solved one after another. Incremental learning can be implemented by merging knowledge from incoming data, and parallel learning can be performed by merging knowledge from simultaneous learners. We propose to solve the two learning problems simultaneously with a single process of knowledge merging, realized as parallel incremental wESVM (weighted Extreme Support Vector Machine). Here, wESVM is reformulated such that knowledge from subsets of training data can be merged via simple matrix addition. As such, the proposed algorithm conducts parallel incremental learning by merging knowledge over data slices arriving at each incremental stage. Both theoretical and experimental studies show the equivalence of the proposed algorithm to batch wESVM in terms of learning effectiveness. In particular, the algorithm demonstrates the desired scalability and clear speed advantages over batch retraining.
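The core idea of merging knowledge via matrix addition can be illustrated with a minimal sketch. The snippet below is an assumption-laden analogy, not the authors' exact wESVM formulation: it uses a ridge-regression-style least-squares model in which each data slice contributes the sufficient statistics `X.T @ X` and `X.T @ y`, which merge by addition, so the merged solution matches the batch solution exactly.

```python
import numpy as np

def slice_knowledge(X, y):
    # Per-slice "knowledge": sufficient statistics that merge by addition.
    # (Hypothetical stand-in for the knowledge matrices in wESVM.)
    return X.T @ X, X.T @ y

def merge(k1, k2):
    # Knowledge merging is simple matrix addition; associative and
    # commutative, so slices can be merged in parallel or incrementally.
    return k1[0] + k2[0], k1[1] + k2[1]

def solve(knowledge, lam=1.0):
    # Solve the regularized least-squares system from merged knowledge.
    A, b = knowledge
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)

# Batch solution vs. solution merged from two "incremental" slices.
batch = solve(slice_knowledge(X, y))
merged = solve(merge(slice_knowledge(X[:60], y[:60]),
                     slice_knowledge(X[60:], y[60:])))
assert np.allclose(batch, merged)
```

Because the merge is a plain matrix sum, the same operation serves both roles described in the abstract: summing over time steps gives incremental learning, and summing over simultaneous workers gives parallel learning.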
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Lei Zhu, Kazushi Ikeda, Shaoning Pang, Tao Ban, Abdolhossein Sarrafzadeh