Article ID: 392567
Journal: Information Sciences
Published Year: 2016
Pages: 10
File Type: PDF
Abstract

As a powerful tool for data regression and classification, neural networks have received considerable attention from researchers in fields such as machine learning, statistics, and computer vision. There is a large body of work on network training, most of which tunes the parameters iteratively; such methods often suffer from local minima and slow convergence. It has been shown that randomization-based training methods can significantly boost the performance or efficiency of neural networks. Most of these methods use randomization to change the data distributions, to fix a part of the parameters or network configurations, or both. This article presents a comprehensive survey covering the earliest work and recent advances, as well as some suggestions for future research.
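As a rough illustration of the "fix a part of the parameters" style of randomization-based training mentioned above, the following is a minimal sketch (not taken from the survey itself): a single-hidden-layer network whose input-to-hidden weights are drawn at random and kept fixed, with only the output weights solved in closed form by ridge regression. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_feature_net(X, y, n_hidden=200, ridge=1e-2):
    """Randomly fix the hidden layer; solve only the output weights."""
    n_features = X.shape[1]
    # Hidden-layer parameters are sampled once and never trained.
    W = rng.normal(scale=1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden activations
    # Closed-form ridge solution for the hidden-to-output weights.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression example: avoids iterative tuning of the hidden weights entirely.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
W, b, beta = fit_random_feature_net(X, y)
print("train MSE:", np.mean((predict(X, W, b, beta) - y) ** 2))
```

Because only a linear least-squares problem is solved, this style of training sidesteps the local minima and slow convergence associated with iterative parameter tuning, at the cost of relying on the quality of the random features.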

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors