Article ID: 8953594
Journal: Neurocomputing
Published Year: 2018
Pages: 6
File Type: PDF
Abstract
This paper focuses on the parameter pattern during the initialization of Extreme Learning Machines (ELMs). By construction, model performance depends strongly on the rank of the hidden-layer output matrix. Previous research has proved that the sigmoid activation function transforms the input data into a full-rank hidden matrix with probability 1, which secures the stability of the ELM solution. In a recent study, we observed that, even under the full-rank condition, the hidden matrix may have very small eigenvalues, which seriously degrades the model's generalization ability. Our analysis indicates that this negative impact is caused by the discontinuity of the generalized inverse at the boundary between full and deficient rank. Experiments show that every phase of ELM modeling can trigger this rank-deficiency phenomenon and thereby harm test accuracy.
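A minimal numerical sketch of the effect described above, using NumPy (this is not the paper's code; the matrix sizes and the near-duplicate-neuron construction are illustrative assumptions). Since the spectral norm of the Moore–Penrose pseudoinverse satisfies ||H⁺||₂ = 1/σ_min(H), a hidden matrix that is technically full rank but sits near the rank-deficient boundary produces an exploding pseudoinverse, and hence unstable ELM output weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: N samples, d input features, L hidden neurons.
N, d, L = 200, 5, 50
X = rng.standard_normal((N, d))

def hidden_matrix(X, W, b):
    """Sigmoid hidden-layer output H = sigmoid(X W + b), as in a standard ELM."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Random initialization: H is full rank with probability 1.
W = rng.standard_normal((d, L))
b = rng.standard_normal(L)
H = hidden_matrix(X, W, b)

# Near-deficient case: two hidden neurons are almost duplicates, so H has a
# tiny singular value even though it is still, strictly speaking, full rank.
W2, b2 = W.copy(), b.copy()
W2[:, 1] = W2[:, 0] + 1e-8 * rng.standard_normal(d)
b2[1] = b2[0]
H2 = hidden_matrix(X, W2, b2)

for name, Hm in [("well-conditioned", H), ("near-deficient", H2)]:
    s = np.linalg.svd(Hm, compute_uv=False)          # singular values, descending
    pinv_norm = np.linalg.norm(np.linalg.pinv(Hm), 2)  # ||H^+||_2 = 1/sigma_min
    print(f"{name}: smallest singular value = {s[-1]:.3e}, "
          f"||H^+||_2 = {pinv_norm:.3e}")
```

In the near-deficient run, the smallest singular value is tiny and the pseudoinverse norm blows up accordingly, which is the discontinuity at the rank boundary that the abstract attributes the loss of generalization to.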
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence