Article ID: 6853010
Journal: Artificial Intelligence
Published Year: 2018
Pages: 70
File Type: PDF
Abstract
Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, where the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we first connect it to the notions of local learning and learning channels. Through this connection, we derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP (ARBP), sparse RBP, and their combinations (e.g., ASRBP), and analyze their computational complexity. We then study their behavior through simulations using the MNIST and CIFAR-10 benchmark datasets. These simulations show that most of these variants work robustly, almost as well as backpropagation, and that multiplication by the derivatives of the activation functions is important. As a follow-up, we also study the low end of the number of bits required to communicate error information over the learning channel. We then provide partial intuitive explanations for some of the remarkable properties of RBP and its variations. Finally, we prove several mathematical results for RBP and its variants, including: (1) convergence to optimal fixed points for linear chains of arbitrary length; (2) convergence to fixed points for linear autoencoders with decorrelated data; (3) long-term existence of solutions for linear systems with a single hidden layer, and their convergence in special cases; and (4) convergence to fixed points of non-linear chains, when the derivatives of the activation functions are included.
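The following is a minimal sketch, not the authors' code, of the weight-update difference the abstract describes: standard backpropagation sends the output error back through the transpose of the forward matrix, while RBP sends it through a fixed random matrix, and in both cases the error signal is multiplied by the derivative of the activation function. The network size, the tanh activation, the learning rate, and the names W1, W2, B2 are illustrative assumptions.

```python
# Sketch: one-hidden-layer network trained with either BP or RBP.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 3

W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # forward weights, layer 2
B2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # fixed random matrix used by RBP
lr = 0.05
use_rbp = True                                   # False -> standard backpropagation

x = rng.normal(size=n_in)                        # single illustrative input
t = np.array([1.0, 0.0, -1.0])                   # illustrative target

for _ in range(500):
    # Forward pass: tanh hidden layer, linear output layer.
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - t                                    # output error (squared loss)

    # Error signal reaching the hidden layer:
    #   BP  propagates through the transpose of the forward matrix, W2.T;
    #   RBP uses the fixed random matrix B2 instead.
    back = B2 if use_rbp else W2.T
    delta_h = (back @ e) * (1.0 - h**2)          # multiply by the tanh derivative

    # Local updates: outer product of the error term and the layer's input.
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

print("final squared error:", 0.5 * float(e @ e))
```

With a single hidden layer, sending the error through one random matrix as above coincides with the skipped variant (SRBP), in which the output error is delivered directly to each hidden layer through a fixed random matrix; with more hidden layers the two variants differ.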
Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence