Article ID: 405923
Journal: Neural Networks
Published Year: 2016
Pages: 10 Pages
File Type: PDF
Abstract

The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by maximum likelihood (ML) learning, which minimizes the Kullback–Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and is widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points to which the ML and CD-n learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CD-n learning rule coincide with those of the ML learning rule in a Gaussian–Gaussian RBM. We also reveal that the larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian–Bernoulli RBM, we find that both ML and CD-n learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are obtained simply by performing CD-1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks.

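To illustrate the learning rule the abstract refers to, the following is a minimal sketch of a CD-1 update for a Gaussian–Bernoulli RBM with unit visible variance. It is an assumed, simplified illustration of standard CD-1, not code taken from the paper; all function and variable names are hypothetical, and the paper's analysis covers CD-n more generally.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=1e-3):
    """One CD-1 update for a Gaussian-Bernoulli RBM (unit visible variance).

    v0 : (batch, n_vis) data batch
    W  : (n_vis, n_hid) weights
    b  : (n_vis,) visible biases
    c  : (n_hid,) hidden biases
    """
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # Negative phase: one Gibbs step back to the visibles (Gaussian mean),
    # then the corresponding hidden probabilities.
    v1 = h0 @ W.T + b
    ph1 = sigmoid(v1 @ W + c)

    # CD-1 gradient estimate: data statistics minus one-step reconstruction
    # statistics, averaged over the batch.
    n = v0.shape[0]
    dW = (v0.T @ ph0 - v1.T @ ph1) / n
    db = (v0 - v1).mean(axis=0)
    dc = (ph0 - ph1).mean(axis=0)
    return W + lr * dW, b + lr * db, c + lr * dc

Replacing the single reconstruction step with n alternating Gibbs steps gives the CD-n rule analyzed in the paper; ML learning corresponds to the limit of running the chain to equilibrium.
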
Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence
Authors