Article ID: 409129
Journal: Neurocomputing
Published Year: 2008
Pages: 10 Pages
File Type: PDF
Abstract

Various alternatives have been developed to improve on the winner-takes-all (WTA) mechanism in vector quantization, including neural gas (NG). However, the behavior of these algorithms, including their learning dynamics, robustness with respect to initialization, and asymptotic results, has only partially been studied in a rigorous mathematical analysis. The theory of on-line learning allows for an exact mathematical description of the training dynamics in model situations. Using a system of three competing prototypes trained on data from a mixture of Gaussian clusters, we demonstrate that NG can improve convergence speed and achieve robustness to initial conditions. However, depending on the structure of the data, NG does not always obtain the best asymptotic quantization error.
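The contrast the abstract draws is between WTA updates, where only the closest prototype moves toward each sample, and NG updates, where every prototype moves with a weight that decays with its distance rank. The following sketch illustrates that rank-based update rule on a toy two-cluster Gaussian mixture; the data dimensions, learning rate, and neighborhood range `lam` are illustrative choices, not the model parameters analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: mixture of two Gaussian clusters (illustrative, not the
# paper's exact model situation).
data = np.concatenate([
    rng.normal(loc=-2.0, scale=0.5, size=(500, 2)),
    rng.normal(loc=+2.0, scale=0.5, size=(500, 2)),
])

def train_ng(data, n_proto=3, eta=0.05, lam=0.5, epochs=5, seed=1):
    """On-line neural gas: every prototype is updated, weighted by
    exp(-rank / lam). As lam -> 0 the rule reduces to WTA, where
    only the winner (rank 0) receives a non-negligible update."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_proto, data.shape[1]))   # random initialization
    for _ in range(epochs):
        for x in rng.permutation(data):             # on-line presentation
            d = np.linalg.norm(w - x, axis=1)
            ranks = np.argsort(np.argsort(d))       # 0 = closest prototype
            h = np.exp(-ranks / lam)                # neighborhood weights
            w += eta * h[:, None] * (x - w)
    return w

def quantization_error(data, w):
    """Mean distance from each sample to its nearest prototype."""
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return d.min(axis=1).mean()

w_ng = train_ng(data)
err = quantization_error(data, w_ng)
```

In practice `lam` is usually annealed from a large value toward zero during training, which is what gives NG its robustness to initialization: early on, all prototypes are dragged into the data region regardless of where they start.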

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence