Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
411119 | Neurocomputing | 2009 | 5 |
In this letter, we develop a low-precision, fixed-point implementation of an exponentially weighted moving average (EWMA) for use in a neural network with plastic weights. We analyze the proposed design both analytically and experimentally, and we also evaluate its performance in an attractor neural network application. The EWMA in the proposed design has a constant relative truncation error, which is important for avoiding round-off errors in applications with slowly decaying processes, e.g. connectionist networks. We conclude that the proposed design offers greatly improved memory and computational efficiency compared to a naïve implementation of the EWMA's difference equation, and that it is well suited for implementation in digital hardware.
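To illustrate the round-off problem that motivates the letter's design, the following is a minimal sketch of the *naïve* fixed-point EWMA difference equation (not the authors' proposed implementation). All names and Q-format parameters here are illustrative assumptions; the sketch represents the smoothing factor as a power-of-two right shift, a common fixed-point convention:

```python
def naive_fixed_point_ewma(samples, alpha_shift=4):
    """Naive fixed-point EWMA update: y += (x - y) >> alpha_shift.

    This implements the difference equation
        y[n] = y[n-1] + alpha * (x[n] - y[n-1])
    with alpha = 2**(-alpha_shift) and integer (fixed-point) state.

    Illustrative sketch only: because the increment (x - y) >> alpha_shift
    truncates toward zero, the state stalls as soon as |x - y| falls below
    2**alpha_shift. The truncation error is therefore *absolute*, not
    relative -- the round-off behavior the letter's constant-relative-error
    design is meant to avoid.
    """
    y = 0
    for x in samples:
        y += (x - y) >> alpha_shift  # increment truncates to 0 when small
    return y
```

For example, a constant input of 10 with `alpha_shift=4` never moves the state at all, since `(10 - 0) >> 4 == 0`; a larger constant input converges only to within `2**alpha_shift` of the true mean and then stalls. This is the behavior that matters for slowly decaying processes, where increments are routinely small.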