Article ID Journal Published Year Pages File Type
462768 Microprocessors and Microsystems 2012 13 Pages PDF
Abstract

In this paper a novel architecture for implementing multi-layer perceptron (MLP) neural networks on field programmable gate arrays (FPGAs) is presented. The architecture offers a new scalable design that allows variable degrees of parallelism in order to achieve the best balance between performance and FPGA resource usage. Performance is further enhanced by a highly efficient pipelined design. Extensive analysis and simulations have been conducted on four standard benchmark problems. Results show that a performance boost of at least three orders of magnitude (roughly 1000×) over a software implementation is regularly achieved. We report performance of 2–67 GCUPS for these simple problems, and performance exceeding 1 TCUPS for larger networks on various single FPGA chips. To our knowledge, this is the highest speed reported to date for any MLP network implementation on FPGAs.
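The GCUPS and TCUPS figures above count connection updates per second, the standard throughput metric for neural-network hardware. A minimal sketch of how the metric is derived for a fully connected MLP, using illustrative layer sizes and throughput that are not taken from the paper:

```python
def mlp_connections(layer_sizes):
    """Total weighted connections in a fully connected MLP:
    the sum of products of adjacent layer sizes."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def gcups(layer_sizes, patterns_per_second):
    """Throughput in giga connection-updates per second (GCUPS):
    connections processed per input pattern times patterns per second."""
    return mlp_connections(layer_sizes) * patterns_per_second / 1e9

# Hypothetical example: a 64-32-10 network evaluated 10 million times per second
net = [64, 32, 10]
print(mlp_connections(net))    # 64*32 + 32*10 = 2368 connections
print(gcups(net, 10_000_000))  # 23.68 GCUPS
```

Scaling either the network size or the pattern rate raises the CUPS figure, which is why larger networks on the same hardware can reach the TCUPS range.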

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Networks and Communications
Authors