Article ID: 406684 · Journal: Neurocomputing · Published Year: 2014 · Pages: 14 · File Type: PDF
Abstract

This paper presents the hardware implementation of an evolvable block-based neural network (BbNN) that uses a novel, cost-efficient sigmoid-like activation function. Evolvable BbNNs feature simultaneous optimization of their structure and viable implementation in reconfigurable digital hardware such as field-programmable gate arrays (FPGAs). Efficient hardware implementation of BbNN structures is the primary goal of this paper. Various aspects of BbNN modeling and design considerations are presented. The neuron blocks are designed with a clearly described methodology, use only a single multiplier each, and implement a cost-efficient sigmoid-like activation function. A novel method of reusing this multiplier to smoothly approximate a hyperbolic tangent (tanh) function, used as the activation function of the neuron blocks, is also presented. This is an important contribution because the sigmoid-like activation function is provided at almost no additional cost. The neuron blocks are very cost efficient in terms of logic utilization compared with previous work. The BbNN is designed as a system-on-chip (SoC), and is functionally verified and tested on several case studies. The system performance allows real-time classification, executing up to 410× faster than embedded software.

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence