| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 6863727 | Neurocomputing | 2018 | 18 Pages | |
Abstract
This brief paper presents two implementations of feed-forward artificial neural networks in FPGAs. The implementations differ in their FPGA resource requirements and calculation speed. Both implementations use floating-point arithmetic, employ a very high-accuracy realization of the activation function, and allow the neural network's structure to be changed easily without re-implementing the entire FPGA project.
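For orientation, the sketch below shows, in software form, the feed-forward computation that such an FPGA design evaluates: each layer multiplies its inputs by a weight matrix, adds biases, and applies an activation function. The sigmoid activation, the layer sizes, and the NumPy-based formulation are illustrative assumptions only; they do not reproduce the paper's hardware architecture or its activation-function realization.

```python
import numpy as np


def sigmoid(x):
    # A common activation choice; the paper's high-accuracy hardware
    # realization of the activation function is not reproduced here.
    return 1.0 / (1.0 + np.exp(-x))


def forward(x, weights, biases):
    # Forward pass of a feed-forward network. The topology is defined
    # entirely by the `weights`/`biases` lists, loosely analogous to
    # changing the network structure without rebuilding the whole design.
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a


# Hypothetical 3-4-2 network with single-precision (float32) parameters,
# mirroring the floating-point arithmetic mentioned in the abstract.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3), dtype=np.float32),
           rng.standard_normal((2, 4), dtype=np.float32)]
biases = [np.zeros(4, dtype=np.float32), np.zeros(2, dtype=np.float32)]
print(forward(np.ones(3, dtype=np.float32), weights, biases))
```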
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Zbigniew Hajduk