This paper analyzes different solutions for implementing a neural network architecture on an FPGA using floating point accelerators. Two implementations are investigated: a high-level solution that builds the neural network on a soft processor design, with different strategies for enhancing performance; and a low-level solution realized as a cascade of floating point arithmetic elements. The architectures are compared in terms of both execution time and FPGA resource usage.
|Title:||FPGA Implementations of Feed Forward Neural Network by using Floating Point Hardware Accelerators|
|Publication date:||2014|
|Appears in type:||1.1 Journal article|