Lozito, G.M., Laudani, A., Riganti Fulginei, F., Salvini, A. (2014). FPGA implementations of feed forward neural network by using floating point hardware accelerators. ADVANCES IN ELECTRICAL AND ELECTRONIC ENGINEERING, 12(1), 30-39 [10.15598/aeee.v12i1.831].
FPGA implementations of feed forward neural network by using floating point hardware accelerators
Lozito, Gabriele Maria; Laudani, Antonino; Riganti Fulginei, Francesco; Salvini, Alessandro
2014-01-01
Abstract
This paper presents an analysis of different solutions for implementing a neural network architecture on an FPGA by using floating-point accelerators. In particular, two different implementations are investigated: a high-level solution that builds a neural network on a soft-processor design, with different strategies for enhancing performance, and a low-level solution realized as a cascade of floating-point arithmetic elements. The architectures are compared in terms of both execution time and FPGA resources employed. © 2014 Advances in Electrical and Electronic Engineering.