Lozito, G.M., Laudani, A., Riganti Fulginei, F., Salvini, A. (2014). FPGA Implementations of Feed Forward Neural Network by using Floating Point Hardware Accelerators. Advances in Electrical and Electronic Engineering, 12(1), 30-39.
FPGA Implementations of Feed Forward Neural Network by using Floating Point Hardware Accelerators
Lozito, Gabriele Maria; Laudani, Antonino; Riganti Fulginei, Francesco; Salvini, Alessandro
2014-01-01
Abstract
This paper analyses different solutions for implementing a neural network architecture on an FPGA by using floating point hardware accelerators. In particular, two implementations are investigated: a high level solution, in which the neural network runs on a soft processor design, with different strategies for enhancing the performance of the process; and a low level solution, achieved by a cascade of floating point arithmetic elements. Comparisons of the achieved performance, in terms of both time consumption and FPGA resources employed, are presented for the two architectures.
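To illustrate the kind of computation both solutions implement, the following is a minimal C sketch of a single fully connected feed-forward layer evaluated in single precision: the multiply-accumulate loop and the activation are exactly the floating point operations that a soft processor would delegate to hardware accelerators. The function name ff_layer, the row-major weight layout and the sigmoid activation are illustrative assumptions, not the implementation described in the paper.

#include <math.h>
#include <stddef.h>

/* Forward pass of one fully connected layer with sigmoid activation.
 * All arithmetic is single precision, so each multiply-accumulate and
 * the activation can be mapped onto a floating point hardware unit.  */
void ff_layer(const float *w,      /* weights, n_out x n_in, row-major */
              const float *b,      /* biases, length n_out             */
              const float *x,      /* layer input, length n_in         */
              float *y,            /* layer output, length n_out       */
              size_t n_in, size_t n_out)
{
    for (size_t i = 0; i < n_out; ++i) {
        float acc = b[i];
        for (size_t j = 0; j < n_in; ++j)
            acc += w[i * n_in + j] * x[j];   /* FP multiply-accumulate */
        y[i] = 1.0f / (1.0f + expf(-acc));   /* sigmoid activation     */
    }
}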