Neural network training based on FPGA with floating point number format and its performance

Çavuşlu M. A., Karakuzu C., Şahin S., Yakut M.

Neural Computing and Applications, vol.20, no.2, pp.195-202, 2011 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 20 Issue: 2
  • Publication Date: 2011
  • DOI: 10.1007/s00521-010-0423-3
  • Journal Name: Neural Computing and Applications
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.195-202
  • Keywords: Artificial neural networks, Floating point arithmetic, FPGA, Parallel programming, VHDL
  • Bilecik Şeyh Edebali University Affiliated: Yes

Abstract

In this paper, the training of a two-layer feedforward artificial neural network (ANN) by back-propagation, and its implementation on an FPGA (field programmable gate array) using floating-point number formats of different bit lengths, are presented with the EX-OR problem as the benchmark. In keeping with the inherently parallel data-processing nature of ANNs, particular care is taken to carry out the ANN training operations in parallel on the FPGA. The training is performed on a Virtex2vp30 chip of the Xilinx FPGA family, and the network realized on the FPGA is coded in VHDL. Compared with results available in the literature, the technique developed here consumes less chip area for training an ANN of the same structure and bit length, and is shown to achieve better performance. © 2010 Springer-Verlag London Limited.
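To make the setup in the abstract concrete, the following is a minimal software sketch, not the authors' VHDL design: a two-layer (one hidden layer) feedforward network trained by back-propagation on the EX-OR problem, written in Python/NumPy. The function name train_xor, the learning rate, the epoch count, and the hidden-layer size are illustrative assumptions; the dtype parameter only loosely mimics the paper's bit-length experiments, since NumPy offers 16/32/64-bit IEEE floats rather than the custom floating-point widths realizable on an FPGA.

# Minimal back-propagation sketch for the XOR benchmark (assumed parameters).
import numpy as np

def train_xor(dtype=np.float32, hidden=2, lr=0.5, epochs=20000, seed=0):
    """Train a 2-input, `hidden`-unit, 1-output sigmoid network on XOR."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=dtype)
    T = np.array([[0], [1], [1], [0]], dtype=dtype)

    # Small random weights and zero biases, stored at the chosen precision.
    W1 = rng.uniform(-1, 1, (2, hidden)).astype(dtype)
    b1 = np.zeros(hidden, dtype=dtype)
    W2 = rng.uniform(-1, 1, (hidden, 1)).astype(dtype)
    b2 = np.zeros(1, dtype=dtype)

    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(epochs):
        # Forward pass through hidden and output layers.
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)

        # Backward pass: gradients of mean squared error w.r.t. activations.
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)

        # Gradient-descent updates, cast back to the working precision.
        W2 -= (lr * H.T @ dY).astype(dtype)
        b2 -= (lr * dY.sum(axis=0)).astype(dtype)
        W1 -= (lr * X.T @ dH).astype(dtype)
        b1 -= (lr * dH.sum(axis=0)).astype(dtype)

    return Y

if __name__ == "__main__":
    # Compare two floating-point widths, loosely analogous to the paper's
    # different bit-length experiments on the FPGA.
    for dt in (np.float16, np.float32):
        print(dt.__name__, train_xor(dtype=dt).ravel())

Note that on the FPGA the four training patterns and the per-neuron arithmetic are processed in parallel hardware units, whereas this sketch computes them as batched matrix operations; the numerical behavior under reduced precision is what the dtype comparison is meant to hint at.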