
An Efficient Hardware Implementation of Feed-Forward Neural Networks


Abstract

This paper proposes a new approach to the digital hardware implementation of nonlinear activation functions in feed-forward neural networks. The basic idea is that the nonlinear functions can be realized using matrix-vector multiplication. An efficient realization of matrix-vector multipliers was proposed recently, and it can be applied to implement nonlinear functions if these functions are approximated by simple basis functions. The paper proposes using B-spline basis functions to approximate nonlinear sigmoidal functions, shows that this approximation fulfils the general requirements on activation functions, presents the details of the proposed hardware implementation, and summarizes an extensive study of the effects of the B-spline nonlinear function realization on the size and trainability of feed-forward neural networks.
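The core idea of the abstract can be illustrated in software. The following Python snippet is a minimal sketch, not the authors' fixed-point hardware design: the interval [-8, 8], the 33 uniform knots, and the choice of degree-1 (hat) B-splines are illustrative assumptions. It shows how, once a sigmoid is interpolated by B-spline basis functions, evaluating the activation reduces to a dot product between a basis-function vector and a fixed coefficient vector, i.e. exactly the matrix-vector operation an efficient multiplier structure can compute.

```python
# Illustrative sketch: sigmoid approximation as a matrix-vector product.
# Knot count, interval, and spline degree are assumptions, not the paper's values.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Uniform knots on the region where the sigmoid is not yet saturated.
knots = np.linspace(-8.0, 8.0, 33)   # 33 knots -> 33 hat basis functions
coeffs = sigmoid(knots)              # interpolating coefficient vector

def hat_basis(x, knots):
    """Evaluate all degree-1 B-spline (hat) basis functions at x."""
    h = knots[1] - knots[0]          # uniform knot spacing
    return np.clip(1.0 - np.abs(x - knots) / h, 0.0, None)

def spline_sigmoid(x):
    """Sigmoid approximation as a dot product: phi(x) . c."""
    x = np.clip(x, knots[0], knots[-1])   # saturate outside the knot grid
    return hat_basis(x, knots) @ coeffs

# Worst-case error over a dense test grid.
xs = np.linspace(-10.0, 10.0, 2001)
err = max(abs(spline_sigmoid(x) - sigmoid(x)) for x in xs)
print(f"max abs error: {err:.2e}")
```

With this hat-function basis at most two entries of the basis vector are nonzero for any input, so the dot product is cheap; a hardware realization would presumably store the coefficient vector in fixed point and share one multiplier structure across neurons.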





Cite this article

Szabó, T., Horváth, G. An Efficient Hardware Implementation of Feed-Forward Neural Networks. Applied Intelligence 21, 143–158 (2004). https://doi.org/10.1023/B:APIN.0000033634.62074.46
