Backpropagation with Photonics

Chapter in: Application of FPGA to Real‐Time Machine Learning

Part of the book series: Springer Theses

Abstract

This chapter presents an experiment that was not originally planned as part of my thesis. The project was set up when Michiel Hermans joined our team in 2015 with the idea of implementing the backpropagation training algorithm (more on that in Sect. 3.2) in hardware, using our opto-electronic reservoir computer (see Sect. 1.2.4) with one slight modification.



Author information

Correspondence to Piotr Antonik.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Antonik, P. (2018). Backpropagation with Photonics. In: Application of FPGA to Real‐Time Machine Learning. Springer Theses. Springer, Cham. https://doi.org/10.1007/978-3-319-91053-6_3
