
Recurrent Networks

Chapter in Computational Intelligence

Abstract

The Hopfield networks discussed in the preceding chapter are special recurrent networks with a very constrained structure. In this chapter, however, we lift all restrictions and consider recurrent networks without any constraints. Such general recurrent networks are well suited to represent differential equations and to solve them (approximately) in a numerical fashion. If the type of differential equation that describes a given system is known, but the values of the parameters appearing in it are not, one may also try to train a suitable recurrent network with example patterns in order to determine the system parameters.
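To make the correspondence concrete: a first-order differential equation x' = f(t, x) can be discretized with the Euler method, giving the recurrence x_{k+1} = x_k + dt * f(t_k, x_k); the feedback loop of a recurrent neuron then carries x_k from one time step to the next. The following Python sketch only illustrates this idea (the function name and the cooling-law example are our own illustrative choices, not taken from the chapter):

```python
import math

def euler_recurrence(f, x0, t0, dt, steps):
    """Iterate x_{k+1} = x_k + dt * f(t_k, x_k).

    Each iteration mimics one update of a recurrent neuron whose
    feedback connection (weight 1) carries x_k to the next step,
    while dt scales the external input f(t_k, x_k).
    """
    t, x = t0, x0
    trajectory = [(t, x)]
    for _ in range(steps):
        x = x + dt * f(t, x)
        t = t + dt
        trajectory.append((t, x))
    return trajectory

# Example: Newton's law of cooling dT/dt = -k * (T - T_env)
k, T_env = 0.3, 20.0
traj = euler_recurrence(lambda t, T: -k * (T - T_env),
                        x0=80.0, t0=0.0, dt=0.1, steps=100)

# Exact solution T(t) = T_env + (T0 - T_env) * exp(-k t), for comparison
exact = T_env + 60.0 * math.exp(-k * 10.0)
```

Shrinking dt reduces the discretization error at the cost of more recurrent update steps.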


Notes

  1.

    Due to the special operation scheme of neural networks, it is not possible to solve arbitrary differential equations numerically with the help of recurrent networks. It suffices, however, if the differential equation can be solved for the independent variable or for one of the occurring derivatives of the dependent variable. Here we consider, as an example, the special case in which the differential equation can be written with the highest occurring derivative isolated on one side.

  2.

    A laparoscope is a medical instrument with which a physician can examine the abdominal cavity through small incisions. In virtual laparoscopy, an examination of the abdominal cavity is simulated with a computer and a force-feedback device in the shape of a laparoscope in order to teach medical students how to use this instrument properly.
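Note 1 above singles out the case in which the highest occurring derivative can be isolated, e.g. x'' = f(t, x, x'). Introducing the auxiliary variable v = x' then splits such an equation into two coupled first-order recurrences, one per recurrent neuron. A minimal Python sketch of this reduction (the function name and the oscillator example are illustrative assumptions, not the chapter's own code):

```python
import math

def second_order_recurrence(f, x0, v0, dt, steps):
    """Solve x'' = f(t, x, x') via two coupled Euler recurrences.

    With v = x', the second-order equation becomes
        x_{k+1} = x_k + dt * v_k
        v_{k+1} = v_k + dt * f(t_k, x_k, v_k)
    i.e. two recurrent neurons feeding their outputs back to each other.
    """
    t, x, v = 0.0, x0, v0
    xs = [x]
    for _ in range(steps):
        # simultaneous update: both right-hand sides use the old x and v
        x, v = x + dt * v, v + dt * f(t, x, v)
        t += dt
        xs.append(x)
    return xs

# Example: undamped harmonic oscillator x'' = -x, exact solution cos(t)
xs = second_order_recurrence(lambda t, x, v: -x,
                             x0=1.0, v0=0.0, dt=0.01, steps=100)
```

The same reduction extends to any order n: isolating the highest derivative yields n coupled first-order recurrences.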

References

  1. Cho K, van Merrienboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Proceedings of the conference on empirical methods in natural language processing (EMNLP 2014, Doha, Qatar). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1724–1734

  2. Feynman RP, Leighton RB, Sands M (1963) The Feynman lectures on physics, vol 1: mechanics, radiation, and heat. Addison-Wesley, Reading, MA, USA

  3. Gers FA, Schmidhuber J, Cummins F (2000) Learning to forget: continual prediction with LSTM. Neural Comput 12(10):2451–2471. MIT Press, Cambridge, MA

  4. Greiner W (1989) Mechanik, Teil 1 (Series: Theoretische Physik). Verlag Harri Deutsch, Thun/Frankfurt am Main, Germany. English edition: Classical mechanics. Springer, Berlin, Germany

  5. Heuser H (1989) Gewöhnliche Differentialgleichungen. Teubner, Stuttgart, Germany

  6. Hochreiter S (1991) Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Technische Universität München, Germany

  7. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780. MIT Press, Cambridge, MA

  8. Hochreiter S, Bengio Y, Frasconi P, Schmidhuber J (2001) Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In: [Kremer and Kolen 2001]

  9. Kremer SC, Kolen JF (eds) (2001) A field guide to dynamical recurrent neural networks. IEEE Press, Piscataway, NJ, USA

  10. Radetzky A, Nürnberger A (2002) Visualization and simulation techniques for surgical simulators using actual patient's data. Artif Intell Med 26(3):255–279. Elsevier Science, Amsterdam, Netherlands


Author information

Correspondence to Sanaz Mostaghim.


Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Kruse, R., Mostaghim, S., Borgelt, C., Braune, C., Steinbrecher, M. (2022). Recurrent Networks. In: Computational Intelligence. Texts in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-030-42227-1_9


  • DOI: https://doi.org/10.1007/978-3-030-42227-1_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-42226-4

  • Online ISBN: 978-3-030-42227-1

  • eBook Packages: Computer Science, Computer Science (R0)
