Abstract
The Hopfield networks that we discussed in the preceding chapter are special recurrent networks with a very constrained structure. In this chapter we lift these restrictions and consider recurrent networks without any structural constraints. Such general recurrent networks are well suited to representing differential equations and to solving them (approximately) in a numerical fashion. If the type of differential equation that describes a given system is known, but the values of the parameters appearing in it are not, one may also train a suitable recurrent network on example patterns in order to determine the system parameters.
Notes
1. Due to the special operation scheme of neural networks, it is not possible to solve arbitrary differential equations numerically with the help of recurrent networks. It suffices, however, if the differential equation can be solved for the independent variable or for one of the occurring derivatives of the dependent variable. Here we consider, as an example, the special case in which the differential equation can be written with the highest occurring derivative isolated on one side.
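When the highest derivative is isolated on one side, the Euler method turns the equation into a recurrent update rule, which is the form a recurrent network can represent. A minimal sketch (not the book's code; the oscillator, step size, and function names are illustrative choices) for the harmonic oscillator x'' = -omega**2 * x:

```python
# Euler integration of an ODE solved for its highest derivative,
# written as a recurrent update. Illustrative example only:
# harmonic oscillator  x'' = -omega**2 * x.

def euler_recurrence(x0, v0, omega=1.0, dt=0.01, steps=1000):
    """Iterate the recurrent update; each step corresponds to one
    pass through the feedback loop of a recurrent network."""
    x, v = x0, v0
    trajectory = [x]
    for _ in range(steps):
        a = -omega**2 * x   # highest derivative, isolated on one side
        x = x + dt * v      # x_{t+1} = x_t + dt * x'_t
        v = v + dt * a      # v_{t+1} = v_t + dt * x''_t
        trajectory.append(x)
    return trajectory

traj = euler_recurrence(x0=1.0, v0=0.0)
```

Because the same update is applied at every step, the whole computation can be unfolded in time, which is how such a network could also be trained on measured trajectories to recover unknown parameters such as omega.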
2. A laparoscope is a medical instrument with which a physician can examine the abdominal cavity through small incisions. In virtual laparoscopy, an examination of the abdominal cavity is simulated with a computer and a force-feedback device in the shape of a laparoscope in order to teach medical students how to use this instrument properly.
© 2022 Springer Nature Switzerland AG
Cite this chapter
Kruse, R., Mostaghim, S., Borgelt, C., Braune, C., Steinbrecher, M. (2022). Recurrent Networks. In: Computational Intelligence. Texts in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-030-42227-1_9
Print ISBN: 978-3-030-42226-4
Online ISBN: 978-3-030-42227-1