Original Contribution
A method for improving the real-time recurrent learning algorithm
Cited by (27)
- On-line Gauss-Newton-based learning for fully recurrent neural networks. Nonlinear Analysis, Theory, Methods and Applications, 2005.
- Intracranial pressure model in intensive care unit using a simple recurrent neural network through time. Neurocomputing, 2004.
  Citation excerpt: "Although the RTRL algorithm has great power and generality, it has the disadvantage of being computationally very expensive. Despite several modifications of RTRL [4,45] aimed at reducing this cost, it remains cumbersome for complex problems. Partially recurrent neural networks, also called simple recurrent networks (SRNs), therefore treat the network as a simple dynamical system in which previous states are made available as additional input to a layer; that is, feedback connections are organized in strict topologies such as hidden-to-hidden or output-to-output feedback."
- A conjugate gradient learning algorithm for recurrent neural networks. Neurocomputing, 1999.
- Adaptive networks for physical modeling. Neurocomputing, 1998.
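The citation excerpt above contrasts full RTRL training with the cheaper SRN topology, in which the previous hidden state simply re-enters the hidden layer as extra input. A minimal sketch of that forward pass, assuming an Elman-style hidden-to-hidden feedback and illustrative layer sizes and weight names (none of which come from the article), might look like this:

```python
import numpy as np

# Elman-style simple recurrent network (SRN) sketch: the previous hidden
# state is fed back as additional input to the hidden layer, so only a
# plain forward pass is needed per step, unlike RTRL's full sensitivity
# bookkeeping. Sizes and names below are illustrative assumptions.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2

W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context (previous hidden) -> hidden
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output

def srn_forward(xs):
    """Run a sequence through the SRN, carrying the context state."""
    h = np.zeros(n_hidden)  # context units start at zero
    outputs = []
    for x in xs:
        # previous state h re-enters as additional input (hidden-to-hidden feedback)
        h = np.tanh(W_xh @ x + W_hh @ h)
        outputs.append(W_hy @ h)
    return np.array(outputs)

xs = rng.normal(size=(4, n_in))  # a length-4 input sequence
ys = srn_forward(xs)
print(ys.shape)  # (4, 2)
```

Because the context enters through an ordinary weight matrix `W_hh`, the network can be trained with standard backpropagation (or backpropagation through time) at far lower cost than RTRL's per-step gradient propagation.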
Copyright © 1993 Published by Elsevier Ltd.