Time in Connectionist Models

Chapter in: Sequence Learning

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1828)

Abstract

The prototypical use of "classical" connectionist models (including the multilayer perceptron (MLP), the Hopfield network, and the Kohonen self-organizing map) concerns static data processing. These classical models are not well suited to data that vary over time. In response, temporal connectionist models have appeared and now constitute a steadily growing research field. The purpose of this chapter is to present the main aspects of this research area and to review the key connectionist architectures that have been designed for solving temporal problems.
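The contrast drawn above can be made concrete: a static model such as an MLP maps each input to an output independently, while a temporal model carries a hidden state from one time step to the next. The sketch below is a minimal Elman-style simple recurrent network (SRN) step in plain Python; the weights, sizes, and function names are illustrative assumptions, not taken from the chapter.

```python
# Minimal Elman-style simple recurrent network (SRN) step:
# h_t = tanh(W_in @ x_t + W_rec @ h_{t-1}).
# The recurrent state h is what lets the network depend on temporal order,
# which a static feedforward model cannot do on its own.
# All weights and dimensions below are illustrative, not from the chapter.
import math

def srn_step(x, h_prev, W_in, W_rec):
    """One time step: combine the current input with the previous hidden state."""
    n_hidden = len(h_prev)
    h = []
    for i in range(n_hidden):
        s = sum(W_in[i][j] * x[j] for j in range(len(x)))
        s += sum(W_rec[i][j] * h_prev[j] for j in range(n_hidden))
        h.append(math.tanh(s))
    return h

def run_sequence(xs, n_hidden, W_in, W_rec):
    """Feed a whole sequence through the SRN; the final state depends on order."""
    h = [0.0] * n_hidden
    for x in xs:
        h = srn_step(x, h, W_in, W_rec)
    return h

# Tiny fixed weights: 1 input unit, 2 hidden units.
W_in = [[0.5], [-0.3]]
W_rec = [[0.1, 0.2], [0.4, -0.1]]

# The same multiset of inputs, presented in different orders, yields
# different final hidden states -- the network is sensitive to temporal order.
h_ab = run_sequence([[1.0], [0.0]], 2, W_in, W_rec)
h_ba = run_sequence([[0.0], [1.0]], 2, W_in, W_rec)
print(h_ab != h_ba)  # True
```

A static MLP with the same weights would assign identical outputs to both orderings, since it sees each input in isolation; the recurrence is the minimal ingredient that distinguishes the temporal architectures surveyed in this chapter.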




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Chappelier, J.-C., Gori, M., & Grumbach, A. (2000). Time in Connectionist Models. In: Sun, R., & Giles, C. L. (Eds.), Sequence Learning. Lecture Notes in Computer Science, vol 1828. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44565-X_6


  • Print ISBN: 978-3-540-41597-8

  • Online ISBN: 978-3-540-44565-4

