Abstract
We present an architecture for VLSI neural networks in which stochastic products are used in the synapses. Although computation times increase in comparison with purely analog matrix products, this method has the advantage of offering better precision in the computations.
If n bits of precision are required (depending on the type of network, the learning algorithm, and the application), 2^n steps are needed for a complete computation of the matrix product, but a first approximation can be obtained earlier with well-chosen time constants in the low-pass filter included in the neuron to realize the integration. In standard applications, this is nevertheless much less than the number of steps required by a comparable purely digital processor, which needs at least as many iterations as there are synapses. In comparison with analog synapses, the achievable precision is much better, since it almost no longer depends on the precision of the components (only through the maximum number of connected synapses). The proposed architecture thus offers a good compromise between the precision of digital implementations and the size and speed of analog ones.
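The abstract does not give circuit details, but the core idea of a stochastic product can be sketched in software: each operand is encoded as the probability that a bit in a random stream is 1, an AND gate multiplies two independent streams, and counting the 1s over 2^n stream bits recovers the product to roughly n bits of precision. The function name and parameters below are illustrative, not from the paper:

```python
import random

def stochastic_product(x, y, n_bits=8, seed=0):
    """Estimate x*y for x, y in [0, 1] by ANDing two random bit streams.

    Each value is encoded as the probability that a stream bit is 1; the
    AND of two independent Bernoulli streams is 1 with probability x*y.
    For n_bits of target precision the stream must be 2**n_bits long,
    which is the 2^n-step cost mentioned in the abstract.
    """
    rng = random.Random(seed)  # fixed seed for a repeatable sketch
    length = 2 ** n_bits
    ones = 0
    for _ in range(length):
        bit_x = rng.random() < x   # Bernoulli(x) bit
        bit_y = rng.random() < y   # Bernoulli(y) bit
        ones += bit_x and bit_y    # the AND gate acts as the multiplier
    return ones / length
```

Because the estimate converges as the stream runs, truncating it early yields a coarser approximation, which is the software analogue of reading the neuron's low-pass filter before the full 2^n steps have elapsed.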
© 1991 Springer-Verlag Berlin Heidelberg
Verleysen, M., Jespers, P. (1991). Analog VLSI synapse matrix with enhanced stochastic computations. In: Prieto, A. (eds) Artificial Neural Networks. IWANN 1991. Lecture Notes in Computer Science, vol 540. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0035908
Print ISBN: 978-3-540-54537-8
Online ISBN: 978-3-540-38460-1