Abstract
Analog VLSI neural networks with on-chip learning represent a mature technology for a wide range of industrial and consumer applications, particularly when low power consumption, small size and/or very high speed are required. The approach combines the computational features of neural networks, the implementation efficiency of analog VLSI circuits, and the adaptation capabilities of on-chip learning feedback schemes.
Many experimental chips and microelectronic implementations based on the research carried out by several groups over recent years have been reported in the literature. The author presents and discusses the motivations, the system and circuit issues, the design methodology, and the limitations of this approach. Attention is focused on supervised learning algorithms because of their reliability and popularity within the neural network research community. In particular, the Back Propagation and Weight Perturbation learning algorithms are introduced and reviewed with respect to their analog VLSI implementation.
Finally, the author reviews and compares the main results reported in the literature, highlighting the efficiency and reliability of the on-chip implementation of these algorithms.
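The Weight Perturbation algorithm discussed in the abstract is well suited to analog hardware because it estimates gradients from forward passes alone, by perturbing each weight and observing the change in the network error. The following sketch (a simplified serial variant on a toy quadratic loss, with illustrative names such as `weight_perturbation_step` chosen here, not taken from the paper) shows the core idea; hardware realizations typically perturb weights in parallel and use stochastic sign updates.

```python
def weight_perturbation_step(weights, loss_fn, delta=1e-3, lr=0.1):
    """One serial weight-perturbation update: perturb each weight by a
    small amount, measure the resulting change in loss, and estimate the
    gradient by finite differences -- no backward pass is required."""
    base = loss_fn(weights)
    new_weights = list(weights)
    for i in range(len(weights)):
        probe = list(weights)
        probe[i] += delta
        # Finite-difference estimate of dE/dw_i
        grad_est = (loss_fn(probe) - base) / delta
        new_weights[i] = weights[i] - lr * grad_est
    return new_weights

# Toy demonstration on a quadratic loss with minimum at w = (1, -2)
loss = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
w = [0.0, 0.0]
for _ in range(200):
    w = weight_perturbation_step(w, loss)
print(w)  # converges near [1.0, -2.0]
```

Because only forward evaluations of the error are needed, the scheme tolerates the device mismatch and offset errors that make exact on-chip Back Propagation difficult, at the cost of more error evaluations per update.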
References
Haykin, S., Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, 1999.
Kohonen, T., Self-Organization and Associative Memory. 2nd ed., Springer Verlag, 1988.
Sheu, B. J. and Choi, J., Neural Information Processing and VLSI. Kluwer Academic Publishers, 1995.
Cauwenberghs, G. and Bayoumi, M. (eds.), Learning on Silicon-Adaptive VLSI Neural Systems. Kluwer Academic Publishers, 1999.
Lippmann, R. P., “An introduction to computing with neural nets.” IEEE ASSP Magazine 4(2), pp. 4-22, 1987.
Rosenblatt, F., Principles of Neurodynamics: Perceptron and Theory of Brain Machines. Washington D.C.: Spartan Books, 1962.
Rumelhart, D. E. and McClelland, J. L., Parallel Distributed Processing. Cambridge: MIT Press, 1986.
Murray, A. F., “Silicon implementations of neural networks,” in IEE Proceedings-F 138(1), February 1991.
Anguita, D. and Valle, M., “Perspectives on dedicated hardware implementations,” in Proc. of the European Symposium on Artificial Neural Networks, Bruges (Belgium), pp. 45-55, April 25-27, 2001.
Omondi, A. R., “Neurocomputers: A dead end?” International Journal of Neural Systems 10(6), pp. 475-481, 2000.
Hopfield, J. J. and Tank, D. W., “Computing with neural circuits: A model.” Science 233, pp. 625-633, 8 August 1986.
Tsividis, Y. and Satyanarayana, S., “Analog circuits for variable synapse electronic neural networks.” Electronics Letters 2(24), pp. 1313-1314, 1987.
Mead, C., Analog VLSI and Neural Systems. Reading MA: Addison Wesley, 1989.
Vittoz, E. A., “Future of the analog in the VLSI environment,” in Proc. of ISCAS 1990, pp. 1372-1375, 1990.
Draghici, S., “Neural networks in analog hardware-design and implementation issues.” Int. Journal of Neural Systems 10(3), 2000.
Mead, C., “Neuromorphic electronic systems,” in Proceedings of the IEEE 78(10), October 1990.
Sarpeshkar, R., “Analog versus digital: Extrapolating from electronics to neurobiology.” Neural Computation 10, pp. 1601-1638, 1998.
Indiveri, G., “A neuromorphic VLSI device for implementing 2-D selective attention systems.” IEEE Trans. on Neural Networks 12(6), pp. 1455-1463, November 2001.
Vittoz, E., “Present and future industrial applications of bio-inspired VLSI systems.” Analog Integrated Systems and Signal Processing 30, pp. 173-184, 2002.
Mahowald, M. and Douglas, R., “A silicon neuron.” Nature 354, pp. 515-518, 19/26 December 1991.
Andreou, A. G., “Electronic arts imitate life.” Nature 354, 19/26 December 1991.
Meador, J. L. and Cole, C. S., “A low-power CMOS circuit which emulates temporal electrical properties of neurons.” In: Touretzky, S. (ed.), Advances in Neural Information Processing Systems. Morgan Kaufmann Publishers, 1, 1989.
Le Masson, S. et al., “Analog circuits for modeling biological neural networks: Design and applications.” IEEE Trans. on Biomedical Engineering 46(6), pp. 638-645, June 1999.
Rasche, C. and Douglas, R. J., “Forward and backpropagation in a silicon dendrite.” IEEE Trans. on Neural Networks 12(2), pp. 386-393, March 2001.
Bibyk, S. and Ismail, M., “Issues in analog VLSI and MOS techniques for neural computing.” In: Mead, C. and Ismail, M. (eds.), Analog VLSI Implementation of Neural Systems. Kluwer Academic Publishers, 1989.
Tsividis, Y., “Analog MOS integrated circuits-certain new ideas, trends, and obstacles.” IEEE Journal of Solid-State Circuits SC-22(3), pp. 317-321, June 1987.
Harris, J. G. et al., “Analog hardware implementation of continuous-time adaptive filter structures.” In: Cauwenberghs, G. and Bayoumi, M. B. (eds.), Learning on Silicon-Adaptive VLSI Neural Systems. Kluwer Academic Publishers, 1999.
Vittoz, E. A., “Analog VLSI signal processing: Why, where, and how?” Journal of VLSI Signal Processing 8, pp. 27-44, 1994.
Cairns, G. and Tarassenko, L., “Precision issues for learning with analog VLSI multilayer perceptrons.” IEEE Micro 15(3), pp. 54-56, June 1995.
Ismail, M. and Fiez, T., Analog VLSI Signal and Information Processing. McGraw-Hill International Editors, 1994.
Masa, P., Hoen, K. and Wallinga, H., “A high-speed analog neural processor.” IEEE Micro 14(3), pp. 40-50, June 1994.
Holler, M., Tam, S., Castro, H. and Benson, R., “An electrically trainable artificial neural network (ETANN) with 10240 floating gate synapses,” in Proc. of IJCNN, Washington DC, 2, pp. 191-196, 1989.
Choi, J. and Sheu, B., “VLSI design of compact and high precision analog neural network processor,” in Proc. of IJCNN 2, pp. 637-641, 1992.
Masa, P. et al., “10 mW CMOS retina and classifier for handheld, 1000 images per second optical character recognition systems,” in Proc. ISSCC'99, San Francisco, 1999.
Shima, T., Kimura, T., Kamatani, Y., Itakura, T., Fujita, Y. and Iida, T., “Neuro chips with on-chip back-propagation and/or hebbian learning.” IEEE Journal of Solid State Circuits 27, pp. 1868-1875, 1992.
Valle, M., Caviglia, D. D. and Bisio, G. M., “An analog VLSI neural network with on-chip back propagation learning.” Analog Integrated Circuits and Signal Processing, Kluwer Academic
Boahen, K. A. and Andreou, A. G., “A contrast sensitive silicon retina with reciprocal synapses.” In: Moody, J. E., Hanson, S. J. and Lippmann, R. P. (eds.): Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann Publishers, 1992.
Boahen, K. A., “A retinomorphic vision system.” IEEE Micro 16(5), pp. 30-39, October 1996.
Boahen, K. A., “A retinomorphic chip with parallel pathways: Encoding increasing, on, decreasing, and off, visual signals.” Analog Integrated Circuits and Signal Processing (30), pp. 121-135, 2002.
Liu, W., Andreou, A. G. and Goldstein, M. H., “Voiced-speech representation by an analog silicon model of the auditory periphery.” IEEE Trans. On Neural Networks 3(3), May 1992.
Liu, S., “Silicon photoreceptors with controllable adaptive filtering properties.” In: Cauwenberghs, G. and Bayoumi, M. B. (eds.): Learning on Silicon-Adaptive VLSI Neural Systems. Kluwer Academic Publishers, 1999.
Ridella, S. et al., “K-Winner machines for pattern classification.” IEEE Trans. on Neural Networks 12(2), pp. 371-385, March 2001.
Cauwenberghs, G., “Learning on silicon: A survey.” In: Cauwenberghs, G. and Bayoumi, M. B. (eds.): Learning on Silicon-Adaptive VLSI Neural Systems. Kluwer Academic Publishers, 1999.
Hebb, D. O., The Organization of Behavior, A Neuropsychological Theory. New York: John Wiley, 1949.
Carpenter, G. A., “Neural network models for pattern recognition and associative memory.” Neural Networks 2(4), pp. 243-257, 1989.
Tarassenko, L., A Guide to Neural Computing Applications. London: Arnold Publishers, 1999.
Hertz, J., Krogh, A. and Palmer, R. G., Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Company, 1991.
Cichocki, A. and Unbehauen, R., Neural Networks for Optimisation and Signal Processing. John Wiley & Sons, 1993.
Jabri, M. and Flower, B., “Weight perturbation: An optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks.” IEEE Trans. Neural Networks 3(1), pp. 154-157, 1992.
Alspector, J. and Lippe, D., “A study of parallel perturbative gradient descent.” Advances in Neural Information Processing Systems (NIPS96), pp. 803-810, 1996.
Jabri, M. A., Coggins, R. F. and Flower, B. G., Adaptive Analog VLSI Neural Systems. Chapman & Hall, 1996.
Cauwenberghs, G., “A fast stochastic error-descent algorithm for supervised learning and optimization.” Advances in Neural Information Processing Systems 5 (NIPS5), pp. 244-251, 1993.
Alspector, J., Meir, R., Yuhas, B., Jayakumar, A. and Lippe, D., “A parallel gradient descent method for learning in analog VLSI neural networks.” Advances in Neural Information Processing Systems 5 (NIPS5), pp. 836-844, 1993.
Reyneri, L. M. and Filippi, E., “An analysis on the performance of silicon implementations of backpropagation algorithms for artificial neural networks.” IEEE Trans. on Computers 12, pp. 1380-1389, 1991.
Caviglia, D. D., Valle, M., Rossi, A., Vincentelli, M., Bo, G. M., Colangelo, P., Pedrazzi, P. and Colla, A., “Feature extraction circuit for optical character recognition.” Electronics Letters 30(10), pp. 769-770, 1994.
Jacobs, R. A., “Increased rates of convergence through learning rate adaptation.” Neural Networks 1, pp. 295-307, 1988.
Tollenaere, T., “SuperSAB: Fast adaptive back propagation with good scaling properties.” Neural Networks 3, pp. 561-573, 1990.
Edwards, P. J. and Murray, A. F., “Analogue imprecision in MLP training.” World Scientific Publishing Co. Pte. Ltd., 1996.
Valle, M., Caviglia, D. D., Donzellini, G., Mussi, A., Oddone, F. and Bisio, G. M., “A neural computer based on an analog VLSI neural network,” in Proc. of the International Conference on Artificial Neural Network, ICANN94 2, pp. 1339-1342, 1994.
Bo, G. M., Caviglia, D. D., Chiblé, H. and Valle, M., “A circuit architecture for analog on-chip back propagation learning with local learning rate adaptation.” Analog Integrated Circuits and Signal Processing 18(2/3), pp. 163-174, 1999.
Bo, G. M., Caviglia, D. D., Chiblé, H. and Valle, M., “A self-learning analog neural processor.” To be published in IEICE Trans. on Fundamentals, 2002.
Murray, A. F. and Tarassenko, L., Analogue Neural VLSI: A Pulse Stream Approach. Chapman & Hall, 1994.
Kondo, Y. and Sawada, Y., “Functional abilities of a stochastic logic neural network.” IEEE Trans. on Neural Networks 3(3), pp. 434-443, May 1992.
Andreou, A. G. et al., “Current-mode subthreshold MOS circuits for analog VLSI neural systems.” IEEE Trans. on Neural Networks 2(2), March 1991.
Andreou, A. G. and Boahen, K. A., “Translinear circuits in subthreshold CMOS.” Analog Integrated Circuits and Signal Processing 9, pp. 141-166, 1996.
Alspector, J. et al., “VLSI architectures for neural networks.” In: Antognetti, P. and Milutinovic V. (eds.): Neural Networks: Concepts, Applications and Implementations I, Prentice Hall Publisher, pp. 180-215, 1991.
Annema, A., Feed-Forward Neural Networks: Vector Decomposition Analysis, Modelling and Analog Implementation. Kluwer Academic Publishers, 1995.
Sarné, G. M. L. and Pastorino, M. N., “Application of neural network for the simulation of the traffic flows in a real transportation network,” in Proc. of the International Conference on Artificial Neural Networks ICANN94, pp. 831-833, 1994.
Bo, G. M., Caviglia, D. D. and Valle, M., “An analog VLSI neural architecture for handwritten numeric character recognition,” in Proc. of the International Conference on Artificial Neural Networks ICANN95, Industrial Conference, Paris, France, 1995.
Bourlard, H. and Morgan, N., “Hybrid connectionist models for continuous speech recognition.” In: Lee, C., Soong, F. K. and Paliwal, K. K. (eds.): Automatic Speech and Speaker Recognition. Kluwer Academic Publishers, pp. 259-283, 1996.
Diotalevi, F., Bo, G. M., Caviglia, D. D. and Valle, M., “Evaluation and validation of local and adaptive weight perturbation learning algorithms for optical character recognition applications,” in Proc. of the Third International ICSC Symposia on Intelligent Industrial Automation (IIA'99) and Soft Computing (SOCO'99), Genova, pp. 508-512, 1-4 June, 1999.
Diotalevi, F., Valle, M. and Caviglia, D. D., “An adaptive local rate technique for hierarchical feed forward architectures.” IEEE-INNS-ENNS International Joint Conference on Neural Networks. Piscataway, NJ, Como, Italy: IEEE Press, 24-27 July 2000.
Diotalevi, F., “Analog microelectronic supervised learning systems.” Ph.D. Thesis, http://www.micro.dibe.unige.it/works/FDiotalevi_PHD_Thesis.zip.
Diotalevi, F. and Valle, M., “Weight perturbation learning algorithm with local learning rate adaptation for the classification of remote-sensing images.” ESANN'01, Bruges (Belgium): D-Facto Publisher, pp. 217-222, 25-27 April 2001. (ISBN 2-930307-01-3).
Cauwenberghs, G., “An analog VLSI recurrent neural network learning a continuous-time trajectory.” IEEE Transaction on Neural Networks 7(2), pp. 346-361, 1996.
Hollis, P. W. et al., “The effects of precision constraints in a backpropagation learning network.” Neural Computation 2, pp. 363-373, 1990.
Anguita, D. et al., “Worst case analysis of weight inaccuracy effects in multilayer perceptrons.” IEEE Trans. on Neural Networks 10(2), pp. 415-418, March 1999.
Montalvo, A. J. et al., “An analog VLSI neural network with on-chip perturbation learning.” IEEE Journal of Solid State Circuits, 32(4), pp. 535-543, April 1997.
Montalvo, A. J. et al., “Towards a general-purpose analog VLSI neural network with on-chip learning.” IEEE Trans. on Neural Networks 8(2), pp. 413-423, March 1997.
Horio, Y. et al., “Analog memories for VLSI neurocomputing.” IEEE International Symposium on Circuits and Systems 4, pp. 2986-2989, 1990.
Hollis, P. W. and Paulos, J. J., “Artificial neural networks using MOS analog multiplier.” IEEE Journal of Solid State Circuits 25(3), pp. 849-855, June 1990.
van der Spiegel, J. et al., “An analog neural computer with modular architecture for real-time dynamic computations.” IEEE Journal of Solid State Circuits 22(1), pp. 82-91, January 1992.
Kramer, A. et al., “Flash-based programmable nonlinear capacitor for switched-capacitor implementations of neural networks.” IEDM Tech. Dig., pp. 17.6.1-17.6.4, December 1994.
Pavan, P., Bez, R., Olivo, P. and Zanoni, E., “Flash memory cells-an overview,” in Proceedings of the IEEE 85(8), pp. 1248-1271, 1997.
Harrison, R. R. et al., “Special issue on floating-gate devices, circuits and systems.” IEEE Trans. on Circuits and Systems II: Analog and Digital Signal Processing 48(1), January 2001.
Kim, K., Lee, K., Jung, T. and Suh, K., “An 8-bit-resolution, 360-µs write time non-volatile analog memory based on differentially balanced constant-tunnelling-current scheme (DBCS).” IEEE Journal of Solid State Circuits 33(11), pp. 1758-1762, 1998.
Harrison, R. R. et al., “A CMOS programmable analog memory-cell array using floating-gate circuits.” IEEE Trans. on Circuits and Systems II: Analog and Digital Signal Processing 48(1), pp. 4-11, January 2001.
Murray, A. F., “Pulse arithmetic in VLSI neural networks.” IEEE Micro 9(6), pp. 64-74, 1989.
Schwartz, D. B. et al., “A programmable analog neural network chip.” IEEE Journal of Solid State Circuits 24(2), pp. 313-319, 1989.
Kub, F. J. et al., “Programmable analog vector-matrix multipliers.” IEEE Journal of Solid State Circuits 25(1), pp. 207-214, February 1990.
Tsividis, Y., Mixed Analog-Digital VLSI Devices and Technology: An Introduction. McGraw-Hill, 1995.
Sheu, B. J., Shieh, J. and Patil, M., “Modelling charge injection in MOS analog switch.” IEEE Transaction on Circuits and Systems 34(2), pp. 214-216, 1987.
Vittoz, E. et al., “Analog storage of adjustable synaptic weights.” In: Ramacher, U. (ed.) Introduction to VLSI Design of Neural Networks. Kluwer Academic Publisher, 1991.
Castello, R., Caviglia, D. D., Franciotta, M. and Montecchi, F., “Self refreshing analogue memory cell for variable synaptic weights.” IEE Electronics Letters 27(20), pp. 1871-1872, 1991.
Hochet, B. et al., “Implementation of a learning Kohonen neuron based on a new multilevel storage technique.” IEEE Journal of Solid State Circuits 26(3), pp. 262-267, March 1991.
Cauwenberghs, G. and Yariv, A., “Fault-tolerant dynamic multilevel storage in analog VLSI.” IEEE Trans. on Circuits and Systems II 41(2), pp. 827-829, 1994.
Ehlert, M. and Klair, H., “A 12-bit medium-time analog storage device in a CMOS standard process.” IEEE Journal of Solid State Circuits 33(7), pp. 1139-1143, 1998.
Pelgrom, M. J. M., Aad, L., Duinmaijer, C. J. and Welbers, A. P. G., “Matching properties of MOS transistors.” IEEE Journal of Solid-State Circuits 24(5), pp. 1433-1440, October 1989.
Kinget, P. and Steyaert, M., “Analog VLSI integration of massive parallel processing systems.” Kluwer Academic Publishers, November 1996.
Shoval, A., Johns, D. A. and Snelgrove, W. M., “Comparison of DC offset effects in four LMS adaptive algorithms.” IEEE Transaction on Circuits and Systems II 42(3), pp. 176-185, 1995.
Satyanarayana, S., Tsividis, Y. P. and Graf, H. P., “A reconfigurable VLSI neural network.” IEEE Journal of Solid State Circuits 27(1), pp. 67-81, 1992.
Dolenko, B. K. and Card, H. C., “Tolerance to analog hardware of on-chip learning in backpropagation networks.” IEEE Trans. on Neural Networks 6(5), pp. 1045-1052, 1995.
Morie, T., “Analog VLSI implementation of self learning neural networks.” In: Cauwenberghs, G. and Bayoumi, M. (eds.): Learning on Silicon-Adaptive VLSI Neural Systems. Kluwer Academic Publishers, 1999.
Alhalabi, B. A., Bayoumi, M. B. and Maaz, B., “Mixed-mode programmable and scalable architecture for on-chip learning.” Analog Integrated Circuits and Signal Processing (18), pp. 175-194, 1999.
Murray, A. F. and Woodburn, R., “The prospects for analogue neural VLSI.” International Journal of Neural Systems 8, pp. 559-579, October/December 1997.
Lehmann, T. and Woodburn, R., “Biologically-inspired learning in pulsed neural networks.” In: Cauwenberghs, G. and Bayoumi, M. (eds.): Learning on Silicon-Adaptive VLSI Neural Systems. Kluwer Academic Publishers, pp. 105-130, 1999.
Gray, P. R. and Meyer, R. G., Analysis and Design of Analog Integrated Circuits, 2nd Edition, McGraw Hill, 1984.
Osa, J. I., Carlosena, A. and Lopez-Martin, A. J., “MOSFET-C filter with on-chip tuning and wide programming range.” IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 48(10), pp. 944-951, October 2001.
Tsividis, Y., “Integrated continuous-time filter design-an overview.” IEEE Journal of Solid State Circuits 29(3), pp. 166-176, March 1994.
Andreou, A. G. and Furth, P. M., “An information theoretic framework for comparing the bit energy of signal representations at the circuit level.” In: Sanchez-Sinencio, E. and Andreou, A. G. (eds.): Low-Voltage Low-Power Integrated Circuits and Systems. IEEE Press, 1999.
Jespers, P. G. A., Integrated Converters. Oxford University Press, 2001.
Kramer, A. H., “Array-based computation: Principles, advantages and limitations,” in Proc. of Microneuro'96, pp. 68-79, 1996.
Frantz, G., “Digital signal processor trends.” IEEE Micro 20(6), pp. 52-59, November/December 2000.
Cauwenberghs, G., Bayoumi, M. and Sanchez-Sinencio, E. (eds.), Special Issue on Learning on Silicon, Analog Integrated Circuits and Signal Processing. Kluwer Academic Publisher, 18(2-3), 1999.
Lehmann, T., “Hardware learning in analog VLSI neural networks.” Ph.D. Thesis, Electronics Institute, Technical University of Denmark, 1994. (http://eivind.imm.dtu.dk/publications/phdthesis.html).
Berg, Y., Sigvartsen, R. L., Lande, T. S. and Abusland, A., “An analog feed-forward neural network with on-chip learning.” Analog Integrated Circuits and Signal Processing 9, pp. 65-75, 1996.
Bo, G. M., Chiblé, H., Caviglia, D. D. and Valle, M., “Analog VLSI on-chip learning neural network with learning rate adaptation.” In: Cauwenberghs, G., Bayoumi, M. A. (eds.): Learning on Silicon-Adaptive VLSI Neural Systems. Kluwer Academic Publishers, pp. 305-330, June 1999.
Lu, C., Shi, B. and Chen, L., “An on-chip BP learning neural network with ideal neuron characteristics and learning rate adaptation.” Analog Integrated Circuits and Signal Processing 31, pp. 55-62, 2002.
http://dspvillage.ti.com/docs/dspproducthome.jhtml
Kramer, A. H., “Array-based analog computation.” IEEE Micro 16(5), pp. 20-29, October 1996.
Genov, R. and Cauwenberghs, G., “Charge-mode parallel architecture for vector-matrix multiplication.” IEEE Trans. on CAS II-Analog and Digital Signal Processing 48(10), pp. 930-936, October 2001.
Caviglia, D. D., Valle, M. and Bisio, G. M., “Effects of weight discretization on the back propagation learning method: Algorithm design and hardware realization.” in Proc. of the IEEE-INNS International Joint Conference on Neural Networks IJCNN, Ann Arbor, MI, San Diego (USA), pp. II 631-II 637, June 17-21, 1990.
Valle, M., Caviglia, D. D. and Bisio, G. M., “Back-propagation learning algorithms for analog VLSI implementation, in VLSI for neural networks and artificial intelligence,” in Proc. International Workshop on VLSI for Neural Networks and Artificial Intelligence, New York, Oxford UK: Plenum Press, 2-4 September 1992; In: Delgado-Frias, J. D. and Moore, W. R. (eds.), 1994, pp. 35-44. (ISBN: 0-306-44722-3).
Aslam-Siddiqi, A. et al., “A 16 × 16 nonvolatile programmable analog vector-matrix multiplier.” IEEE Journal of Solid-State Circuits, 33(10), pp. 1502-1509, October 1998.
Diotalevi, F., Valle, M., Bo, G. M., Biglieri, E. and Caviglia, D. D., “Analog CMOS current mode neural primitives.” In Proc. ISCAS'2000, IEEE International Symposium on Circuits and Systems, Geneva, Switzerland, 28-31 May, 2000, pp. I 419-I 422. (ISBN: 0-7803-5485-0).
Diotalevi, F., Valle, M., Bo, G. M., Biglieri, E. and Caviglia, D. D., “An analog on-chip learning circuit architecture of the weight perturbation algorithm.” ISCAS'2000, IEEE International Symposium on Circuits and Systems, Geneva, Switzerland, 28-31 May, 2000, pp. II 717-II 720. (ISBN: 0-7803-5485-0).
Cite this article
Valle, M. Analog VLSI Implementation of Artificial Neural Networks with Supervised On-Chip Learning. Analog Integrated Circuits and Signal Processing 33, 263–287 (2002). https://doi.org/10.1023/A:1020717929709