Abstract
With larger artificial neural networks (ANNs) and deeper architectures, training methods such as backpropagation are central to learning success. Their role becomes particularly important for interpreting and controlling structures that evolve through machine learning. This work extends previous research on backpropagation-based methods by presenting a modified, full-gradient version of the backpropagation learning algorithm that preserves, or crystallizes, selected neural weights while leaving the remaining weights adaptable, or fluid. Following a design-science-oriented approach, a prototype of a feedforward ANN is demonstrated and refined with the new learning method. The results show that this crystallizing backpropagation improves the controllability and interpretability of neural structures while training proceeds as usual. Because the algorithm establishes neural hierarchies, ANN compartments begin to function as cognitive levels. The study demonstrates the value of treating ANNs as hierarchies through backpropagation and introduces learning methods as a novel way of interacting with ANNs. Practitioners benefit from this interactive process because they can restrict learning to specific architectural components of an ANN and focus further development on specific areas of higher cognitive levels without risking the destruction of valuable ANN structures.
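The abstract names the mechanism but not its mechanics. The following is a minimal sketch of the freezing idea behind crystallizing backpropagation under simple assumptions: a one-hidden-layer NumPy network trained by full-gradient backpropagation, in which boolean masks (the names frozen_W1 and frozen_W2 are hypothetical, as is the toy task) suppress updates to crystallized weights while all fluid weights keep learning. It illustrates the principle only and is not the paper's implementation.

# Minimal sketch, assuming plain NumPy; mask names and task are illustrative,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy network: 2 inputs -> 3 hidden units -> 1 output, trained on XOR.
W1 = rng.normal(scale=0.5, size=(2, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))
W1_init = W1.copy()

# Hypothetical crystallization masks: True marks a frozen ("crystallized")
# weight. Here the first hidden unit's incoming weights are preserved.
frozen_W1 = np.zeros_like(W1, dtype=bool)
frozen_W1[:, 0] = True
frozen_W2 = np.zeros_like(W2, dtype=bool)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

learning_rate = 0.5
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: full gradient of the mean squared error.
    d_out = (out - y) * out * (1.0 - out)
    grad_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    grad_W1 = X.T @ d_h

    # Crystallizing step: zero the update wherever the mask is set, so
    # crystallized weights keep their exact values while fluid weights learn.
    W1 -= learning_rate * np.where(frozen_W1, 0.0, grad_W1)
    W2 -= learning_rate * np.where(frozen_W2, 0.0, grad_W2)

# Crystallized weights are untouched; fluid weights have adapted.
assert np.array_equal(W1[frozen_W1], W1_init[frozen_W1])
print("final squared error:", float(np.mean((out - y) ** 2)))

In a framework such as PyTorch, the same effect can be approximated by multiplying each parameter's gradient with such a mask before the optimizer step, which restricts learning to chosen architectural components as the abstract describes.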
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Grum, M. (2023). Learning Representations by Crystallized Back-Propagating Errors. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing. ICAISC 2023. Lecture Notes in Computer Science, vol. 14125. Springer, Cham. https://doi.org/10.1007/978-3-031-42505-9_8
Print ISBN: 978-3-031-42504-2
Online ISBN: 978-3-031-42505-9