Efficient Target Propagation by Deriving Analytical Solution

Authors

  • Yanhao Bao, Tokyo Institute of Technology
  • Tatsukichi Shibuya, Tokyo Institute of Technology
  • Ikuro Sato, Tokyo Institute of Technology / Denso IT Laboratory
  • Rei Kawakami, Tokyo Institute of Technology
  • Nakamasa Inoue, Tokyo Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v38i10.28977

Keywords:

ML: Deep Learning Algorithms, ML: Bio-inspired Learning

Abstract

Exploring biologically plausible alternatives to error backpropagation (BP) is a challenging research topic in artificial intelligence that also offers insight into how the brain learns. Recently, Target Propagation (TP), combined with well-designed feedback loss functions such as the Local Difference Reconstruction Loss (LDRL) and hierarchical training of feedback-pathway synaptic weights, has achieved performance comparable to BP on image classification tasks. However, as the number of network layers grows, the cost of tuning and training the feedback weights escalates. Drawing inspiration from the work of Ernoult et al., we propose a training method that directly seeks the optimal feedback weights. The method improves the efficiency of feedback training by minimizing the feedback loss analytically, allowing feedback layers to skip local training iterations. More specifically, we introduce a Jacobian matching loss (JML) for feedback training and design layers for which the analytical minimizer of JML can be derived. Experiments validate the effectiveness of this approach: on the CIFAR-10 dataset, our method achieves accuracy comparable to state-of-the-art TP methods, and we further explore its effectiveness on more intricate network architectures.
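
For intuition only (this is not the authors' code), the sketch below shows, for a purely linear layer, how a Jacobian matching objective can admit a closed-form minimizer so that feedback-weight training needs no local iterations. The assumed loss form ||Q - W^T||_F^2, the NumPy setup, and all names are illustrative assumptions; the paper's actual JML and the layers it designs for analytical solutions may differ.

    # Minimal sketch, assuming a linear forward layer f(h) = W h with
    # feedback mapping g(t) = Q t and an assumed Jacobian matching loss
    # L_JM(Q) = || Q - W^T ||_F^2, which is minimized in closed form by
    # Q* = W^T (no iterative feedback training required in this toy case).
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 128))   # forward weights: h_out = W @ h_in

    def jacobian_matching_loss(Q, W):
        # Squared Frobenius distance between the feedback Jacobian Q
        # and the transposed forward Jacobian W^T (assumed loss form).
        return np.sum((Q - W.T) ** 2)

    Q_star = W.T                          # analytical minimizer of the assumed loss
    print(jacobian_matching_loss(Q_star, W))  # prints 0.0: closed-form optimum

For nonlinear layers the forward Jacobian depends on the input, so the closed-form step above no longer applies directly; the paper's contribution is precisely to design layers for which an analytical minimizer of JML can still be derived.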

Published

2024-03-24

How to Cite

Bao, Y., Shibuya, T., Sato, I., Kawakami, R., & Inoue, N. (2024). Efficient Target Propagation by Deriving Analytical Solution. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11016-11023. https://doi.org/10.1609/aaai.v38i10.28977

Issue

Vol. 38 No. 10 (2024)

Section

AAAI Technical Track on Machine Learning I