research-article · DOI: 10.1145/3549555.3549590

Streaming learning with Move-to-Data approach for image classification

Published: 7 October 2022

ABSTRACT

In Deep Neural Network training, the availability of a large amount of representative training data is the sine qua non condition for good generalization capacity of the model. In many real-world applications, data is not available all at once but arrives on the fly. If a pre-trained model is simply fine-tuned on the new data, catastrophic forgetting typically occurs. Incremental learning mechanisms propose ways to overcome catastrophic forgetting. Streaming learning is a type of incremental learning in which the model learns from each new data instance as soon as it becomes available, in a single training pass. In this work, we conduct an experimental study, on a large dataset, of the incremental/streaming learning method Move-to-Data that we previously proposed, and propose an updated approach that "re-targets" with gradient descent and is faster than the popular streaming learning method ExStream. The method achieves better performance and computational efficiency than ExStream: Move-to-Data with gradient is on average 3.5 times faster than ExStream with similar accuracy, a 0.5% improvement over ExStream.
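The single-pass streaming setting described above can be illustrated with a minimal sketch. The function name, dimensions, and the exact update rule below are hypothetical simplifications for illustration: a frozen feature extractor feeds a linear last layer, and for each incoming sample the weight vector of the true class is nudged toward the sample's normalized feature vector, in the spirit of Move-to-Data (this is not the authors' exact formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: output size of a frozen backbone, and class count.
FEAT_DIM, NUM_CLASSES = 8, 3

# Final-layer weights (one weight vector per class), as in a linear
# classifier placed on top of a frozen, pre-trained feature extractor.
W = rng.normal(scale=0.1, size=(NUM_CLASSES, FEAT_DIM))

def move_to_data_update(W, feat, label, step=0.1):
    """Single-pass streaming update (illustrative sketch): move the weight
    vector of the true class a small step toward the normalized feature
    vector of the incoming sample."""
    f = feat / np.linalg.norm(feat)
    W[label] = (1.0 - step) * W[label] + step * f
    return W

# A stream of (feature, label) pairs arriving one at a time; each sample
# is seen exactly once (a single training pass) and then discarded.
for _ in range(100):
    label = int(rng.integers(NUM_CLASSES))
    feat = rng.normal(loc=label, size=FEAT_DIM)  # toy class-dependent features
    W = move_to_data_update(W, feat, label)
```

Because each update touches only one weight vector and requires no replay buffer or rehearsal over stored samples, such per-instance updates are cheap, which is consistent with the speed advantage over ExStream reported in the abstract.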


Published in

CBMI '22: Proceedings of the 19th International Conference on Content-based Multimedia Indexing
September 2022, 208 pages
ISBN: 9781450397209
DOI: 10.1145/3549555

Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article · Refereed limited
