Progressive Self-supervised Multi-objective NAS for Image Classification

  • Conference paper
  • In: Applications of Evolutionary Computation (EvoApplications 2024)

Abstract

We introduce a novel progressive self-supervised framework for neural architecture search (NAS). Our aim is to search for competitive, yet significantly less complex, generic CNN architectures that can serve multiple tasks, i.e., as a pretrained model. To this end we employ Cartesian genetic programming (CGP) for NAS, integrating self-supervised learning into a progressive architecture search process. The search is formulated over a continuous domain and tackled with multi-objective evolutionary algorithms (MOEAs). To empirically validate our proposal, we conducted a rigorous evaluation using the non-dominated sorting genetic algorithm II (NSGA-II) on the CIFAR-100, CIFAR-10, SVHN, and CINIC-10 datasets. The experimental results show that our approach is competitive with state-of-the-art proposals in both classification performance and model complexity, and they suggest that the discovered architectures generalize well across tasks.
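The multi-objective selection step the abstract refers to (NSGA-II) ranks candidate architectures by Pareto dominance over competing objectives, here classification error versus model complexity. The following is a minimal, hypothetical sketch of NSGA-II-style non-dominated sorting; the toy objective values and function names are illustrative only and are not taken from the paper's implementation.

```python
def dominates(a, b):
    """a dominates b if a is no worse on every objective (all minimized)
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return Pareto fronts as lists of indices, NSGA-II fast-sort style."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # indices each solution dominates
    dom_count = [0] * n                    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)  # non-dominated: first Pareto front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front

# Toy population: each candidate scored on (error rate, parameters in millions).
population = [(0.28, 1.2), (0.25, 3.5), (0.30, 0.8),
              (0.25, 3.5), (0.40, 5.0), (0.22, 6.1)]
fronts = non_dominated_sort(population)
```

In a full NSGA-II loop, the fronts would then be filled into the next generation in rank order, with crowding distance breaking ties inside the last front that fits.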



Acknowledgements

The authors thankfully acknowledge computer resources, technical advice and support provided by Laboratorio Nacional de Supercómputo del Sureste de México (LNS), a member of the CONACYT national laboratories, with project No. 202103083C. This work was supported by CONACyT under grant CB-S-26314.

Author information

Corresponding author

Correspondence to Cosijopii Garcia-Garcia.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Garcia-Garcia, C., Morales-Reyes, A., Escalante, H.J. (2024). Progressive Self-supervised Multi-objective NAS for Image Classification. In: Smith, S., Correia, J., Cintrano, C. (eds) Applications of Evolutionary Computation. EvoApplications 2024. Lecture Notes in Computer Science, vol 14635. Springer, Cham. https://doi.org/10.1007/978-3-031-56855-8_11

  • DOI: https://doi.org/10.1007/978-3-031-56855-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56854-1

  • Online ISBN: 978-3-031-56855-8

  • eBook Packages: Computer Science; Computer Science (R0)
