
Low-Memory Matrix Adaptation Evolution Strategies Exploiting Gradient Information and Lévy Flight

Conference paper, in: Applications of Evolutionary Computation (EvoApplications 2024)

Abstract

The Low-Memory Matrix Adaptation Evolution Strategy (LMMA-ES) is a recent variant of CMA-ES specifically designed for large-scale numerical optimization. In this paper, we investigate if and how gradient information can be included in this algorithm in order to enhance its performance. Furthermore, we consider the incorporation of Lévy flight, both to alleviate stability issues caused by possibly unreliable gradient estimates and to promote better exploration. In total, we propose four new variants of LMMA-ES, making use of real and estimated gradients, with and without Lévy flight. We test the proposed variants on two neural network training tasks, one for image classification through the newly introduced Forward-Forward paradigm, and one for a Reinforcement Learning problem, as well as on five benchmark functions for numerical optimization.
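Lévy-flight steps such as those referenced in the abstract are commonly sampled with Mantegna's algorithm, which produces heavy-tailed perturbations from two Gaussian draws. The sketch below is a generic, stdlib-only illustration of that sampler; the paper's exact parameterization and how the steps are injected into LMMA-ES are not reproduced here, and the function name and default `beta` are illustrative assumptions.

```python
import math
import random

def levy_step(beta: float = 1.5) -> float:
    """Draw one Lévy-flight step via Mantegna's algorithm.

    beta in (0, 2) controls tail heaviness; beta = 1.5 is a common choice.
    Returns a heavy-tailed scalar step: mostly small moves, with occasional
    large jumps that aid exploration.
    """
    # Scale of the numerator Gaussian, chosen so that u / |v|^(1/beta)
    # approximates a symmetric Lévy-stable distribution of index beta.
    sigma_u = (
        math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
    ) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In an ES context, such a step would typically multiply (or replace) the usual Gaussian mutation along each coordinate, trading some local precision for occasional long-range jumps out of poor basins.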


Notes

  1. https://boostedml.com/2020/07/gradient-descent-and-momentum-the-heavy-ball-method.html.

  2. Note that this method can also be useful with real gradients. In fact, in certain circumstances, even real gradients may not help find better points in the search space.

  3. http://yann.lecun.com/exdb/mnist/.

  4. https://www.cs.toronto.edu/~kriz/cifar.html.

  5. https://gymnasium.farama.org/environments/classic_control/cart_pole/.


Author information

Correspondence to Giovanni Iacca.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lunelli, R., Iacca, G. (2024). Low-Memory Matrix Adaptation Evolution Strategies Exploiting Gradient Information and Lévy Flight. In: Smith, S., Correia, J., Cintrano, C. (eds) Applications of Evolutionary Computation. EvoApplications 2024. Lecture Notes in Computer Science, vol 14634. Springer, Cham. https://doi.org/10.1007/978-3-031-56852-7_3


  • DOI: https://doi.org/10.1007/978-3-031-56852-7_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56851-0

  • Online ISBN: 978-3-031-56852-7
