
Metamodels for Fast Multi-objective Optimization: Trading Off Global Exploration and Local Exploitation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6457)

Abstract

Metamodels can speed up the optimization process. Previously evaluated designs can be used as a training set for building surrogate models, on which an inexpensive virtual optimization can then be performed. Candidate solutions found in this way need to be validated (evaluated by means of the real solver).

This process can be iterated automatically: this is the rationale behind fast optimization algorithms. At each iteration the newly evaluated designs enrich the training database, allowing more and more accurate metamodels to be built in an adaptive way.

In this paper a novel scheme for fast optimizers is introduced: the virtual optimization, representing an exploitation process, is accompanied by a virtual run of a suitable space-filling algorithm, serving exploration purposes, which increases the robustness of the fast optimizer.
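The iterative scheme sketched in the abstract can be illustrated as follows. This is a minimal, hypothetical reconstruction, not the authors' implementation: it uses a single objective for brevity (the paper targets multi-objective problems), an inverse-distance-weighting surrogate as a stand-in for the metamodels discussed (RBF, Kriging, neural networks), and a random pool with a max-min distance criterion as a stand-in for the space-filling algorithm. All function names are assumptions.

```python
import math
import random

def true_objective(x):
    # Stand-in for the expensive real solver: a shifted quadratic.
    return sum((xi - 0.3) ** 2 for xi in x)

def surrogate(train, x):
    # Inverse-distance-weighting metamodel built from evaluated designs.
    num = den = 0.0
    for xt, yt in train:
        d = math.dist(x, xt)
        if d < 1e-12:
            return yt
        w = 1.0 / d ** 2
        num += w * yt
        den += w
    return num / den

def fast_optimize(dim=2, n_init=5, iters=10, n_virtual=200, seed=0):
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(0.0, 1.0) for _ in range(dim)]
    # Initial design of experiments, evaluated with the real solver.
    train = [(x, true_objective(x)) for x in (sample() for _ in range(n_init))]
    for _ in range(iters):
        pool = [sample() for _ in range(n_virtual)]
        # Exploitation: virtual optimization on the cheap metamodel.
        exploit = min(pool, key=lambda x: surrogate(train, x))
        # Exploration: space-filling candidate farthest from evaluated designs.
        explore = max(pool,
                      key=lambda x: min(math.dist(x, xt) for xt, _ in train))
        # Validate both candidates with the real solver; enrich the training set.
        train += [(x, true_objective(x)) for x in (exploit, explore)]
    return min(train, key=lambda t: t[1])

best_x, best_y = fast_optimize()
```

Validating the exploration point alongside the exploitation point is what keeps the surrogate from collapsing onto a single basin: the training database keeps gaining coverage of the design space even while the virtual optimization refines the current best region.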




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rigoni, E., Turco, A. (2010). Metamodels for Fast Multi-objective Optimization: Trading Off Global Exploration and Local Exploitation. In: Deb, K., et al. Simulated Evolution and Learning. SEAL 2010. Lecture Notes in Computer Science, vol 6457. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17298-4_56


  • DOI: https://doi.org/10.1007/978-3-642-17298-4_56

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17297-7

  • Online ISBN: 978-3-642-17298-4

