
Simultaneous Generation of Accurate and Interpretable Neural Network Classifiers

  • Chapter in: Multi-Objective Machine Learning

Part of the book series: Studies in Computational Intelligence (SCI, volume 16)

Abstract

Generating machine learning models is inherently a multi-objective optimization problem. The two most common objectives are accuracy and interpretability, which often conflict with each other. While in many cases we are interested only in the accuracy of a model, its interpretability becomes the major concern when the model is used for data mining or applied in critical domains. In this chapter, we present a method for simultaneously generating accurate and interpretable neural network classifiers using an evolutionary multi-objective optimization algorithm. Lifetime learning is embedded in the evolution, which mutates both the structure and the weights of the neural networks, to fine-tune the weights. The efficiency of the Baldwin effect and of Lamarckian evolution is compared, and Lamarckian evolution is found to outperform the Baldwin effect in the evolutionary multi-objective optimization of neural networks. Simulation results on two benchmark problems demonstrate that the evolutionary multi-objective approach is able to generate both accurate and understandable neural network models, which can be used for different purposes.
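To make the workflow concrete, the sketch below (Python/NumPy) illustrates the general idea under stated assumptions; it is not the chapter's exact algorithm. A plain gradient-descent learner stands in for the gradient-based lifetime learning, a simple Pareto-based selection stands in for the multi-objective evolutionary algorithm, and classification error and connection count serve as the accuracy and interpretability objectives. A flag switches between Lamarckian inheritance (learned weights are copied back into the genotype) and the Baldwin effect (learning only shapes the fitness). Names such as lifetime_learning and evolve are illustrative, not from the chapter.

```python
# Minimal sketch (illustrative, not the authors' exact algorithm): a Pareto-based
# evolutionary loop over one-hidden-layer classifiers with two objectives --
# classification error and connection count -- and embedded lifetime learning
# that is either Lamarckian or Baldwinian.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Net:
    """One-hidden-layer classifier; a boolean mask encodes the structure."""
    def __init__(self, n_in, n_hidden=4):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.mask = rng.random((n_in, n_hidden)) < 0.7   # pruned input connections

    def copy(self):
        clone = Net.__new__(Net)
        clone.w1, clone.w2, clone.mask = self.w1.copy(), self.w2.copy(), self.mask.copy()
        return clone

def lifetime_learning(net, X, y, steps=30, lr=0.2, lamarckian=True):
    """A few gradient-descent steps on the weights; the structure stays fixed."""
    w1, w2 = net.w1.copy(), net.w2.copy()
    for _ in range(steps):
        h = np.tanh(X @ (w1 * net.mask))
        p = sigmoid(h @ w2)
        d2 = (p - y) * p * (1.0 - p) / len(y)      # squared-error output delta
        d1 = (d2 @ w2.T) * (1.0 - h ** 2)          # backprop through the tanh layer
        w2 -= lr * (h.T @ d2)
        w1 -= lr * (X.T @ d1) * net.mask
    if lamarckian:
        net.w1, net.w2 = w1, w2                    # acquired weights become heritable
    p = sigmoid(np.tanh(X @ (w1 * net.mask)) @ w2) # fitness always uses learned weights
    error = float(np.mean((p > 0.5) != y))
    return error, int(net.mask.sum())              # (accuracy obj., interpretability obj.)

def mutate(net):
    child = net.copy()
    child.w1 += rng.normal(0.0, 0.1, child.w1.shape)   # weight mutation
    child.w2 += rng.normal(0.0, 0.1, child.w2.shape)
    if rng.random() < 0.3:                             # structural mutation: flip one connection
        i, j = rng.integers(child.mask.shape[0]), rng.integers(child.mask.shape[1])
        child.mask[i, j] = ~child.mask[i, j]
    return child

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(X, y, pop_size=20, generations=30, lamarckian=True):
    pop = [Net(X.shape[1]) for _ in range(pop_size)]
    for _ in range(generations):
        pop += [mutate(p) for p in pop]                # offspring by mutation only
        scored = [(lifetime_learning(n, X, y, lamarckian=lamarckian), n) for n in pop]
        # keep the non-dominated individuals, then fill up with the best remaining ones
        front = [(f, n) for f, n in scored if not any(dominates(g, f) for g, _ in scored)]
        rest = sorted([s for s in scored if s not in front], key=lambda s: s[0])
        pop = [n for _, n in (front + rest)[:pop_size]]
    return front  # approximate trade-off front between error and number of connections

if __name__ == "__main__":
    # Toy two-class problem; the chapter's experiments use benchmark data sets.
    X = rng.normal(size=(200, 5))
    y = (X[:, :2].sum(axis=1, keepdims=True) > 0).astype(float)
    print(evolve(X, y, lamarckian=True))
```

In the Lamarckian mode the weights refined by learning are inherited, so improvements accumulate directly in the population, whereas in the Baldwinian mode the genotype is left untouched and learning only smooths the fitness landscape; this is the distinction whose efficiency the chapter compares.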





Copyright information

© 2006 Springer

About this chapter

Cite this chapter

Jin, Y., Sendhoff, B., Körner, E. (2006). Simultaneous Generation of Accurate and Interpretable Neural Network Classifiers. In: Jin, Y. (ed.) Multi-Objective Machine Learning. Studies in Computational Intelligence, vol 16. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-33019-4_13


  • DOI: https://doi.org/10.1007/3-540-33019-4_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-30676-4

  • Online ISBN: 978-3-540-33019-6

  • eBook Packages: Engineering, Engineering (R0)
