
Constructive Induction: Covering Attribute Spectrum

Chapter in: Feature Extraction, Construction and Selection

Part of the book series: The Springer International Series in Engineering and Computer Science (SECS, volume 453)

Abstract

Inductive algorithms rely strongly on their representational biases. Representational inadequacy can be mitigated by constructive induction. This chapter introduces several important issues for constructive induction and describes a new multi-strategy constructive induction algorithm (GALA2.0) that is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step applied before standard machine learning algorithms. We present results that demonstrate the effectiveness of GALA2.0 on real domains for two different learners: C4.5 and backpropagation.
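
Since the chapter body is not included in this preview, the following is only a minimal sketch of the general idea the abstract describes: constructive induction as a learner-independent preprocessing step. It is an assumption for illustration, not GALA2.0's actual operators or search. Candidate attributes are formed as conjunctions of existing boolean attributes, kept only when they are more informative (by estimated mutual information) than either parent, and the augmented representation is then handed to a standard learner; scikit-learn's DecisionTreeClassifier stands in for C4.5. The helper names build_conjunctions and augment are hypothetical.

import numpy as np
from itertools import combinations
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier


def build_conjunctions(X, y):
    # Hypothetical constructive-induction pass: propose pairwise conjunctions
    # of boolean attributes and keep those more informative than their parents.
    base_mi = mutual_info_classif(X, y, discrete_features=True)
    new_cols, names = [], []
    for i, j in combinations(range(X.shape[1]), 2):
        cand = (X[:, i] & X[:, j]).reshape(-1, 1)
        cand_mi = mutual_info_classif(cand, y, discrete_features=True)[0]
        if cand_mi > max(base_mi[i], base_mi[j]):
            new_cols.append(cand)
            names.append(f"x{i} AND x{j}")
    if not new_cols:
        return np.empty((X.shape[0], 0), dtype=X.dtype), []
    return np.hstack(new_cols), names


def augment(X, y):
    # Preprocessing step: original attributes plus the constructed conjunctions.
    new_cols, names = build_conjunctions(X, y)
    return np.hstack([X, new_cols]), names


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 4))          # four boolean attributes
    y = (X[:, 0] & X[:, 1]) | X[:, 2]              # target depends on a conjunction
    X_aug, names = augment(X, y)
    clf = DecisionTreeClassifier(random_state=0).fit(X_aug, y)
    print("constructed attributes:", names)
    print("training accuracy:", clf.score(X_aug, y))

The point of the preprocessing framing is that the constructed attributes are independent of the downstream learner: the same augmented attribute matrix could just as well be fed to a backpropagation network instead of the tree learner.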




Copyright information

© 1998 Springer Science+Business Media New York

About this chapter

Cite this chapter

Hu, YJ. (1998). Constructive Induction: Covering Attribute Spectrum. In: Liu, H., Motoda, H. (eds) Feature Extraction, Construction and Selection. The Springer International Series in Engineering and Computer Science, vol 453. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5725-8_16


  • DOI: https://doi.org/10.1007/978-1-4615-5725-8_16

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7622-4

  • Online ISBN: 978-1-4615-5725-8

  • eBook Packages: Springer Book Archive
