Abstract
Inductive learning algorithms depend strongly on their representational biases; when the representation is inadequate, learning suffers. Constructive induction mitigates this representational inadequacy by generating new attributes. This chapter introduces several important issues in constructive induction and describes a new multi-strategy constructive induction algorithm (GALA2.0) that is independent of the learning algorithm. Unlike most previous research on constructive induction, our method is designed as a preprocessing step applied before a standard machine learning algorithm is run. We present results demonstrating the effectiveness of GALA2.0 on real domains for two different learners: C4.5 and backpropagation.
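To make the preprocessing idea concrete, the sketch below shows one minimal form of constructive induction: pairwise conjunctions of boolean attributes are generated and filtered by information gain, and the surviving features are handed to any downstream learner (C4.5, backpropagation, etc.) alongside the original attributes. GALA2.0's actual constructive operators and search strategy are not specified in this abstract, so the conjunction operator and gain threshold here are illustrative assumptions only.

```python
# Minimal sketch of constructive induction as learner-independent
# preprocessing. Assumed details (not from the chapter abstract):
# conjunction as the only constructive operator, and an information-gain
# threshold as the feature filter.
from itertools import combinations
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(feature, labels):
    """Information gain of a candidate feature column w.r.t. the labels."""
    remainder = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return entropy(labels) - remainder

def construct_features(X, y, threshold=0.1):
    """Return (name, column) pairs for pairwise conjunctions of boolean
    attributes whose information gain exceeds the threshold."""
    n_attrs = len(X[0])
    new_features = []
    for i, j in combinations(range(n_attrs), 2):
        col = [int(row[i] and row[j]) for row in X]
        if info_gain(col, y) > threshold:
            new_features.append((f"x{i}_and_x{j}", col))
    return new_features

# Toy data: the target concept is x0 AND x1, which no single original
# attribute captures well, but the constructed conjunction captures exactly.
X = [[0,0,1],[0,1,0],[1,0,1],[1,1,0],[1,1,1],[0,1,1],[1,0,0],[0,0,0]]
y = [0, 0, 0, 1, 1, 0, 0, 0]
features = construct_features(X, y)  # only x0_and_x1 survives the filter
```

Because the constructed columns are simply appended to the data, the same preprocessing output can feed a decision-tree learner or a neural network unchanged, which is the sense in which this style of constructive induction is independent of the learning algorithm.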
References
Aha, D. (1991). Incremental Constructive Induction: An Instance-Based Approach. in Proceedings of the 8th Machine Learning Workshop, pages 117–121.
Clark, P. and Boswell, R. (1991). Rule Induction with CN2: Some Recent Improvements. in European Working Session on Learning, pages 151–161.
Clark, P. and Niblett, T. (1989). The CN2 Induction Algorithm. Machine Learning 3, pages 261–283.
Dietterich, T.G. and Michalski, R.S. (1981). Inductive Learning of Structural Descriptions: Evaluation Criteria and Comparative Review of Selected Methods. Artificial Intelligence 16 (3), pages 257–294.
Drastal, G., Czako, G. and Raatz, S. (1989). Induction in an Abstraction Space: A Form of Constructive Induction. in Proceedings of the 11th International Joint Conference on Artificial Intelligence, pages 708–712.
Fawcett, T.E. and Utgoff, P.E. (1992). Automatic Feature Generation for Problem Solving Systems. in Proceedings of the 9th International Workshop on Machine Learning, pages 144–153.
Flann, N.S. and Dietterich, T.G. (1986). Selecting Appropriate Representations for Learning from Examples. in Proceedings of the 5th National Conference on Artificial Intelligence, pages 460–466.
Hertz, J., Krogh, A. and Palmer, R.G. (1991). Introduction to the Theory of Neural Computation. Addison-Wesley.
Hu, Y. and Kibler, D. (1996). Generation of Attributes for Learning Algorithms. in Proceedings of the 13th National Conference on Artificial Intelligence, pages 806–811.
Hu, Y. (1997). Learning Different Types of New Attributes by Combining the … in Proceedings of the 9th European Conference on Machine Learning, pages 124–137.
Kung, S.Y. (1993). Digital Neural Networks. Prentice Hall.
Matheus, C.J. and Rendell, L.A. (1989). Constructive Induction on Decision Trees. in Proceedings of the 11th International Joint Conference on Artificial Intelligence, pages 645–650.
Matheus, C.J. (1991). The Need for Constructive Induction. in Proceedings of the 8th Machine Learning Workshop, pages 173–177.
Norton, S.W. (1989). Generating Better Decision Trees. in Proceedings of the 11th International Joint Conference on Artificial Intelligence, pages 800–805.
Pagallo, G. (1989). Learning DNF by Decision Trees. in Proceedings of the 11th International Joint Conference on Artificial Intelligence.
Pagallo, G. and Haussler, D. (1990). Boolean Feature Discovery in Empirical Learning. Machine Learning 5, pages 71–99.
Perez, E. and Rendell, L. (1995). Using Multidimensional Projection to Find Relations. in Proceedings of the 12th Machine Learning Conference, pages 447–455.
Perez, E. and Rendell, L. (1996). Learning Despite Concept Variation by Finding Structure in Attribute-Based Data. in Proceedings of the 13th Machine Learning Conference, pages 391–399.
Quinlan, J.R. (1983). Learning Efficient Classification Procedures and Their Application to Chess End Games. in Michalski, R.S., Carbonell, J.G., and Mitchell, T.M. (Eds.), Machine Learning: An Artificial Intelligence Approach.
Quinlan, J.R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA.
Ragavan, H., Rendell, L., Shaw, M., and Tessmer, A. (1993). Complex Concept Acquisition through Directed Search and Feature Caching. in Proceedings of the 13th International Joint Conference on Artificial Intelligence, pages 946–958.
Ragavan, H. and Rendell, L. (1993). Lookahead Feature Construction for Learning Hard Concepts. in Proceedings of the 10th Machine Learning Conference, pages 252–259.
Rendell, L.A. and Ragavan, H. (1993). Improving the Design of Induction Methods by Analyzing Algorithm Functionality and Data-Based Concept Complexity. in Proceedings of the 13th International Joint Conference on Artificial Intelligence, pages 952–958.
Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986). Learning Internal Representations by Error Propagation. in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol 1, pages 318–362. MIT Press, Cambridge, MA.
Spackman, K. (1988). Learning Categorical Decision Criteria in Biomedical Domains. in Proceedings of the 5th International Workshop on Machine Learning, pages 36–46.
Yang, D-S., Rendell, L.A., and Blix, G. (1991). A Scheme for Feature Construction and a Comparison of Empirical Methods. in Proceedings of the 12th International Joint Conference on Artificial Intelligence, pages 699–704.
© 1998 Springer Science+Business Media New York
Cite this chapter
Hu, YJ. (1998). Constructive Induction: Covering Attribute Spectrum. In: Liu, H., Motoda, H. (eds) Feature Extraction, Construction and Selection. The Springer International Series in Engineering and Computer Science, vol 453. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5725-8_16
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-7622-4
Online ISBN: 978-1-4615-5725-8