
Using single-layered neural networks for the extraction of conjunctive rules and hierarchical classifications

Abstract

Machine learning is an area concerned with automating the process of knowledge acquisition. Neural networks generally represent their knowledge at a lower level, while knowledge-based systems use higher-level knowledge representations. The method we propose here provides a technique for automatically extracting production rules from the lower-level representation used by a single-layered neural network trained with Hebb's rule. Although a single-layered neural network cannot model complex, nonlinear domains, its strength in dealing with noise has enabled us to produce correct rules in a noisy domain.
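
A minimal sketch of this idea, not the authors' actual algorithm: train a single-layer network with Hebb's rule on binary attribute/class examples, then read conjunctive rules off the weight matrix by keeping the attributes most strongly associated with each class. The toy data, the per-class normalisation, the 0.8 threshold, and the helper names hebb_train and extract_rules are all illustrative assumptions.

    import numpy as np

    def hebb_train(X, Y):
        """Hebb's rule: w[i, j] accumulates the co-activation of input
        attribute i and output class j over all training examples."""
        # X: (n_examples, n_attributes), Y: (n_examples, n_classes), both 0/1
        return X.T @ Y

    def extract_rules(W, attributes, classes, threshold=0.8):
        """For each class, keep the attributes whose normalised weight
        meets the threshold and conjoin them into one production rule."""
        rules = []
        for j, cls in enumerate(classes):
            w = W[:, j] / W[:, j].max()  # normalise weights per class
            conjuncts = [attributes[i] for i in np.flatnonzero(w >= threshold)]
            rules.append(f"IF {' AND '.join(conjuncts)} THEN {cls}")
        return rules

    # Toy domain: each class is a conjunction of two binary attributes.
    attributes = ["has_fur", "lays_eggs", "has_feathers", "gives_milk"]
    classes = ["mammal", "bird"]
    X = np.array([[1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0]])
    Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])

    for rule in extract_rules(hebb_train(X, Y), attributes, classes):
        print(rule)
    # IF has_fur AND gives_milk THEN mammal
    # IF lays_eggs AND has_feathers THEN bird

Because the Hebbian weights accumulate evidence over all examples, an occasional mislabelled or corrupted example only dilutes the relevant weights slightly rather than flipping them, which is the kind of noise tolerance the abstract refers to.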


Cite this article

Sestito, S., Dillon, T. Using single-layered neural networks for the extraction of conjunctive rules and hierarchical classifications. Appl Intell 1, 157–173 (1991). https://doi.org/10.1007/BF00058881
