Abstract
Recent papers have demonstrated that both predicate invention and the learning of recursion can be efficiently implemented by way of abduction with respect to a meta-interpreter. This paper shows how Meta-Interpretive Learning (MIL) can be extended to implement a Bayesian posterior distribution over the hypothesis space by treating the meta-interpreter as a Stochastic Logic Program. The resulting \(MetaBayes\) system uses stochastic refinement to randomly sample consistent hypotheses, which are used to approximate Bayesian prediction. Most approaches to Statistical Relational Learning involve separate phases of model estimation and parameter estimation. We show how a variant of the MetaBayes approach can be used to carry out simultaneous model and parameter estimation for a new representation we refer to as Super-imposed Logic Programs (SiLPs). The implementation of this approach is referred to as \(MetaBayes_{SiLP}\). SiLPs are a particular form of ProbLog program, and so their parameters can also be estimated using the more traditional EM approach employed by ProbLog. This second approach is implemented in a new system called \(MilProbLog\). Experiments are conducted on learning grammars, family relations and a natural language domain. These demonstrate that \(MetaBayes\) outperforms the maximum a posteriori variant \(MetaBayes_{MAP}\) in terms of predictive accuracy, and also outperforms both \(MilProbLog\) and \(MetaBayes_{SiLP}\) on log likelihood measures. However, \(MetaBayes\) incurs substantially higher running times than \(MetaBayes_{MAP}\). On the other hand, \(MetaBayes\) and \(MetaBayes_{SiLP}\) have similar running times, while both have much shorter running times than \(MilProbLog\).
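The sampling scheme the abstract describes — drawing hypotheses at random and keeping only those consistent with the training examples, then averaging their predictions — can be caricatured in a few lines. This is a minimal illustrative sketch, not the authors' implementation: `sample_hypothesis`, `consistent` and `predict` are hypothetical stand-ins, and the toy threshold hypotheses replace the logic programs that stochastic refinement of a meta-interpreter would actually produce.

```python
import random

def bayes_prediction(sample_hypothesis, consistent, predict,
                     examples, x, n_samples=100):
    """Approximate Bayesian prediction by majority vote over hypotheses
    sampled from the prior and filtered for consistency with examples."""
    votes = []
    while len(votes) < n_samples:
        h = sample_hypothesis()          # stand-in for stochastic refinement
        if consistent(h, examples):      # rejection step: keep consistent h only
            votes.append(predict(h, x))
    # the vote approximates the posterior-weighted prediction
    return max(set(votes), key=votes.count)

# Toy stand-in: hypotheses are integer thresholds classifying numbers.
random.seed(0)
examples = [(1, False), (5, True), (7, True)]
sample_hypothesis = lambda: random.randint(0, 10)
consistent = lambda h, ex: all((v >= h) == y for v, y in ex)
predict = lambda h, x: x >= h
print(bayes_prediction(sample_hypothesis, consistent, predict, examples, 6))
# → True (every consistent threshold lies in 2..5, so all vote True)
```

Rejection sampling is used here purely for clarity; sampling directly from a Stochastic Logic Program, as the paper does, avoids discarding inconsistent draws.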
Keywords
- Stochastic Refinement
- Meta-interpretive Learning (MIL)
- Stochastic Logic Programs (SLP)
- ProbLog Program
- Hypothesis Space
Notes
1. Abduce/3 only adds a higher-order atom \(a\) to a program \(P\) to give \(P'\) when \(a \not\in P\).
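The side condition in this note can be sketched as follows. This is an illustrative Python sketch rather than the Prolog of the actual meta-interpreter, and the tuple encoding of higher-order atoms is purely hypothetical.

```python
def abduce(atom, program):
    """Extend program with atom only if it is not already present,
    mirroring the abduce/3 side condition a ∉ P."""
    if atom in program:
        return program           # P' = P: nothing new is abduced
    return program + [atom]      # P' = P ∪ {a}

# Hypothetical higher-order atoms represented as plain tuples.
p1 = abduce(("chain", "grandparent", "parent", "parent"), [])
p2 = abduce(("chain", "grandparent", "parent", "parent"), p1)  # duplicate: no change
```

The membership test keeps the abduced program duplicate-free, so each hypothesis corresponds to a set, not a multiset, of abduced atoms.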
Acknowledgements
The authors would like to acknowledge the support of Syngenta in its funding of the University Innovations Centre at Imperial College. The first author would like to thank the Royal Academy of Engineering and Syngenta for funding his current five-year Research Chair.
Copyright information
© 2014 Springer-Verlag Berlin Heidelberg
Cite this paper
Muggleton, S.H., Lin, D., Chen, J., Tamaddoni-Nezhad, A. (2014). MetaBayes: Bayesian Meta-Interpretative Learning Using Higher-Order Stochastic Refinement. In: Zaverucha, G., Santos Costa, V., Paes, A. (eds) Inductive Logic Programming. ILP 2013. Lecture Notes in Computer Science(), vol 8812. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44923-3_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-44922-6
Online ISBN: 978-3-662-44923-3