The Influence of Dimensions on the Complexity of Computing Decision Trees

Authors

  • Stephen G. Kobourov, University of Arizona
  • Maarten Löffler, Utrecht University
  • Fabrizio Montecchiani, Università degli Studi di Perugia
  • Marcin Pilipczuk, University of Warsaw
  • Ignaz Rutter, University of Passau
  • Raimund Seidel, Saarland University
  • Manuel Sorge, TU Wien
  • Jules Wulms, TU Wien

DOI:

https://doi.org/10.1609/aaai.v37i7.26006

Keywords:

ML: Classification and Regression, CSO: Applications, ML: Optimization, ML: Transparent, Interpretable, Explainable ML, SO: Applications

Abstract

A decision tree recursively splits a feature space \mathbb{R}^d and then assigns class labels based on the resulting partition. Decision trees have been part of the basic machine-learning toolkit for decades. A large body of work considers heuristic algorithms that compute a decision tree from training data, usually aiming in particular to minimize the size of the resulting tree. In contrast, little is known about the complexity of the underlying computational problem of computing a minimum-size tree for given training data. We study this problem with respect to the number d of dimensions of the feature space \mathbb{R}^d, which contains n training examples. We show that the problem can be solved in O(n^(2d+1)) time, but that, under reasonable complexity-theoretic assumptions, a running time of f(d) * n^(o(d / log d)) is not achievable. The problem is solvable in (dR)^(O(dR)) * n^(1+o(1)) time if there are exactly two classes and R is an upper bound on the number of tree leaves labeled with the first class.
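To make the problem setting concrete, the following is a minimal sketch of the kind of object the abstract describes: an axis-aligned decision tree that recursively splits training examples in \mathbb{R}^d and labels each leaf with a class. All names (Node, build_tree, classify) and the greedy split heuristic are illustrative assumptions, not the paper's algorithm, which instead targets a minimum-size tree.

```python
# A hypothetical sketch of decision-tree construction over points in R^d.
# The split rule here is a crude greedy heuristic, NOT the exact
# minimum-size algorithm studied in the paper. Assumes no two examples
# share identical coordinates but carry different labels.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Example = Tuple[Tuple[float, ...], int]  # (point in R^d, class label)

@dataclass
class Node:
    # Inner node: split on feature `dim` at `threshold`; leaf: `label` set.
    dim: Optional[int] = None
    threshold: Optional[float] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None

def build_tree(data: List[Example]) -> Node:
    labels = {y for _, y in data}
    if len(labels) == 1:                      # pure subset: emit a leaf
        return Node(label=labels.pop())
    d = len(data[0][0])
    best = None
    # Candidate cuts lie between consecutive distinct coordinate values,
    # so both sides of a chosen cut are guaranteed to be nonempty.
    for dim in range(d):
        vals = sorted({x[dim] for x, _ in data})
        for lo, hi in zip(vals, vals[1:]):
            t = (lo + hi) / 2
            left = [e for e in data if e[0][dim] <= t]
            right = [e for e in data if e[0][dim] > t]
            score = min(len(left), len(right))  # prefer balanced splits
            if best is None or score > best[0]:
                best = (score, dim, t, left, right)
    _, dim, t, left, right = best
    return Node(dim=dim, threshold=t,
                left=build_tree(left), right=build_tree(right))

def classify(node: Node, x: Tuple[float, ...]) -> int:
    # Walk from the root to a leaf, following the axis-aligned cuts.
    while node.label is None:
        node = node.left if x[node.dim] <= node.threshold else node.right
    return node.label
```

On four points in \mathbb{R}^2 separable by one vertical cut, the sketch produces a tree with a single inner node, illustrating how the leaf count (the quantity bounded by R in the two-class result) measures tree size.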

Published

2023-06-26

How to Cite

Kobourov, S. G., Löffler, M., Montecchiani, F., Pilipczuk, M., Rutter, I., Seidel, R., Sorge, M., & Wulms, J. (2023). The Influence of Dimensions on the Complexity of Computing Decision Trees. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8343-8350. https://doi.org/10.1609/aaai.v37i7.26006

Section

AAAI Technical Track on Machine Learning II