Large-Scale Optimistic Adaptive Submodularity

Authors

  • Victor Gabillon, Inria Lille
  • Branislav Kveton, Technicolor
  • Zheng Wen, Stanford University
  • Brian Eriksson, Technicolor
  • S. Muthukrishnan, Rutgers

DOI:

https://doi.org/10.1609/aaai.v28i1.9003

Keywords:

Online Learning, Active Learning, Bandits, Submodularity, Generalized Linear Models

Abstract

Maximization of submodular functions has wide applications in artificial intelligence and machine learning. In this paper, we propose a scalable learning algorithm for maximizing an adaptive submodular function. The key structural assumption in our solution is that the state of each item is distributed according to a generalized linear model, which is conditioned on the feature vector of the item. Our objective is to learn the parameters of this model. We analyze the performance of our algorithm, and show that its regret is polylogarithmic in time and linear in the number of features. Finally, we evaluate our solution on two problems, preference elicitation and adaptive face detection, and demonstrate that high-quality policies can be learned in a sample-efficient manner.
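The core idea in the abstract — greedy adaptive submodular maximization with an optimistic (UCB-style) estimate of each item's state probability under a generalized linear model — can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: the function and variable names (`optimistic_greedy_step`, `theta_hat`, `gram_inv`, `marginal_gain`) are hypothetical, and the logistic link and the form of the confidence bonus are assumptions for the sake of a concrete example.

```python
import numpy as np

def optimistic_greedy_step(theta_hat, gram_inv, items, marginal_gain, alpha=1.0):
    """Pick the item with the largest optimistic expected marginal gain.

    Each item's state is modeled as Bernoulli with mean given by a logistic
    GLM of its feature vector; the mean is inflated by a UCB-style bonus
    derived from the inverse Gram matrix of observed features.
    Illustrative sketch only; names and signatures are not from the paper.
    """
    best_item, best_gain = None, -np.inf
    for x in items:
        mean = 1.0 / (1.0 + np.exp(-theta_hat @ x))   # GLM mean estimate
        bonus = alpha * np.sqrt(x @ gram_inv @ x)     # confidence width
        p_opt = min(1.0, mean + bonus)                # optimistic state probability
        gain = p_opt * marginal_gain(x)               # optimistic expected gain
        if gain > best_gain:
            best_item, best_gain = x, gain
    return best_item, best_gain
```

In a full loop, one would observe the chosen item's state, update `theta_hat` by (regularized) GLM regression on all observations, and shrink `gram_inv` accordingly, so the bonus vanishes polylogarithmically as data accumulates.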

Published

2014-06-21

How to Cite

Gabillon, V., Kveton, B., Wen, Z., Eriksson, B., & Muthukrishnan, S. (2014). Large-Scale Optimistic Adaptive Submodularity. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9003

Section

Main Track: Novel Machine Learning Algorithms