
Minimax Bounds for Active Learning

  • Conference paper
Learning Theory (COLT 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4539)

Abstract

This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of convergence of the classification error for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the derived learning rates are tight for “boundary fragment” classes in d-dimensional feature spaces when the marginal feature density is bounded from above and below.
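The abstract does not reproduce the rates themselves. As a hedged sketch of the form such minimax results typically take for boundary fragment classes (the exact exponents and conditions should be taken from the paper, not from this summary), with Hölder smoothness α of the boundary, dimension d, Tsybakov-type noise exponent κ, and ρ = (d − 1)/α:

```latex
% Sketch only: exponents recalled from the active-learning minimax
% literature; verify against the paper before quoting.
\[
  \inf_{\widehat{f}} \sup_{P} \,
  \mathbb{E}\bigl[R(\widehat{f}) - R^{*}\bigr]
  \;\asymp\; n^{-\kappa/(2\kappa + \rho - 2)}
  \qquad \text{(active sampling)},
\]
\[
  \inf_{\widehat{f}} \sup_{P} \,
  \mathbb{E}\bigl[R(\widehat{f}) - R^{*}\bigr]
  \;\asymp\; n^{-\kappa/(2\kappa + \rho - 1)}
  \qquad \text{(passive sampling)},
\]
```

where R is the classification risk, R* the Bayes risk, and n the number of label queries. Under this reading, the gain from active sampling is the reduction of the denominator by one, which is most significant when ρ is small (smooth, low-dimensional boundaries) and the noise condition is favorable.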




Editor information

Nader H. Bshouty, Claudio Gentile


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Castro, R.M., Nowak, R.D. (2007). Minimax Bounds for Active Learning. In: Bshouty, N.H., Gentile, C. (eds) Learning Theory. COLT 2007. Lecture Notes in Computer Science, vol 4539. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72927-3_3

  • DOI: https://doi.org/10.1007/978-3-540-72927-3_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72925-9

  • Online ISBN: 978-3-540-72927-3

  • eBook Packages: Computer Science
