- 1. D. Angluin, "Learning regular sets from queries and counterexamples", Information and Computation, vol. 75, no. 2, pp. 87-106, 1987.
- 2. D. Angluin, "Queries and concept learning", Machine Learning, vol. 2, no. 4, pp. 319-342, 1988.
- 3. D. Angluin, M. Krikis, R. Sloan, G. Turán, "Malicious Omissions and Errors in Answers to Membership Queries", Machine Learning, vol. 28, no. 2/3, pp. 211-255, 1997.
- 4. D. Angluin, P.D. Laird, "Learning from Noisy Examples", Machine Learning, vol. 2, no. 2, pp. 343-370, 1988.
- 5. B. Apolloni, C. Gentile, "Sample Size Lower Bounds in PAC Learning by Algorithmic Complexity Theory", Theor. Comp. Sc., to appear.
- 6. J.A. Aslam, S.E. Decatur, "On the sample complexity of noise-tolerant learning", Inf. Proc. Lett., vol. 57, pp. 189-195, 1996.
- 7. P. Bartlett, "Learning with a slowly changing distribution", in Proc. of the 5th Workshop on Comput. Learn. Th., 1992, pp. 243-252.
- 8. P. Bartlett, D. Helmbold, manuscript, 1996.
- 9. R.D. Barve, P.M. Long, "On the complexity of learning from drifting distributions", Information and Computation, vol. 138, no. 2, pp. 170-193, 1997.
- 10. G. Benedek, A. Itai, "Learnability by Fixed Distributions", Theor. Comp. Sc., vol. 86, no. 2, pp. 377-389, 1991.
- 11. A. Blumer, A. Ehrenfeucht, D. Haussler, M. Warmuth, "Learnability and the Vapnik-Chervonenkis Dimension", J. of the ACM, vol. 36, pp. 929-965, 1989.
- 12. N. Cesa-Bianchi, E. Dichterman, P. Fischer, E. Shamir, H.U. Simon, "Sample-efficient Strategies for Learning in the Presence of Noise", eCOLT Tech. Rep. 97-003, http://ecolt.informatik.uni-dortmund.de/. Preliminary versions in 28th STOC, 1996, and 3rd EuroCOLT, 1997.
- 13. T.M. Cover, J.A. Thomas, Elements of Information Theory. NY: John Wiley & Sons, Inc., 1991.
- 14. R.L. Dobrushin, "General formulation of Shannon's main theorem in information theory", Uspekhi Mat. Nauk, vol. 14, pp. 3-104, 1959; translated in Amer. Math. Soc. Translations, Ser. 2, vol. 33, pp. 323-438, 1963.
- 15. R.M. Dudley, A Course on Empirical Processes. Lecture Notes in Mathematics, vol. 1097, Springer-Verlag, Berlin/New York, 1984.
- 16. A. Ehrenfeucht, D. Haussler, M. Kearns, L. Valiant, "A General Lower Bound on the Number of Examples Needed for Learning", Information and Computation, vol. 82, no. 3, pp. 247-261, 1989.
- 17. B. Eisenberg, R.L. Rivest, "On the Sample Complexity of PAC-Learning Using Random and Chosen Examples", in Proc. of the 3rd Workshop on Comput. Learn. Th., 1990, pp. 154-162.
- 18. C. Gentile, "A note on sample size lower bounds for PAC-learning", manuscript, 1997.
- 19. D. Haussler, A. Barron, "How well do Bayes methods work for on-line prediction of {-1, +1} values?", in Proc. of the 3rd NEC Symposium on Computation and Cognition, 1992, pp. 74-100.
- 20. D. Haussler, M. Kearns, R. Schapire, "Bounds on the Sample Complexity of Bayesian Learning Using Information Theory and the VC Dimension", Machine Learning, vol. 14, pp. 84-114, 1994.
- 21. D. Haussler, N. Littlestone, M.K. Warmuth, "Predicting {0,1} functions on randomly drawn points", Information and Computation, vol. 115, no. 2, pp. 284-293, 1994.
- 22. D. Haussler, M. Opper, "Mutual Information, Metric Entropy, and Cumulative Relative Entropy Risk", Annals of Statistics, 1997, to appear.
- 23. D. Helmbold, N. Littlestone, P. Long, "Apple Tasting", manuscript, 1997. An extended abstract appeared in Proc. of the 33rd Symposium on the Foundations of Comp. Sci., 1992, pp. 493-502.
- 24. S. Ihara, Information Theory for Continuous Systems. River Edge, NJ: World Scientific, 1993.
- 25. M. Kearns, M. Li, "Learning in the presence of malicious errors", SIAM J. Comput., vol. 22, pp. 807-837, 1993.
- 26. A.N. Kolmogorov, V.M. Tihomirov, "ε-entropy and ε-capacity of sets in functional spaces", Amer. Math. Soc. Translations (Ser. 2), vol. 17, pp. 277-364, 1961.
- 27. P. Laird, Learning from Good and Bad Data. Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, Boston, MA, 1988.
- 28. W. Maass, G. Turán, "On the complexity of learning from counterexamples and membership queries", in Proc. of the 31st Symposium on the Foundations of Comp. Sci., 1990, pp. 203-210.
- 29. Y. Sakakibara, "On learning from queries and counterexamples in the presence of noise", Inf. Proc. Lett., vol. 37, no. 5, pp. 279-284, 1991.
- 30. H.U. Simon, "General Bounds on the Number of Examples Needed for Learning Probabilistic Concepts", Journal of Comp. System Sci., vol. 52, no. 2, pp. 239-254, 1996.
- 31. R. Sloan, "Four types of noise in data for PAC learning", Inf. Proc. Lett., vol. 54, pp. 157-162, 1995.
- 32. G. Shackelford, D. Volper, "Learning k-DNF with noise in the attributes", in Proc. of the 1988 Workshop on Comput. Learn. Th., 1988, pp. 97-103.
- 33. G. Turán, "Lower bounds for PAC learning with queries", in Proc. of the 6th Workshop on Comput. Learn. Th., 1993, pp. 384-391.
- 34. L. Valiant, "A theory of the learnable", Communications of the ACM, vol. 27, no. 11, pp. 1134-1142, 1984.
- 35. V.N. Vapnik, Estimation of Dependences Based on Empirical Data. NY: Springer-Verlag, 1982.
- 36. V.N. Vapnik, The Nature of Statistical Learning Theory. NY: Springer-Verlag, 1995.
- 37. B. Yu, "Lower Bounds on Expected Redundancy for Nonparametric Classes", IEEE Trans. on Inf. Th., vol. 42, no. 1, pp. 272-275, 1996.