
A New Learning Strategy for Classification Problems with Different Training and Test Distributions

Conference paper
Computational and Ambient Intelligence (IWANN 2007)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4507)

Included in the following conference series: International Work-Conference on Artificial Neural Networks (IWANN)

Abstract

Standard machine learning techniques assume that the statistical structure of the training and test datasets is the same (i.e., the same attribute distribution p(x) and the same class distribution p(c|x)). However, in real prediction problems this is often not the case, for several reasons. For example, the training set is frequently not representative of the whole problem due to sample selection biases during its acquisition. In addition, the measurement biases in training can differ from those in test (for example, when the measurement devices are different). Another reason is that in real prediction tasks the statistical structure of the classes is usually not static but evolves over time, and there is typically a time lag between the training and test sets. Because of these problems, the performance of a learning algorithm can degrade severely. Here we present a new learning strategy that constructs a classifier in two steps. First, the labeled examples of the training set are used to construct a statistical model of the problem. In the second step, the model is improved using the unlabeled patterns of the test set by means of a novel extension of the Expectation-Maximization (EM) algorithm presented here. We show the convergence properties of the algorithm and illustrate its performance on an artificial problem. Finally, we demonstrate its strengths on a heart disease diagnosis problem where the training set is taken from a different hospital than the test set.
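To make the two-step strategy concrete, here is a minimal sketch in Python (NumPy only) under an assumed generative model: Gaussian class-conditional densities are fitted on the labeled training set, then re-estimated on the unlabeled test set with classic EM updates, and each test pattern is finally classified by the adapted posterior p(c|x). This illustrates the general idea only; the paper's specific extension of the EM algorithm is not reproduced here, and all function names are hypothetical.

    import numpy as np

    def gaussian_pdf(X, mean, cov):
        # Multivariate Gaussian density evaluated at each row of X.
        d = X.shape[1]
        diff = X - mean
        mahal = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
        return np.exp(-0.5 * mahal) / norm

    def fit_labeled(X_train, y_train):
        # Step 1: estimate class priors, means and covariances
        # from the labeled training set.
        classes = np.unique(y_train)
        priors = np.array([np.mean(y_train == c) for c in classes])
        means = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
        covs = np.array([np.cov(X_train[y_train == c], rowvar=False)
                         for c in classes])
        return classes, priors, means, covs

    def adapt_with_em(X_test, priors, means, covs, n_iter=50):
        # Step 2 (assumption: classic EM, not the paper's extension):
        # treat the model as a mixture over classes and re-estimate
        # its parameters on the unlabeled test set.
        for _ in range(n_iter):
            # E-step: responsibilities p(c | x) under the current model.
            joint = np.stack([p * gaussian_pdf(X_test, m, S)
                              for p, m, S in zip(priors, means, covs)], axis=1)
            resp = joint / joint.sum(axis=1, keepdims=True)
            # M-step: closed-form updates from the soft assignments.
            n_k = resp.sum(axis=0)
            priors = n_k / len(X_test)
            means = (resp.T @ X_test) / n_k[:, None]
            covs = np.array(
                [(resp[:, k, None] * (X_test - means[k])).T
                 @ (X_test - means[k]) / n_k[k]
                 for k in range(len(n_k))])
        return priors, means, covs, resp

    # Hypothetical usage: X_train, y_train labeled; X_test unlabeled.
    # classes, priors, means, covs = fit_labeled(X_train, y_train)
    # priors, means, covs, resp = adapt_with_em(X_test, priors, means, covs)
    # y_pred = classes[resp.argmax(axis=1)]  # classify by adapted posterior

In this reading, initializing EM from the supervised estimates lets the unlabeled test patterns shift the priors, means, and covariances toward the test distribution, which is exactly the kind of adaptation the abstract motivates for sample selection bias, measurement bias, and concept drift.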



Author information

Authors: Pérez, Ó., Sánchez-Montañés, M.

Editor information

Francisco Sandoval, Alberto Prieto, Joan Cabestany, Manuel Graña


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Pérez, Ó., Sánchez-Montañés, M. (2007). A New Learning Strategy for Classification Problems with Different Training and Test Distributions. In: Sandoval, F., Prieto, A., Cabestany, J., Graña, M. (eds) Computational and Ambient Intelligence. IWANN 2007. Lecture Notes in Computer Science, vol 4507. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73007-1_22


  • DOI: https://doi.org/10.1007/978-3-540-73007-1_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-73006-4

  • Online ISBN: 978-3-540-73007-1

  • eBook Packages: Computer Science, Computer Science (R0)
