Abstract
Learning in non-stationary environments is challenging because the common assumption of independent and identically distributed data does not hold; in the presence of concept drift, the system must be updated continuously. In recent years, several powerful approaches have been proposed. However, these models typically classify any input, regardless of how confident they are in the classification. This strategy is not optimal, particularly in safety-critical environments where alternatives to a (possibly unclear) decision exist, such as an additional test or a short delay of the decision. Formally, this alternative corresponds to classification with rejection, a strategy that seems particularly promising in the context of concept drift, i.e., situations where the current model is wrong because the concept has changed. In this contribution, we propose to extend learning under concept drift with rejection. Specifically, we extend two recent learning architectures for drift, the self-adjusting memory architecture (SAM-kNN) and adaptive random forests (ARF), with a reject option, resulting in highly competitive, state-of-the-art technologies. We evaluate their performance in learning scenarios with different types of drift.
This work was supported by Honda Research Institute Europe GmbH, Offenbach am Main, Germany.
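To make the reject option concrete, the following is a minimal sketch of a streaming classifier that abstains whenever its confidence falls below a threshold, in the spirit of Chow's rule. The base learner (scikit-learn's SGDClassifier), the class name RejectingStreamClassifier, and the threshold \(\tau\) are illustrative assumptions for the sketch; they are not the SAM-kNN or ARF architectures evaluated in the paper.

```python
# Minimal sketch: threshold-based rejection on a data stream.
# Assumption: an incremental base learner with partial_fit and
# predict_proba (here SGDClassifier with logistic loss).
import numpy as np
from sklearn.linear_model import SGDClassifier

class RejectingStreamClassifier:
    """Wraps an incremental learner; abstains when confidence < tau."""

    def __init__(self, classes, tau=0.8):
        # loss="log_loss" enables predict_proba on SGDClassifier
        self.model = SGDClassifier(loss="log_loss")
        self.classes = np.asarray(classes)
        self.tau = tau  # rejection threshold on the top-class probability

    def predict_or_reject(self, x):
        if not hasattr(self.model, "classes_"):
            return None  # nothing learned yet: reject by default
        proba = self.model.predict_proba(np.asarray(x).reshape(1, -1))[0]
        if proba.max() < self.tau:
            return None  # reject: defer the decision instead of guessing
        return self.model.classes_[int(np.argmax(proba))]

    def update(self, x, y):
        # interleaved test-then-train: the true label arrives after prediction
        self.model.partial_fit(np.asarray(x).reshape(1, -1), [y],
                               classes=self.classes)
```

In a drift scenario, a rejected input would be routed to the fallback the abstract mentions (an additional test or a short delay), and the model is updated once the true label becomes available.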
Notes
- 1.
We substitute a small \(\epsilon > 0\) for \(d(x, x_i)\) whenever \(d(x, x_i) < \epsilon\); see the sketch after these notes.
- 2.
The fixed window serves as a straightforward example (see the sketch after these notes). Results for the adaptive window, SAM, and ARF are comparable; the largest difference in accuracy among all four is below 2%.
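The certainty measure from note 1 and the fixed window from note 2 can be combined in a few lines. Below is a minimal sketch that keeps a fixed-size sliding window of labeled samples and computes a distance-weighted kNN vote, clamping any distance below \(\epsilon\) to \(\epsilon\) so the inverse weight stays finite. The window size, \(k\), \(\epsilon\), and the rejection threshold \(\tau\) are hypothetical choices, not the paper's exact parameters.

```python
# Minimal sketch: distance-weighted kNN with rejection over a fixed window.
from collections import deque
import numpy as np

class WindowedKNNWithReject:
    def __init__(self, window_size=500, k=5, eps=1e-8, tau=0.7):
        # deque with maxlen drops the oldest sample automatically,
        # which is what makes the window "fixed"
        self.window = deque(maxlen=window_size)
        self.k, self.eps, self.tau = k, eps, tau

    def update(self, x, y):
        self.window.append((np.asarray(x, dtype=float), y))

    def predict_or_reject(self, x):
        if len(self.window) < self.k:
            return None  # not enough samples yet: reject
        X = np.stack([sample for sample, _ in self.window])
        labels = [label for _, label in self.window]
        d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        d = np.maximum(d, self.eps)  # note 1: substitute eps for tiny distances
        nearest = np.argsort(d)[: self.k]
        votes = {}
        for i in nearest:
            votes[labels[i]] = votes.get(labels[i], 0.0) + 1.0 / d[i]
        label, top = max(votes.items(), key=lambda kv: kv[1])
        certainty = top / sum(votes.values())  # normalized inverse-distance vote
        return label if certainty >= self.tau else None
```

The normalized inverse-distance vote serves as the certainty estimate: predictions whose winning class carries less than a \(\tau\) share of the total vote mass are rejected rather than emitted.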
Cite this paper
Göpfert, J.P., Hammer, B., Wersing, H. (2018). Mitigating Concept Drift via Rejection. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2018. Lecture Notes in Computer Science, vol. 11139. Springer, Cham. https://doi.org/10.1007/978-3-030-01418-6_45