Abstract
Nearest neighbor forecasting models are attractive for their simplicity and their ability to predict complex nonlinear behavior. They rely on the assumption that observations similar to the target one are also likely to have similar outcomes. A common practice in nearest neighbor model selection is to compute the globally optimal number of neighbors on a validation set and then apply it to all incoming queries. For certain queries, however, this number may be suboptimal, producing forecasts that deviate substantially from the true realization.
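The global selection scheme described above can be sketched as follows. This is an illustrative example, not the authors' code: all function names and parameters are hypothetical, and it assumes a standard delay-coordinate embedding of the series with a single k chosen by validation-set RMSE.

```python
import numpy as np

def embed(series, dim):
    """Build delay vectors X[t] = series[t:t+dim] and one-step targets y[t] = series[t+dim]."""
    X = np.array([series[t:t + dim] for t in range(len(series) - dim)])
    y = series[dim:]
    return X, y

def knn_forecast(X_train, y_train, query, k):
    """Forecast as the mean outcome of the k training vectors closest to the query."""
    dists = np.linalg.norm(X_train - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

def select_global_k(X_train, y_train, X_val, y_val, k_grid):
    """Pick the single k that minimizes RMSE over the entire validation set."""
    best_k, best_rmse = k_grid[0], np.inf
    for k in k_grid:
        preds = np.array([knn_forecast(X_train, y_train, q, k) for q in X_val])
        rmse = np.sqrt(np.mean((preds - y_val) ** 2))
        if rmse < best_rmse:
            best_k, best_rmse = k, rmse
    return best_k

# Toy example: a noisy sine wave stands in for a real series.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
X, y = embed(series, dim=8)
X_tr, y_tr = X[:1500], y[:1500]
X_va, y_va = X[1500:], y[1500:]
k = select_global_k(X_tr, y_tr, X_va, y_va, k_grid=[1, 3, 5, 10, 20])
print("globally optimal k:", k)
```

Note that the chosen k minimizes the *average* validation error; nothing guarantees it is the best choice for any particular query, which is exactly the weakness the abstract points out.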
To address this problem, we propose an alternative approach: training ensembles of nearest neighbor predictors that determine the best number of neighbors for each individual query. We demonstrate that the forecasts of the ensembles significantly improve on those of the globally optimal single predictors.
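One plausible reading of a per-query scheme, sketched below, scores each candidate k only on validation queries near the incoming query and lets the locally best predictor answer it. This is a hypothetical illustration of the general idea, not the ensemble construction actually proposed in the paper; all names (`ensemble_forecast`, `m`) are assumptions.

```python
import numpy as np

def knn_predict(X_tr, y_tr, q, k):
    """Plain k-NN forecast: mean outcome of the k closest training vectors."""
    nn = np.argsort(np.linalg.norm(X_tr - q, axis=1))[:k]
    return y_tr[nn].mean()

def ensemble_forecast(X_tr, y_tr, X_val, y_val, q, k_grid, m=10):
    """For query q, evaluate each k on the m validation queries nearest to q,
    then forecast with the locally best k (returned alongside the forecast)."""
    near_val = np.argsort(np.linalg.norm(X_val - q, axis=1))[:m]
    errs = []
    for k in k_grid:
        preds = np.array([knn_predict(X_tr, y_tr, X_val[i], k) for i in near_val])
        errs.append(np.sqrt(np.mean((preds - y_val[near_val]) ** 2)))
    best_k = k_grid[int(np.argmin(errs))]
    return knn_predict(X_tr, y_tr, q, best_k), best_k

# Toy data: delay-embedded noisy sine wave.
rng = np.random.default_rng(1)
s = np.sin(np.linspace(0, 16 * np.pi, 1200)) + 0.05 * rng.standard_normal(1200)
dim = 8
X = np.array([s[t:t + dim] for t in range(len(s) - dim)])
y = s[dim:]
X_tr, y_tr = X[:800], y[:800]
X_va, y_va = X[800:1100], y[800:1100]
query = X[1100]
pred, local_k = ensemble_forecast(X_tr, y_tr, X_va, y_va, query, k_grid=[1, 3, 5, 10])
print("per-query k:", local_k, "forecast:", pred)
```

Unlike the global scheme, different queries may end up served by predictors with different k, which is what allows the ensemble to adapt where a single globally tuned predictor cannot.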
Keywords
- Root Mean Square Error
- Single Predictor
- Time Series Prediction
- Chaotic Time Series
- Good Single Predictor
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Yankov, D., DeCoste, D., Keogh, E. (2006). Ensembles of Nearest Neighbor Forecasts. In: Fürnkranz, J., Scheffer, T., Spiliopoulou, M. (eds) Machine Learning: ECML 2006. Lecture Notes in Computer Science, vol 4212. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11871842_51
DOI: https://doi.org/10.1007/11871842_51
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-45375-8
Online ISBN: 978-3-540-46056-5
eBook Packages: Computer Science (R0)