Abstract
The standard SVR formulation for real-valued function approximation on multidimensional spaces is based on the ε-insensitive loss function, in which errors are treated as uncorrelated. As a consequence, local information in the feature space that could be used to improve the prediction model is disregarded. In this paper we address this problem by defining a generalized quadratic loss in which the co-occurrence of errors is weighted according to a kernel similarity measure in the feature space. We show that the resulting dual problem can be expressed as a hard margin SVR in a different feature space when the co-occurrence error matrix is invertible. We compare our approach against a standard SVR on two regression tasks. Experimental results suggest an improvement in performance.
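As a rough sketch of the idea (the notation and exact formulation below are assumptions for illustration, not taken verbatim from the paper): the standard ε-SVR primal penalizes slacks independently,
\[
\min_{w,b,\xi,\xi^*}\ \tfrac{1}{2}\|w\|^2 + C\sum_i \left(\xi_i + \xi_i^*\right),
\]
whereas a generalized quadratic loss couples the errors through a similarity matrix built from a kernel on the inputs, e.g.
\[
\min_{w,b,\xi,\xi^*}\ \tfrac{1}{2}\|w\|^2 + C\,(\xi + \xi^*)^\top S\,(\xi + \xi^*),
\qquad S_{ij} = k(x_i, x_j),
\]
subject to the usual ε-insensitive constraints \(y_i - \langle w,\phi(x_i)\rangle - b \le \epsilon + \xi_i\), \(\langle w,\phi(x_i)\rangle + b - y_i \le \epsilon + \xi_i^*\), and \(\xi_i,\xi_i^* \ge 0\). When \(S\) is invertible, the quadratic penalty can be absorbed into a change of variables, which is how the dual reduces to a hard margin SVR in a transformed feature space, as stated in the abstract.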
About this paper
Cite this paper
Portera, F., Sperduti, A. (2005). Support Vector Regression with a Generalized Quadratic Loss. In: Apolloni, B., Marinaro, M., Tagliaferri, R. (eds) Biological and Artificial Intelligence Environments. Springer, Dordrecht. https://doi.org/10.1007/1-4020-3432-6_25
DOI: https://doi.org/10.1007/1-4020-3432-6_25
Publisher Name: Springer, Dordrecht
Print ISBN: 978-1-4020-3431-2
Online ISBN: 978-1-4020-3432-9