Abstract
Commercial web search engines are used daily by millions of users across the globe to satisfy their information needs. A user enters a query in the search box and expects the search engine to return relevant results. Evaluating the quality of search results is a crucial aspect of the development and maintenance of these systems. In this paper we describe some of the current approaches for assessing search quality in an industrial setting.
Copyright information
© 2014 Springer-Verlag Berlin Heidelberg
Cite this chapter
Alonso, O. (2014). Evaluation with Respect to Usefulness. In: Ferro, N. (eds) Bridging Between Information Retrieval and Databases. PROMISE 2013. Lecture Notes in Computer Science, vol 8173. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-54798-0_8
DOI: https://doi.org/10.1007/978-3-642-54798-0_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-54797-3
Online ISBN: 978-3-642-54798-0