
Crowdsourcing for relevance evaluation

Published: 30 November 2008

Abstract

Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each perform a small evaluation task.
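
The sketch below is an illustrative aside, not part of the paper: it shows, in Python, how many small per-worker relevance judgments on query-document pairs might be combined into one label per pair by simple majority vote. The queries, document IDs, worker IDs, and the voting rule are all assumptions for illustration; TERC's actual task design and aggregation may differ.

    # Hypothetical sketch (not from the paper): aggregate crowdsourced
    # relevance judgments by majority vote per (query, document) pair.
    from collections import Counter

    # Each judgment: (query, document_id, worker_id, label); all values made up.
    judgments = [
        ("jaguar speed", "doc_17", "worker_a", "relevant"),
        ("jaguar speed", "doc_17", "worker_b", "relevant"),
        ("jaguar speed", "doc_17", "worker_c", "not_relevant"),
        ("jaguar speed", "doc_42", "worker_a", "not_relevant"),
        ("jaguar speed", "doc_42", "worker_b", "not_relevant"),
        ("jaguar speed", "doc_42", "worker_c", "relevant"),
    ]

    def aggregate_by_majority(judgments):
        """Group worker labels by (query, document) and keep the most common label."""
        by_pair = {}
        for query, doc_id, _worker, label in judgments:
            by_pair.setdefault((query, doc_id), []).append(label)
        # Using an odd number of workers per pair avoids ties.
        return {pair: Counter(labels).most_common(1)[0][0]
                for pair, labels in by_pair.items()}

    for (query, doc_id), label in aggregate_by_majority(judgments).items():
        print(f"{query!r} / {doc_id}: {label}")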

          • Published in

            ACM SIGIR Forum, Volume 42, Issue 2
            December 2008, 101 pages
            ISSN: 0163-5840
            DOI: 10.1145/1480506

            Copyright © 2008 Authors

            Publisher: Association for Computing Machinery, New York, NY, United States

            Qualifiers: research-article
