DOI: 10.1145/3077136.3084145

EvALL: Open Access Evaluation for Information Access Systems

Published: 07 August 2017

ABSTRACT

The EvALL online evaluation service aims to provide a unified evaluation framework for Information Access systems that makes results completely comparable and publicly available for the whole research community. For researchers working on a given test collection, the framework allows them to: (i) evaluate results in a way compliant with measurement theory and with state-of-the-art evaluation practices in the field; (ii) quantitatively and qualitatively compare their results with the state of the art; (iii) provide their results as reusable data to the scientific community; (iv) automatically generate evaluation figures and (low-level) interpretations of the results, both as a PDF report and as LaTeX source. For researchers running a challenge (a comparative evaluation campaign on shared data), the framework helps them to manage, store and evaluate submissions, and to preserve ground truth and system output data for future use by the research community. EvALL can be tested at http://evall.uned.es.
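For illustration only, the sketch below shows the kind of comparison against a shared gold standard that EvALL automates and archives for a test collection (point (ii) of the abstract). It is a minimal, hypothetical example: the function, the data, and the binary-labelling setting are assumptions made here, and they do not reproduce EvALL's actual input formats, measure suite, or API, which are documented at http://evall.uned.es.

```python
# Hypothetical sketch (not EvALL's API): scoring two system outputs
# against one shared gold standard with a standard evaluation measure.
from typing import Dict, Tuple


def precision_recall_f1(gold: Dict[str, str],
                        system: Dict[str, str],
                        positive: str) -> Tuple[float, float, float]:
    """Precision, recall and F1 for a binary labelling task over item ids."""
    tp = sum(1 for i, lbl in system.items() if lbl == positive and gold.get(i) == positive)
    fp = sum(1 for i, lbl in system.items() if lbl == positive and gold.get(i) != positive)
    fn = sum(1 for i, lbl in gold.items() if lbl == positive and system.get(i) != positive)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1


# Toy ground truth and two system runs (illustrative data only).
gold = {"d1": "pos", "d2": "neg", "d3": "pos", "d4": "pos"}
runs = {
    "baseline":  {"d1": "pos", "d2": "pos", "d3": "neg", "d4": "pos"},
    "my_system": {"d1": "pos", "d2": "neg", "d3": "pos", "d4": "neg"},
}

for name, run in runs.items():
    p, r, f1 = precision_recall_f1(gold, run, positive="pos")
    print(f"{name}: P={p:.2f} R={r:.2f} F1={f1:.2f}")
```

In EvALL the analogous comparison is performed server-side on uploaded system outputs, so that every run evaluated on the same test collection remains comparable and reusable by the community.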


  • Published in

    SIGIR '17: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
    August 2017
    1476 pages
ISBN: 9781450350228
DOI: 10.1145/3077136

    Copyright © 2017 ACM

    © 2017 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • research-article

    Acceptance Rates

    SIGIR '17 paper acceptance rate: 78 of 362 submissions, 22%. Overall acceptance rate: 792 of 3,983 submissions, 20%.
