DOI: 10.1145/2808194.2809464
Research article

Implicit Preference Labels for Learning Highly Selective Personalized Rankers

Published: 27 September 2015

ABSTRACT

Interaction data such as clicks and dwells provide valuable signals for learning and evaluating personalized models. However, while personalization models typically distinguish between clicked and non-clicked results, they make no preference distinctions among the non-clicked results, treating them all as equally non-relevant.

In this paper, we demonstrate that failing to enforce a prior on preferences among non-clicked results leads to learned models that often personalize with no measurable gain, while risking that the personalized ranking is worse than the non-personalized one. To address this, we develop an implicit preference-based framework for learning highly selective rankers that yield large reductions in risk measures such as the percentage of queries personalized. We show theoretically how the framework can be derived from a small number of basic axioms, which give rise to well-founded target rankings that combine a weighted prior on preferences with the implicit preferences inferred from behavioral data.
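To make the label construction concrete, the sketch below is an illustrative assumption rather than the paper's exact formulation: clicked results are preferred over the non-clicked results skipped above them, and the remaining non-clicked results inherit a down-weighted prior preference that preserves the original ranking order instead of treating them as equally non-relevant. The function and parameter names (`preference_pairs`, `prior_weight`) are hypothetical.

```python
# A minimal sketch, assuming a pairwise-preference formulation: derive
# weighted preference pairs from a single query impression by combining
# click-skip preferences with a weighted prior over the original order.

def preference_pairs(results, clicked, prior_weight=0.5):
    """results: original ranked list of doc ids; clicked: set of clicked ids.
    Returns (preferred_doc, other_doc, weight) tuples."""
    pairs = []

    # Implicit preferences from behavior: a clicked result is preferred
    # over every non-clicked result ranked above it (click > skip-above).
    for i, doc in enumerate(results):
        if doc in clicked:
            for skipped in results[:i]:
                if skipped not in clicked:
                    pairs.append((doc, skipped, 1.0))

    # Prior preferences: among non-clicked results, keep the original
    # ranking order with weight prior_weight, so they are not all
    # treated as equally non-relevant.
    non_clicked = [d for d in results if d not in clicked]
    for i, higher in enumerate(non_clicked):
        for lower in non_clicked[i + 1:]:
            pairs.append((higher, lower, prior_weight))

    return pairs


# Example: a click on the third result of a four-result page.
print(preference_pairs(["d1", "d2", "d3", "d4"], {"d3"}))
# -> [('d3', 'd1', 1.0), ('d3', 'd2', 1.0),
#     ('d1', 'd2', 0.5), ('d1', 'd4', 0.5), ('d2', 'd4', 0.5)]
```

Setting the prior weight to zero recovers the standard setup in which non-clicked results carry no mutual preferences.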

Additionally, we conduct an empirical analysis demonstrating that models learned with this approach yield gains on click-based performance measures comparable to standard methods while personalizing far fewer queries. On three real-world commercial search engine logs, the method substantially reduces the number of queries re-ranked (2x-7x fewer) while maintaining 85-95% of the total gain achieved by the standard approach.
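The two summary statistics quoted above can be read as simple bookkeeping over per-query outcomes. The sketch below is illustrative only, with assumed record shapes and helper names rather than the paper's evaluation code; it shows how a reduction factor in queries re-ranked and the fraction of gain retained might be computed.

```python
# A minimal sketch of the reported trade-off, assuming per-query records of
# whether a model re-ranked the query and the gain it achieved over the
# production ranking (e.g. a click-based measure).

def selectivity_summary(standard, selective):
    """standard, selective: lists of (was_reranked: bool, gain: float),
    one entry per query, aligned on the same query set."""
    reranked_std = sum(1 for r, _ in standard if r)
    reranked_sel = sum(1 for r, _ in selective if r)
    gain_std = sum(g for _, g in standard)
    gain_sel = sum(g for _, g in selective)
    return {
        # e.g. 2x-7x fewer queries re-ranked by the selective model
        "rerank_reduction": reranked_std / max(reranked_sel, 1),
        # e.g. 85-95% of the standard model's total gain retained
        "gain_retained": gain_sel / gain_std if gain_std else 0.0,
    }
```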


Published in

ICTIR '15: Proceedings of the 2015 International Conference on The Theory of Information Retrieval
September 2015, 402 pages
ISBN: 9781450338332
DOI: 10.1145/2808194
Copyright © 2015 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ICTIR '15 paper acceptance rate: 29 of 57 submissions (51%); overall acceptance rate: 209 of 482 submissions (43%).
