DOI: 10.1145/2556288.2557155

Cognitively inspired task design to improve user performance on crowdsourcing platforms


ABSTRACT

Recent research in human computation has focused on improving the quality of work done by crowd workers on crowdsourcing platforms. Several approaches have been adopted, such as filtering crowd workers through qualification tasks and aggregating responses from multiple crowd workers to obtain a consensus. We investigate how improving the presentation of the task itself, using cognitively inspired features, affects the performance of crowd workers. We illustrate this with a case study on the task of extracting text from scanned images. We generated six task-presentation designs by modifying two parameters, the visual saliency of the target fields and the working memory requirements, and conducted experiments on Amazon Mechanical Turk (AMT) and with an eye-tracker in a lab setting. Our results identify which task-design parameters (e.g., highlighting target fields) improve performance and which do not (e.g., reducing the number of distractors). We conclude that the use of cognitively inspired features in task design is a powerful technique for maximizing the performance of crowd workers.
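To make the consensus-aggregation approach mentioned above concrete, a common baseline is a simple majority vote over redundant worker responses to the same field. The Python sketch below is illustrative only; the field values and the light string normalization are hypothetical, and this is not the pipeline used in the paper:

    from collections import Counter

    def majority_vote(responses):
        # Aggregate redundant worker answers for one target field.
        # responses: list of strings, one per worker.
        # Returns the most frequent answer and the fraction of workers who gave it.
        tally = Counter(r.strip().lower() for r in responses)  # light normalization
        answer, count = tally.most_common(1)[0]
        return answer, count / len(responses)

    # Hypothetical example: three workers transcribe the same field from a
    # scanned image; two of the three agree after normalization.
    answer, agreement = majority_vote(["Acme Corp", "acme corp", "Acme Co"])
    print(answer, agreement)  # -> "acme corp" 0.666...

In practice such an agreement ratio is what lets a requester decide whether to accept the consensus answer or solicit more responses.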


Supplemental Material

p3665-sidebyside.mp4 (MP4, 100.1 MB)


Published in

CHI '14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
April 2014, 4206 pages
ISBN: 9781450324731
DOI: 10.1145/2556288

      Copyright © 2014 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 26 April 2014


      Qualifiers

      • research-article

      Acceptance Rates

CHI '14 Paper Acceptance Rate: 465 of 2,043 submissions, 23%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
