
A Study of Daily Sample Composition on Amazon Mechanical Turk

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9021)

Abstract

Amazon Mechanical Turk (AMT) has become a powerful tool for social scientists because it is inexpensive, easy to use, and able to attract large numbers of workers. While the subject pool is diverse, open questions remain about how the composition of the worker sample varies with when a “Human Intelligence Task” (HIT) is posted. Given the queue-like nature of HITs and the geographic spread of participants, it is natural to ask whether the time or day at which a HIT is posted affects the population that is sampled. We address this question using a panel survey on AMT and show, surprisingly, that except for gender there is no statistically significant difference in demographic characteristics as a function of HIT posting time.
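The core test implied by the abstract — whether the demographic composition of respondents depends on the HIT posting-time slot — can be illustrated with a chi-square test of independence on a contingency table of demographic category versus posting slot. The sketch below is not the paper's actual analysis; the column names (posting_slot, gender) and the data layout are assumed for illustration only.

```python
# Illustrative sketch (not the paper's analysis): test whether a categorical
# demographic variable is independent of the HIT posting-time slot.
# Column names and data are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical panel-survey export: one row per worker response, recording the
# time slot in which the HIT was posted and a demographic attribute.
responses = pd.DataFrame({
    "posting_slot": ["morning", "morning", "evening", "evening", "night", "night"],
    "gender":       ["female",  "male",    "male",    "male",    "female", "male"],
})

# Cross-tabulate the demographic category against the posting slot ...
table = pd.crosstab(responses["gender"], responses["posting_slot"])

# ... and test the null hypothesis that sample composition does not depend on slot.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value would indicate that the sampled composition differs by posting
# time; the paper reports such a difference only for gender.
```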



Author information

Correspondence to Kiran Lakkaraju.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Lakkaraju, K. (2015). A Study of Daily Sample Composition on Amazon Mechanical Turk. In: Agarwal, N., Xu, K., Osgood, N. (eds) Social Computing, Behavioral-Cultural Modeling, and Prediction. SBP 2015. Lecture Notes in Computer Science, vol 9021. Springer, Cham. https://doi.org/10.1007/978-3-319-16268-3_39


  • DOI: https://doi.org/10.1007/978-3-319-16268-3_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-16267-6

  • Online ISBN: 978-3-319-16268-3

  • eBook Packages: Computer Science, Computer Science (R0)
