Abstract
Online experiments have recently become very popular and, in comparison with traditional lab experiments, may offer several advantages, such as reduced demand characteristics, automation, and generalizability of results to wider populations (Birnbaum, 2004; Reips, 2000, 2002a, 2002b). We replicated Dandurand, Bowen, and Shultz's (2004) lab-based problem-solving experiment as an Internet experiment. Consistent with previous results, we found that participants who watched demonstrations of successful problem-solving sessions or who read instructions outperformed those who were told only whether they had solved problems correctly. Online participants were less accurate than lab participants, but there was no interaction with learning condition. Thus, we conclude that lab and Internet results are consistent. Disadvantages included a high dropout rate for online participants; however, combining the online experiment with the department subject pool worked well.
References
Birnbaum, M. H. (2004). Human research and data collection via the Internet. Annual Review of Psychology, 55, 803–832.
Bosnjak, M., & Tuten, T. L. (2003). Prepaid and promised incentives in Web surveys: An experiment. Social Science Computer Review, 21, 208–217.
Buchanan, T. (2002). Online assessment: Desirable or dangerous? Professional Psychology: Research & Practice, 33, 148–154.
Dandurand, F., Bowen, M., & Shultz, T. R. (2004). Learning by imitation, reinforcement and verbal rules in problem-solving tasks. In J. Triesch & T. Jebara (Eds.), Proceedings of the Third International Conference on Development and Learning: Developing social brains (pp. 88–95). La Jolla: University of California, San Diego, Institute for Neural Computation.
Eaton, J., & Struthers, C. W. (2002). Using the Internet for organizational research: A study of cynicism in the workplace. CyberPsychology & Behavior, 5, 305–313.
Eichstaedt, J. (2001). An inaccurate-timing filter for reaction time measurement by JAVA applets implementing Internet-based experiments. Behavior Research Methods, Instruments, & Computers, 33, 179–186.
Gosling, S. D., Vazire, S., Srivastava, S., & John, O. P. (2004). Should we trust Web-based studies? A comparative analysis of six preconceptions about Internet questionnaires. American Psychologist, 59, 93–104.
Halbeisen, L., & Hungerbühler, N. (1995). The general counterfeit coin problem. Discrete Mathematics, 147, 139–150.
Hogg, R. V., & Craig, A. T. (1995). Introduction to mathematical statistics. Upper Saddle River, NJ: Prentice-Hall.
Konstan, J. A., Rosser, B. R. S., Ross, M. W., Stanton, J., & Edwards, W. M. (2005). The story of subject naught: A cautionary but optimistic tale of Internet survey research. Journal of Computer-Mediated Communication, 10, Article 11. Retrieved 2006 from jcmc.indiana.edu/vol10/issue2/konstan.html.
Krantz, J. H., & Dalal, R. (2000). Validity of Web-based psychological research. In M. H. Birnbaum (Ed.), Psychological experiments on the Internet (pp. 35–60). San Diego: Academic Press.
Meyerson, P., & Tryon, W. W. (2003). Validating Internet research: A test of the psychometric equivalence of Internet and in-person samples. Behavior Research Methods, Instruments, & Computers, 35, 614–620.
Michalak, E. E., & Szabo, A. (1998). Guidelines for Internet research: An update. European Psychologist, 3, 70–75.
Musch, J., & Klauer, K. C. (2002). Psychological experimenting on the World Wide Web: Investigating context effects in syllogistic reasoning. In B. Batinic, U.-D. Reips, & M. Bosnjak (Eds.), Online social sciences (pp. 181–212). Seattle: Hogrefe & Huber.
O’Neil, K. M., & Penrod, S. D. (2001). Methodological variables in Web-based research that may affect results: Sample type, monetary incentives, and personal information. Behavior Research Methods, Instruments, & Computers, 33, 226–233.
O’Neil, K. M., Penrod, S. D., & Bornstein, B. H. (2003). Web-based research: Methodological variables’ effects on dropout and sample characteristics. Behavior Research Methods, Instruments, & Computers, 35, 217–226.
Preckel, F., & Thiemann, H. (2003). Online versus paper-pencil version of a high potential intelligence test. Swiss Journal of Psychology, 62, 131–138.
Reips, U.-D. (2000). The Web experiment method: Advantages, disadvantages, and solutions. In M. H. Birnbaum (Ed.), Psychological experiments on the Internet (pp. 89–114). San Diego: Academic Press.
Reips, U.-D. (2002a). Standards for Internet-based experimenting. Experimental Psychology, 49, 243–256.
Reips, U.-D. (2002b). Theory and techniques of conducting Web experiments. In B. Batinic, U.-D. Reips, & M. Bosnjak (Eds.), Online social sciences (pp. 229–250). Seattle: Hogrefe & Huber.
Riva, G., Teruzzi, T., & Anolli, L. (2003). The use of the Internet in psychological research: Comparison of online and offline questionnaires. CyberPsychology & Behavior, 6, 73–80.
Salgado, J. F., & Moscoso, S. (2003). Internet-based personality testing: Equivalence of measures and assessees’ perceptions and reactions. International Journal of Selection & Assessment, 11, 194–205.
Simmel, M. L. (1953). The coin problem: A study in thinking. American Journal of Psychology, 66, 229–241.
Virzi, R. A. (1992). Refining the test phase of usability evaluation: How many subjects is enough? Human Factors, 34, 457–468.
Additional information
This work began as a project completed for a graduate seminar in Human Factors and Ergonomics, taught by D. C. Donderi in the McGill University Department of Psychology.
Dandurand, F., Shultz, T.R. & Onishi, K.H. Comparing online and lab methods in a problem-solving experiment. Behavior Research Methods 40, 428–434 (2008). https://doi.org/10.3758/BRM.40.2.428