Compared to a small, supervised lab experiment, a large, unsupervised web-based experiment on a previously unknown effect has benefits that outweigh its potential costs

https://doi.org/10.1016/j.chb.2013.01.024

Abstract

Research on internet-based studies has generally supported their benefits. However, that research has sometimes failed to compare internet-based with traditional delivery directly, has often used non-experimental methods and small samples, and has not based the comparison on an entirely unknown effect, which would completely rule out demand characteristics. Our lab experiment (N = 180), in which participants were supervised by an experimenter, demonstrated previously unexamined effects: both the frighteningness and the disgustingness of insects made people want to kill them, and females wanted to kill the insects more than males did. There were also some interesting patterns of interaction with gender, but they were not statistically significant. However, an unsupervised but larger web-based experiment (N = 1301) produced the same significant main effects as the lab study, and the patterns of interaction that had been non-significant in the lab study reached statistical significance in the web-based study. These results add support to the finding that although web-based studies may incur risks by being unsupervised, such as some participants not being genuinely motivated to follow the instructions correctly, those risks are compensated for by the much larger sample size the web-based approach affords.

Highlights

► A web-based and a lab study that were identical were directly compared.
► The web-based study produced essentially the same results as the lab study.
► Patterns that were not significant in the lab study were significant on the web.
► Lack of experimenter supervision did not adversely affect the web study.
► Demand characteristics were ruled out by using a previously unknown effect.

Introduction

As the use of technology increases, many psychologists are now using the internet as a means of conducting their research. This expansion into web-based studies could raise questions about the validity of this method (e.g., Hewson, 2003), but both advantages and disadvantages of internet research have been uncovered. After briefly reviewing those advantages and disadvantages, this paper will point out some gaps in the literature that the current study will address.

One of the advantages of internet-based research is that, at relatively low cost, it can provide large samples that are diverse and come from underrepresented populations (Birnbaum, 2004, Skitka and Sargis, 2006). Importantly, this efficiency can be obtained while producing results similar to those of traditional delivery (Hewson, 2003, McGraw et al., 2000). For example, Carlbring et al. (2007) found equivalent results for internet-based and paper-and-pencil questionnaires for panic disorder and agoraphobia. Whitaker (2007) found no interaction between gender and method of administration on attitudinal measures, in spite of gender differences in computer anxiety. Naus, Philipp, and Samsi (2009) found equivalent responses on measures of quality of life and depression, and also on some subscales (although not others) of a personality measure. Vadillo and Matute (2009) initially found both similarities and some differences in discrimination learning between lab and internet studies. However, they later tested the validity of internet-based experimental research by seeking similar results between lab and internet versions of a study of an effect that was not well known in the literature, and they succeeded in showing such a similarity for the augmentation effect (a situation in which the usual blocking effect in association learning is reversed).

Another advantage of internet-based research is the absence of a researcher, which can be beneficial in two ways. First, participants are more apt to be frank in their responses because of decreased anxiety over the social consequences (Hewson, 2003). Second, because the procedure can be replicated exactly for each subject, there is no possibility of researcher bias (Birnbaum, 2004). In addition, errors in data entry by a research assistant cannot occur when questionnaire studies are conducted over the internet, because the subject enters the data directly (Pettit, 2002).

Although these advantages are appealing, a variety of potential disadvantages of internet research warrant caution. According to Hewson (2003), the lack of researcher control poses serious problems: it is impossible to know, for example, whether the instructions were followed correctly, what state the subject was in at the time of participation, and whether the subject took the study seriously. Birnbaum (2004) also found an increased dropout rate in web-based as opposed to lab studies. Another major disadvantage, discussed in both Hewson (2003) and Skitka and Sargis (2006), is the set of ethical issues raised by internet research, including problems with the delivery of informed consent and debriefing forms and with maintaining confidentiality.

However, the previous literature on the advantages and disadvantages of internet research has some gaps that the present study helps to fill. A first example of such a gap is that much of the literature comparing internet-based and lab studies relied on studies that, unlike the present study, were not true experiments; many were instead based on questionnaire or survey methods (Beldad et al., 2011, Epstein et al., 2001, Gosling et al., 2004, Kays et al., 2012, Lewis et al., 2009, Naus et al., 2009, Whitaker, 2007). Because true experiments allow researchers to conclude, first, that a relationship between an independent and a dependent variable is specifically one of cause and effect and, second, which direction the causality runs, they afford the possibility of controlling, rather than merely predicting, the effects of the independent variable. This gives true experimental research an added value that non-experimental research does not have. It is therefore important to show that not only non-experimental research but also true experiments can be conducted on the internet with just as much confidence in their validity as when conducted by a traditional delivery method.
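To make that logic concrete, the following minimal Python sketch (our own illustration, not a script used in either experiment) shows the random assignment that distinguishes a true experiment from a survey; the condition labels are hypothetical stand-ins for the insect-description manipulation.

    import random

    def randomly_assign(participant_ids, conditions):
        # Shuffle the participants, then deal them across conditions in
        # round-robin order. Because assignment depends only on chance,
        # the groups can differ systematically only in the manipulated
        # independent variable, which is what licenses a causal reading
        # of any difference in the dependent variable.
        ids = list(participant_ids)
        random.shuffle(ids)
        return {pid: conditions[i % len(conditions)]
                for i, pid in enumerate(ids)}

    # Hypothetical condition labels, for illustration only.
    assignment = randomly_assign(range(12),
                                 ["frightening", "disgusting", "both", "neither"])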

A second gap in the present literature is that many studies used relatively small samples. Among the aforementioned studies, all but one (Gosling et al., 2004) used samples of only 76 to 213 participants. We found a smaller number of comparisons between internet and traditional research in which the studies being compared were experimental, and in some of these the sample sizes were also small. For example, the samples used by McGraw et al. (2000) were 261, 128, and 81 participants, and those used by Vadillo and Matute (2009) were 20 and 75 participants. The present study used 1301 participants in the internet experiment. Thus, our conclusion that the internet experiment produced the same result as a traditional delivery experiment is less likely to be a chance result than if fewer participants had been used.
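To illustrate why the larger sample matters, the following Python sketch (our own, not from the paper) compares the statistical power of the two total sample sizes for a simple two-group comparison using statsmodels; the small effect size (Cohen's d = 0.2) and the alpha level are assumptions, since the paper reports no power analysis.

    from statsmodels.stats.power import TTestIndPower

    # Assumed values for illustration; neither number comes from the paper.
    effect_size, alpha = 0.2, 0.05
    analysis = TTestIndPower()
    for n_total in (180, 1301):
        # Split the total N into two equal groups (nobs1 is group 1's size).
        power = analysis.power(effect_size=effect_size, nobs1=n_total / 2,
                               ratio=1.0, alpha=alpha)
        print(f"N = {n_total}: power = {power:.2f}")

Under these assumed values the lab-sized sample yields power of roughly .27 while the web-sized sample yields roughly .95, which is the sense in which a pattern that is non-significant at N = 180 but significant at N = 1301 is more plausibly a real effect than a chance result.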

A third gap in the previous literature is that some studies used the internet for an experimental methodology but compared their findings to previously conducted studies rather than either randomly assigning participants to internet and traditional delivery or at least re-running the same study, exactly as it had been conducted on the internet, in the traditional manner with a separate sample (Joinson et al., 2008, Mitchell et al., 2009, Vadillo and Matute, 2011). In the present study we filled this gap not by merely finding a similar experiment that had previously been conducted by a traditional delivery method and comparing it to our internet experiment, but by conducting the exact same experimental study again, in a lab and with the traditional face-to-face delivery method, and then making a direct comparison between the two.

Finally, as mentioned above, one shortcoming we noticed in the current literature is that there have been fewer demonstrations of the equivalence between internet-based and traditional delivery for true experimental results than for non-experimental results. Furthermore, we noticed that among the comparisons of non-experimental studies the results have been mixed (e.g., Mitchell et al., 2009, Naus et al., 2009, Vadillo and Matute, 2009). This raises the possibility that mixed results could also occur among experimental studies, arguing for continued attempts to replicate the equivalency finding for experiments.

The present study attempted to provide the needed further replication of the equivalency of internet-based and traditional delivery methods for experiments, while also addressing a few other issues. For example, Vadillo and Matute (2009) pointed out that replicating an established experimental finding has the disadvantage that, because the finding is well known, demand characteristics may influence the participants. Therefore, in their follow-up study (Vadillo & Matute, 2011) they rectified that shortcoming by demonstrating the equivalence of a less well known finding. However, as noted above, that study did not make a direct comparison between the internet-based delivery method and an exactly similar traditional delivery method, and it used a relatively small sample of only 130 participants. Finally, if a less well known effect reduces the probability of demand characteristics, then a completely unknown effect should help even more. Therefore, the present study attempted (a) to demonstrate the equivalence of an experimental manipulation of an entirely new and unknown effect, (b) to do so with a relatively large sample, and (c) to do so by making a direct comparison between internet-based delivery and an exactly similar traditional delivery. To that end, our study made a direct test of the effect of experimenter supervision by making the materials and procedures for both delivery methods exactly the same except for the presence of an experimenter.

Another issue the present study addresses is the concern raised by Birnbaum (2004): although internet-based research has been shown to be equivalent to traditional delivery, whether it is actually better, because of the larger sample sizes it affords, has not been sufficiently demonstrated. The present study also addresses two methodological issues that are not always handled in comparisons between internet-based and traditional research. First, as suggested by Hewson (2003), IP addresses, times, and dates were collected so that duplicate responses could be removed from the internet data. Second, Birnbaum (2004) raised the concern that internet-based samples may differ from traditional samples in important ways. To address this concern, we collected demographic data, which we used to show that our internet sample was in fact quite similar to the sample of college students used for our traditional delivery.
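As a sketch of that duplicate-screening step, assuming hypothetical field names (the paper does not publish its cleaning procedure), the following Python code keeps only the earliest submission from each IP address:

    import pandas as pd

    # Hypothetical log format; the fields actually recorded were IP
    # address, time, and date, as described above.
    responses = pd.DataFrame({
        "ip": ["1.2.3.4", "1.2.3.4", "5.6.7.8"],
        "timestamp": pd.to_datetime(["2008-01-05 10:00",
                                     "2008-01-05 10:20",
                                     "2008-01-06 09:00"]),
        "rating": [3, 4, 2],
    })

    # Sort by time and keep the first submission per IP address, one
    # common way to screen out repeat participation in web studies.
    deduped = (responses.sort_values("timestamp")
                        .drop_duplicates(subset="ip", keep="first"))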

Section snippets

Experiment 1

Experiment 1 was conducted in a laboratory under the supervision of an experimenter.

Experiment 2

Experiment 2 was identical to Experiment 1 except that the participants were not supervised by an experimenter; the directions were presented on web pages accessible on the public internet. The study was originally posted on a website maintained by Hanover College (http://psych.hanover.edu/research/exponnet.html).

Findings

This comparison of experimental studies mirrored the comparisons of the many studies that were not experimental (e.g., Gosling et al., 2004, Lewis et al., 2009). It adds to the growing literature supporting the validity of using the internet as a means of collecting data in two ways. First, we obtained similar results across the two methods of administration, even though, in spite of the procedures for these studies being relatively simple, there was still the potential for participants to make …

Acknowledgments

Special thanks to Joseph Cipko, Alyssa Rizzo, Nikita Driscoll, Melissa Gilroy, Trisha Parker, Brittany Robison, Lisa Scala, Lora Seiverling, and Sarah Windfelder for help in data collection.

The initial results of Experiment 2 were reported as a poster presented at the 79th Annual Meeting of the Eastern Psychological Association, Boston, Massachusetts, March 14–16, 2008. The comparison of the two experiments was reported as a poster presented at the University of Scranton’s 25th Annual Psychology Conference, Scranton, PA, April 17, 2010.

1

An undergraduate student at Kutztown University when the research was conducted; now a law student at the University of Nebraska, Lincoln.

2

An undergraduate student at Kutztown University when the research was conducted; now an alumna residing at 4 Welsh Court, Pottstown, PA 19464.
