1 Introduction

A vast number of laboratory and field studies have shown that many people contribute voluntarily to the provision of public goods, even when it is not in their monetary interest (e.g., Chaudhuri, 2011; Fehr & Fischbacher, 2003; Zelmer, 2003). The declining pattern of contributions over time in these social dilemmas is consistent with two behavioral explanations. First, conditional cooperation (or reciprocity) has been identified as an important driver of voluntary contributions to public goods; i.e., many decision makers contribute to public goods when others also contribute or are expected to do so (Fischbacher et al., 2001; Fischbacher & Gächter, 2010; Thöni & Volk, 2018). Second, confusion (or decision errors) has been invoked as an alternative explanation for the prevalence of voluntary contributions; i.e., many decision makers, particularly in laboratory experiments, are thought to contribute to a public good not out of a preference motive or a reciprocity norm, but because they misunderstand the incentives of the game and therefore do not know how to correctly pursue their self-interest (e.g., Andreoni, 1995; Burton-Chellew & West, 2013; Ferraro & Vossler, 2010; Houser & Kurzban, 2002). Confusion might be especially relevant in one-shot interactions or at the start of repeated interactions, and the observed decay in contribution levels, if it is due to learning, is consistent with this explanation. For example, Burton-Chellew et al. (2016), using the strategy method elicitation developed by Fischbacher et al. (2001), report that decision makers exhibit the same conditional contribution pattern irrespective of whether they interact with humans or computers, which seems to corroborate the second explanation based on confusion.

In this paper, we report on a novel way of testing whether confusion about optimal strategies, i.e., about how to optimally implement one's preferences, could be an important driver of voluntary contributions in a laboratory public goods game. Given the importance of public goods games in analyzing social dilemmas and in developing policy-relevant designs and incentives for problems outside the laboratory (see, for instance, Schmidt & Ockenfels, 2021, for an application in climate policy), it seems relevant to assess the internal validity of the main paradigm used in experimental research. To this end, we experimentally vary in a linear public goods game whether or not decision makers receive information about the individually optimal strategy and the socially optimal strategy, and we analyze how this information affects elicited contribution preferences and cooperation in a public goods game. We believe that assessing whether experimental participants fully understand their strategic options and the relevant incentives provides the most direct test of the confusion hypothesis.

2 Experimental design and procedures

Our experimental design builds on the standard voluntary contribution mechanism with the following linear payoff function:

$$\pi_i = 20 - g_i + 0.5\sum_{j=1}^{3} g_j, \tag{1}$$

where \(g_i\) denotes the contribution of participant \(i\) to the public good. Each group consists of n = 3 randomly assigned participants, and each participant receives an endowment of 20 points. The marginal per capita return (MPCR) from investing in the public good is 0.5, and the social return is 1.5. All parameters are known to the participants. Assuming that participants are rational, selfish payoff maximizers, these parameters guarantee that it is individually optimal to contribute zero. From a social or efficiency perspective, they guarantee that it is collectively optimal to contribute the entire endowment. Hence, the setup and the parameters imply a social dilemma.
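To make these incentives concrete, the following minimal Python sketch evaluates Eq. (1); the constant and function names are ours for illustration and are not part of the experiment's software.

```python
# Minimal sketch of the payoff function in Eq. (1); names are ours, for illustration.
ENDOWMENT = 20
MPCR = 0.5  # marginal per capita return

def payoff(own: int, others: list[int]) -> float:
    """Payoff of a participant given own and others' contributions, Eq. (1)."""
    return ENDOWMENT - own + MPCR * (own + sum(others))

# Each point contributed costs 1 but returns only 0.5 to oneself, so zero is
# individually optimal for any behavior of the others (strategic dominance):
print([payoff(g, [0, 0]) for g in (0, 10, 20)])    # [20.0, 15.0, 10.0]
print([payoff(g, [20, 20]) for g in (0, 10, 20)])  # [40.0, 35.0, 30.0]

# Each point contributed raises total group income by 3 * 0.5 = 1.5, so full
# contribution by everyone is collectively optimal:
print(payoff(20, [20, 20]))  # 30.0 for each member, vs. 20.0 if nobody contributes
```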

Participants were randomly assigned to one of four treatments in a 2 (information: standard vs. optimal strategies) × 2 (control questions: incentivized vs. not incentivized) between-subjects design (see Table 1). In the standard information treatment, participants received the standard instructions that explained the public goods problem (see Online Appendix A; the online link is in the acknowledgments). In the optimal strategies information treatment, participants received the same instructions, but with an additional paragraph that was explicit about which strategies maximize their individual income and their group's income. Specifically, participants were informed that their individual income is maximized by contributing zero to the public good, regardless of the behavior of the other group members, and why this is the case (see Online Appendix A); i.e., we explained strategic dominance. Participants were additionally informed that their group's income is maximized if everybody contributes the entire endowment to the public good, and why this is the case (see Online Appendix A). Participants were also explicitly informed that if they contribute more to the public good than their group members do, the other group members benefit from their contributions and end up earning more; i.e., we explained the sucker's payoff.
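As a concrete numerical illustration of the sucker's payoff (our example, not taken verbatim from the instructions): if participant \(i\) contributes the full endowment while both other group members contribute nothing, then by Eq. (1)

$$\pi_i = 20 - 20 + 0.5 \cdot 20 = 10, \qquad \pi_j = 20 - 0 + 0.5 \cdot 20 = 30 \quad \text{for } j \neq i,$$

so the cooperator earns the lowest payoff in the group.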

Table 1 Summary statistics for the four treatments

At the beginning, participants learned that the experiment consists of four parts.

Part 1 Participants were asked to answer 16 standard control questions in four separate blocks (see Online Appendix B). We often use these or similar questions in related public goods experiments to ensure a basic understanding. In the control questions incentivized treatment, participants could earn a bonus: they were told that, after completing all questions, one question would be randomly chosen and, if answered correctly, would result in a bonus of 12 experimental points. In the control questions not incentivized treatment, participants could not earn a bonus. After completing all questions, all participants were shown, for every question, the correct answer, their own answer, and whether their answer was correct.

Part 2 We then elicited contribution preferences using the strategy method of Fischbacher et al. (2001), validated for repeated interactions by Fischbacher and Gächter (2010). Group members first make an unconditional contribution to the public good, which is a single integer satisfying 0 ≤ \(g_i\) ≤ 20. Thereafter, group members make a conditional contribution for each of the 21 possible rounded averages from 0 to 20 (i.e., they submit a contribution schedule). Both the unconditional and the conditional contributions are potentially payoff relevant (see Fischbacher et al., 2001, for how both are incentivized). Participants did not receive any information about other participants' decisions at the end of Part 2.
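The payoff logic of this elicitation can be sketched as follows (a simplified illustration, assuming the standard implementation of Fischbacher et al., 2001; function and variable names are ours):

```python
import random

def payoff_relevant_contributions(unconditional, schedules):
    """Simplified sketch of the strategy method of Fischbacher et al. (2001)
    for a 3-person group: for one randomly selected member, the conditional
    schedule applies, evaluated at the rounded average of the other two
    members' unconditional contributions; for the other two members, their
    unconditional contributions apply.

    unconditional: three integers in 0..20
    schedules: three lists with 21 entries each (for averages 0..20)
    """
    selected = random.randrange(3)
    contributions = list(unconditional)
    others = [g for j, g in enumerate(unconditional) if j != selected]
    rounded_avg = int(sum(others) / 2 + 0.5)  # .5 rounded up here, for simplicity
    contributions[selected] = schedules[selected][rounded_avg]
    return contributions
```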

Part 3 Finally, participants played ten periods of a repeated public goods game with partner matching (i.e., with constant group composition). We emphasized that the group composition was determined randomly and was thus most likely different from that in the strategy method decisions of Part 2. In each period, we elicited beliefs about the other group members' average contribution. At the end of each period, group members were informed about their group members' average contribution and their own payoff. To avoid hedging, two periods were randomly selected for payment at the end of the session: in one period, the outcome of the public goods game was payoff relevant, and in the other, beliefs were incentivized.

We ran the experiment in the Munich Experimental Laboratory for Economic and Social Sciences (MELESSA), using the software z-Tree (Fischbacher, 2007) and the organizational software ORSEE (Greiner, 2015). In total, 93 undergraduates took part in the experiment with average earnings of €19 (including a show-up fee of €4).

3 Experimental results

Table 1 provides summary statistics for the three parts in each of the four treatments and shows the number of independent observations per treatment. Our analysis reveals that incentivizing the control questions does not significantly affect participants' behavior in any of the three parts (see Online Appendix C for a detailed discussion). Since incentivizing the control questions has no effect, we pool the data and compare the two information treatments in the remainder of the paper.

In this context, it is also important to check whether the different experimental instructions influence the number of correct answers. In fact, information about optimal strategies has a significant effect neither on how many correct answers participants give [p = 0.313; two-sided Mann–Whitney-U (MWU) test; all p values \(\ge\) 0.084 (MWU) at the level of the 16 individual control questions] nor on how many participants answer all control questions correctly (p = 0.804; two-sided Chi-square test). Hence, information about optimal strategies does not affect how well participants answer the control questions.
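For concreteness, the two tests can be sketched as follows (a hypothetical re-analysis; the variable names and data are ours, not the paper's):

```python
from scipy.stats import mannwhitneyu, chi2_contingency

# Hypothetical per-participant counts of correct answers (0..16) by treatment.
n_correct_standard = [16, 15, 16, 14, 16, 13]
n_correct_optimal = [16, 14, 16, 15, 12, 16]

# MWU test on the number of correct answers per participant.
_, p_mwu = mannwhitneyu(n_correct_standard, n_correct_optimal,
                        alternative="two-sided")

# Chi-square test on the share of participants answering all 16 correctly.
table = [[sum(x == 16 for x in grp), sum(x != 16 for x in grp)]
         for grp in (n_correct_standard, n_correct_optimal)]
_, p_chi2, _, _ = chi2_contingency(table)
print(p_mwu, p_chi2)
```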

3.1 Treatment information: standard instructions vs. optimal strategies instructions

We observe that information about the optimal strategies does not affect the distribution of contribution preferences elicited by the strategy method (p = 0.366; two-sided Chi-square test; types defined according to Fischbacher et al., 2001; see Table 2). In particular, the relative frequency of conditional cooperators is almost identical. Further, Mann–Whitney-U tests do not reveal significant differences between the two treatments in the slopes of the conditional cooperation schedules (p = 0.561) or in mean unconditional contributions in Part 2 (p = 0.363). Thus, we conclude that decision makers exhibit the same elicited preferences for cooperation, regardless of whether they receive standard instructions or instructions that explain the payoff consequences of all strategies in detail. Confusion does not seem to play a major role.
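The type classification can be sketched roughly as follows (a simplified paraphrase of the Fischbacher et al., 2001, criteria, not the authors' code; the exact rules, e.g., for weakly monotonic schedules, differ in detail):

```python
from scipy.stats import spearmanr

def classify(schedule):
    """Roughly classify a 21-entry conditional contribution schedule
    (simplified paraphrase of Fischbacher et al., 2001)."""
    avgs = list(range(21))
    if all(g == 0 for g in schedule):
        return "free rider"
    rho, p = spearmanr(avgs, schedule)
    if rho > 0 and p < 0.01:
        return "conditional cooperator"
    peak = schedule.index(max(schedule))
    if 0 < peak < 20:  # increasing up to a maximum, then decreasing
        rho_up, p_up = spearmanr(avgs[: peak + 1], schedule[: peak + 1])
        rho_dn, p_dn = spearmanr(avgs[peak:], schedule[peak:])
        if rho_up > 0 and p_up < 0.01 and rho_dn < 0 and p_dn < 0.01:
            return "hump-shaped"
    return "other"

print(classify(list(range(21))))  # perfect conditional cooperator
```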

Table 2 Distribution of player types for the two information treatments

In Fig. 1, we show the dynamics of contributions over the ten periods of Part 3 for the two information treatments. We find that information about optimal strategies affects neither mean contributions (p = 0.304; MWU; group means as independent observations) nor mean beliefs (p = 0.149; MWU; group means as independent observations) in the repeated public goods game (Part 3). To investigate this in a multivariate setting, we regress participants' contributions in the public goods game in period 1 using OLS regressions (Models 1–4; Table 3) and contributions over periods 1–10 using GLS regressions (Models 5–8; Table 4). The models include different sets of predictors: Models 1 and 5 include an information treatment dummy; Models 2 and 6 additionally include beliefs about others' contributions; Models 3 and 7 add predicted contributions (i.e., the contribution implied by the elicited preferences from Part 2, evaluated at the belief); and Models 4 and 8 further add interaction terms of the information treatment dummy with beliefs and predicted contributions, as well as with period (Model 8 only). The analysis reveals that information about the optimal strategies has no significant main effect on average contributions, neither in period 1 (Models 1–4) nor overall (Models 5–8). Moreover, information about optimal strategies does not significantly interact with beliefs or predicted contributions, neither in period 1 (Model 4) nor overall (Model 8), nor with period (Model 8). Based on these results, we conclude that participants' contributions in a repeated public goods game, and their beliefs about others' contributions, do not change when they receive extended information about which strategies maximize individual and group payoffs.
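The period-1 specifications (Models 1–4) can be sketched as follows (a hypothetical illustration with made-up data; the column names are ours, and `predicted` stands for the Part 2 schedule evaluated at the participant's belief):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy data: one row per participant in period 1.
df = pd.DataFrame({
    "contribution": [10, 0, 15, 5, 20, 8, 12, 3],
    "info":         [0, 0, 0, 0, 1, 1, 1, 1],      # 1 = optimal-strategies instructions
    "belief":       [10, 5, 12, 6, 18, 9, 11, 4],   # belief about others' average
    "predicted":    [10, 0, 12, 6, 18, 7, 11, 2],   # schedule evaluated at the belief
})

m1 = smf.ols("contribution ~ info", data=df).fit()
m2 = smf.ols("contribution ~ info + belief", data=df).fit()
m3 = smf.ols("contribution ~ info + belief + predicted", data=df).fit()
# Model 4 adds the interactions of the treatment dummy with belief and predicted:
m4 = smf.ols("contribution ~ info * (belief + predicted)", data=df).fit()
print(m4.params)
```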

Fig. 1 Average contributions over ten periods in Part 3

Table 3 Regression models of contributions in the repeated public goods game in period 1
Table 4 Regression models of contributions in the repeated public goods game in periods 1–10

3.2 Statistical power

Participants clearly provide positive contributions to the public good, even with extended information. Hence, our main empirical conclusion is safely established. Since we obtain a null result with regard to our treatment variation, however, it is relevant to give an impression of the statistical power that we operate with. Given the sample sizes reported in Table 1 and our data, we can perform ex post power calculations for null results, as recommended by Nikiforakis and Slonim (2015). Specifically, we can calculate the minimum treatment effect that we could have detected with 80% power at a 5% significance level. For unconditional contributions elicited via the strategy method (Part 2), this analysis reveals a minimum detectable effect of the information treatment of 3.81 points (i.e., a decrease from an average of 9.23 in the standard information treatment to an average of 5.42 in the information about optimal strategies treatment). Taking covariates into account (i.e., slopes and intercepts of the reciprocity functions), the minimum detectable effect size falls to 3.25. For contributions in the repeated linear public goods game (Part 3), the minimum detectable effect size of the information treatment is 4.07 in period 1 and 3.37 for periods 1–10. Taking covariates into account (i.e., a treatment dummy for incentivized control questions, beliefs, and predicted contributions), this minimum detectable effect size falls to 2.18 in period 1 and to 1.60 for periods 1–10. In our experiment, the observed effect sizes of the information treatment are generally below these thresholds [observed effect sizes: 1.21 (Part 2); 0.65 in period 1 and 1.66 over periods 1–10 (Part 3)]. These aspects should be taken into account when interpreting the non-significance of our treatment differences.
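Such an ex post minimum-detectable-effect (MDE) calculation can be sketched as follows (a hypothetical illustration; the group sizes and the pooled standard deviation are placeholders, not the paper's actual values):

```python
from statsmodels.stats.power import TTestIndPower

n1, n2 = 46, 47   # hypothetical numbers of participants per information treatment
pooled_sd = 6.5   # hypothetical pooled SD of unconditional contributions

# Smallest standardized effect (Cohen's d) detectable with 80% power at alpha = 5%:
d = TTestIndPower().solve_power(effect_size=None, nobs1=n1, ratio=n2 / n1,
                                alpha=0.05, power=0.80, alternative="two-sided")
mde_points = d * pooled_sd  # convert back to contribution points
print(round(mde_points, 2))
```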

4 Conclusion

In social dilemmas such as public goods games, players behave much more cooperatively than the self-interest hypothesis predicts. Some studies have suggested that the decay pattern in repeated public goods games, and the results of experiments in which humans interact with computers, support the hypothesis that many decision makers cooperate not because they follow genuine cooperative preferences, but because they are confused about the incentive structure of the game and are thus unaware of the dominant strategy. These studies interpret the decay as an indication of learning.

In this paper, we experimentally manipulate in a linear public goods game whether or not decision makers receive explicit information about the individually optimal strategy and the socially optimal strategy, and we analyze how this information affects elicited contribution preferences and cooperation. More precisely, in one treatment the experimental instructions discuss the payoff consequences of the strategies in the game explicitly and in detail, while the other treatment uses standard instructions. While the individually optimal strategy that we describe in the instructions holds in Part 3 only under common knowledge, we think that such common knowledge was plausibly established by reading the instructions out loud. In Part 2, by contrast, the individually optimal strategy described in the instructions holds strictly in any case.

Our data do not reveal statistically significant effects of the treatment variation on participants' understanding of the task (Part 1), on elicited contribution preferences (Part 2), or on contributions and beliefs in a repeated linear public goods game (Part 3). Contributions are positive in any case, and their size is similar to that in related experiments. We conclude that confusion about optimal strategies is unlikely to be a relevant explanation for the widely observed cooperation patterns in social dilemmas such as public goods games.

Why do other approaches obtain results in favor of the confusion hypothesis? A set of studies on human–computer interactions in public goods games or prisoner's dilemmas (e.g., Burton-Chellew et al., 2016) finds only small differences between choices made against other human players and choices made against a computer algorithm. We think it would be interesting and worthwhile to conduct an experiment that systematically varies the information that participants receive about the algorithm used by the computer player(s). Such an experiment could rigorously establish whether different levels of information matter in human–computer interaction. Our results would suggest that, in human–computer interactions, making the optimal strategy against the algorithm clearer should result in play closer to the dominant strategy. However, only a rigorous experiment can establish such a claim empirically.