Journal of Economic Psychology

We investigate lying behavior when lying is undetectable and payoffs are split with charity. 524 participants roll a die in private, report the outcome, and receive the monetary equivalent of their reported number, i.e., there is a clear incentive to lie. Participants are randomly assigned to share all, some, or none of this payoff with a charity of their choice. This allows us to examine how lying behavior changes with the share of payoffs going to charity. Our results are as follows: (i) there are participants in every group who lie to inflate their reported number; (ii) overall lying behavior is significant for all groups, except that in which participants keep none of the payoff; and (iii) post-experiment surveys reveal that participants who keep the whole payoff are much less likely to admit to having cheated than all other participants. Finally, our data suggest that lying is not correlated with any observable sociodemographic characteristic.


Introduction
In traditional economic models of dishonesty, people are predicted to lie if the lie goes undetected and there are material gains to be made. Recent studies have shown, however, that people lie much less than expected: a meta-analysis of 90 experiments shows that subjects forgo around three-quarters of the potential private gains from lying, even when lying is incentivized and undetectable (Abeler et al., 2019). In general, people refuse to lie not only because they prefer to be honest, but also because they prefer to be perceived as honest (Abeler et al., 2019; Gneezy et al., 2018).
A different strand of the literature investigates ''other-regarding preferences'', i.e., how one's choices are affected by the welfare of others. These preferences have been shown to exist widely-e.g., evidence from over 100 dictator games reveals that most dictators share, on average, slightly more than a quarter of the pie (Engel, 2011). This paper unites these two strands of the literature by examining lying behavior when, by means of undetectable cheating, people can benefit others beyond themselves. Recent studies have compared selfish lying, where payoffs are kept for oneself, with prosocial lying, where payoffs are donated to (usually anonymous) others. Wiltermuth (2011) finds more selfish lying than prosocial lying, while Gino et al. (2013) find similar rates for both. Notably, both studies find the highest rates of lying when payoffs are split evenly between the participant and another individual.
Our study examines lying behavior when the other recipient is a charity rather than an individual. We adapt the die-rolling task used by Fischbacher and Föllmi-Heusi (2013): participants privately roll a die then report the outcome, the monetary equivalent of which they either keep, donate, or split with charity, depending on their treatment group. (We implement three split-payoff treatments to better observe how marginal variations in payoff splits influence lying behavior.) We made every effort to ensure that participants (i) were aware that lying was possible and completely undetectable; (ii) would care about the charity with which they would share their payoffs; and (iii) trusted that we would indeed make the donation as specified. Although we cannot detect dishonesty at the individual level, we can measure dishonesty at the aggregate level by comparing the distribution of reported outcomes against the expected discrete uniform distribution of a fair die roll.
We find that participants in all treatment groups unambiguously lie by inflating their reported outcomes. However, lying rates are heterogeneous with respect to payoff split. We estimate that when participants privately benefit from the lie (whether partially or in full), 1 in 4 participants who have an incentive to lie do so. However, only 1 in 10 are estimated to lie when payoffs go entirely to charity. These results suggest a clear discontinuity in lying rates, with a sharp decrease when the participant no longer benefits from the lie. In other words, the extensive margin (whether participants receive any benefit from lying) plays a larger role than the intensive margin (the size of the benefit they receive) in participants' decision to lie.
By means of a post-experiment questionnaire, we also examine who is most likely to admit having lied. We estimate that 1 in 4 participants who lied admit to having done so, with the exception of those who take home all the payoff from lying. Among them, only 1 in 30 who lied admit to it. This suggests that when charity is involved, being honest and wanting to be seen as honest operate under different mechanisms. Participants who donate all payoff to charity are least likely to lie, while those who keep all payoff for themselves are least likely to admit to lying.
Our study thus contributes to the emerging experimental literature that focuses on the interplay between (dis)honesty and other-regarding preferences, focusing on the case where the ''other'' is an organization.

Die-rolling task
Participants are brought inside a private room, one at a time, by an experimenter. Three items are provided inside the room: a pouch containing a die, a timed lock-box containing pen and paper, and an abridged copy of the task instructions, reproduced in Supplementary Materials (SM) A.2. The experimenter sets the lock-box timer to one minute then leaves the room. The lock-box is transparent so participants can see the pen and blank paper slips inside, while the timer counts down on a screen built into the lid.
Participants are instructed to roll the die once at the moment the box unlocks, and write the outcome on a now-accessible slip of paper. 5 Clearly, since there is no one else in the room, participants have an opportunity to lie, i.e., misreport the outcome of the final die roll. Participants are then asked to put the die back in the pouch, leaving no trace of the true outcome. Afterwards, participants submit only their outcome slips to the experimenter, so they never need to lie verbally. Indeed, as Benistant and Villeval (2019) note, it is crucial to make every effort to convey that dishonesty can never be observed nor punished, without explicitly saying so in order to avoid priming.
We implement the one-minute wait for three reasons. First, the time delay is an opportunity for participants who suspect the die is rigged to verify its fairness. Second, the time delay serves as a mandatory period of deliberation, which could increase participants' awareness of the opportunity to lie (Lohse et al., 2018). Third, participants might have strong, idiosyncratic predispositions toward selfishness or prosociality that could strongly influence lying behavior. The time delay-which Alós-Ferrer and Garagnani (2020) suggest might moderate these heterogeneous predispositions-thus serves as an additional safeguard (on top of our random assignment protocol) that treatment condition is the main driver of differences in lying behavior.

Treatment groups
We randomly distribute participants across five different groups. In our control group (the ''self-only'' group), participants take home 100% of the payoff, consistent with previous die-roll experiments (e.g., Hermann and Mußhoff (2019)). Participants in the three split-payoff groups (''90-self'', ''50-self'', and ''10-self'') take home only a fraction of the payoff (90%, 50%, and 10%, respectively), and donate the rest to a charity of their choice. In the last group (the ''charity-only'' group), participants are required to donate all their payoff, leaving them with no take-home share. Table 1 summarizes the payoff schemes of the different treatment groups.
S.L. Chua et al.

Table 2: Share of participants (%) in the self-only group who reported each payoff, compared to a similar study in the same country (Shen et al., 2016), and a meta-study of 90 experiments (Abeler et al., 2019).
Conducting the experiment
Our experiment was conducted over 17 days in March 2019 at four sites within the National University of Singapore. 524 participants (60% female) took part. We solicited demographic information from participants (SM B.3). We required at least 94 participants per treatment group to detect a 0.5 difference in mean reported die-roll outcome from the expected outcome of 3.5, with 80% power and 5% significance level (SM A.6). All our treatment groups had at least 101 participants.
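The sample-size requirement above can be approximated with the standard normal-approximation power formula. This is a minimal sketch only: the paper's exact derivation is in SM A.6, and the small-sample t correction presumably raises the figure slightly toward the reported 94.

```python
from statistics import NormalDist

# Power calculation (normal approximation) for a two-sided one-sample test
# of the mean reported outcome against the fair-die mean of 3.5.
sd = (35 / 12) ** 0.5          # standard deviation of a fair six-sided die (~1.71)
effect = 0.5 / sd              # standardized effect size for a 0.5 shift (~0.29)

z = NormalDist()
z_alpha = z.inv_cdf(1 - 0.05 / 2)   # 5% significance level, two-sided
z_power = z.inv_cdf(0.80)           # 80% power

n = ((z_alpha + z_power) / effect) ** 2
print(round(n))   # ~92 under the normal approximation; the t-based figure is slightly larger
```
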
The experiment was advertised via research recruitment websites, mass emails from university personnel, and posters at high-traffic hubs around campus. Participants could sign up online or simply ''walk in''. On average, the experiment took 10 minutes per participant, each of whom was paid a show-up fee of $5 SGD (∼4 USD). We precluded participants from participating more than once.
Upon arrival, participants randomly drew a unique ID, which also determined their treatment group. Once assigned to a group, participants (except the charity-only group) were informed that they could earn additional payoff based on their outcomes during the experiment. Participants were not given information on other treatment conditions, and were prohibited from communicating with each other.
Participants were provided with written instructions (SM A.1), a payoff chart showing how their reported outcome corresponded to additional payoffs (SM A.3), and, with the exception of the self-only group, a menu of charities. Questions were raised and addressed privately. Participants were then individually brought to a private room to carry out the die-rolling task described in Section 2.1. Upon completion, participants proceeded to a private payoff station, where an experimenter paid participants (in cash) and/or their charities of choice (via instant online bank transfer) accordingly. Participants could choose from five reputable charities representing a diverse range of social causes our target population (undergraduates) might care about: women's rights, prisoner rehabilitation, animal welfare, crisis relief, and terminally ill children (SM A.5). Afterwards, participants privately answered a questionnaire (SM A.4), which was later matched to their reported outcomes. Finally, subjects were debriefed and paid their show-up fee.

Results
As a preliminary sensitivity check, we compare the outcomes of our self-only baseline group with those of Shen et al. (2016), a die-roll experiment also conducted in Singapore, and the meta-study by Abeler et al. (2019). Table 2 shows that the reported outcome distributions from all three experiments are similar. We are therefore confident that our experimental design is not inherently biased-or at least, is biased in the same way as most previous studies. (We also regress reported outcome against various demographic characteristics and find no significant effects, as reported in SM B.5.) Our results for the charity-only group (see Table 3) are nearly identical to those in Lewis et al. (2012), whose UK-based participants also roll a die to benefit a charity: exactly 24.5% of participants report 6s in both studies, while the reported frequencies of 4s and 5s are similar.

Maximal lying occurs in all groups
Fig. 1 depicts graphically the share of participants in each treatment group who reported each possible payoff; the numerical frequencies are reported in Table 3.

Table 3: Share of participants (%) in each treatment group who reported each payoff. Stars indicate significance for one-sided binomial tests that the observed share is smaller (larger) than the expected share of 16.67% for payoff values 1, 2, and 3 (4, 5, and 6).

As 6 is the highest value on a die, we call misreporting a 6 maximal lying. If all participants were honest, we would expect one-sixth of them to report a 6. However, in every treatment group, 6 is reported significantly more frequently than expected if nobody lied-even in the charity-only group, where participants cannot increase their own payoffs (one-tailed binomial tests, p < 0.05 at least). The frequency of reported 6s decreases monotonically with charity share.
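One-sided binomial tests of this kind need only the standard library. The sketch below uses purely illustrative counts, not our data:

```python
from math import comb

def upper_tail_pvalue(k, n, p=1/6):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for observing
    k or more reports of a given face among n honest participants."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Illustrative example: 30 of 101 participants report a 6,
# versus an expected 101/6 ~ 16.8 under honest reporting.
p_val = upper_tail_pvalue(30, 101)
```

A small observed excess of 6s can thus be distinguished from sampling noise at conventional significance levels, given group sizes of around 100.
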
The opposite of maximal lying is pure honesty. If, following Fischbacher and Föllmi-Heusi (2013), we assume that participants do not lie to decrease their payoff, then participants who report a 1 must be purely honest. As seen in Table 3, there are participants in every treatment group who do report 1s. Indeed, the frequency of reported 1s weakly increases with charity share, and is only significantly less than expected in the self-only group (p < 0.001).

Overall lying behavior is significant in all groups except charity-only
We can test for overall lying behavior in treatment groups by comparing participants' reported outcomes against the values we would expect if all participants were honest.
We first consider reported outcome means. If all participants were honest, we would expect the mean outcome from repeatedly rolling a die to be 3.5. We find that the mean reported outcome is significantly higher than 3.5 for all groups except charity-only (two-sided, one-sample t-tests, p < 0.001). In the charity-only group we find no such significance (p = 0.14). Conducting (two-sided, two-sample) pairwise t-tests, we find that the charity-only group has a significantly lower mean reported outcome than the self-only group (p < 0.05). Under the prior that reported outcome decreases with the charity's share (i.e., one-sided, two-sample tests), we find that the charity-only group has a significantly lower mean reported outcome than both the self-only and 10-self groups (both p < 0.05). In other words, charity-only participants not only exhibit no significant lying behavior, but also, when compared to other groups, they seem to lie less than those in the self-only and 10-self groups.
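The one-sample statistic underlying these comparisons can be sketched as follows. The reports below are illustrative only; the standard library has no t-distribution CDF, so the p-value lookup is omitted.

```python
from math import sqrt
from statistics import mean, stdev

def t_vs_fair_die(reports, mu0=3.5):
    """One-sample t statistic for the mean reported outcome against the
    fair-die expectation mu0 = 3.5."""
    n = len(reports)
    return (mean(reports) - mu0) / (stdev(reports) / sqrt(n))

# Illustrative reports only (not experimental data), inflated toward 5s and 6s.
reports = [6, 5, 6, 4, 6, 5, 3, 6, 5, 6, 2, 5]
t = t_vs_fair_die(reports)   # positive t indicates inflation above 3.5
```
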
We next consider reported outcome distributions. If all participants were honest, the outcomes of a fair die would follow a discrete uniform distribution with support {1, …, 6}. We conduct two non-parametric tests: the Kolmogorov-Smirnov test, which examines the single largest vertical difference between two distributions, and the Wilcoxon rank-sum test, which pools the two distributions and tests for clustering. As detailed in SM B.1, we observe similar results under both tests: reported outcome distributions deviate significantly from the uniform in the self-only and all split-payoff groups (p < 0.05 at least). Once participants do not benefit from the lie, however, overall lying behavior drastically diminishes, such that we cannot reject the null that the charity-only group's reported outcome distribution is uniform. 6

Furthermore, we follow Abeler et al. (2019) in estimating the share of participants who lied in each treatment group. If we assume that participants who actually did roll 5 or 6 have no incentive to further inflate their reports, we expect two-thirds of participants to have had an incentive to lie, i.e., those who rolled 1, 2, 3, or 4. For each treatment group, we compute the difference between the expected and reported frequencies of {1, 2, 3, 4}, as a fraction of the expected frequency. This yields the estimated share of participants in each group who lied, conditional on having had an incentive to do so. As seen in the first column of Table 4, all treatment groups have remarkably similar estimated lying shares of around 25%-except, once again, for charity-only, which has an estimated lying share of only 10%. 7
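The lying-share estimate just described can be sketched in a few lines, with purely illustrative reports: the shortfall of reports in {1, 2, 3, 4}, as a fraction of their expected frequency, estimates the share who lied conditional on having had an incentive to.

```python
def estimated_lying_share(reports, low=(1, 2, 3, 4)):
    """Estimated share of participants who lied, conditional on having had
    an incentive to lie (i.e., having actually rolled a value in `low`)."""
    expected = len(reports) * len(low) / 6          # 2/3 of reports if honest
    observed = sum(r in low for r in reports)
    return (expected - observed) / expected

# Illustrative: 12 reports in which only 6 (vs an expected 8) fall in {1,...,4}.
reports = [1, 2, 3, 4, 5, 6, 1, 2, 5, 6, 5, 6]
share = estimated_lying_share(reports)   # (8 - 6) / 8 = 0.25
```
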

Self-only participants are most likely to lie about having lied
We also asked participants through a post-task questionnaire (SM A.4) whether they were honest in their report. 90.5% of all participants said ''Yes'', 4.4% said ''No'', and 5.2% did not respond. The share of each treatment group who admitted to lying (i.e., responded ''No'') is reported in the second column of Table 4. Participants answered this questionnaire after receiving their game payoff, but before receiving their show-up fee-so dishonest participants might have reason to conceal their lie. However, it is hard to imagine honest participants falsely claiming to have told a lie. Hence, self-admitted lying can be considered a lower bound on the number of misreports.
Participants' responses to this question may serve as a signal of what they perceive as socially acceptable. As seen in Table 4, the estimated lying share exceeds the admitted lying share across treatment groups-evidence that participants lie about lying. As seen in the third column of Table 4, only 3.70% of estimated liars in the self-only group admit to having lied, compared to 22%-29% in all other treatment groups, including charity-only. Indeed, sharing any portion of the payoff with charity seems to make it easier for participants to admit having lied. However, increasing charity's share of payoff to 100% seems to have little additional effect on participants' willingness to admit having lied.
Following the terminology of Erat and Gneezy (2012), our results suggest that lying exclusively for a good cause is perceived as neither more nor less acceptable than a ''Pareto white lie'', a lie that helps both others and the liar. Different motives could explain this. Participants who lied may feel guilty immediately thereafter, and relieve (part of) this guilt by confessing in the post-experiment questionnaire-a confession that seems easier with a charity involved. Alternatively, participants who lied may want to signal their inherent honesty, a contradiction that seems easier to express with a charity involved (i.e., ''I want to let researchers know that I am actually honest, and I only lied because I was helping a good cause'').
Last, perhaps participants who did not answer the question on lying are all liars, and their very non-response is an admission of guilt. Our finding is robust to considering non-respondents as self-admitted liars (SM B.4).

Discussion
In our self-only group, we find that maximal lying and pure honesty occur at frequencies comparable with Fischbacher and Föllmi-Heusi (2013) and Gneezy et al. (2018), whose experimental designs closely resemble ours. Meanwhile, in our charity-only group, we find the lowest occurrence of maximal lying and the highest occurrence of pure honesty. This result is consistent with Wiltermuth (2011) and Klein et al. (2017), who find significantly less lying when it only benefits other participants, 8 but not with Lupoli et al. (2017), who find that participants lie for charity approximately as much as for themselves. 9 Notably, our experiment allows us to study how lying behavior changes as we gradually move between these two extremes. We find that the estimated lying share (Table 4) remains virtually unchanged as charity share increases: reducing private material gains from 100% to 90%, 50%, or even 10% has no apparent effect. However, reducing private material gains from 10% to 0% results in a 15 percentage point reduction in lying rates. Our results suggest that lying is less costly when participants take home at least some share of the payoff, and that lying costs do not further diminish when others (beyond oneself) benefit from the lie. That is, participants seem insensitive to the size of their private payoff (intensive margin), but sensitive to its existence (extensive margin).
Our results are consistent with Klein et al. (2017), who also implement a range of split-payoff treatments, albeit with an anonymous participant rather than a charity as recipient. Our results, however, differ from Wiltermuth (2011) and Gino et al. (2013), who find highest rates of lying when payoffs are split.
What could explain these inconsistent results? We consider three reasons. First, the identity of the recipient may matter. Maggian (2019) suggests that the greater psychological distance to organizations may hinder willingness to incur lying costs. In other words, participants may find a lie no easier to internalize when the beneficiary is a faceless organization, however noble. This might explain why, unlike Wiltermuth (2011) and Gino et al. (2013)-whose participants split payoffs with fellow research participants-we find higher rates of selfish lying than of prosocial lying. Lying costs, in short, may depend on the identity of the recipient.
Second, the nature of the task at hand may also matter. Both Wiltermuth (2011) and Gino et al. (2013) ask participants to privately perform a semi-skilled task (e.g., solve matrices, unscramble anagrams) and report their performance. In contrast, the tasks in Klein et al. (2017) and our study (coin flip and die roll, respectively) are luck-based and virtually effortless: both studies find no increase in lying when payoffs are split. This suggests that when reported scores implicitly signal participants' effort or ability, participants might inflate scores to impress the ''other'' participant or the experimenter. That is, lying may be less costly when effort is exerted in the task.
Third, reputational costs may matter. In a single-blind study like ours, participants may believe that experimenters update their beliefs about participants' honesty based on their reported numbers. As Abeler et al. (2019) show, two types of lying costs are consistent with stylized results in the literature: a preference for being moral (self-image), and a preference for being seen as moral (social image/reputational cost). The latter usually serves to encourage socially applauded behavior when it can be directly observed or indirectly inferred: in dictator games, for instance, participants tend to share more in single-blind than in double-blind studies (Franzen & Pointner, 2012; Hoffman et al., 1996). In other words, being perceived as a liar might carry a smaller social punishment (or, indeed, a social reward à la ''Robin Hood'') if it were done for charitable reasons.
However, we find no significant lying behavior only when payoffs go entirely to charity. This does not mean reputational costs are irrelevant: indeed, we find that lying about lying is higher when payoffs go entirely to the self. Nonetheless, rates of lying about lying remain roughly constant whether the payoff goes partially or wholly to charity (Table 4). These findings suggest that lying solely for oneself is less socially acceptable than lying for shared benefit, but lying solely for charity is neither more nor less acceptable. Thus, on average, the decrease in reputational lying costs seems insufficient to warrant lying more solely for charity than for shared benefit. 10

Finally, we consider three procedural issues that could have influenced our findings: (i) participant disinterest in the available charities; (ii) distrust in experimenters to actually make the donations; and (iii) charities acting as an ex ante nudge. We discuss them briefly in turn.
First, perhaps participants did not really care about the charity the money goes to. In that case, those in the charity-only group would have no incentive to lie, as their utility gain from lying would be essentially non-existent. However, we do observe maximal lying in that particular group. While it is possible that a handful of respondents did not care about any of the charities, we believe that the majority did, as we populated our menu of charities with a diverse range of social issues (SM A.5).
Second, perhaps participants did not trust experimenters to actually make the donations. We believe this is not the case: as soon as participants received task instructions, they were made aware that the donation process would happen online, in their presence, before they could leave the experiment site (SM A.1). The menu of charities provided to participants contained the official websites, logos and mission statements of each charity, to demonstrate their legitimacy. Our experiment was also supported and approved by the National University of Singapore, which is considered a reputable institution in the country. Furthermore, donations were indeed made in front of participants as promised, so word-of-mouth could not have hurt the reputation of our experiment. For all these reasons, we believe that our results could not have been affected by participants' mistrust of experimenters.

Third, perhaps charitable donation served as a nudge for honest behavior: since most participants were forced to think about charities before performing the die-rolling task, their willingness to lie might have been affected ex ante. If this were true, then we would expect participants in the self-only group, who were unaware of the purpose of the experiment, to lie significantly more than all other participants. We find that this is not the case: participants in all split-payoff groups exhibit significant lying behavior (Result 3.2) and lie at estimated rates comparable to the self-only group (Table 4).

8 We note that Klein et al. (2017) use a within-subject design, i.e., all subjects were asked for their choices in each of the treatments.

9 Maggian (2019) also has a die-rolling experiment about lying for charity, but it is not directly comparable with ours. While her control group is similar to our self-only group, her charity group confronts participants with a tradeoff: all money they claim for themselves is taken away from a pre-set donation to charity. Hence, in order to increase donations, participants must under-report. Maggian finds no difference between the two groups.

10 To the best of our knowledge, no study comparing single- and double-blind studies has yet been done for prosocial lying, and no post-experiment questionnaires have included direct questions on whether participants did lie. Future research in this direction could shed more light on the issue.
Overall, our findings are mostly consistent with previous studies, with differences likely arising from the identity of the recipient and the nature of the task. Future work could further investigate the effect of reputational lying costs by comparing single-and double-blind versions of our prosocial lying paradigm. As ours is the first study on prosocial lying not carried out in a ''Western'' country, we cannot disregard the possibility that differences in our results are due to prevailing cultural norms in Southeast Asia. Future work could investigate the extent to which such norms mediate prosocial lying behavior.