Controlling the narrative: Euphemistic language affects judgments of actions while avoiding perceptions of dishonesty

The present work (N = 1906 U.S. residents) investigates the extent to which people's evaluations of actions can be biased by the strategic use of euphemistic (agreeable) and dysphemistic (disagreeable) terms. We find that participants' evaluations of actions are made more favorable by replacing a disagreeable term (e.g., torture) with a semantically related agreeable term (e.g., enhanced interrogation) in an act's description. Notably, the influence of agreeable and disagreeable terms was reduced (but not eliminated) when actions were made less ambiguous by providing participants with a detailed description of each action. Despite their influence, participants judged both agreeable and disagreeable action descriptions as largely truthful and distinct from lies, and judged agents using such descriptions as more trustworthy and moral than liars. Overall, the results of the current study suggest that a strategic speaker can, through the careful use of language, sway the opinions of others in a preferred direction while avoiding many of the reputational costs associated with less subtle forms of linguistic manipulation (e.g., lying). Like the much-studied phenomenon of "fake news," manipulative language can serve as a tool for misleading the public, doing so not with falsehoods but rather with the strategic use of language.


Introduction
As social creatures, people are often interested in the contents of other people's minds, both from the perspective of wanting to know what other people know, and from the standpoint of wanting to influence other people's thoughts in favor of their own individual goals. Language plays an important role in both pursuits, facilitating both pro- and anti-social behaviors. While the ability to communicate has permitted fantastic innovation via the collaborative exchange of accurate information, people may be just as interested in exchanging deceptive and self-serving misinformation when the opportunity arises. Lying, gossiping, and bullshitting represent some of the communicative tools people can use in competition to enhance their own prestige, sabotage their competitors' reputations, and misrepresent reality to suit their needs (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Robbins & Karan, 2020; Turpin et al., 2019). Although at times innocuous, these forms of deceit have the potential to be harmful to human life and flourishing. Much of society, from legal systems to democratic elections, depends on people's ability to avoid being deceived by those attempting to manipulate with language. The current study examines people's susceptibility to linguistic manipulation by investigating how judgments of actions can be influenced by the strategic and ostensibly honest use of euphemistic and dysphemistic terms.

Prior research has demonstrated, for example, that people assign more blame to an individual when an accident is described with an agentive framing (e.g., "She had ignited the napkin!") as opposed to a non-agentive framing (e.g., "The napkin had ignited!"). Similarly, consumer evaluations of products can be influenced by information frame. For example, people give more positive evaluations of ground beef when it is labelled as "75% lean" as opposed to "25% fat" (Levin, 1987; Levin & Gaeth, 1988).
Furthermore, the simple substitution of verbs (e.g., collided, bumped, contacted, hit, or smashed) in a question assessing the speed of a car immediately prior to an accident has been demonstrated to reliably bias people's estimates of speed as well as their memory of the accident (Loftus & Palmer, 1974). More generally, studies examining the effects of deceptive advertisements (Olson & Dover, 1978) and partisan political messages (Druckman & Parkin, 2005) reveal how these types of messages can influence thought (e.g., by creating false beliefs about a product) and behavior (e.g., by influencing voting choices).
A great deal of research has examined how the framing of a message (e.g., positive or negative) influences its persuasiveness (Arceneaux & Nickerson, 2010; Levin, Schneider, & Gaeth, 1998; O'Keefe & Jensen, 2007, 2009; Rothman & Salovey, 1997). While message framing has a clear impact on people's emotional responses (Bilandzic, Kalch, & Soentgen, 2017; Nabi, Gustafson, & Jensen, 2018; Shen & Dillard, 2007), with positively framed messages evoking more positive emotions (e.g., happiness and hope) and negatively framed messages evoking more negative emotions (e.g., anger and fear), neither message frame appears to be reliably more persuasive (Nabi et al., 2020; O'Keefe & Jensen, 2007, 2009). Instead, a consistent finding from this literature is that the effectiveness of a positively or negatively framed message depends on the individual characteristics of the message recipient. For instance, negatively framed messages have been found to be more persuasive than positively framed messages for individuals who are highly ambivalent (Broemer, 2002), who have a propensity to engage in and enjoy effortful cognitive activities (Rothman, Martino, Bedell, Detweiler, & Salovey, 1999; Sanchez, 2006; Umphrey, 2003), and who are completing a fear-inducing task (Yan, Dillard, & Shen, 2012). In contrast, positively framed messages appear more influential for individuals low in ambivalence (Broemer, 2002) and in enjoyment of effortful cognitive activities (Steward, Schneider, Pizarro, & Salovey, 2003), as well as for those made to feel happy or angry (Yan et al., 2012). Persuasive messages appear to be most effective not only when they are matched to the individual characteristics of a recipient, but also when their emotional framing matches the current phenomenological state of the recipient (DeSteno, Petty, Rucker, Wegener, & Braverman, 2004; Gerend & Maner, 2011).
For example, a message framed to highlight the saddening (angering) problems that a proposed policy was intended to fix was found to be most persuasive for sad (angry) individuals (DeSteno et al., 2004).
The aforementioned research demonstrates how various manipulations of language, from subtle framing effects to more explicitly deceptive advertisements, can sway people's thoughts in a manner desired by the language user (e.g., in favor of a product). Even without this body of evidence, people appear to share the intuition that they can influence the thoughts of others with deceptive language, as evidenced by its widespread use (DePaulo et al., 1996). Some have even posited that the primary function of language is to deceive and manipulate the behavior of others in a self-serving way (Dawkins & Krebs, 1978; Krebs & Dawkins, 1984; Scott-Phillips, 2006), although evidence seems to point to deception being a secondary, not primary, function of language (Oesch, 2016). Nevertheless, whether for pro- or anti-social purposes, deception is an important part of language use.
The most obvious form of deceptive language is lying, in which a speaker states something they believe to be false as if it were true. This form of deception, specifically when carried out maliciously, may come with high risk, as those caught lying may face severe punishments. However, more subtle forms of linguistic manipulation, such as the strategic framing of a message or question (e.g., Levin & Gaeth, 1988; Loftus & Palmer, 1974), may similarly allow a speaker to sway others' opinions in a desired direction, while simultaneously keeping themselves free from some of the reputational risks associated with lying. Therefore, a potentially attractive alternative for those wishing to influence the thoughts of others is to strategically describe the truth in a self-serving manner.

Doublespeak
Through the careful use of language, a speaker can attempt to make the unpleasant seem pleasant, the unethical seem righteous, and the horrific seem acceptable. The purposeful use of language to distort, obscure, or misrepresent an event or piece of information is referred to as doublespeak (Herman, 1992; Lutz, 1988, 1990, 2000). Doublespeak is language that only pretends to communicate; it is language carefully constructed to feature an incompatibility between what is said and reality. Thus, doublespeak does not involve accidental misuses of language, but rather the careful choosing of words to deliberately deceive. Furthermore, doublespeak does not involve making objectively false claims; rather, it involves the strategic use of language to stretch the truth in ways that impart a reality most desirable for the speaker. One form often discussed in the doublespeak literature is the euphemism (Lutz, 1987, 1990, 2000; Moore, 1991), which individuals can use strategically to communicate unpleasant acts of questionable morality (e.g., torture) in a way that makes these acts appear more innocuous (e.g., as "enhanced interrogation"). Similarly, dysphemisms can be used strategically to make one's opponents appear worse than they really are (e.g., referring to a group of activists as extremists). Of course, not all uses of euphemisms and dysphemisms are deceptive. Notably, doublespeak does not include the charitable use of euphemisms to spare feelings (e.g., stating that a loved one has "passed away" instead of "died"), as the reality in such situations is likely to be fully understood by both parties and thus no deception is intended or likely to occur. Nevertheless, euphemisms are often used for self-serving purposes (e.g., self-presentation) as opposed to out of concern for the sensibilities of others (McGlone & Batchelor, 2003).
Much has been written about the use of doublespeak in politics, advertising, education, science, and business, establishing doublespeak as a common real-world phenomenon (Gibson, 1975; Herman, 1992; Lutz, 1988, 1990, 2000; Moore, 1991; Pulley, 1994). In a society where an informed populace is relied upon to elect high-ranking officials and contribute to decisions of public policy, the deceptive and thought-shaping nature of doublespeak can have dire effects. On a smaller scale, doublespeak can be used to unfairly paint harmless individuals as dangerous or dangerous individuals as harmless. This potential may be greater than ever before, with the emergence of social media allowing the dissemination of claims, deceptive or not, to become increasingly widespread (Allcott & Gentzkow, 2017; Lazer et al., 2018; Matsa & Shearer, 2018). Thus, doublespeak can clearly have negative consequences when successful in its attempts to deceive. However, despite its noted use and potential for harm, no empirical research (to our knowledge) has explored the effectiveness, consequences, or mechanisms of doublespeak in a psychological context.

The present research
In the present work, we investigate the degree to which the strategic use of language characteristic of doublespeak1 can be used to influence people's evaluations of actions. Specifically, utilizing euphemisms and dysphemisms identified in the non-empirical literature on doublespeak, we assess whether the simple substitution of a euphemistic (agreeable) or dysphemistic (disagreeable) term in an action's description can make that action appear more or less acceptable (see Fig. 1). Notably, we avoid dividing pairs of terms into the plain-language and doublespeak categories in which they are typically discussed, as we feel that the deceptiveness of a term depends on the event it is being used to describe. For example, referring to a group of protesters as "political activists" or "political extremists" can range from perfectly accurate to greatly deceptive depending on the actions of the protesters. In fact, there are likely to be a host of situations in which the use of either term may not be considered dishonest by a majority of people. What is interesting to us in these cases is whether someone wishing to represent the protesters in either a positive or negative light can effectively and permissibly do so via the strategic use of language. While those writing about doublespeak appear to share the intuition that it is effective, criticizing the use of doublespeak in part for its assumed ability to unjustly bias recipients' perceptions of reality (Herman, 1992; Lutz, 1987, 1988, 1990, 2000; Moore, 1991), the degree to which doublespeak is able to influence assessments of events or actions remains an open question. In Study 1, we test whether the simple substitution of an agreeable term (e.g., working at a meat processing plant) in place of a semantically related disagreeable term (e.g., working at a slaughterhouse) influences the perceived acceptability of a hypothetical agent's actions.

1 In the present research, we strategically described actions using either euphemistic or dysphemistic terms in order to investigate whether this would bias participants' judgments of the described actions. Such strategic use of euphemisms and dysphemisms mirrors many real-world instances of doublespeak yet necessarily exists under constrained experimental conditions, describing fictional actions and lacking a purposefully deceptive speaker.
Consistent with the idea that through the careful use of language individuals can make the unpleasant seem pleasant, the unethical seem righteous, and the horrific seem acceptable (Herman, 1992; Lutz, 1988, 1990, 2000), we hypothesize that participants' evaluations of actions will be made more favorable through the inclusion of an agreeable term, as opposed to a disagreeable term, in each act's description. An important characteristic of doublespeak, distinguishing it from less subtle forms of deceptive language (e.g., lying), is that it does not feature objectively false claims. Rather, those well-versed in doublespeak attempt to influence the opinions of others by carefully and strategically representing the truth in a self-serving manner. For example, the skilled user of doublespeak can use euphemisms in an attempt to make their own actions appear more innocuous and dysphemisms in an attempt to make the actions of their opponents seem less acceptable. Importantly, the avoidance of objectively false statements may provide the user of doublespeak some protection from the reputational risks associated with lying. That is, stating something objectively false comes with the risk that this falsehood may be revealed and the liar punished, whereas stating something linguistically manipulative (yet not objectively false) is likely to avoid a majority of this risk by providing plausible deniability of dishonesty. Therefore, doublespeak may be harmful not only for its effectiveness but also for the protection it affords its users, potentially allowing a user multiple attempts at deception with minimal correction.
In Study 2, we assess whether the semantically related agreeable and disagreeable terms featured in Study 1 can be used interchangeably to describe the same unambiguous action in a way that a majority of people find honest and permissible. In Study 3, we evaluate the degree to which hypothetical speakers using these agreeable and disagreeable terms to describe an unambiguous action are judged as trustworthy and moral, and compare judgments of these speakers to those of speakers describing actions using objectively false lies. Consistent with the claim that the subtle linguistic manipulation characteristic of doublespeak protects its users from the reputational risks associated with lying, we hypothesize that the strategic use of agreeable and disagreeable terms (when describing an unambiguous action) will be judged as largely honest and permissible, and that hypothetical speakers using such strategic language will be judged as largely moral, trustworthy, and undeserving of criticism. Additionally, we predict that liars, using objectively false statements to describe the same unambiguous actions, will be judged less favorably (e.g., as less trustworthy) compared to those using agreeable and disagreeable terms. Overall, if people find it honest and permissible to use either more (e.g., enhanced interrogation) or less agreeable (e.g., torture) terms when describing a well-known action (Study 2), and if the choice of term can be shown to bias people's evaluation of the action described (Study 1), then a motivated speaker could presumably, through the strategic use of language, bias public opinion while simultaneously avoiding a majority of the reputational risk associated with less subtle forms of deception (Study 3).
Study 1 demonstrates that the strategic use of euphemistic and dysphemistic terms can bias participants' evaluations of actions. In Study 4, we investigate ambiguity as a potential mechanism helping to explain why participants' evaluations of actions are biased by a speaker's strategic choice of terms. We propose that while people may be susceptible to subtle forms of linguistic manipulation (such as doublespeak) when evaluating an action that is ambiguous (i.e., for which the specific details of the act are unknown), this susceptibility may be reduced as people become more knowledgeable about the act in question. That is, it may be the case that when the details of an action are unknown, people are free to imagine several types of acts, and in doing so may be guided by the language used in an act's description. In these circumstances, if euphemistic terms evoke less severe moral transgressions compared to dysphemistic terms, then this may offer an explanation as to how the strategic choice of semantically related agreeable and disagreeable terms influences people's evaluations of actions. In contrast, when fully knowledgeable about an act, both euphemistic and dysphemistic terms may evoke the same moral transgression (i.e., the one that occurred), reducing the influence of a speaker's strategic choice of terms. Therefore, we hypothesize that the influence of agreeable and disagreeable terms will depend on the level of ambiguity surrounding the actions being evaluated. As such, in Study 4, one half of participants evaluated actions for which the specific details of each act were left unstated (as in Study 1), while the other participants evaluated actions for which additional act information was provided. We predict that when actions are described with a high level of ambiguity, actions described using an agreeable term (e.g., enhanced interrogation) will be judged more favorably compared to those described with a disagreeable term (e.g., torture).
In contrast, when actions are described in greater detail, we predict that the use of agreeable and disagreeable terms will exhibit a reduced impact on participants' action evaluations.

Study 1
The goal of Study 1 was to assess the influence of a set of euphemistic (agreeable) and dysphemistic (disagreeable) terms on participants' evaluations of a variety of actions (see Fig. 1). We hypothesized that the substitution of agreeable terms (e.g., working at a meat processing plant) in place of semantically related disagreeable terms (e.g., working at a slaughterhouse) in action-depicting statements would lead to more positive evaluations of actions.

Participants
A sample of 404 participants (52% female; M age = 37.34, SD age = 11.63; 61% obtained a college degree or higher) was recruited from Amazon Mechanical Turk and received $1.00 upon completion of an eight-minute online questionnaire. Participants were recruited under the condition that they be U.S. residents and possess a Mechanical Turk HIT approval rate greater than or equal to 95%. For all studies, we recruited convenience samples from Amazon Mechanical Turk and selected sample sizes to yield a minimum of 80% power to detect effect sizes (Cohen's d) of 0.40 for planned analyses. 2 We collected our full sample prior to data analyses and report all data exclusions, all manipulations, and all measures used. Data and materials from all studies can be accessed via the following link: osf.io/ngfje.
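The power criterion described above can be made concrete with a short sketch. The authors report using G*Power for all power analyses; the Python/statsmodels calculation below is an illustrative stand-in for that tool, assuming an independent-samples t-test, α = .05, and (for the sensitivity calculation) roughly 202 participants per condition, as in Study 1's two-condition split of N = 404.

```python
# Illustrative power calculations mirroring the paper's criteria.
# The authors used G*Power; statsmodels is an assumed substitute here.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: sample size per group needed for 80% power to detect
# Cohen's d = 0.40 at alpha = .05 (two-sided, independent samples)
n_per_group = analysis.solve_power(effect_size=0.40, power=0.80, alpha=0.05)
print(round(n_per_group))  # → 99 (i.e., ~100 per group)

# Sensitivity: smallest effect detectable with 80% power given
# ~202 participants per condition, one-tailed (as in Study 1's
# item-level tests)
min_d = analysis.solve_power(nobs1=202, power=0.80, alpha=0.05,
                             alternative='larger')
print(round(min_d, 2))  # → 0.25, matching the reported sensitivity
```

Note that G*Power rounds sample sizes up, so the same inputs yield 100 participants per group there.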

Materials
Study 1 featured 13 pairs of euphemistic (agreeable) and dysphemistic (disagreeable) terms inspired by real-world examples of doublespeak (Herman, 1992; Lutz, 1987, 1988, 2000; Moore, 1991). All terms were presented within an action-depicting statement (see Appendix for a full list of items). Selection of these 13 items was informed by two initial studies (see supplementary materials Part B) assessing the similarity of paired agreeable and disagreeable statements and the effectiveness of these statements in biasing participants' action evaluations.

Action evaluation
For each action-depicting statement, participants responded to the question "How much do you agree or disagree with [Name's] actions?" using a 7-point scale that ranged from 1 (Strongly Disagree with) to 7 (Strongly Agree with).

Design and procedure
Participants were presented with and evaluated 13 action-depicting statements featuring either the relevant agreeable (e.g., Mitchell, a political activist, protesting outside of City Hall) or disagreeable term (e.g., Mitchell, a political extremist, protesting outside of City Hall). All participants were randomly assigned to one of two conditions, dictating the specific item set they were presented with. Participants randomly assigned to Condition A were presented with agreeable action-depicting statements for items 2, 4, 5, 6, 8, 9, 10, 12, and 13 (see Appendix) and disagreeable action-depicting statements for items 1, 3, 7, and 11, with the opposite being true for those randomly assigned to Condition B.3 Thus, at the item level, Study 1 employed a between-subjects design, with participants evaluating either an item's agreeable or disagreeable action-depicting statement. However, across all 13 items, participants were presented with and evaluated both agreeable and disagreeable statements. The order in which these 13 items were presented within each condition was randomized for each participant. Following all action evaluations, participants were asked "Did you perceive some of the language used to describe actions in this survey to be intentionally deceptive?" responding either "Yes" or "No." Participants who responded "Yes" to this item were asked to indicate the percentage of action-depicting statements that they felt featured deceptive language. Participants concluded the study by responding to five demographic questions (i.e., age, gender, highest level of education, political identity, and political ideology).4

Results and discussion
We fit a generalized mixed effects regression model using the lme4 package in R (Bates, Mächler, Bolker, & Walker, 2015) with Statement Type (dummy coded as 0 = Disagreeable and 1 = Agreeable) as a level 1 predictor and judgments nested within participants. There was a significant effect of Statement Type, B = .96, β = .26, 95% CI [.23, .28], SE = .05, t = 19.55, p < .001, Marginal R² = .07, with agreeable action-depicting statements (M = 3.99, SD = 1.82) being judged more favorably (i.e., agreed with more) than their disagreeable counterparts (M = 3.00, SD = 1.76). Additionally, for each item, we conducted a one-tailed independent-samples t-test comparing participants' agreement with an action when the action was depicted using an agreeable versus a disagreeable term (see Table 1). A sensitivity power analysis5 indicated 80% power to detect a minimum effect size of d = 0.25 for item-level one-tailed independent-samples t-tests. For all items, action descriptions featuring an agreeable term were judged as more acceptable by participants (all p's < .004). Overall, the results of Study 1 suggest that the strategic use of a euphemistic (dysphemistic) term in an act's description can make that action appear more (less) acceptable. Furthermore, as the presented items imitated real-world uses of doublespeak, these results are consistent with the belief that real-world doublespeak may often be effective in its attempts to influence people's judgments.
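The structure of the analysis above (a random-intercept model with Statement Type as a fixed effect and judgments nested within participants) can be sketched on simulated data. The authors fit this model in R with lme4; the Python/statsmodels analogue below is illustrative only, with made-up effect sizes and a continuous (unbounded) outcome standing in for the 7-point ratings.

```python
# Illustrative analogue of Study 1's mixed-effects analysis:
# rating ~ statement type, with a random intercept per participant.
# The authors used lme4 in R; numbers here are simulated, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_items = 100, 13

rows = []
for p in range(n_participants):
    leniency = rng.normal(0, 0.5)  # random participant intercept
    for i in range(n_items):
        agreeable = int(rng.integers(0, 2))  # 0 = disagreeable, 1 = agreeable
        # Assumed true effect of +1.0 scale points for agreeable wording
        rating = 3.0 + 1.0 * agreeable + leniency + rng.normal(0, 1.5)
        rows.append({"participant": p, "agreeable": agreeable,
                     "rating": rating})
df = pd.DataFrame(rows)

# Random-intercept model, judgments nested within participants
model = smf.mixedlm("rating ~ agreeable", df, groups=df["participant"])
result = model.fit()
print(round(result.params["agreeable"], 2))  # recovers the simulated ~1.0 effect
```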
One potential reason for this influence is that the deceptive nature of such strategic language may often go unnoticed: only 39% of our sample (n = 158) perceived any instances of deceptive language in Study 1, despite all participants having read several actions described using potentially self-serving euphemistic and dysphemistic terms. Participants who did recognize the presence of deceptive language indicated, on average, that 45% of actions were described using deceptive language. Notably, restricting our sample to only those endorsing the presence of deceptive language did not prevent agreeable statements from being judged more favorably than disagreeable statements. That is, participants detecting deceptive language judged actions described with an agreeable term (M = 4.12, SD = 1.83) as more acceptable than those described with a disagreeable term (M = 3.18, SD = 1.85), B = 0.89, β = 0.23, 95% CI [0.19, 0.27], SE = 0.08, t = 11.34, p < .001, Marginal R² = .06.

Study 2
Study 1 demonstrated the ability of the strategic use of semantically related euphemistic and dysphemistic terms to bias people's evaluations of actions. Nevertheless, the extent to which people view the use of these euphemistic and dysphemistic terms as honest remains unknown. Unlike liars, those well-versed in doublespeak avoid making objectively false claims, preferring instead to carefully and strategically stretch the truth in a self-serving manner. Stating something objectively false comes with the risk that this falsehood may be revealed, along with the dishonesty of the liar. Conversely, the strategic speaker's avoidance of objectively false claims may make their calculated use of euphemistic and dysphemistic terms appear honest. In Study 2, we investigate the extent to which participants judge the use of our euphemistic (agreeable) and dysphemistic (disagreeable) terms as deceptive, truthful, and permissible when used to describe an unambiguous action. Furthermore, we compare these judgments to those given to action descriptions featuring objectively false lies. We hypothesize that describing well-known actions using either agreeable or disagreeable terms will be judged as less deceptive, more truthful, and more permissible compared to descriptions of these actions that feature lies.

2 This statement reflects a minimum power requirement followed for Studies 1-4, with a majority of planned analyses yielding 80% power to detect effect sizes smaller than d = 0.40. Practical constraints (e.g., cost) also informed sample size selection for Studies 1-4, explaining differences in power across studies. Notably, for each study, we report the results of sensitivity power analyses detailing the smallest effect size that a collected sample could detect with 80% power. All power analyses reported in this manuscript were calculated using G*Power (Faul, Erdfelder, Lang, & Buchner, 2007) with the assumption of α = 0.05.

3 Our intention was to have participants in Condition A be presented with agreeable statements for all even-numbered items and disagreeable statements for all odd-numbered items, with the opposite being true for those assigned to Condition B. However, an error during survey creation resulted in the item sets specified above, representing a limitation of Study 1. Nevertheless, this error was corrected for all remaining studies, most notably for Study 4, in which we observed a similar pattern of results when randomly assigning participants to evaluate all agreeable or all disagreeable statements.

4 Questions assessing participants' political identity and ideology were collected for exploratory purposes. As such, analyses involving these items are reported in the supplementary materials (Part C).

5 All sensitivity power analyses presented in this manuscript were calculated based on the design used and the sample size analyzed.

A.C. Walker et al.

Participants
Three hundred and one U.S. residents (52% female; M age = 37.52, SD age = 11.63; 55% obtained a college degree or higher) were recruited from Amazon Mechanical Turk using the same recruitment criteria as in Study 1. Those who participated in Study 1 were restricted from participating in Study 2.

Materials
In order to assess how honest participants viewed the use of our agreeable and disagreeable terms, we created a detailed action description for each of our 13 items. For each item, participants were presented with a detailed action description (referred to as a "factual event description") and judged how deceptively, truthfully, and permissibly an action-depicting statement represented the factually described event. Each factual event description was created such that it detailed an action in an unambiguous way and described an action which could plausibly be referred to using both the relevant agreeable and disagreeable terms (for an example see Fig. 2; for a full list of items see Part A of the supplementary materials). Importantly, participants were informed that "all event descriptions are factual and contain nothing but truth. Therefore, any potential difference between an event description and a statement is a reflection of the statement not being completely true." Additionally, we created a lie statement for each of our 13 items (see Fig. 2) in order to compare the truthfulness, deceptiveness, and permissibility of an outright lie to that of the presented agreeable and disagreeable statements.

Deception
We assessed the extent to which participants viewed action-depicting statements as deceptively representing a described event with the item "How deceptive is this statement?" Responses to this item were provided on a 7-point scale that ranged from 1 (Not at all Deceptive) to 7 (Very Deceptive).

Truthfulness
We examined the degree to which participants viewed each actiondepicting statement as truthfully representing a factually described event with the item "How true is this statement?" Participants provided their responses to this item on a 7-point scale ranging from 1 (Completely False) to 7 (Completely True).

Permissibility
For each action-depicting statement, we asked "Setting aside how deceptive this statement may or may not be, is this statement strictly speaking a lie?" Participants responded to this item by selecting one of two response options: "Yes, this statement is strictly speaking a lie" or "No, this statement is at least somewhat true."

Design and procedure
Participants were presented with and judged 13 action-depicting statements in relation to a factual event description (see Fig. 2). For each item, participants were presented with either an agreeable, disagreeable, or lie statement (each with equal likelihood) and judged how deceptively, truthfully, and permissibly this statement represented the corresponding factual event description. Thus, at the item level, Study 2 employed a between-subjects design, with participants randomly assigned to evaluate either an item's agreeable, disagreeable, or lie statement. However, across all 13 items, this random assignment resulted in participants evaluating statements of all three types (agreeable, disagreeable, and lie). The order in which these 13 items were presented was randomized for each participant. Participants concluded the study by responding to five demographic questions asking them to report their age, gender, level of education, political identity, and political ideology.

Results and discussion
Participants' mean deceptiveness, truthfulness, and permissibility judgments for all agreeable and disagreeable terms can be viewed in Table 2. We fit a generalized mixed effects regression model with Statement Type (dummy coded as 0 = Lie statements and 1 = Agreeable and Disagreeable statements) as a level 1 predictor and judgments nested within participants for the dependent variables of deception, truthfulness, and permissibility. These comparisons allowed us to examine whether agreeable and disagreeable action-depicting statements could be distinguished from outright lies, such that they would be judged by participants as less deceptive, more truthful, and more permissible. We observed a significant effect of Statement Type for each dependent variable (see Table 2). We also compared participants' judgments of the deceptiveness, truthfulness, and permissibility of each agreeable and disagreeable action-depicting statement to those made for the relevant lie statement at the item level using independent-samples t-tests (see Table 2). A sensitivity power analysis indicated 80% power to detect a minimum effect size of d = 0.40 for item-level two-tailed independent-samples t-tests. Additionally, consistent with the idea that action descriptions featuring euphemistic or dysphemistic terms are primarily judged as honest, participants judged statements featuring agreeable and disagreeable terms as significantly less deceptive, t(300) = −9.29, p < .001, d = 0.54, and more truthful, t(300) = 23.55, p < .001, d = 1.35, than the midpoint value of each scale. Similarly, comparing participants' permissibility judgments for agreeable and disagreeable statements to the "disagreement" proportion of .50 revealed that participants reliably judged agreeable and disagreeable statements as "somewhat true," t(300) = 29.90, p < .001, d = 1.74. A similar result was observed when analyzing only those participants (n = 142) who exclusively endorsed lie statements as lies, t(141) = 26.79, p < .001, d = 2.19.
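The comparisons against the scale midpoint above take the form of one-sample t-tests. The sketch below illustrates that form in Python with scipy on simulated ratings (not the authors' software or data); the location and spread of the simulated ratings are assumptions chosen only to echo the direction of the reported effects.

```python
# Illustrative form of the midpoint comparisons: a one-sample t-test
# asking whether mean truthfulness ratings of agreeable/disagreeable
# statements exceed the midpoint (4) of the 7-point scale.
# Data are simulated; these statistics are not the paper's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated truthfulness ratings for 301 participants, centered
# well above the scale midpoint (assumed values for illustration)
truthfulness = rng.normal(loc=5.5, scale=1.1, size=301)

midpoint = 4.0
t, p = stats.ttest_1samp(truthfulness, popmean=midpoint,
                         alternative="greater")

# Cohen's d for a one-sample test: mean difference / sample SD
d = (truthfulness.mean() - midpoint) / truthfulness.std(ddof=1)
print(round(t, 2), p < .001, round(d, 2))
```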
Overall, this pattern of results suggests that the presented agreeable and disagreeable terms can be strategically used to describe the same unambiguous actions in a way that allows them to be easily distinguished from factually incorrect lies and judged as more honest than dishonest.
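The midpoint comparisons above amount to one-sample t-tests with Cohen's d computed against the scale midpoint. A minimal pure-Python sketch of that calculation follows; the simulated ratings and the 7-point scale midpoint of 4 are illustrative assumptions, not the study's data.

```python
import math
import random

def one_sample_t(ratings, midpoint):
    """One-sample t-test of a mean rating against a fixed scale midpoint.

    Returns (t, d), where d = (mean - midpoint) / sd is the
    standardized effect size reported alongside each test.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((x - mean) ** 2 for x in ratings) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    t = (mean - midpoint) / (sd / math.sqrt(n))
    d = (mean - midpoint) / sd
    return t, d

# Hypothetical truthfulness ratings on a 7-point scale (midpoint = 4):
# a positive t (and d) indicates ratings reliably above the neutral point.
random.seed(1)
ratings = [min(7, max(1, round(random.gauss(5.4, 1.1)))) for _ in range(301)]
t, d = one_sample_t(ratings, midpoint=4)
print(f"t({len(ratings) - 1}) = {t:.2f}, d = {d:.2f}")
```

Note that d here uses the sample standard deviation of the ratings themselves, which is the conventional effect size for a one-sample design.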
We conducted exploratory analyses investigating whether participants viewed agreeable action-depicting statements as less honest than their disagreeable counterparts. While strategic language (such as doublespeak) can be used to make the actions of opponents appear less favorable, it may more commonly be used to describe one's own actions in a way that conceals their unpleasantness. As such, the selected euphemistic terms, inspired by real-world uses of doublespeak, may be purposefully more evasive, and therefore more deceptive, than their dysphemistic counterparts, on account that their goal is often to conceal the unpleasant nature of the actions they describe. Similarly, comparing participants' permissibility judgments for agreeable statements to the "disagreement" proportion of .50 revealed that participants reliably judged agreeable statements as "somewhat true," t(297) = 22.49, p < .001, d = 1.33. Therefore, while agreeable statements were seen as less honest than their disagreeable counterparts, they were still judged as largely honest and were easily distinguished from lies.

Note. All statements were judged with relation to a factual event description. Participants' mean deceptiveness and truthfulness judgments for each item are shown above, as is the percentage of participants endorsing each item as "somewhat true." Inferential statistics represent the results of independent-samples t-tests examining differences between judgments of each statement and its corresponding lie statement using a Bonferroni-adjusted alpha level of .002 per test (.05/26). A = Agreeable; D = Disagreeable. * p < .002.

6 It is noteworthy that, as all lie statements featured objectively false claims, participants who endorsed a lie statement as true on one or more occasions demonstrated some lack of attention to the presented statements. Nevertheless, removing these participants did not alter the observed pattern of results (see supplementary materials Part C).

Study 3
In Study 2, participants judged action-depicting statements featuring agreeable and disagreeable terms as more honest (e.g., more truthful) compared to lies. Nevertheless, how people judge those using such strategic language remains unknown. In Study 3, we investigate how people evaluate the character (e.g., trustworthiness) of hypothetical speakers using agreeable and disagreeable terms to describe unambiguous actions. As in Study 2, we compare evaluations of these speakers' character with the character evaluations of speakers describing the same set of unambiguous actions with objectively false lies. We hypothesize that speakers using euphemistic and dysphemistic terms will be judged as more moral, more trustworthy, and less deserving of criticism compared to liars. Such a finding would provide further support for the claim that individuals strategically using euphemisms and dysphemisms in an attempt to bias peoples' perceptions of actions could do so while largely avoiding the reputational risk associated with less subtle forms of linguistic manipulation (e.g., lying).

Participants
A sample of 401 participants (43% female; M age = 41.09, SD age = 13.91; 69% obtained a college degree or higher) was recruited from Amazon Mechanical Turk and received $1.00 upon completing an eight-minute online questionnaire. Participants were recruited under the condition that they be U.S. residents and possess a Mechanical Turk HIT approval rate greater than or equal to 99%. To help ensure data quality, all participants were required to correctly answer two simple questions prior to their participation. Those who participated in Studies 1 or 2 were restricted from participating in Study 3.

Materials
The materials used in Study 3 were identical to those used in Study 2, with the exception that all statements (agreeable, disagreeable, and lie) were now attributed to a hypothetical member of the public with full knowledge of the action they were describing.

Measures
Following the presentation of each statement, participants were asked to judge the person making the statement on three different dimensions presented in a randomized order within a matrix table.

Trustworthiness
Participants judged the trustworthiness of each hypothetical agent on the basis of the public statement they made and its correspondence with the relevant factual event description using a 7-point scale that ranged from "Untrustworthy" to "Trustworthy."

Moral character
Participants assessed the morality of each hypothetical agent using a 7-point scale that ranged from "Immoral" to "Moral."

Criticism
Participants indicated how much criticism they felt each hypothetical agent deserved using a 7-point scale that ranged from "Deserves No Criticism" to "Deserves Criticism."

Design and procedure
The design and procedure of Study 3 mirrored those of Study 2. Participants were presented with the action-depicting statements of 13 hypothetical agents and judged each agent on the basis of the statement they made and its correspondence to a factual event description. For each item, participants were presented, with equal likelihood, with an agent making either an agreeable, disagreeable, or lie statement, and judged each agent on three dimensions (i.e., trustworthiness, moral character, and criticism deserved). Therefore, at the item level, Study 3 featured a between-subjects design, with participants randomly assigned to evaluate a hypothetical agent asserting either an item's agreeable, disagreeable, or lie statement. However, across all 13 items, this random assignment resulted in participants evaluating agents making statements of all three types (agreeable, disagreeable, and lie). The order in which these 13 items were presented was once again randomized for each participant. Participants concluded the study by completing the same set of demographic questions administered in Studies 1 and 2.

Results and discussion
Participants' mean trustworthiness, moral character, and criticism judgments for all agreeable and disagreeable terms can be viewed in Table 3. As in Study 2, we fit a generalized mixed effects regression model with Statement Type (dummy coded as 0 = Lie statements and 1 = Agreeable and Disagreeable statements) as a level 1 predictor and judgments nested within participants for each dependent variable assessed in Study 3 (i.e., trustworthiness, moral character, and criticism). These comparisons allowed us to examine whether those using euphemistic and dysphemistic terms to describe a well-known action were judged as more trustworthy, more moral, and less deserving of criticism compared to those lying about the action that took place. We observed a significant effect of Statement Type on each of these judgments. We also compared participants' judgments of agents making agreeable and disagreeable action-depicting statements to judgments of agents making the corresponding lie statement at the item level using independent-samples t-tests (see Table 3). A sensitivity power analysis indicated 80% power to detect a minimum effect size of d = 0.34 for item-level two-tailed independent-samples t-tests.
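A sensitivity power analysis of this kind can be approximated with the standard normal-approximation formula for a two-tailed independent-samples t-test. The sketch below is a rough illustration, not the authors' procedure; the group size of 134 is a hypothetical even split of N = 401 across the three statement types, used only to show that the formula lands near the reported d = 0.34.

```python
import math
from statistics import NormalDist

def min_detectable_d(n_per_group, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable with the given power in a
    two-tailed independent-samples t-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed criterion
    z_power = NormalDist().inv_cdf(power)          # power requirement
    return (z_alpha + z_power) * math.sqrt(2 / n_per_group)

# With ~134 participants per statement type per item (hypothetical split),
# the minimum detectable effect is close to the d = 0.34 reported above.
print(round(min_detectable_d(134), 2))  # → 0.34
```

Dedicated tools (e.g., G*Power) use the noncentral t distribution and give slightly more exact values, but the normal approximation is adequate at these sample sizes.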
Consistent with the idea that the potentially manipulative use of euphemistic and dysphemistic terms often fails to result in severe reputational consequences, participants judged agents using agreeable and disagreeable terms as significantly more trustworthy, t(400) = 9.14, p < .001, d = 0.46, more moral, t(400) = 6.35, p < .001, d = 0.32, and less deserving of criticism, t(400) = −3.19, p = .002, d = 0.16, than the midpoint value of each scale. Overall, this pattern of results suggests that those wishing to strategically use euphemistic and dysphemistic terms to bias participants' evaluations of actions may be able to do so with minimal reputational costs.
We conducted exploratory analyses investigating whether participants judged agents using agreeable action-depicting statements less favorably than those using their disagreeable counterparts. These analyses revealed that agents making agreeable statements were viewed as less trustworthy than agents making disagreeable statements. Therefore, while agents using agreeable terms to describe unambiguous actions were judged less favorably (i.e., as less trustworthy, less moral, and more deserving of criticism) than agents using disagreeable terms, they were still judged considerably more favorably than liars.

Study 4
Study 1 demonstrated how the strategic use of semantically related agreeable (e.g., enhanced interrogation) and disagreeable (e.g., torture) terms biases peoples' evaluations of actions. However, Study 1 had participants evaluate actions that were somewhat ambiguous. Thus, one may wonder how susceptible people are to the strategic use of euphemistic and dysphemistic terms when they are more knowledgeable about the acts they are evaluating. It may be the case that when the details of an act are unknown people have the affordance to imagine several types of actions, and in doing so may be easily influenced by the language used in an act's description. In these circumstances, if more agreeable terms evoke less severe moral transgressions compared to disagreeable terms, one may expect actions to be viewed more favorably when described with a euphemistic term. In contrast, when fully knowledgeable about an act, both agreeable and disagreeable terms may evoke the same moral transgression (i.e., the one that occurred), reducing the influence of a speaker's linguistic choices. In Study 4, we assess the impact of ambiguity by examining the influence of agreeable and disagreeable terms in situations of both high (i.e., no additional act information presented) and low (i.e., additional act information presented) act ambiguity. We hypothesize that the influence of agreeable and disagreeable terms will depend on the level of ambiguity surrounding the actions described, such that we expect these terms to have a reduced impact on participants' evaluations of actions when actions are described in greater detail.

Participants
A sample of 800 U.S. residents was recruited from Amazon Mechanical Turk using the same recruitment criteria as Study 3. Those who participated in Studies 1, 2, or 3 were restricted from participating in Study 4. We excluded data from eight participants who reported responding randomly at some point during the experiment, 7 leaving data from 792 participants (52% female; M age = 42.11, SD age = 13.29; 67% obtained a college degree or higher) to be analyzed. This experiment was preregistered through the Open Science Framework (osf.io/m3eqk).

Note. All public statements made by agents were judged with relation to a factual event description. Participants' mean trustworthiness, moral character, and criticism judgments for each item are shown above. Inferential statistics represent the results of independent-samples t-tests examining differences between judgments of agents making either an agreeable or disagreeable statement and a corresponding liar, using a Bonferroni-adjusted alpha level of .002 per test (.05/26). A = Agreeable; D = Disagreeable. * p < .002.
7 Specifically, these participants responded "Yes" to the item: "Is there any reason that we shouldn't use your data (e.g., did you randomly select responses at any point during the survey)?" This exclusion criterion was pre-registered.

Materials
Study 4 presented the agreeable and disagreeable action-depicting statements used in Studies 1-3. For participants randomly assigned to the Details Condition, these action-depicting statements were presented along with additional information about each act, mirroring the information provided in the factual event descriptions of Studies 2 and 3 (see Fig. 3). Conversely, participants in the No Details Condition viewed agreeable and disagreeable statements without any additional information about the described action (as in Study 1).

Measures
Study 4 used the same measure (i.e., action evaluation task) as Study 1. That is, for each action-depicting statement, participants responded to the question "How much do you agree or disagree with [Name's] actions?" using a 7-point scale that ranged from 1 (Strongly Disagree with) to 7 (Strongly Agree with).

Design and procedure
Study 4 featured a 2 (Statement Type: agreeable, disagreeable) x 2 (Information Type: details, no details) between-subjects design. Based on this design, participants in Study 4 were randomly assigned to exclusively evaluate either agreeable or disagreeable action-depicting statements. Likewise, based on random assignment, participants evaluated all 13 action-depicting statements with or without the presence of additional act details. The order in which these 13 items were presented was once again randomized for each participant. Following all action evaluations, participants were asked whether they perceived some of the language used in the current study to be intentionally deceptive, providing either a "Yes" or "No" response. Participants who responded "Yes" to this item were asked to indicate the percentage of action-depicting statements which they felt featured deceptive language. As in Studies 1-3, participants concluded Study 4 by responding to five demographic questions (i.e., age, gender, highest level of education, political identity, and political ideology).

Results and discussion
In order to assess whether describing actions in greater detail reduced the influence of agreeable and disagreeable terms, we conducted a 2 (Statement Type: agreeable, disagreeable) x 2 (Information Type: details, no details) between-subjects ANOVA with participants' action evaluations as the dependent variable. 8 Sensitivity power analyses indicated 80% power to detect minimum effect sizes of ηp² = .010 for the conducted between-subjects ANOVA and d = 0.28 for follow-up two-tailed independent-samples t-tests. Consistent with the results of Study 1, we observed a main effect of Statement Type, F(1, 788) = 200.06, p < .001, ηp² = .202, such that actions described using an agreeable term (M = 3.56, SD = 0.76) were judged more positively (i.e., agreed with more) than those described using a semantically related disagreeable term (M = 2.91, SD = 0.68). We also observed a main effect of Information Type, F(1, 788) = 44.63, p < .001, ηp² = .054, such that actions described in greater detail (M = 3.08, SD = 0.70) were judged less positively than those presented without additional act information (M = 3.39, SD = 0.84). Crucially, we found a Statement Type by Information Type interaction, F(1, 788) = 121.62, p < .001, ηp² = .134, as the effect of Statement Type was reduced when participants were provided with more details about the actions they evaluated (see Fig. 4). 9 Follow-up independent-samples t-tests revealed a large effect of Statement Type within No Details conditions (i.e., agreeable/no details and disagreeable/no details conditions), t(393) = 19.10, p < .001, d = 1.93, with agreeable statements (M = 3.97, SD = 0.56) being judged more positively than disagreeable statements (M = 2.80, SD = 0.65).
A smaller effect of Statement Type was observed within Details conditions, t(395) = 2.07, p = .039, d = 0.20, once again revealing participants' more positive evaluations of actions described using an agreeable (M = 3.15, SD = 0.70) as opposed to disagreeable term (M = 3.01, SD = 0.69). Overall, the results of Study 4 suggest that peoples' evaluations of actions become less susceptible to a speaker's linguistic choices as they become more knowledgeable about the actions they are evaluating. Nevertheless, even when provided a detailed description of each action, participants still indicated greater agreement with actions described using an agreeable as opposed to disagreeable term. Thus, understanding the details surrounding an action may not make one immune to having their opinion of that action be swayed by a linguistically manipulative speaker.
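These follow-up comparisons are standard pooled-variance independent-samples t-tests, and the interaction amounts to the agreeable-minus-disagreeable gap shrinking once act details are supplied. A minimal sketch with made-up ratings (not the study's data):

```python
import math

def independent_t(a, b):
    """Pooled-variance independent-samples t-test, returning (t, d)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))
    d = (ma - mb) / math.sqrt(pooled)  # Cohen's d from the pooled SD
    return t, d

# Hypothetical 7-point action evaluations in each of the four cells.
agree_no_details = [4, 4, 5, 4, 5, 4]
disagree_no_details = [3, 2, 3, 3, 2, 3]
agree_details = [3, 3, 4, 3, 3, 3]
disagree_details = [3, 3, 3, 2, 3, 3]

t_nd, d_nd = independent_t(agree_no_details, disagree_no_details)
t_d, d_d = independent_t(agree_details, disagree_details)

# The interaction pattern: a large Statement Type gap with no details,
# a much smaller gap once act details are provided.
print(f"No details: d = {d_nd:.2f}; details: d = {d_d:.2f}")
```

A full 2 x 2 ANOVA would additionally partition the variance into the two main effects and the interaction, but the difference-of-gaps view above captures the reported pattern.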
Similar to Study 1, and consistent with participants distinguishing our agreeable and disagreeable statements from lies in Studies 2 and 3, less than half of our sample (42%, n = 331) perceived any instances of deceptive language in Study 4 (with these participants on average indicating that 40% of actions were described using deceptive language). The between-subjects design of Study 4 allowed us to explore whether participants detected more deceptive language a) when exposed to agreeable as opposed to disagreeable statements and b) when provided with additional act details. We conducted a 2 (Statement Type: agreeable, disagreeable) x 2 (Information Type: details, no details) between-subjects ANOVA with participants' assessments of the percentage of actions described using deceptive language as the dependent variable. This analysis demonstrated a main effect of Statement Type, F(1, 788) = 5.34, p = .021, ηp² = .007, revealing that participants detected a greater proportion of actions being described with deceptive language when presented with agreeable (M = .19, SD = .25) as opposed to disagreeable statements (M = .15, SD = .24). Such a finding is consistent with agreeable statements being judged as more deceptive, less truthful, and less permissible in Study 2. We observed no significant effect of Information Type. Therefore, providing additional details about each action did not appear to make either agreeable or disagreeable statements appear more deceptive.
Lastly, we conducted item-level analyses assessing the influence of Statement Type (agreeable, disagreeable) within both Details and No Details conditions. That is, for both Details and No Details conditions, we conducted a one-tailed independent-samples t-test for each item comparing participants' agreement with an action when the action was described using an agreeable versus disagreeable term (see Table 4). A sensitivity power analysis indicated 80% power to detect a minimum effect size of d = 0.25 for these analyses. Further demonstrating the reduced influence of agreeable and disagreeable terms in the Details condition, the advantage for actions described using an agreeable term in this condition was found to be statistically significant (p < .05) for only 3 of the 13 items. 10 Conversely, in the No Details condition, all items were judged more positively when described with an agreeable as opposed to disagreeable term (all p's < .001).
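The Bonferroni adjustment used throughout these item-level analyses simply divides the alpha level by the number of tests. A minimal sketch (the p-values below are illustrative, not the study's):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return a flag per test indicating whether it survives a
    Bonferroni correction: each p is compared to alpha / n_tests."""
    adjusted_alpha = alpha / len(p_values)
    return [p < adjusted_alpha for p in p_values]

# With 26 tests (13 items x 2 information conditions), the per-test
# threshold is .05 / 26 ≈ .0019, matching the adjusted alpha of .002.
flags = bonferroni_significant([0.0005] * 3 + [0.01] * 23)
print(sum(flags))  # → 3
```

Bonferroni controls the family-wise error rate at the cost of power, which is why effects significant at p < .05 can fail to survive the .002 threshold.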

General discussion
The present work demonstrates how peoples' evaluations of actions can be biased by the strategic use of euphemistic (agreeable) and dysphemistic (disagreeable) terms. In Studies 1 and 4, we show how replacing a disagreeable term (e.g., "Emily working at a slaughterhouse") with a semantically related agreeable term (e.g., "Emily working at a meat processing plant") in an act's description can make that action appear more acceptable. Additionally, we demonstrate how the influence of agreeable and disagreeable terms relies upon the details of actions being somewhat ambiguous, as providing participants with more information about each action reduced (but did not eliminate) the impact of each action's linguistic framing. Therefore, it appears that the effectiveness of such strategic language may rely upon its audience having some uncertainty about the action, event, or idea they are evaluating, such that when people are knowledgeable about a topic they appear less susceptible to a speaker's strategic choice of terms.
Despite their impact on action evaluations, participants in Study 2 viewed the use of agreeable and disagreeable terms as largely honest and permissible. Most notably, participants readily distinguished the strategic use of agreeable and disagreeable terms from that of lies, judging both agreeable and disagreeable action-depicting statements as less deceptive and more truthful compared to lies when used to describe the same unambiguous actions. Similarly, Study 3 exhibits how those using either agreeable or disagreeable terms to describe a well-known action are judged more favorably (i.e., as more trustworthy, more moral, and less deserving of criticism) compared to liars. Taken together, these data suggest that a speaker can, through the careful use of language, sway the opinions of others in a direction congruent with their individual goals while largely avoiding the risks and condemnation associated with less subtle forms of linguistic manipulation.
Fig. 3. An example of an item used in Study 4, for which half of participants were provided with additional details regarding the actions they evaluated. These details were presented along with the corresponding agreeable or disagreeable action-depicting statements, for which participants stated their level of agreement with each action described.

Fig. 4. Study 4 results. Each bar displays participants' mean action evaluation judgments within one of the four conditions to which participants were randomly assigned in Study 4. Error bars represent 95% confidence intervals.

Note. Results of one-tailed independent-samples t-tests comparing participants' agreement with an action when the action was depicted using an agreeable versus disagreeable term. Degrees of freedom were adjusted for items in which a Levene's test indicated unequal variances. Positive effect sizes represent items for which agreeable action-depicting statements were judged more favorably than disagreeable action-depicting statements. All differences in the No Details condition, and one difference in the Details condition, remained statistically significant when using a Bonferroni-adjusted alpha level of .002 per test (.05/26) to correct for multiple comparisons.

Given that people can often benefit from successfully deceiving others, many investigating the evolution of language have wondered
how the majority of linguistic communication remains honest (Fitch, 2010). One hypothesis is that the social and reputational sanctions used to punish known liars encourage people to be honest in their communication with others (Oesch, 2016, 2017; Silk, Kaldor, & Boyd, 2000). Nevertheless, empirical findings demonstrating the reputational cost of lying are limited (Oesch, 2017). In Study 3, we find evidence of such reputational costs, with participants judging liars as highly immoral, untrustworthy, and deserving of criticism. Interestingly, these reputational consequences appeared to be reserved for liars, as those using potentially self-serving euphemistic and dysphemistic terms to describe well-known actions were on average judged above the midpoint of the administered moral character and trustworthiness scales. Therefore, the risk of social and reputational costs argued to be a primary deterrent of lying may be far less effective in dissuading individuals from using more subtle forms of linguistic deception (e.g., doublespeak) to their benefit.

As showcased in the literature surrounding doublespeak (Herman, 1992; Lutz, 1988, 1990, 2000), there exist countless ways to use language in an attempt to bias peoples' opinions of actions, ideas, or people. In the current study we demonstrate how a diverse set of terms can influence peoples' attitudes toward a diverse set of actions. Notably, as doublespeak is often utilized in highly contentious political and moral domains, we examined the influence of the strategic choice of euphemistic and dysphemistic terms in the domain of moral reasoning, having participants evaluate many actions for which they may have already held a strong prior belief. These actions included interrogative acts, acts of political protest, the boycotting of an entertainer for objectionable speech, sexual acts, and acts of military violence.
Thus, the present work demonstrates how the subtle choice of more or less agreeable terms in an act's description can influence peoples' judgments of an act even in contentious moral contexts. Similarly, it shows how even the most subtle of linguistic changes (e.g., the substitution of a single euphemism or dysphemism in an act's description) can bias peoples' evaluations of actions. Attempts at influencing the opinions of others with language therefore need not be drastic to be effective.

Ambiguity as a mechanism for linguistic manipulation
The results of Study 4 suggest that possessing more knowledge about an act protects people from being biased by the language used in an act's description. When participants were presented with a more detailed description of each action during action evaluations, agreeable and disagreeable terms were substantially less effective in influencing participants' judgments. Therefore, we propose that while people may be highly influenced by the strategic use of euphemistic and dysphemistic terms when evaluating an action that is ambiguous (i.e., for which the specific details of the act are unknown), this susceptibility is reduced as people become more knowledgeable about the act in question. That is, when actions are described with some degree of ambiguity, people have the affordance to imagine several types of actions. In these cases, the use of a euphemistic term may guide people to imagine actions that are more acceptable, whereas the use of a dysphemistic term may guide people to imagine more severe moral transgressions. Conversely, when a person is fully knowledgeable about the act they are evaluating, this reduction in ambiguity may restrict people to imagining a similar moral transgression regardless of how an action is described. For example, for someone who witnessed an interrogative act firsthand, a speaker using the terms "torture" or "enhanced interrogation" to refer to that act may conjure up identical images of the act that was witnessed. However, if largely unaware of the act that took place (besides knowing that some act of interrogation occurred), a person may imagine more or less agreeable acts depending on whether the act is described as "torture" or "enhanced interrogation." Interestingly, the results of Study 4 suggest that even in situations of low ambiguity, the strategic use of euphemisms and dysphemisms when describing an action may still exert some small influence on peoples' evaluations of that action.
While these results are in no way definitive, such a finding would not be unprecedented, as prior work has shown the ability of subtle linguistic changes to influence peoples' judgments of car crashes they witnessed and food that they tasted (Levin, 1987; Levin & Gaeth, 1988; Loftus & Palmer, 1974). Therefore, as memory is not impervious to linguistic suggestion, possessing knowledge of an act likely does not leave one immune to linguistic manipulation, perhaps especially as time passes between when a person witnessed or learned about an act and when they encounter a deceptive description of that action.

Manipulative language in the real-world
Of course, people often make moral judgments and form important beliefs without full knowledge about the actions they are judging or the ideas they are evaluating. In today's information age, characterized by attention-grabbing headlines and 280-character quips, information may be missed or necessarily left out. In this context, the potential influence of linguistic manipulation may be great. While much has been written about the proliferation and dangers of overtly biased and objectively false reporting (Allcott & Gentzkow, 2017; Lazer et al., 2018; Lewandowsky, Ecker, & Cook, 2017; Vosoughi, Roy, & Aral, 2018), the potential for influential figures to shape public opinion in their favor with subtle and strategic manipulations of language may be just as worrisome. One reason this form of deception may be especially harmful is its seemingly covert nature, as well as the plausible deniability of dishonesty it affords its users. Thus, an influential figure spouting objectively false claims may be exposed and suffer reputational damage, making their future claims less credible in the eyes of many. However, if the same individual were to use language strategically to represent events in their favor (a tactic that the current study demonstrates can be effective), the manipulation is likely to be far harder to detect, let alone expose, 11 protecting the individual's credibility from reputational damage. With this lower level of risk, individuals may be able to utilize such forms of linguistic manipulation, such as doublespeak, often without correction.
An interesting question surrounding the use of doublespeak and similar forms of linguistic manipulation is the role it plays in our increasingly polarized societies (USA: Dimock, Kiley, Keeter, & Doherty, 2014; Motyl, 2016; Canada: Proudfoot, 2019; USA & Canada: Frimer, Skitka, & Motyl, 2017; UK: Ridgwell, 2018). As people may be motivated to seek out news sources that reinforce their existing viewpoints (Frimer et al., 2017; Washburn & Skitka, 2018), they may also be unknowingly and selectively exposing themselves to particular self-serving linguistic framings of popular events, making their beliefs appear more justified than they otherwise would given a neutral framing. One can speculate whether exclusive exposure to objectively true yet linguistically biased claims unnecessarily furthers the divide between ideologically opposed individuals, potentially hindering productive communication.

Limitations and future directions
While the variability of the presented items' euphemistic and dysphemistic terms is in some ways a strength speaking to the generalizability of our effect, a related limitation is that we are unsure what properties of these terms make them effective. One possibility is that the ability of these terms to bias participants' evaluations of actions stems from their diverging valences. Unsurprisingly, messages framed in a more positive manner have been shown to evoke more positive emotions, with negatively framed messages arousing more negative emotions (Bilandzic et al., 2017; Nabi et al., 2018; Shen & Dillard, 2007), at times directed at the message sender (Erlandsson, Nilsson, & Västfjäll, 2018). Thus, participants may have endorsed more positive evaluations of actions described using more positively valenced agreeable terms (e.g., enhanced interrogation), on account that these terms may have evoked more positive emotions, or at least fewer negative emotions, compared to their disagreeable counterparts (e.g., torture). Another possibility is that agreeable terms simply evoked less arousal than disagreeable terms, resulting in participants responding with less severe moral condemnation when evaluating actions. As we primarily had participants evaluate what were largely perceived to be immoral acts (e.g., acts of violence), it is possible that simply using terms that evoked more or less arousal effectively biased participants' judgments.

11 Support for this claim comes from the present work, in which we find that a) many people do not endorse the presence of deceptive language when presented with several actions described using euphemistic and dysphemistic terms shown to bias peoples' evaluations of actions, and b) a majority of people endorse both euphemistic and dysphemistic action descriptions as "at least somewhat true."
Additionally, related to the observed effect of ambiguity in Study 4, it is possible that euphemistic terms operated by representing superordinate categories that include a larger array of actions (including more agreeable actions) compared to the related disagreeable terms. For example, the term "enhanced interrogation," while including actions that can be referred to as torture, may simply encompass far more acts (including more agreeable acts) compared to the term torture and as such may cause participants to judge a somewhat ambiguous action less harshly. Of course, some combination of these three mechanisms may be involved in making the use of agreeable and disagreeable terms lead to more or less favorable evaluations of actions. Future studies should attempt to tease apart these factors in order to learn more about what makes the strategic selection of semantically related euphemistic and dysphemistic terms effective for altering peoples' evaluations of actions.
Another potential limitation of the current study is that participants were not presented with any information about the person describing each action. Of course, in a real-world context people can be attentive to source information, specifically whether a piece of information is possibly biased by a source's favored beliefs or individual goals. Thus, future work could investigate whether peoples' susceptibilities to a speaker's strategic use of euphemistic and dysphemistic terms are reduced when participants are provided with information about the person describing each action. Lastly, while the present work suggests that people are in general susceptible to the strategic use of euphemistic and dysphemistic terms characteristic of doublespeak, this susceptibility almost certainly varies systematically across individuals. Past work investigating the persuasiveness of positively and negatively framed messages demonstrates that the persuasiveness of a message often depends on the individual characteristics of message recipients (Covey, 2014; Rothman & Salovey, 1997). Thus, future studies should investigate how various individual differences correlate with peoples' susceptibility to the form of linguistic manipulation studied here. Such an investigation would be informative for learning about who is most likely to be a victim of doublespeak and other similar forms of manipulative language.

Conclusion
As social creatures, people are often interested in trying to influence other peoples' minds. Subtle linguistic manipulation such as doublespeak exemplifies a potentially harmful real-world attempt to shape the beliefs of others through the strategic use of language. The current study suggests that peoples' evaluations of actions can be predictably biased by the strategic use of euphemistic and dysphemistic terms in an act's description. This is especially true when people possess some degree of uncertainty about the act, event, or idea they are evaluating. Unfortunately, people often make important moral judgments and form consequential beliefs without a perfect understanding of the acts they are evaluating or the ideas they are contemplating. Therefore, understanding the impact of doublespeak and other related forms of linguistic manipulation in situations of both high and low ambiguity is important for furthering our understanding of how the strategic use of language can bias people's perceptions of important and highly contentious actions.

Declaration of Competing Interest
None.