The influence of cognitive bias on crisis decision-making: Experimental evidence on the comparison of bias effects between crisis decision-maker groups

A crisis requires the affected population, governments or non-profit organizations, as well as crisis experts, to make urgent and sometimes life-critical decisions. With the urgency and uncertainty they create, crises are particularly amenable to inducing cognitive biases that influence decision-making. However, there is limited empirical evidence regarding the impact of cognitive biases on estimation, judgment, and decision-making tasks in crises. Possible biases occurring in crises are: (1) to be influenced by how information is framed (i.e., framing effect), (2) to overly rely on information that confirms rather than opposes preliminary assumptions (i.e., confirmation bias), (3) to rely heavily on a skewed informational cue when making estimations (i.e., anchoring bias), and (4) to see one's own decision-making as less biased than the decision-making of others (i.e., bias blind spot). We investigate these four cognitive biases using three online survey experiments targeting crisis-affected people of the general public (n = 460, mTurk workers), governmental and non-profit workers (n = 50, mTurk workers), and crisis experts (n = 21, purposefully sampled). Our findings show that crisis experts are the least biased group but are still significantly affected by anchoring, framing, and bias blind spot. Crisis-affected people from the general public showed the strongest susceptibility to all four biases studied. The findings have implications for future research on crisis information systems (IS) design. As crisis response is increasingly facilitated through IS, we propose debiasing functions that account for biased user behavior in crises.


Introduction
Decisions in crises are made by crisis experts, response organizations (i.e., often government and non-profit organizations), and crisis-affected people from the general public [1,2].
While crisis responders strive to make optimal choices, they have to do so in the urgent and uncertain crisis environment. Human reasoning is often guided by mental simplifications and shortcuts that can ease and accelerate judgment in the form of heuristics but can also lead to flawed understandings, estimations, and decisions in the form of cognitive biases [3,4]. Biases can have grave consequences in high-stakes scenarios where decision outcomes can substantially affect people's lives [5][6][7]. The understanding of bias

Background
We review the theoretical underpinnings of crisis decision-making, information processing, and cognitive bias in the following subsections. We then focus on four biases to develop our hypotheses.

Stakeholder groups within crisis response
We follow the definition of the term crisis given by Boin, 't Hart, Stern, and Sundelius [15]. According to them, the key components of a crisis are the threats (to the population, the environment, etc.), the uncertainties around what is happening and going to happen, and the urgency to act (ibid.). When a crisis, such as COVID-19, disrupts a society's social fabric, the general population is primarily affected. Globally, there have been over 6.2 million deaths related to COVID-19 [16]. Unemployment and poverty have increased during the pandemic [17]. Countries in the global north faced severe, unanticipated challenges and were often unprepared, resulting in ineffective policies to protect communities [18]. This could have been predicted, as previous research found that established crisis management practices are insufficient to handle transboundary crises [19]. COVID-19 further worsened ongoing humanitarian crises in the global south, where protracted conflicts had already put stress on existing social and health infrastructure [20]. Affected people have faced life-threatening circumstances when visiting public spaces or going to work, and many had to make life-altering choices to protect themselves and their families [21]. COVID-19 further showed the stress crises put on essential services provided by governmental and non-profit organizations, such as public health care, the distribution of ventilators to clinics, or the allocation of materials such as masks. Response efforts are influenced by advice from crisis experts who make decision recommendations with regard to potential future trends [22]. In summary, crisis-affected people, governmental and non-profit workers, and crisis experts must make frequent decisions under urgent, uncertain, high-stakes, and resource-constrained circumstances.

Biases in crisis estimation, judgment, decision-making
The urgency and uncertainty of crises require that decisions are satisficing rather than optimal [2]. Quick decisions are paramount: decision-makers rely on fast heuristics and quick estimations and judgments [23,24]. People in crises likely rely more on heuristics because of the difficulty of formulating problems in ill-defined contexts [25] and because of bounded rationality, i.e., the limitations of their cognitive resources to receive, process, and store information [26]. Because people are 'cognitive misers' and avoid cognitive load as much as possible in uncomfortable situations, they will rely on quick and congenial heuristics that confirm previous assumptions and reduce cognitive dissonance to arrive at decisions [27,28]. What information is processed, how it is processed, and the results of the processing affect the decision made.
During the onset of the COVID-19 pandemic, governments around the world were trying to make sense of the highly uncertain situation while under pressure to decide fast to protect their populations from more serious infection waves. How influential a piece of information is on a decision-maker is determined by the information source, message, topic, and recipient [29]. Dual-process models, such as the heuristic-systematic model and the elaboration-likelihood model, divide information processing into two categories: a systematic/central and a heuristic/peripheral approach [30,31]. Because affected people and government and non-profit workers find themselves confronted with issues they are not experienced with, they are more likely to use the heuristic approach. Klein and colleagues studied how experts make decisions under urgency, uncertainty, high stakes, and resource constraints [24,32]. They found that experienced firefighters successfully use quick and simple heuristics to build mental plans for plausible solutions to practical problems rather than a time-consuming approach that weighs decision options against each other. Finally, people usually combine the two approaches during information processing [33]. Through our study, we add to the understanding of the role of experience and domain knowledge in crisis information processing and decision-making.
To assess the influence of cognitive biases on crisis decision-making, we focus on four concrete biases: framing effect, bias blind spot, confirmation bias, and anchoring bias (Table 1). Evidence from domains with similar decision contexts shows these biases are present in emergency healthcare, infrastructure safety, forensics, and tense political situations [41][42][43]. These domains have aspects in common with crises. What makes crises distinct is the magnitude of disruption, i.e., crises affect societal systems as a whole.
For example, decisions in emergency management operations, e.g., ambulance calls, need to be made in extremely urgent contexts with high stakes [44]. An analogy is the outbreak of COVID-19 in the Wuhan district in China. The outbreak was first handled as an emergency affecting a limited area [45]. Over time, the outbreak significantly worsened, ultimately developing into a crisis that is still affecting entire societies all over the world. Our selected biases are likely to occur under circumstances that share the defining characteristics of crises given above [46]. For example, in emergency healthcare, confirmation bias can lead doctors to test only their preliminary assumptions, ignoring alternative assumptions, consequently leading to wrong patient treatment [47,48]. In sentencing decisions, arbitrary informational cues that have nothing to do with the trial or the defendant can lead to anchoring bias in judges that affects the length of prison sentences [49]. In deciding on the treatment of a novel infectious disease, people can be susceptible to the framing effect, which refers to decisions being determined by how the decision options are presented rather than by the actual predicted decision outcomes [34]. An important requirement for reducing such bias effects on decision-making is awareness of one's own biased behavior. Yet, research on the bias blind spot phenomenon shows that people often see themselves as less biased than others [36]. The anchoring bias, confirmation bias, framing effect, and bias blind spot have been shown to negatively affect decision-making in various domains. When biases remain undetected and uncorrected in crises, biased decision-making can have significant societal consequences. Biased response decisions might be inadequate and fail to address affected people's humanitarian needs [4]. Table 1 synthesizes the literature review.

Hypothesis development
The research gap we address is the lack of empirical data on the influence of the four biases on concrete crisis response tasks such as estimation, judgment, and decision-making. Understanding biased crisis decision-making is critically important, and identifying and mitigating biases has potentially significant societal benefits through improved decision quality. We discuss the four selected biases and develop our corresponding hypotheses in more detail below.
Framing Effect. People are affected by how information is presented, especially when choices are phrased as more or less risky. This is called the framing effect [34]. For example, when confronted with a task that frames the consequences of options in response to a new infectious disease as either sure or probable lives saved, people favor response options that lead to a certain amount of people being surely saved. When the consequences are framed as sure and probable lives lost, however, people favor options that will lead to a probable loss of lives. In general, people tend to choose sure gains over probable gains and probable losses over sure losses when confronted with risky choices [50]. Prospect theory explains that people perceive losses as more significant than gains [34,51], while they prefer a probable loss over a sure loss and a sure gain over a probable gain. In other words, people are more risk-averse when confronted with framed gains and more risk-seeking when confronted with framed losses [52].
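The numerical structure of the classic task from [34] can be illustrated with a short calculation (values from the original 'Asian disease' problem): all four options share the same expected outcome, so frame-dependent preferences cannot be explained by expected value alone.

```python
# Classic framing task (Tversky & Kahneman, 1981): 600 lives at stake.
# Gain frame: A = 200 saved for sure; B = 1/3 chance all 600 saved.
# Loss frame: C = 400 die for sure;  D = 2/3 chance all 600 die.
ev_sure_gain = 200                       # lives saved, certain option
ev_prob_gain = (1/3) * 600 + (2/3) * 0   # lives saved, expected value
ev_sure_loss = 600 - 400                 # lives remaining, certain option
ev_prob_loss = 600 - (2/3) * 600         # lives remaining, expected value

# All four options have the same expected outcome of 200 lives saved,
# so any systematic preference reversal between frames is a pure framing effect.
print(ev_sure_gain, ev_prob_gain, ev_sure_loss, ev_prob_loss)
```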
Crisis management literature provided evidence that experienced crisis managers show susceptibility to framing effects, similar to laypeople, but there exists no direct, empirical comparison [53,54]. Studies have shown that previous experience and knowledge can reduce the susceptibility to the framing effect [14,55]. Therefore, we hypothesize crisis-affected people and government and non-profit workers to be significantly susceptible to differently framed decision options for crisis response, while crisis experts are not susceptible. When comparing the three groups, we expect that the group of crisis experts will be less influenced by the framing effect than the other two groups.

Crisis framing hypotheses
H1a, H1b, H1c, H1d: Crisis-affected people (H1a), as well as government and non-profit workers (H1b), show a significant difference in selection behavior when choosing between sure versus probable lives saved and between sure versus probable lives lost as the two options for the response to COVID-19. Crisis experts do not show this susceptibility (H1c) and will further show a weaker framing bias than the other two groups (H1d).

Table 1. Overview of cognitive biases selected for this study.

Bias | Explanation | Example | Example sources
Framing effect | Being influenced by how information is presented. | Choice between risky options; climate change adaptation behavior | [27,34]
Bias blind spot | Ranking one's own behavior as less biased than the behavior of others. | Students and citizens rank themselves as less biased than their peers. | [35,36]
Confirmation bias | Overly selecting information that is in line with one's preconceptions. | Public policy preferences; consumer purchase choices | [37,38]
Anchoring bias | Overly relying on initial, skewed information. | Estimating stock prices, travel durations, lengths of rivers, etc. | [39,40]

Bias blind spot. People tend to think they are less biased than others, a phenomenon called the bias blind spot [36]. The bias blind spot is explained by the combination of two concepts, namely introspection illusion and naïve realism [56]. Introspection illusion refers to people's 'charitable self-assessments' when they reflect on the reasons for their thought processes [56]. Naïve realism then leads people to see these self-assessments as unmediated and truthful [36]. Ref. [57] mentions sports team favoritism as an example of the bias blind spot: a fan sees their own prediction of a team's performance as more accurate than the predictions of others because their own thought process is more easily available to them, and each step leading to their final assessment seems logical to them. Because people do not have this direct access to the thought processes of others, they do not acknowledge that those could have equal or even more merit (ibid.). Scopelliti and colleagues summarize the pitfalls of the bias blind spot: "When people are unaware of their bias, they are unlikely to adopt corrective strategies to avoid the sources of bias that influence their judgment. Consequently, people who are more susceptible to bias blind spot are less prone to improve their decision making by engaging in bias reduction strategies, responding to training, and taking advice" [56, pp. 2482-2483].
For the mitigation of negative bias effects, decision-makers need to know their own susceptibility to bias. Being self-aware of one's own biases is an important first step in debiasing [43]. Concerning the bias blind spot, existing evidence suggests that experienced experts are less susceptible to it than non-experts [35]. Therefore, we hypothesize that crisis-affected people and government and non-profit workers show a significant bias blind spot, while crisis experts do not. Comparing the three groups, we expect crisis experts to show a weaker bias blind spot than the other two groups.

Crisis bias blind spot hypotheses
H2a, H2b, H2c, H2d: When asked to reflect on their own decision-making behavior as well as the decision-making behavior of others during crisis response, crisis-affected people (H2a), as well as government and non-profit workers (H2b), rank themselves as significantly less biased than others. Crisis experts do not show this behavior (H2c) and will further show a weaker bias blind spot than the other two groups (H2d).
Confirmation bias. Research has found that people tend to focus their information retrieval efforts on information that is more likely to confirm their already made assumptions [38,[58][59][60]. Cognitive dissonance theory [61] explains this self-confirming behavior, suggesting that "after people commit to a […] decision, they gather supportive information and neglect unsupportive information to avoid or eliminate the unpleasant state of post-decisional conflict known as cognitive dissonance".
Crisis urgency likely leads people to stick to preliminary decisions rather than invest time and cognitive effort into re-evaluating past decisions and switching preferences. Because of the urgency to act in crisis environments, decisions have to be made quickly, without properly weighing the benefits and drawbacks of decision options against each other. Confirmation bias would allow crisis decision-makers to follow their preliminary assumptions and reduce the time required for testing other assumptions [60,62,63]. Crisis decision-makers frequently face decision dilemmas, for example, whether to implement, adopt, and use a novel technology that eases certain crisis response tasks but poses risks to people's privacy and information rights [64-66]. After such a decision is made, new information can become available throughout the unfolding crisis that either supports or opposes the decision to rely on such technologies. The question then becomes to what extent crisis decision-makers will try to confirm their previous decision or try to question and disconfirm it critically.
Previous experimental research found that information selection can be accuracy-motivated or defense-motivated, while both can lead to confirmation bias [67]. People with less knowledge in a domain, who consequently have not yet developed a stance on the topic, are likely to be accuracy-motivated and select information that is perceived as providing the most utility. Because they might develop a preference directly after becoming aware of the issue, preference-consistent information might seem to provide the highest utility [67]. Having more domain knowledge leads people to develop a stance on the topic and give more relevance to it. Ascribing higher relevance to an issue can lead to defense-motivated behavior in decision-makers, which further leads to upholding already made assumptions, thereby leading to confirmation bias [68].
Therefore, we hypothesize that the three crisis decision-maker groups will be susceptible to confirmation bias when deciding a technology-versus-privacy dilemma and subsequently selecting supporting or opposing information. We expect no significant differences between the groups regarding the strength of the confirmation bias.

Crisis confirmation hypotheses
H3a, H3b, H3c, H3d: When confronted with a decision dilemma, crisis-affected people (H3a), government and non-profit workers (H3b), and crisis experts (H3c) will search for significantly more information that supports rather than opposes their previous decisions. There will be no significant difference in the strength of the confirmation bias between the three groups (H3d).
Anchoring bias. The anchoring bias is one of the most established cognitive biases [69]. Experimental research showed that people tend to anchor their judgment around initial information, which influences their assessment of the range of plausible solutions to a decision problem [39,40]. People anchor their numerical estimations on initial cues that can be arbitrary and extreme. An explanation for the anchoring phenomenon is that perceived cues lead decision-makers to engage in effortful deliberation regarding the validity of these cues. This deliberation effort reduces decision-makers' ability to assess the full range of possible answers and limits it to a solution space closely related to the perceived cues [49].
Decision-makers in crises need to decide quickly but often receive important information only in small subsets sequentially over time rather than as a complete dataset at once [70]. Therefore, the perceived cues are often the only information available to decision-makers, and consequently, they might rely heavily on them even when the information is skewed. Estimating available resources is a common task in crisis response [71]. Affected people need to estimate their resources to plan individual response efforts, e.g., applying for crisis response funds. Government and non-profit workers need to estimate their organizational resources to plan crisis response efforts and to understand whether certain affected areas or population groups need to be prioritized. Crisis experts need to estimate the resources of the overall crisis response network to advise policymakers on where gaps in the response could be. Anchoring estimations on an initial piece of information might seem beneficial to crisis decision-makers because it can accelerate decisions, but it can also lead to anchoring bias. There are contradictory findings on whether anchoring bias is reduced when decision-makers have more experience and domain knowledge in the task at hand. While some research found domain knowledge to reduce the anchoring effect [72], a majority of studies have shown that anchoring is significantly present in people with knowledge and experience in the domain in question [49,73-77]. Therefore, we hypothesize that the three crisis decision-maker groups are susceptible to anchoring bias when making numerical estimations of available crisis response resources. We further do not expect significant differences between the groups concerning the strength of the anchoring bias.

Crisis anchoring hypotheses
H4a, H4b, H4c, H4d: When given a high numerical anchor, crisis-affected people (H4a), governmental and non-profit workers (H4b), as well as crisis experts (H4c), will estimate available resources for crisis response significantly higher than when given a low numerical anchor. There will be no significant difference in the strength of the anchoring bias between the three groups (H4d).

Research method
Based on the above-discussed state of the literature, we formulated a set of hypotheses for each of the four types of bias (H1-H4 above). We designed three online survey experiments (one for each of our three decision-maker samples) to test our hypotheses. Each experiment consisted of the same tasks and measures to test for the four cognitive biases. The details of the research framework are described in the subsections below and summarized in Fig. 1.

Participants
We conducted three online survey experiments, one for each of our three crisis decision-maker groups: crisis-affected people, governmental and non-profit workers, and crisis experts. For the crisis-affected people experiment, we recruited participants through Amazon mTurk with the option to only include workers with the 'mTurk master' attribute to reduce the likelihood of arbitrary responses by the participants. For the government and non-profit workers experiment, we recruited Amazon mTurk workers with the option to only include governmental and non-profit employees in the mTurk selection process.
Completing a survey experiment took approximately 10 min. MTurk respondents were paid between USD 1.10 and USD 1.50 for a completed experiment, in line with usual mTurk compensation guidelines for researchers [78]. Amazon mTurk was found to provide a reliable, balanced participant recruitment pool representing the broader population and comparable to other samples typically used for similar studies [79,80].
In the crisis experts experiment, we targeted a sample of experienced humanitarian crisis responders. The survey was distributed via social media (Twitter and LinkedIn) and addressed humanitarian workers in local organizations, international non-governmental organizations, United Nations agencies and offices of donor country governments. The final sample of experts included representatives from all main organization types in humanitarian crisis response. Crisis experts received no remuneration for taking part in the survey experiment.

Data collection procedure
We collected data through the three survey experiments between March and June 2021, while the COVID-19 pandemic still heavily affected countries and populations globally. All groups received a link to the survey experiments, which were implemented in the survey software Qualtrics. All respondents started on an introduction page explaining the objective and scope of the study and the target group the study is aimed at (Appendix A). Participants were told there was no right or wrong answer to any of the questions, that they could stop the survey at any point, that their data would be treated anonymously, and that they could contact the researchers if they wanted their data removed. After the participants agreed to the terms, the experiments started. Participants were first asked to answer some general questions to capture descriptive statistics about each of the three groups (Appendix A). After filling out the general questions, the actual survey experiment tasks were presented in random order to the participants. These tasks are described in the following subsections.

Experimental tasks and measures
Anchoring bias, confirmation bias, and framing effect were assessed through scenario tasks. Bias blind spot was assessed through the participants' self-reflection about past decision-making behavior during COVID-19. The order of display of the elements was random among participants for each group.
Framing effect. We used a measure based on the original, classic 'Asian disease' framing experiment from Ref. [34] to measure how susceptible participants were to differently framed choice options. We asked participants to imagine being a crisis program manager who has to decide between two program options in response to COVID-19. Participants were randomly assigned to either a gain or a loss frame. Participants in the gain frame had to choose between two options: one promising that a certain number of lives would surely be saved, the other promising that a probable number of lives would be saved. Participants in the loss frame also had to choose between two options: one stating that a certain number of lives would surely be lost, the other stating that a probable number of lives would be lost.
The measure captures results in two variables: one dichotomous independent variable (two conditions: gain frame, loss frame) and one dichotomous dependent variable (two options: sure option, probable option). Using a Fisher's exact test or a chi-square test, we could test for a significant framing effect in the participants' selections. The exact item can be found in Appendix B.
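As an illustrative sketch (not the authors' analysis code), such a 2 x 2 test can be run on the selection counts of the crisis-affected sample reported in the Results section:

```python
from scipy.stats import fisher_exact, chi2_contingency

# Rows: gain frame, loss frame; columns: sure option, probable option
# (counts from the crisis-affected sample reported in the Results).
table = [[174, 33],
         [129, 74]]

# Fisher's exact test and chi-square test of independence on the 2x2 table.
odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)

print(f"Fisher's exact p = {p_fisher:.2g}, chi-square p = {p_chi2:.2g}")
```

A small p-value indicates that option choice depends on the frame, i.e., a framing effect.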
Bias blind spot measure. We measured the bias blind spot by asking participants to reflect on their own decision-making behavior as well as on the decision-making behavior of other people during crises. 1 We implemented this measure by giving respondents short descriptions of eight biases and asking them to rate how strongly they agree that each bias influenced the decision-making of others and of themselves. Participants indicated how strongly they disagree/agree with each description on a 7-point Likert scale. This creates a within-subjects design with 16 variables, two for each of the eight biases (self-ranking, ranking of others). In addition, two means could be computed for each participant: one based on all of a participant's ratings of their own biased decision-making, the other based on all of a participant's ratings of others' biased decision-making. Dependent-samples tests could then be applied to the two means per participant, as well as to the individual differences of the own-versus-others pairs. The exact item can be found in Appendix B.
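A minimal sketch of this dependent-samples analysis, using simulated, illustrative Likert-scale means (not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 40  # hypothetical number of respondents

# Per-respondent means over the eight bias descriptions (7-point Likert);
# under a bias blind spot, self-ratings tend to be lower than ratings of others.
self_means = rng.integers(2, 5, n) + rng.random(n)   # illustrative values
other_means = self_means + rng.integers(0, 3, n)     # others rated >= self

# Paired, non-parametric test on the own-versus-others means.
stat, p = wilcoxon(self_means, other_means)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.2g}")
```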
Confirmation bias. In our confirmation bias item, respondents were first given a short text about the plans of a company to field test a novel technology that is supposed to use artificial intelligence (AI) and satellite technology to make crisis assessments easier. Respondents were asked if they would partner with the company to facilitate the field test (yes/no). After answering the question, respondents were told that there was new information available on the topic of AI-supported crisis assistance. They were given ten short summaries of statements, five supporting the use of AI-supported humanitarian assistance, and five opposing it. Participants were told to select those summaries (as many as they wanted) for which they would like to receive the corresponding articles in full. This created a within-subjects design with two variables per participant storing the count of selected supporting and selected opposing information respectively. By conducting a dependent samples test, we tested if participants selected more summaries that confirmed their preliminary choice and therefore exhibited confirmation bias. The exact items can be found in Appendix B.
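The dependent-samples comparison of selection counts can be sketched as follows (the counts are hypothetical, for illustration only):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-participant counts of selected article summaries
# (each participant saw 5 summaries supporting and 5 opposing their choice).
supporting = np.array([4, 5, 3, 4, 2, 5, 4, 3, 5, 4])
opposing   = np.array([2, 1, 3, 2, 2, 1, 3, 2, 1, 2])

# One-sided dependent-samples test: did participants select more
# confirming than disconfirming information?
stat, p = wilcoxon(supporting, opposing, alternative="greater")
print(f"W = {stat}, one-sided p = {p:.2g}")
```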
Anchoring bias. To measure anchoring bias, we used a 1 × 2 between-subjects design. Participants were randomly divided into a low-anchor or a high-anchor condition. The scenario of the measure was COVID-19 resource allocations provided by the United Nations to individual countries. The United Nations allocated different amounts of funds to countries in the global south, ranging from USD 60,000 to USD 58 million per country. These minimum and maximum values were used as low and high anchors, respectively. Participants were then asked to enter their estimates of the average resources provided to all countries. The measure captures results in two variables: one dichotomous independent variable (two conditions: high anchor, low anchor) and a continuous dependent variable (participants' estimates). Conducting an independent samples test can then reveal if there is a significant anchoring bias in the participants' responses depending on whether they were in the low or high anchor condition. The exact item can be found in Appendix B.

Fig. 1. Overview of the research framework for this study.

1 In the two survey experiments with crisis-affected people and government and non-profit workers, the measure for the bias blind spot was phrased with regard to COVID-19. In the survey experiment with crisis experts, the measure was phrased with regard to a recent humanitarian crisis context of the participants.
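The independent-samples comparison for the anchoring task can be sketched as follows (the estimates are hypothetical; only the anchor values of USD 60,000 and USD 58 million come from the scenario):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical average-allocation estimates (USD) from the two conditions.
low_anchor_estimates  = np.array([8e5, 1.2e6, 5e5, 2e6, 9e5, 1.5e6])
high_anchor_estimates = np.array([2.5e7, 3.0e7, 1.8e7, 4.0e7, 2.2e7, 3.5e7])

# One-sided independent-samples (non-parametric) test for an anchoring effect:
# do high-anchor participants give higher estimates than low-anchor ones?
stat, p = mannwhitneyu(high_anchor_estimates, low_anchor_estimates,
                       alternative="greater")
print(f"U = {stat}, one-sided p = {p:.2g}")
```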

Sample descriptions
In the three survey experiments combined, a total of 531 respondents participated, 460 crisis-affected people, 50 government and non-profit workers, as well as 21 crisis experts. In the sample of crisis-affected people the mean age was 35.85 (SD = 11.02), and 138 females and 271 males participated. In the sample of government and non-profit workers, the mean years of work experience was 16.06 (SD = 12.44), and 16 non-profit and 34 government workers participated. The mean years of work experience in the sample of crisis experts was 10.69 (SD = 7.3) and participants represented all types of organizations in crisis response, including local and international organizations, UN agencies, research and academia, as well as the private sector. (H1a, H1b, H1c, H1d) Participants showed a more risk-seeking behavior in the loss-condition and risk-averse behavior in the gain-condition. We tested for significance in the difference between the two conditions per group using Pearson Chi-Square and Fisher's exact tests. 2 In all three groups, significant differences were found in participants' response option selection behavior, depending on whether participants were in the loss-or gain-condition ( Table 2). The framing effect significantly influenced participants in all three groups. Therefore, we found support for H1a, H1b but not for H1c because crisis experts were significantly affected as well.

Results for crisis framing hypotheses
In the sample of crisis-affected people, 174 out of 207 participants in the gain-frame chose the sure gains option and 33 selected the probable gains option. 129 participants selected the sure losses option in the loss-frame, while 74 participants selected the probable loss option. In the sample of government and non-profit workers, 18 out of 23 participants in the gain-frame chose the sure gains option and five selected the probable gains option. 13 Participants selected the sure losses option in the loss-frame, while 14 participants selected the probable loss option. In the group of crisis experts, nine out of twelve participants in the gain frame chose the sure gains option and three selected the probable gains option. Only two participants selected the sure losses option in the loss condition while seven selected the probable loss option.
A binomial logistic regression was performed to investigate the likelihood that participants of each of the three groups chose the sure or the probable option in the framing task (Table 3). The logistic regression model was statistically significant (χ2(4) = 58.537, p < .001). The model explained 14.7% (Nagelkerke R2) of the variance in selection behavior. Both predictor variables, framing condition (Wald = 50.094, p < .001) and experiment group (Wald = 4.26, p = .039), were statistically significant. Groups were coded (1 = crisis-affected people, 2 = government and non-profit workers, 3 = crisis experts), and as the odds ratios of the analysis show, the effect decreases with increasing group code, meaning crisis experts show the smallest bias effect. We therefore find support for H1d.

(H2a, H2b, H2c, H2d) In all three survey experiments, participants ranked themselves as less biased than others when making decisions. We tested for significance in the difference between participants' rankings of themselves versus others using Wilcoxon signed-rank tests (chosen because of several outliers and the non-normality of the data). The tests found significant differences between participants' self-assessment and their assessment of others' decision-making in all three groups (Table 4). We therefore found support for H2a and H2b but not for H2c, because crisis experts were significantly affected as well.

Results for crisis bias blind spot hypotheses
In the group of crisis-affected people, participants rated each bias as stronger in others than in themselves. In the government and non-profit workers group, participants rated almost all biases as stronger in others than in themselves. In the group of crisis experts, participants rated most biases as stronger in others than in themselves.
To test for group differences, two means were calculated for each respondent: one for their ranking of their own decision-making behavior and one for their ranking of others' decision-making behavior (i.e., the mean across the eight ranked biases for one's own and for others' behavior). Then the difference between the 'own' and 'others' means was calculated for each respondent, yielding our continuous dependent variable.
Our independent variable had three categories, one for each sample group. A Kruskal-Wallis H test (chosen because of several outliers, non-normality, and unequal variances between the three groups) was conducted to determine whether ranked behaviors differed between the sample groups (Table 5). Distributions of our dependent variable were not similar for all groups, as assessed by visual inspection of a boxplot. Ranked scores were significantly different between the groups (χ2(2) = 29.795, p < .001). Subsequently, pairwise comparisons were performed using Dunn's procedure with a Bonferroni correction for multiple comparisons. This post-hoc analysis revealed statistically significant differences in ranked scores between the crisis experts (mean rank = 135.55) and the crisis-affected people (mean rank = 278.20) (p < .001). Further significant differences were found between governmental and non-profit workers (mean rank = 190.65) and crisis-affected people (p < .001). No significant differences were found between crisis experts and government and non-profit workers; yet, crisis experts showed a lower bias susceptibility than government and non-profit workers. We therefore find partial support for H2d.

(H3a, H3b, H3c, H3d) To test for confirmation bias, we first counted the number of selected supporting and opposing information items per participant. We then calculated the means of the selected supporting and opposing information for each of our three groups. Finally, we used Wilcoxon signed-rank tests (chosen because of several outliers and the non-normality of the data) to check for significant differences between the means of the groups.

International Journal of Disaster Risk Reduction 82 (2022) 103379

Results for crisis confirmation hypotheses
The result reveals that crisis-affected people selected supporting information significantly more often than opposing information (Table 6). Interestingly, the group of government and non-profit workers shows borderline significance, with means of selected supporting and opposing information similar to those of the crisis-affected people. We assume the absence of a significant effect can be explained by the small sample size of the government and non-profit workers; a larger sample might have uncovered a significant confirmation bias in this group. Of further interest is the result within the group of crisis experts, who tended toward disconfirmation: while not significant, crisis experts selected more opposing than supporting information. We therefore find support for H3a but not for H3b and H3c.
To test for differences between the three groups, the selected confirming and disconfirming information was counted per participant and the difference was calculated, yielding one continuous dependent variable. The independent variable had three categories, one for each sample group. A Kruskal-Wallis H test (chosen because of several outliers, non-normality, and unequal variances between the three groups) was conducted to determine differences in counts of selected confirming and disconfirming information between the three groups. Distributions of our dependent variable were similar for all groups, as assessed by visual inspection of a boxplot. Counts of selected supporting/opposing information were not statistically significantly different between the three groups (χ2(2) = 2.328, p = .312). We therefore find support for H3d.

(H4a, H4b, H4c, H4d) In all three survey experiments, participants' estimates of available crisis resources were influenced by anchoring bias. Mann-Whitney U tests (chosen because of several outliers and the non-normality of the data) were conducted to test for significant differences between the high versus low anchor conditions in each of the three groups (Table 7). Participants in the low anchor condition gave significantly lower estimates than participants in the high anchor condition. In the sample of crisis-affected people, the mean of respondents' estimates was USD 87.66 million in the high anchor condition and USD 30.32 million in the low anchor condition. In the sample of government and non-profit workers, the mean of respondents' estimates was USD 86.1 million in the high anchor condition and USD 11.42 million in the low anchor condition. In the sample of crisis experts, the mean of respondents' estimates was USD 17.96 million in the high anchor condition and USD 11.42 million in the low anchor condition.
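The Mann-Whitney U statistic underlying these tests can be computed from rank sums of the pooled estimates. The following is an illustrative hand-rolled sketch (not the authors' analysis code, which presumably used a statistics package), with midranks to handle ties:

```python
def midranks(values):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over the run of equal values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """U statistic: rank-sum of x minus its minimum possible rank-sum."""
    combined = list(x) + list(y)
    ranks = midranks(combined)
    r1 = sum(ranks[: len(x)])
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

# Hypothetical high- vs. low-anchor estimates (USD millions):
# complete separation of the two conditions gives the minimum U of 0.
print(mann_whitney_u([120, 95, 87, 140, 60], [30, 22, 45, 18, 33]))
```

A small U relative to n1*n2/2 indicates strong separation between the high- and low-anchor estimates; the p-values reported in Table 7 come from the corresponding sampling distribution.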

Results for crisis anchoring hypotheses
We tested for significant differences between the groups using a two-way ANOVA with robust Huber M-estimators (used because of outliers, non-normality, and unequal variances of the data). The analysis revealed that the groups did not differ significantly in the strength of the anchoring effect (p = .74). We therefore found support for H4a, H4b, H4c, and H4d.

Summary of results
To summarize, most hypotheses were supported, a few were not, and one was only partially supported. Table 8 summarizes all hypotheses of this study together with our main results.

Contribution to theory
Crisis management literature has stressed the potential negative influence of cognitive biases on crisis decision-making [4][5][6][7]. However, empirical evidence has been lacking, especially concerning differences in bias effects between crisis stakeholder groups. We started from the assumption that crisis contexts make decision-makers prone to biases but that decision-maker groups would differ in the strength of certain biases. As our results show, we find support for this assumption. Overall, crisis experts were the least biased group in our experiments. They showed no confirmation bias and even selected more disconfirming information rather than information that supported their preliminary decisions. This suggests that experts chose to challenge their initial decision and deliberately looked for information that disproves their preliminary assumption, which might be explained by the strong professional background of our expert participants (mean crisis work experience of more than ten years). The technology versus privacy dilemma used as the scenario in our confirmation bias task is a well-known crisis problem [81]. Our results suggest crisis experts are more critical on the subject and assess their information options carefully. While this might prompt defense-motivated behavior that could lead to stronger confirmation bias [68], our results suggest otherwise.
People in crises might have valid reasons, or even no alternative at all, to rely on quick heuristics when information is uncertain and decisions need to be made quickly [25]. Experience seems to be an important moderator in mitigating the negative effects of biases and strengthening the positive effects of heuristics. In their observations of firefighters' decision-making, Klein and colleagues found that experience can lead to positive decision outcomes in crisis situations when quick, heuristics-based approaches are used [24]. Similarly, previous research found experience and domain knowledge to mitigate the framing effect and bias blind spot [14,35,55]. This is further supported by our group comparisons: susceptibility to the framing effect and bias blind spot is weaker in crisis experts than in our other participant groups. Nevertheless, even though both biases were lower in the group of crisis experts than in the other two groups, they still significantly affected experts' decisions. This is an important result for crisis management literature, as it implies that debiasing measures in crises need to be designed for laypeople as well as experts. Similar observations have been made in the sensemaking literature, which found that experienced emergency responders can fail to make sense of urgent and uncertain situations, for example, when informational cues are misinterpreted [82,83].

Implications for crisis information systems design
Previous studies have described information systems that support people in crises with information and decision support [84][85][86]. The general public, for example, has access to mobile apps that inform about measures people can take to reduce the impacts of a crisis on their livelihoods [87][88][89][90]. Experts and response organizations have access to more specialized systems, for example, to monitor social media streams, integrate various data sources, and provide modelling for resource allocation [91,92]. Literature on crisis IS design principles focused on information gathering, data management, and decision support services [84][85][86]92].
We argue that crisis IS would benefit from incorporating cognitive bias mitigation measures as they have been proposed in other domains, for example high-stakes financial decisions [93] and web search [94].
Participants in all three groups were significantly influenced by how crisis response options were framed, showing more risk-avoiding behavior in the positive-frame condition and more risk-seeking behavior in the negative-frame condition. Our findings have implications for the reporting, proposal, and resource allocation processes in crisis response, which are often facilitated through IS. Crisis-affected people and response organizations request resources from donor agencies through an often competitive proposal process, and donor agencies decide which proposals to fund [95].
• Crisis IS design principle: debiasing framing effect. Information systems that support organizations in developing proposals should provide different framing options and present potential outcomes of these options, e.g., how differently framed plans for allocated resources are likely to affect decisions by donors. Previous research highlighted the effectiveness of implementing warning messages with negatively framed advice in information systems [96]. Information systems used by donor agencies also need to detect potential framing effects and include messages that warn about the potential influence of framing on decision-making. Future studies in the field of machine learning and artificial intelligence for crisis response could investigate natural language processing approaches that distinguish between different frames of information.
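Such a dual-framing display could be sketched as follows. The function name and wording are hypothetical illustrations of the design principle, using the classic "lives saved versus lives lost" numbers from framing research:

```python
def both_frames(total: int, saved: int) -> dict:
    """Render the same outcome as a gain frame and a loss frame so a
    proposal-support system can show decision-makers both wordings
    side by side, exposing the frame rather than hiding it."""
    lost = total - saved
    return {
        "gain": f"{saved} of {total} people will be saved",
        "loss": f"{lost} of {total} people will be lost",
    }

# The same outcome, two frames:
frames = both_frames(600, 200)
print(frames["gain"])  # 200 of 600 people will be saved
print(frames["loss"])  # 400 of 600 people will be lost
```

Showing both frames together is one cheap way to make the framing visible to the decision-maker; a warning message, as discussed above, could accompany the display.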
In our bias blind spot task, participants in all three groups ranked others' decision-making as more biased than their own. This effect was particularly strong in crisis-affected people, while crisis experts seemed to be the least prone to the bias blind spot. Being aware of one's susceptibility to bias is an important first step toward debiasing [57,97]. Low self-awareness of one's own biases leads people to ignore advice from experts and to deprioritize efforts to improve their own decision-making process [56]. Crisis-affected people, as well as government and non-profit workers, might therefore disregard expert advice during crisis response.
• Crisis IS design principle: debiasing bias blind spot. IS should account for potential overconfidence in their users, encouraging them to acknowledge and mitigate their own biases. When systems support awareness of one's own susceptibility to bias, reducing negative bias effects becomes more likely. Another debiasing option is the establishment of so-called red teams or devil's advocates [43]. The role of these teams is to observe and provide critical feedback during the information management and decision-making process, especially on assumptions that are taken for granted, so that blind spots are less likely to be overlooked.
In our confirmation bias task, the sample of crisis-affected people showed a significant confirmation bias, in line with previous studies [38,63]. This indicates that crisis-affected people chose to confirm rather than disconfirm their initial decision and deliberately looked for information that supported their preliminary assumption. While participants working at governmental and non-profit organizations also chose more confirming than disconfirming information, their result was only borderline significant. Previous research has highlighted the effectiveness of flagging potentially biased information to reduce confirmation bias in information systems [98].
• Crisis IS design principle: debiasing confirmation bias. Rather than only providing information that decision-makers wish for, systems should balance information supply with information that opposes users' assumptions to mitigate confirmation bias [93]. Nudging theory suggests that subtle hints to valid but opposing information can be effective means to shift confirmatory information selection toward more balanced user behavior [94].
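A minimal sketch of such balanced information supply, assuming the system has already retrieved items supporting and opposing the user's preliminary assumption (function and variable names are hypothetical):

```python
from itertools import zip_longest

def balanced_feed(supporting, opposing):
    """Interleave supporting and opposing items so the user sees both
    sides, instead of a feed dominated by confirming information."""
    interleaved = []
    for s, o in zip_longest(supporting, opposing):
        if s is not None:
            interleaved.append(s)
        if o is not None:
            interleaved.append(o)
    return interleaved

# Even when opposing items are scarce, they surface near the top.
print(balanced_feed(["pro 1", "pro 2", "pro 3"], ["con 1"]))
```

The interleaving order is one simple nudge; ranking opposing items by relevance before interleaving would be a natural refinement.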
In our anchoring bias task, participants' estimates of available resources clustered around the artificial anchor we provided. All three participant groups estimated available crisis resources lower when given low-anchor information and higher when given high-anchor information. This was expected, as the tendency of people to anchor numerical estimates on arbitrary informational cues is strong in lay people [40] as well as experts [49]. Our results add to the literature demonstrating the ubiquitous strength of the anchoring effect by providing evidence that anchoring also influences critical estimation tasks by crisis decision-makers.
• Crisis IS design principle: debiasing anchoring bias. Crisis IS should take the anchoring tendency of users into account by keeping track of what cues were presented and what estimation tasks users are to perform. IS can then guide users to enlarge their scope of potentially reasonable estimates, rather than constraining it to biased limits. Crisis IS could also implement modelling functions that support sequential decision-making under uncertainty. When information is limited at first and only becomes available over time, deep uncertainty models can provide insights into ranges of plausible scenarios [99].
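One low-effort realization of this principle could be a warning that fires when a user's estimate stays close to a numerical cue the system displayed earlier. A hedged sketch, with a hypothetical function name and an illustrative 25% relative-distance threshold:

```python
from typing import Optional

def anchoring_warning(anchor: float, estimate: float,
                      threshold: float = 0.25) -> Optional[str]:
    """Return a nudge when `estimate` lies within `threshold` (relative
    distance) of a cue the system showed earlier; otherwise None.
    The 25% default is an illustrative assumption, not an empirical value."""
    if anchor and abs(estimate - anchor) / abs(anchor) <= threshold:
        return ("Your estimate is close to a figure shown earlier. "
                "Consider reasons why the true value could be far higher "
                "or far lower before submitting.")
    return None

# An estimate of 110 near a displayed anchor of 100 triggers the nudge;
# an estimate of 300 does not.
print(anchoring_warning(100.0, 110.0))
print(anchoring_warning(100.0, 300.0))
```

Tracking which cues were shown to which user, as the design principle requires, would supply the `anchor` argument here.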

Limitations and future research
We acknowledge that potentially many types of bias can influence crisis response. To keep the focus of this research clear, feasible, and concise, we selected anchoring, framing, and confirmation bias because they are powerful, well-established information processing biases. Furthermore, we selected the bias blind spot because it is useful for understanding people's ability to identify biases in their own decision-making. We call for future research with larger sample sizes on other forms of bias in crisis response, as well as research that observes biases in actual crisis response or training exercises, which can limit the potential for self-reporting bias among experimental participants.
In our study, we focus on individual confirmation bias. Nevertheless, a form of organizational confirmation bias might arise from organizational mandates, experience, and standard procedures, resulting in reduced organizational learning and fewer decision corrections when conflicting information suggests a change of course. While out of scope for this study, organizational confirmation bias is an interesting subject for future research.
Identifying what biases influence crisis decision-making needs to be followed up with research on effective interventions that reduce bias effects, i.e., debiasing techniques. Experimental research on different debiasing techniques can inform such interventions. Previous debiasing research has suggested several types of debias techniques that can be differentiated by the effort required to achieve the desired level of debiasing [100]. Extensive training sessions can be conducted with decision-makers to understand their own biases and learn ways to mitigate them [101]. Medium-effort interventions can be achieved through information systems, short courses and video lectures [102,103]. Recent studies on information systems designed to support crisis response emphasize integrating various data sources [85,104], and we suggest extending such systems with functionalities that can identify and mitigate potentially biased behaviors of its users.
We argue that frugal, low-cost, low-effort debiasing interventions might best suit the time- and resource-constrained crisis context [105,106]. For example, consider-the-opposite interventions can reduce anchoring bias and confirmation bias [106,107]. Similarly, prompting warnings in information systems about potentially framed information can reduce susceptibility to the framing effect [102]. Such measures implemented in information systems for crisis response could prompt decision-makers that information contrary to their initial assumptions might be equally important or correct. Weick described the response to crises, when expectations are violated and established frames of understanding seem no longer valid, as a sensemaking process of individuals and groups [83]. Through sensemaking, decision-makers try to re-evaluate their understanding of a crisis and give meaning to their observations and actions [2,83]. As such, we argue that sensemaking support systems can play an important role in debiasing crisis decisions [108].
Our experimental findings should be compared to future observations during crisis response exercises or real-world crisis response operations. A limitation of these approaches is that interventions in real-life events are subject to many confounding influences, which can limit generalizability.
For our experiments, we recruited Amazon mTurk workers. mTurk workers are online users who voluntarily sign up for paid assignments, fulfilling tasks such as classifying images, translating texts, or answering surveys. mTurk provides a large pool of potential survey respondents, and previous studies found that results drawn from mTurk samples are comparable to samples obtained through more traditional approaches [79,80].

Conclusion
We found experimental evidence that cognitive biases, such as anchoring bias, confirmation bias, framing effect, and bias blind spot, can influence crisis decision-making. These biases affect estimates of available crisis resources, information selection in a technology versus privacy dilemma, choices between differently framed crisis response options, and the ability to identify biases in one's own decision-making. Not all stakeholder groups are equally susceptible to biases, however. While crisis-affected people of the general public proved susceptible to all four biases studied in our experiments, government and non-profit workers as well as crisis experts were only susceptible to anchoring bias, the framing effect, and the bias blind spot, but not to confirmation bias. Crisis experts showed a tendency to disconfirm their preliminary assumptions. Overall, crisis experts were less susceptible to bias than the other two groups but were still significantly affected by anchoring, framing, and the bias blind spot.
We add to the crisis management literature by showing that experience and domain knowledge can reduce the susceptibility to bias in crises. Given the extraordinarily high stakes of crisis response, where, as the COVID-19 crisis shows, millions of people can be affected, the research gap regarding the effects of biases on crisis decision-making and potential debiasing strategies requires further attention.
We stress one point for future research: debiasing interventions need to be investigated, especially for crisis information systems. As a starting point, we discussed the implications of our findings for crisis IS design principles, which future research can evaluate experimentally. Identifying which interventions reduce biases for different decision-makers in various contexts could yield great benefits for all societal stakeholders affected by crises.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability
Data will be made available on request.