Opinion Dynamics and Collective Risk Perception

The behavior of a heterogeneous population of individuals during an emergency, such as an epidemic, natural disaster or terrorist attack, is dynamic, emergent and complex. In such situations, reducing uncertainty about the event is crucial in order to identify and pursue the best possible course of action. People depend on experts, government sources, the media and fellow community members as potentially valid sources of information to reduce uncertainty, but their messages can be ambiguous, misleading or contradictory. Effective risk prevention depends on the way in which the population receives, elaborates and spreads the message, and together these elements result in a collective perception of risk. The interaction between individuals' attitudes toward risk and institutions, the more or less alarmist way in which the information is reported, and the role of the media can lead to a risk perception that differs from the original message, as well as to contrasting opinions about risk within the same population. The aim of this study is to bridge a model of opinion dynamics with the issue of uncertainty and trust in the sources, in order to understand the determinants of collective risk assessment. Our results show that alarming information spreads more easily than reassuring information, and that the media play a key role in this. Concerning the role of internal variables, our simulation results show that risk sensitivity has more influence on the final opinion than trust in the institutional message, while the role of different network structures appears to be negligible, even on two empirically calibrated network topologies. This suggests that knowing beforehand how much the public trusts its institutional representatives and how reactive it is to a certain risk might provide useful indications for designing more effective communication strategies during crises.


Introduction
In April 2009, an earthquake struck L'Aquila, a medieval town in Central Italy, killing more than 300 people. Six scientists were subsequently put under investigation for allegedly giving false and fatal reassurances to the public a few days ahead of the earthquake (Hall). They were members of the "National Commission for the Forecast and Prevention of Major Risks" (Commissione Grandi Rischi), a governmental body which was asked to provide advice about tremors and earthquakes in the area. According to prosecutors, the fact that these scientists and their spokesperson issued a statement reassuring the population made many people feel safe enough not to leave their houses after the initial shocks on the same night. Many of the survivors reported having interpreted the statement issued by the National Commission, which was immediately and widely reported by the national media, as a reliable and trustworthy indication that no catastrophic events were to be expected, and to have behaved accordingly. The six scientists were eventually formally acquitted, but their evaluation of the risk and the way it was communicated to and perceived by the population had a profound impact on society, raising awareness of the consequences of institutional communication and media influence in risky situations.

Sadly, the importance of this behavior had already become evident during the aftermath of Hurricane Katrina in New Orleans, four years before the earthquake in L'Aquila. Katrina was a powerful hurricane that caused extensive destruction and casualties (more than a thousand people died in the hurricane and subsequent floods), but the many management mistakes during the emergency aggravated the situation. Mayor Nagin of New Orleans and Louisiana Governor Blanco were criticized for ordering residents to a shelter of last resort without any provisions for food, water, security, or sanitary conditions. Furthermore, the information they provided to the population was often incomplete and late (Selected Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina).
These two examples indicate the importance of informing endangered or already affected populations about the risks of a catastrophe without, at the same time, spreading panic. Natural and anthropogenic disasters are characterized by a varying degree of uncertainty about their occurrence, their magnitude and the ensuing consequences. In the decision-making literature, risk is traditionally defined as a function of (a) the likelihood and (b) the value of possible future events. Risk arises from the uncertainty, actual or perceived, surrounding the event, and it varies as a function of the kind of hazard. Underestimating the likelihood of disasters is as dangerous as exaggerating it: the latter might result in hoarding behavior, riots and other undesirable consequences, both for individuals' health (Rochford Jr & Blocker) and for the community's well-being (Kaniasty & Norris). Providing accurate and valid information about an uncertain event is obviously very difficult, but even if experts and institutions succeed in doing so, this does not necessarily translate into an equally accurate perception of collective risk.

This occurs because citizens are not passive and unbiased recipients of neutral information: they actively revise information in light of what they already know and believe (Giardini et al.), and share their own opinions with each other in an attempt to make sense of what is happening. This process of risk interpretation (Eiser et al.) creates a collective perception of risk that can significantly differ from the initial message sent by the institutions or the media, with varying consequences for emergency preparedness and management. The questions are: what are the consequences of different sources of information for collective risk assessment, and how does the message interact with individual features (trust in institutions, sensitivity to risks, propensity to communicate with others)? How do social network structures amplify or reduce the collective perception of risk?
There is a long tradition of research in complex systems (Deffuant et al.) and sociology (Flache et al.) concerning the complex relationships between social influence at the micro-level and its macro-consequences for integration or divisions in society. A parallel and unconnected research tradition is disaster studies (Lindell), which addresses the social and behavioral aspects of disasters. Here, we want to bridge this disciplinary divide by developing an agent-based model of collective risk assessment as the result of the interplay between agents' traits and opinion dynamics processes. Applied to risk interpretation, consensus means that there is a shared perception of a given risk, more or less correct, whereas polarization results in people holding different beliefs about the occurrence of the event or its consequences. By modeling the determinants and the processes of risk interpretation and social influence, it may be possible to understand what causes risk amplification, while understanding the mechanisms behind consensus and polarization in opinions about risks can prove useful for defining effective communication campaigns, avoiding the traps of misinformation and ensuring a fact-based view of the emergency.

The aim of this work is threefold. First, we are interested in modelling the interplay between individual factors, information transmission and social aspects of risk perception. Micro-level characteristics like trust in institutions or individual attitudes towards risk can be amplified or reduced in the interaction with other individuals, with unintended effects at the macro-level. Second, we want to compare the role of two different information sources, institutions and the media, on opinion polarization and consensus. Risk interpretation can emerge from a more general process of social influence, in which comparing opinions and collecting information from others can lead to more or less consensus about the risk, and hence to a heightened collective preparedness in the face of a major hazard. Finally, this study investigates the role of network topologies in collective risk assessment by comparing a set of different topologies, including two different empirically calibrated networks.
In order to understand collective risk assessment and its emerging macro-effects, we developed an agent-based model in which heterogeneous agents form opinions about risk events depending on information they receive from the media, institutions and their peers. We also used different network topologies in order to get a better understanding of the role of structural features in the spreading of opinions.
The rest of the paper is organized as follows. The next section introduces the problem of risk perception and presents the current state of the art. We then describe the distinctive features of the Opinions on Risky Events (O.R.E.) model and present the results of different simulation experiments, followed by a discussion. Finally, we draw some conclusions and suggestions for future research.

Factors Affecting Risk Perception and Collective Risk Assessment
Accuracy in judgment about the world is of primary importance, but it becomes fundamental when individuals have to cope with uncertain and dangerous events. When natural disasters and anthropogenic catastrophes are unavoidable, awareness of their consequences and preparedness to deal with them are necessary to reduce physical damage, prevent casualties and minimize their repercussions. Earthquakes, flooding, storms, volcanic hazards, industrial disasters and epidemics present very diverse challenges, but they are all characterized by varying levels of uncertainty about whether, when, or where they are going to happen and by largely unforeseeable consequences. Equally difficult to predict is the reaction that individuals will have to the information, i.e., how they will perceive the risk, both individually and as a result of collective processes (Eiser et al.).
Risk perception: Individual factors and the social amplification of risk
In decision theory, the tradition of studies on risk perception is well established (Slovic), and a number of factors have been claimed to play a role in it. Humans tend to overestimate the risk of catastrophic but unlikely events compared to more common but less disastrous ones (Slovic et al.), although there are several factors affecting risk perception. Wachinger and colleagues (Wachinger et al.) reviewed a large body of European studies on floods, heat-related hazards, and alpine hazards (flash floods, avalanches, and debris flows). They identified four main categories of factors affecting risk perception: risk factors, associated with the scientific characteristics of the risk; informational factors, such as source and level of information, and media coverage; personal factors, which also include age, gender, and trust; and contextual factors, related to the specific situation. These factors are intertwined, and their interplay gives rise to complex and emergent dynamics at the collective level. In this paper, we focus on trust and risk sensitivity as personal factors interacting with informational factors, i.e., peers, institutions and the media as different sources.

Trust is a cornerstone of human societies, and in situations of uncertainty and risk the amount of trust individuals place in different sources of information can become decisive (Luhmann; Earle & Cvetkovic). There are cultural and individual differences in people's beliefs in the possibility of avoiding and controlling risk, which create different levels of trust in others. For instance, personality and cognitive styles contribute to determining people's confidence in their judgement, and therefore the decision to turn to others to obtain more information (Eiser et al.). The source of the message has the potential to enhance the effectiveness of risk communication (Wogalter et al.), and trust in the source tends to be higher when the source is perceived to be knowledgeable and has little vested interest, as reported by Williams & Noyes. When deciding whether or not to accept a risk message, trustworthiness and credibility play a key role, according to Petty et al. The experts, the government, or a neighbor can all be trustworthy sources of information, especially in times of danger and uncertainty, and their opinions can have important consequences. There are a variety of sources from which end-users may obtain information regarding hazards and disasters. For instance, the mass media (e.g., television, newspapers, radio) play an extremely important role in the communication of hazard- and disaster-related news and information (King; Fischer) and significantly influence or shape how the population and the government view, perceive and respond to hazards and disasters (Rodríguez et al.).
Although relevant, these theories treat risk perception mostly as an individual phenomenon. An interesting exception is offered by the theory of "social amplification of risk". In the words of Kasperson and colleagues (Renn et al.), the social amplification of risk is "the phenomenon by which information processes, institutional structures, social-group behavior, and individual responses shape the social experience of risk, thereby contributing to risk consequences". In this framework, a proper assessment of a risk experience requires us to take into account the interaction between the physical harm attached to a risk event and the social and cultural processes that shape the interpretations of that event, their consequences, and the institutional actions taken to manage the risks.
The importance of understanding the interaction between risks and the context in which they are evaluated has been advocated by other scholars as well (Wachinger et al.), but the exact role played by each factor and the consequences for risk perception and individual choices have not been assessed yet. For collective risk assessment to emerge, it is necessary to account for dynamic processes of social influence, but also for the way in which actors integrate what they already know with information coming from other sources, like their peers, the media or institutional sources. Can we identify what drives collective risk interpretation towards alarm (and eventually panic), transforming individuals into scaremongers, or towards indifference, with the result that people underestimate risks and are not prepared?

Social influence and opinion dynamics for collective risk assessment
Collective risk assessment originates from processes of social influence and communication between individuals and supra-individual actors, such as institutions and the media. Here, we propose to model this process in terms of opinion dynamics, in order to focus on the determinants of opinion spreading in a population and the conditions under which opinions become polarized (Hegselmann et al.; Sen & Chakrabarti). A recent review paper by Flache and colleagues (Flache et al.) offers a broad overview of existing models of social influence, and it details the different ways in which a complex relationship between social influence as a micro-level process and its macro-consequences for integration or divisions in society might emerge. The authors systematically review the existing literature by grouping the models according to their core assumptions and showing that these determine whether opinion convergence can be achieved or not. Thanks to this corpus of work, it is possible to explore the conditions under which differences in opinions, considered as the agent's property affected by social influence, may eventually disappear. The term "opinion" is used in a very general manner, and it can equally apply to attitudes, beliefs and behavior, thus allowing its generalization to different contexts and its applicability to multiple domains.

More germane to the topic of risk perception is a model of opinion formation specifically designed to address risk judgments, such as attitudes towards climate change, terrorist threats, or childhood vaccination, developed by Moussaïd et al. In this individual-based model of risk perception there are two main classes of actors, i.e., the individuals and the media, with the former receiving or searching for information provided by the latter, and then communicating about the risk with others. The model also introduces a cognitive bias whereby individuals integrate and communicate information in accordance with their current views. Results show that two variables explain whether individual opinions about risk may converge or not: how much agents search for their own independent information, and the extent to which they exchange information with their peers. Although interesting, Moussaïd's model makes very simplistic assumptions about the internal processes determining confidence in the information received, trust in the different sources, and the ensuing decision to share that information.

In times of uncertainty, when how risks are interpreted and experienced is more relevant than their objective likelihood, dimensions such as trust and risk sensitivity can offer a better tool to understand collective risk perception and to minimize the risks of exaggeration or underestimation of the danger in the population. In order to model the perception of risks as different opinions, we started from the cognitive model of opinions developed by Giardini et al. They define opinions as complex cognitive constructs resulting from the combination of: (1) subjective truth-value, which expresses whether and how much someone believes an opinion to be true; (2) confidence, i.e., the extent to which someone's opinion is resistant to change; and (3) sharedness.

The latter encapsulates the popularity of an opinion according to a given agent, and it is a way to model social pressure: believing that one's own opinion is shared by the majority makes opinion change less likely to happen (Kelman).
The internal dynamics of these traits change as a result of social influence, leading to more or less consensus depending on the initial distribution of traits in the population and on the system topology (Giardini et al.). This tripartite model, though still very simplified, was used to explore which individual traits are responsible for opinion change, and which configurations of traits lead to consensus.

The O.R.E. (Opinions on Risky Events) Model
Overview - Design concepts - Details. We have a population of L agents (if not differently specified, we will assume L = 1000). Each agent i is characterized by an opinion O_i. For the sake of simplicity, we model opinions as the subjective probability that the disaster will actually take place, without taking into account the magnitude of the possible consequences, which would add further unnecessary complexity to the model. Furthermore, there is huge variation between disasters in which probability and consequences are extremely difficult to predict, such as earthquakes, and disasters such as floods, in which weather forecasts and location-specific features make risk calculation less haphazard. Opinions vary between 0, which can be expressed as "I am certain that nothing is going to happen", and 1, which means "I am certain that the disaster will happen". At the onset of the simulation, opinions are randomly assigned to agents, and they are updated on the basis of the interplay between the internal characteristics of the agents and three different sources of influence.
Initial conditions - At the beginning of every iteration of the dynamics, the agents are randomly assigned an opinion between 0 and 1, always with uniform distribution. Additionally, internal variables are randomly distributed, although the distribution is not necessarily uniform, and it will be specified in each case. Opinions evolve, but individuals' internal variables remain constant over time. The institutional information I is set at the start of the dynamics and never changes.

Characteristics of the agents
Each individual agent is described by different parameters: risk sensitivity, tendency to communicate, and trust. Risk sensitivity is an integer variable which can assume three possible values, R_i ∈ {−1, 0, +1}. It is assigned independently of the received information, is randomly distributed in the population, and affects the tendency to inform others about the potential danger, B_i. This means that agents who perceive the risk as more probable will also tend to talk about it more, thus sharing their worries with others. People tend to transmit information that is in accordance with their initial risk perception, neglecting opposing information (Popovic et al.). This, in turn, can lead to an amplification of the initial risk perception of the group, even if the original information supported the opposite view; it also fuels polarization between different groups.

Trust is a real number varying between 0 (minimum trust) and 1 (maximum). When trust is 0 or very close to it, the information received will not produce any change in the initial opinion, because the source will be considered untrustworthy and its message will be discarded. On the contrary, when trust is high, the influence of the source and its effect on the opinion will be equally high.
Trust plays a key role in risk perception (Flynn et al.). People who trust authorities and experts tend to perceive fewer risks than people who do not trust them, and this effect is stronger when people have little knowledge about an issue that is important to them. In his critical review of the literature, Siegrist confirmed the importance of trust, but he concluded that it varies by hazard and respondent group, and that it is therefore not possible to define a single way in which trust interacts with risk perception (Siegrist). In this study we distinguish between trust in institutions and trust in other individuals. We define trust in institutions as T_i and trust in peers as P_i.
We assume that trust towards institutional information is negatively correlated with trust towards peers: P_i = 1 − T_i. This assumption is important because it allows us to distinguish the effect of trust in two major sources of risk information, and to model the interplay between inter-individual trust and trust in official communication (Slovic). Studies on misinformation (Lewandowsky et al.) and on the link between conspiracist ideation, worldviews and rejection of science seem to suggest that individuals with low trust in government and experts tend to selectively believe people with the same views (Lewandowsky et al.). A recent study on institutional trust and misinformation about the Ebola outbreak in DR Congo shows that survey participants with low levels of trust in government institutions and the information they communicated held widespread beliefs about misinformation, and a large share of the surveyed participants had received this information from friends or family (Vinck et al.).
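As a minimal sketch, the agent variables introduced in this section can be initialized as follows. The attribute names are ours, and the complementary form P_i = 1 − T_i is one concrete reading of the negative correlation between the two kinds of trust assumed above; the paper's actual code may differ.

```python
import random

def init_agents(L=1000, seed=None):
    """Initialize L agents with random opinions and constant internal traits.

    Opinions O are uniform in [0, 1]; risk sensitivity R is drawn from
    {-1, 0, +1}; B is the tendency to communicate; trust in the institution
    T is uniform in [0, 1], and trust in peers P is taken as 1 - T
    (an assumed functional form for the negative correlation in the text).
    """
    rng = random.Random(seed)
    agents = []
    for _ in range(L):
        T = rng.random()
        agents.append({
            "O": rng.random(),            # opinion: subjective risk probability
            "R": rng.choice((-1, 0, 1)),  # risk sensitivity
            "B": rng.random(),            # tendency to communicate
            "T": T,                       # trust in institutional information
            "P": 1.0 - T,                 # trust in peers (complementary)
        })
    return agents
```

Opinions evolve during the simulation, while R, B, T and P stay fixed, matching the initial-conditions description above.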
Processes of social influence
We define three ways in which social influence may unfold. The first consists of peer-to-peer communication, in which agents communicate with each other in a horizontal and reciprocal way. The second kind of influence occurs through vertical institutional communication, which spreads unilaterally from the institution to the individuals.
The impact of the media on the population in the aftermath of disasters is well known (Vasterman et al.; Holman et al.), therefore we also model media influence as neutral, alarming or reassuring, depending on the way in which institutional information is reported. Media influence is also unidirectional, i.e., broadcast from the media source to the agents (in this model we do not consider social media).

Algorithm of the dynamics
Step one - Information from the institutional source. At each time step, the Institution informs each and every agent about the official risk evaluation I, which is a real variable between 0 and 1, with I = 0 the minimum risk information (i.e., no risk at all) and I = 1 the maximum (i.e., a catastrophic event will happen with probability 100%). We will call this variable institutional information. Agents use this information to update their opinions about the communicated risk I according to their internal variables. An individual i modifies its opinion O_i(t − 1) ≡ O^o_i following a two-stage process, where the first stage is the same rule adopted in the Deffuant model (Deffuant et al.; Abelson): O'_i = O^o_i + T_i (I − O^o_i). The updated opinion O'_i is further processed according to i's risk sensitivity: risk-sensitive individuals will be driven towards more alarmist views, given the same institutional information, therefore considering the hazard as more likely, whereas less sensitive agents will underestimate it. A third category, unbiased individuals, will not process the information any further, and the opinion about risk will remain unchanged.
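The two-stage update can be sketched as follows. The first line is the Deffuant-like attraction rule referenced in the text; the concrete form of the risk-sensitivity adjustment (a fixed fractional shift towards 1 for risk-sensitive agents and towards 0 for insensitive ones, controlled by the `shift` parameter) is an illustrative assumption, not the paper's exact second-stage equation.

```python
def institutional_update(agent, I, shift=0.5):
    """Update an agent's opinion O after receiving institutional information I.

    Stage 1: Deffuant-like attraction toward I, weighted by trust T in the
    institution. Stage 2: risk-sensitivity bias R in {-1, 0, +1}; the size
    of the bias (`shift`) is an illustrative parameter, not the paper's.
    """
    O = agent["O"] + agent["T"] * (I - agent["O"])  # stage 1: pull toward I
    if agent["R"] == 1:        # risk-sensitive: drift toward alarm (O = 1)
        O += shift * (1.0 - O)
    elif agent["R"] == -1:     # risk-insensitive: drift toward reassurance (O = 0)
        O -= shift * O
    agent["O"] = O             # unbiased agents (R = 0) keep the stage-1 value
    return O
```

Note that both stages keep the opinion inside [0, 1]: the first is a convex combination of O and I, and the second moves the result only part of the way towards an extreme.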
Step two - Information exchange among peers. In each simulation round a pair of agents is picked at random. Let us define j as the "speaker" and i as the "listener" (the symmetrical interaction, where i is the speaker and j the listener, takes place in the same way), and O_i and O_j as their opinions before the interaction takes place. The probability Π_x that a given agent x communicates its opinion to its partner increases with both its tendency to communicate B_x and its opinion O_x, because we assume that, given the same opinion, agents with a higher tendency to communicate are more likely to speak, but, given the same tendency to communicate, the more worried agents will also speak more often.
If the speaker decides not to share its opinion O_j with the listener (according to the previous rule, this happens with probability 1 − Π_j), the latter's opinion O_i does not change. If instead agent j actually shares its opinion, agent i changes its own according to a rule of the same kind as the first-stage institutional update, with trust in peers P_i in place of T_i and the speaker's opinion O_j in place of I. The listener then considers its risk sensitivity and updates its opinion again. Step three - The construction of risk perception. After L rounds (so that on average each agent has interacted once per time step), the information exchange ends, and the opinions of the agents become their opinions at time t.
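A sketch of one time step of peer exchange. The speaking probability is taken here as the product B_j · O_j, one simple function that increases in both variables as the text requires (the paper's exact form of Π may differ), and the listener's update mirrors the first-stage institutional rule with peer trust P_i; the risk-sensitivity re-adjustment is omitted for brevity.

```python
import random

def peer_round(agents, rng=None):
    """One time step of peer-to-peer exchange: len(agents) random
    speaker/listener pairs, so each agent interacts once on average."""
    rng = rng or random.Random()
    L = len(agents)
    for _ in range(L):
        j, i = rng.sample(range(L), 2)          # speaker j, listener i
        speaker, listener = agents[j], agents[i]
        # Pi_j: assumed speaking probability, increasing in both B_j and O_j.
        if rng.random() < speaker["B"] * speaker["O"]:
            # Listener moves toward the speaker's opinion with weight P_i.
            listener["O"] += listener["P"] * (speaker["O"] - listener["O"])
```

Because each update is a convex combination of the two opinions, opinions stay in [0, 1] throughout the round.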
Media influence. As a starting point, we assume that, in principle, the media can report institutional information in three ways: in a reassuring way, in an alarming way, or in a neutral way, i.e., reporting the information without any changes. In this paper we model this effect in a rather simplified manner, leaving a proper refinement for future work; for a discussion of the role of the media in disaster preparedness and agenda setting, see Barnes et al. and Moeller.
We therefore implemented the effect of media influence in the model as follows. Every time an agent receives the institutional information, we assume that with equal probability such information can be distorted towards reassurance, left unaltered, or distorted towards alarmism: the agent receives a random number in [0, I/2] with probability 1/3, exactly I with probability 1/3, or a random number in [(I + 1)/2, 1] with probability 1/3.
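A minimal sketch of this media filter. The alarming interval [(I + 1)/2, 1] is the one recoverable from the text; the reassuring interval [0, I/2] is assumed by symmetry.

```python
import random

def media_report(I, rng=None):
    """Return institutional information I as filtered by the media:
    reassuring, neutral, or alarming, each with probability 1/3."""
    rng = rng or random.Random()
    u = rng.random()
    if u < 1 / 3:
        return rng.uniform(0.0, I / 2)            # reassuring distortion
    elif u < 2 / 3:
        return I                                   # neutral: left unaltered
    else:
        return rng.uniform((I + 1) / 2, 1.0)       # alarming distortion
```

For I = 0.5, for instance, every reported value falls in [0, 0.25], equals 0.5, or falls in [0.75, 1], so the distorted messages never overlap the neutral one.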
End of simulation. Every iteration lasts long enough to reach a final state, i.e., a configuration where the dynamics has become constant and the global configuration of the system is stable, that is, the opinions of all the agents no longer change. All the simulation results are averaged over a number of independent realizations (i.e., iterations) for each given condition, unless differently specified.
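A generic driver loop implementing this stopping criterion might look like the following. The tolerance and the use of the mean opinion as the stability indicator are our own choices; the text only requires that opinions stop changing.

```python
def run_until_stable(agents, I, step, tol=1e-6, max_steps=10_000):
    """Iterate `step(agents, I)` until the mean opinion changes by less
    than `tol` between consecutive steps; return (steps_run, mean_opinion)."""
    mean = sum(a["O"] for a in agents) / len(agents)
    for t in range(1, max_steps + 1):
        step(agents, I)
        new_mean = sum(a["O"] for a in agents) / len(agents)
        if abs(new_mean - mean) < tol:
            return t, new_mean
        mean = new_mean
    return max_steps, mean
```

Here `step` would bundle the three stages above (institutional update, peer exchange, media filter) into one time step of the dynamics.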

Topological Structure
A key factor in social influence is the way in which the population is structured and the resulting communication flow. For the sake of simplicity, we decided to place agents on different topologies of increasing complexity. First, we place agents on a complete graph, in which everyone is directly linked with everyone else (well-mixed population or mean-field topology) (Pirnot). Interactions among people are in general much more complex: a complete graph is a good description of the underlying network only for very small communities, such as the students of a single school class or a group of friends, and it does not represent the topology of the vast majority of human interactions. Therefore, as soon as we focus on structured and complex communities, such as cities or countries, it becomes largely insufficient, and different kinds of topology have to be considered for different kinds of situations. For instance, the physical interactions (i.e., not through mass and social media) among people in large enough populations are better modelled by Watts-Strogatz small-world networks (Watts & Strogatz); on the other hand, the structure of connections among users in virtual communities such as Facebook or large mailing lists shows topological behavior close to scale-free networks (Caldarelli).
In this work, we tested our model on different network topologies, starting with the well-mixed case as the baseline configuration in order to explore the most noticeable features of the model. Indeed, knowing the behavior of the model in well-mixed and in more realistic topologies makes it possible to figure out the exact role of the topology itself or, if no relevant differences are found, to know that other factors influence the dynamics and outcomes of the model (Vilone et al.). The second step consists in checking the dynamics on more realistic topologies. More specifically, we consider the following network structures:
• a one-dimensional ring of L = 1000 nodes with connections up to second-nearest neighbors (so that each agent is linked to four other individuals);
• an Erdös-Rényi random network of L = 1000 nodes with probability of existence of a link p = 0.1;
• a Watts-Strogatz small-world network (Watts & Strogatz), generated from the ring defined above with rewiring probability p_r = 0.05;
• a real network of L = 1133 users of the e-mail service of the University of Tarragona, Spain (Guimerà et al.; Arenas a), which can be approximated for high degrees by a scale-free network with exponent 2;
• a real network of L = 4038 users of Facebook, also obtained from the University of Tarragona, Spain (Arenas b), which can likewise be approximated for high degrees by a scale-free network with exponent 2.
The small-world and the two empirical networks reproduce some characteristic features of real social networks, as for instance the "six degrees of separation", that is, the average distance among agents increases with the logarithm of the total population. The one-dimensional ring and the Erdös-Rényi random network, though essentially abstract, have opposite clustering coefficients (that is, in the former two neighbours of a given node have a high probability of being neighbours of each other, in the latter a very low one).
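The three synthetic topologies above can be built in a few lines of pure Python as adjacency structures; this is an illustrative construction using standard algorithms, not the paper's code.

```python
import random

def ring(L):
    """1-D ring where each node links to its first and second neighbours
    on both sides (degree 4), as in the paper."""
    return {i: [(i - 2) % L, (i - 1) % L, (i + 1) % L, (i + 2) % L]
            for i in range(L)}

def erdos_renyi(L, p, rng=None):
    """Erdos-Renyi G(L, p): every possible link exists with probability p."""
    rng = rng or random.Random()
    adj = {i: set() for i in range(L)}
    for i in range(L):
        for j in range(i + 1, L):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def watts_strogatz(L, p_r, rng=None):
    """Watts-Strogatz small world: start from the degree-4 ring above and
    rewire each forward link with probability p_r (p_r = 0.05 in the paper)."""
    rng = rng or random.Random()
    adj = {i: set() for i in range(L)}
    for i in range(L):
        for step in (1, 2):
            j = (i + step) % L
            if rng.random() < p_r:          # rewire to a random new endpoint
                j = rng.randrange(L)
                while j == i or j in adj[i]:
                    j = rng.randrange(L)
            adj[i].add(j)
            adj[j].add(i)
    return adj
```

With p_r = 0 the Watts-Strogatz construction reduces exactly to the ring, which makes the small-world network a controlled perturbation of the regular topology.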

Results
In presenting the results, we distinguish between simulation experiment 1, in which there is only one source of information about the risk, i.e., the institution, and simulation experiment 2, in which the presence of the media is introduced. For both experiments we aimed to study how the interaction between individuals' internal states, information coming from different sources (including the media), and topology affects the dynamics of collective risk assessment.

Baseline: No institution
As a starting point, we checked the behavior of the model without any institutional influence. As we have verified in different configurations (varying population size and topology), without any communication by the institution all the agents tend to the maximum level of alarmism (see Fig.): indeed, in such configurations the more worried individuals are more prone to share their opinions, driving the rest of the population towards their views. Therefore, institutional information plays a fundamental role in offering a balanced view and avoiding the spreading of panic.

Simulation experiment 1: The O.R.E. model without media influence
In order to have a baseline of the model, we designed a perfectly balanced system in which all initial opinions are uniformly distributed in the real interval [0, 1], and the internal variables {B_i, R_i, T_i}, i = 1, ..., N, are picked at random with uniform probability. The agents are placed on a complete graph. Of course, this is an unrealistic situation but, as for the topology, we started from this simple case; subsequently we will refine the parameters, in order to single out the role of each feature of the model in the dynamics and the final opinion configuration.
Our results show that a final stationary state is effectively reached by the system, as shown in Figure . Noticeably, convergence is achieved quickly: already after very few time steps, the average opinion acquires a stable value.
Figure shows how the final average opinion behaves as a function of the institutional information. For both high- and low-probability dangers, the system shows a discrepancy between institutional communication and individuals' opinions. In general, individuals are more alarmist than the institution when the information is reassuring, but they are less alarmed when the official information is worrying. However, these results exhibit another interesting asymmetry: the value I* of the institutional information for which the response of the population equals the input is not Ī = 0.5, as one could expect since the system is balanced, but larger (more precisely, we have here I* ≈ 0.58). This counter-intuitive result could help explain why alarmist information spreads more easily, especially in highly uncertain situations.
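The neutral point I* follows directly from the linear fit of the balanced-population response reported with the corresponding figure (intercept 0.33, slope 0.43): it is the input that the population neither amplifies nor dampens.

```python
# Balanced case: the final average opinion is well fitted by the line
# O_fin(I) = a + b * I, with intercept a = 0.33 and slope b = 0.43.
# The population reproduces the input exactly where a + b * I = I.
a, b = 0.33, 0.43
i_star = a / (1.0 - b)
print(round(i_star, 2))  # 0.58: above the naive midpoint 0.5
```

Any fit with a > 0 and b < 1 yields a crossing point at a/(1 − b); the asymmetry arises because here a/(1 − b) exceeds 0.5.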
Noticeably, in our model full consensus is never reached: at most, when the average opinion is close to one of the two extreme values, there are fewer agents with an opinion far from the average. Indeed, in the final state there is always a non-trivial opinion distribution, as shown in Figure (see also Figure in Sec. about a different version of the model), and the median opinions are less common in the final configuration, especially when the system ends up in an alarmist state.

Unbalanced systems in mean-field approximation
In this subsection, we investigate what happens when the system is not balanced, that is, when the distribution of agents' internal variables is not uniform (equivalently, when their average is not equal to their median value). In particular, we studied the behavior of the system by varying the two key features of agents, average risk sensitivity and trust in the institution, in order to isolate the effect of each variable on the final opinion change.

Varying trust, balanced risk sensitiveness
Here we study those systems in which the risk sensitivity is uniformly distributed among agents, while their trust towards the institution has an unbalanced distribution. More precisely, each player i is assigned with probability P_T a trust T_i uniformly distributed between . and (high trust), and with probability 1 − P_T a trust T_i uniformly distributed between and . (low trust); the average trust T therefore goes from . to . as P_T is tuned from to . In Figure , we show how the population responded to institutional information by varying the average trust towards the institution. Understandably, when the institution communicates a non-alarmist message (I = 0.20), increasing the trust decreases the value of the final average opinion. On the other hand, when the input is alarmist (I = 0.80), trust appears to have scarce effect on the dynamics: in this case the system is much less dependent on trust, and the final average opinion is almost constant with respect to P_T (what is more, it slightly decreases as P_T increases).
Varying risk sensitiveness, balanced trust

Here we analyze the opposite case, where trust is uniformly distributed but agents differ with regard to their risk sensitiveness. In particular, each player i is assigned the neutral risk sensitiveness (R_i = 0) with probability 1/3, a positive risk sensitiveness R_i = +1 with probability 2P_R/3, and a negative one (R_i = −1) with probability 2(1 − P_R)/3. Therefore, the average risk sensitiveness is R = (2P_R/3) − 2(1 − P_R)/3 = (4P_R − 2)/3. In this way, as P_R varies from to , R goes from −1/3 to 1/3.
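The mixture just described, and the average it implies, can be written down directly (a small sanity check in plain Python; function names are ours):

```python
import random

def sample_risk(p_r, rng):
    """Draw one agent's risk sensitiveness from the stated mixture:
    R_i = 0 with probability 1/3, R_i = +1 with probability 2*p_r/3,
    and R_i = -1 with probability 2*(1 - p_r)/3."""
    u = rng.random()
    if u < 1 / 3:
        return 0
    if u < 1 / 3 + 2 * p_r / 3:
        return +1
    return -1

def expected_risk(p_r):
    """Average implied by the mixture: (4 * p_r - 2) / 3."""
    return (2 * p_r / 3) * (+1) + (2 * (1 - p_r) / 3) * (-1)

rng = random.Random(1)
sample_mean = sum(sample_risk(0.75, rng) for _ in range(100_000)) / 100_000
# expected_risk(0.5) = 0 recovers the balanced case; the sample mean at
# p_r = 0.75 should lie close to expected_risk(0.75) = 1/3.
```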
In Figure we show the behavior of the final average opinion as a function of I on complex topologies, i.e., the ring, the Erdös-Rényi random network, the Watts-Strogatz small-world network, and the real e-mail network (we do not show the results for the Facebook network because they are very similar to the e-mail case). As is clearly visible in the figure, the influence of topology is negligible, meaning that the relevant effects are due to other causes, in particular to the distributions of the internal variables. Agents' opinions are influenced not by the position of their local neighbours but rather by the dynamic interplay between individual factors.
Simulation experiment : The O.R.E. model with media influence
In the first simulation experiment, we studied the case of perfect information provided by an institutional source, in which all agents receive and process the one and only message coming from the institution I. In the real world, however, news is also transmitted by different kinds of media, either traditional, such as newspapers or TV, or more recent ones, such as online blogs and social media platforms. Here, we model only unidirectional traditional media broadcasting the same message. The role of media sources in collective risk assessment is crucial, because the way the audience processes and transmits the message can produce new dynamics. Therefore, in this last section we focus on the possible effect of media sources on the institutional information and on the dynamics of the system.
As can clearly be seen in Figures , , and , the effect of the media is, in general, to increase the average level of alarm in the population, other things being equal. It is worth stressing that the media increase the overall level of alarm both on complete graphs and on the real Facebook and e-mail networks. Even though this effect is not enough to make the system reach complete consensus, the asymmetry of the outcome is clearly visible: although our simple algorithm creates (on average) as many agents receiving a less alarming input as agents receiving a more alarming one, the latter tend to communicate their worries to their peers more often than the former.
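A hedged sketch of such a symmetric media distortion (our own minimal reading of the description above: each agent's received message is shifted up or down with equal probability; the shift size delta and the clipping to [0, 1] are assumptions, not the paper's exact media rule):

```python
import random

def media_filter(i_inst, delta, rng):
    """Return the message an agent actually receives: the institutional
    information i_inst shifted up or down by delta with equal
    probability, clipped to [0, 1]. The distortion itself is symmetric."""
    shift = delta if rng.random() < 0.5 else -delta
    return min(1.0, max(0.0, i_inst + shift))

rng = random.Random(7)
received = [media_filter(0.5, 0.2, rng) for _ in range(100_000)]
mean_received = sum(received) / len(received)
# The received messages average out to the original input: any net
# alarming effect of the media must come from the agents' responses,
# because alarmed receivers go on to speak more often.
```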

Discussion
Here, we have simulated the dynamics of collective risk perception in a population subject to the risk of natural disasters, by modeling the emergence of collective opinions about risk from individual characteristics and their effects on individual agents' minds (Epstein ). The interaction between initial conditions, different sources and network topologies yielded interesting results in terms of collective risk assessment. First, we observed that in many cases the average opinion becomes more alarming, even when institutional messages are reassuring. Indeed, the agents are more alarmist than the institution, especially for low values of I, and only when I gets close to is the final average opinion lower. Such a phenomenon also occurs in balanced populations, where trust, risk sensitiveness and the tendency to communicate are uniformly distributed. This can be explained by the fact that alarmist agents tend to share their opinions more often than non-alarmist agents, all other things being equal. They act as risk amplifiers (Trumbo ) who, by increasing others' exposure to alarmist information, can offset the influence of other variables, like trust or low risk sensitivity. Another explanation for the spread of alarmist opinions can be found in the presence of agents with low trust towards the institution, who systematically disregard and doubt institutional messages, thus weakening their impact.

Our results indicate that agents' risk sensitivity has a stronger influence on the outcome than their trust towards the institution. Indeed, the final average opinion is affected much more by varying the distribution of the former than by changing that of the latter. This result yields interesting policy implications, suggesting that educating the population to assess risks more effectively could counterbalance the negative effects of low trust in institutions.

We have also shown that the topological structure on which the dynamics takes place is substantially irrelevant for the final fate of the system. This result agrees with previous findings showing that some social dynamics processes are experimentally independent of the details of the underlying networks (Grujić et al. ). Moreover, we assessed the effects of the mass media on the transmission and efficiency of the institutional information. Noticeably, alarmist media condition the opinions of individuals more than reassuring ones: again, this is due to the higher talkativeness of preoccupied agents.
Finally, we should stress the fact that in our model a global, absolute consensus is never reached, that is, the agents never end up holding the very same opinion about the risk, and outliers are observed even when a vast majority of agents has converged towards a given opinion.

Conclusions and Perspectives
Here, we have applied a numerical approach to the study of collective risk evaluation in order to better understand how horizontal communication, i.e., social influence, and vertical communication, from institutions and the media, might affect individual opinions in risky situations. In our model, agents receive information about the likelihood of a catastrophic event from two main sources, which can transmit either contradictory or converging messages. Institutional sources broadcast their assessment, generally based on experts' analyses and suggestions, which means that all the agents in the population receive the same information. On the contrary, peer information is based on dyadic interactions between agents placed on different network topologies. These two streams of information are then processed by the agents according to their mental attitudes and beliefs.
Our model of opinion dynamics as a basis for collective risk assessment shows that, even if the agents receive the same initial information about the likelihood of the danger, their perception of the actual risks is modified by individual traits and by the resulting social influence process. For different parameter configurations, our model shows that alarmist outcomes are more likely, but also that full consensus cannot be reached. This finding is in line with psychological work on misinformation (Lewandowsky et al. ), which shows that the widespread persistence and prevalence of misinformation can be attributed to a combination of individual traits and social processes.

Research in psychology has focused on the elements and processes of individual reasoning about uncertain events, uncovering heuristics and biases in decision making. Various scholars (Slovic et al. ; Eiser et al. ) convincingly argue that we need to overcome the limitations of traditional "rational choice" models of decision making under risk and uncertainty by taking into account more realistic theories of human cognition, such as heuristics. Although very simplified, our cognitive opinion model takes into account three core aspects: trust, risk sensitivity and the tendency to talk with others. Future work could further develop these characteristics, or investigate the role of different heuristics, as suggested by Eiser and colleagues (Eiser et al. ).
When an event is dangerous but its likelihood cannot be determined, as in the case of natural disasters or epidemics, individuals need to refer to others to acquire relevant information and to make sense of what is happening. While we know, for example, that individuals in many situations tend to over-estimate the risk of catastrophic events (Lichtenstein et al. ), or to be almost indifferent to increasingly dangerous situations, such as climate change (Pidgeon & Fischhoff ), the current understanding of the effects of risk communication is still limited. According to Trumbo (Trumbo & McComas ), individuals and communities tend to perceive institutional agencies as less credible (Fessenden-Raden et al. ; Kunreuther et al. ; McCallum et al. ; Slovic et al. ; Frewer et al. ). In a complementary manner, there are studies showing that people typically perceive physicians, friends, and environmental groups as the most credible sources, which also depends on an individual's familiarity with the risk itself (McCallum et al. ; Frewer et al. ).
Our study helps us understand the dynamics of risk communication and perception by simulating the joint effect of institutional communication and individual opinion exchange in a population of agents in which those who trust state agencies and the experts are also less likely to trust friends and personal contacts, and the other way around. This simple rule has proven powerful enough to determine how individuals who exchange opinions following a similarity bias can become polarized against institutional messages, thus reducing their effectiveness. Empirical data should be collected by future research to calibrate these predictions and to validate the conclusions of the model.
Even if we designed the O.R.E. model to investigate the determinants of collective risk perception, its application is not limited to risk communication: it can be usefully applied to opinion diffusion more generally, even if some of its features, such as individual risk sensitivity and the ensuing tendency to inform others, are specific to risk communication and its social amplification. The model could be developed further to account for the differences between specific disasters, or for the collective perception of different risks, like conflicts and violence.
Another interesting direction for future research is the role of social networks. Even if we tested our model on different network topologies, a more thorough understanding of the network characteristics and the possibility to collect or access real network data would be extremely interesting. Many scholars have highlighted the importance of social capital in the aftermath of a disaster (Aldrich ), and our model could be used to simulate communities presenting different levels of social capital, i.e., the resources embedded in the network, and to test its effect on opinion dynamics. Our simulations also highlighted the prevalent role of risk sensitiveness with respect to trust, independently of connectivity. This could provide important indications for policy makers, suggesting that interventions should target risk sensitivity, for instance by offering training and other opportunities to learn about the risks and how to deal with them. Another important implication of our results is that risks can easily get amplified, especially by the media, even when the original message coming from the institutional source is not alarmist. We are aware of the fact that the media are modeled in a very static way, and that social media are becoming a crucial player in the communication of risk (Alexander ). Future extensions of the model will broaden the range of media available, including social media.
What we set out to explore here is the effect of individual factors and information processes on collective risk perception, on different network topologies and with different information sources. The literature on persuasive communication proposes that when individuals believe they are at risk of confronting dangerous events, they will engage in adaptive behavior (Karanci et al. ). Thus, in disaster awareness programs it is crucial to develop an awareness of the risks involved in disaster events, but also to foster consensus about the risk in order to build a coordinated and coherent reaction in the population.

As a concluding remark, our work was intended to model the collective perception of risks connected with natural disasters, such as earthquakes, floods and volcanic eruptions, but it could also be extended to model risk evaluation during pandemics. This development would be timely, and it could help us understand how to improve risk preparedness and how to promote the adoption of precautionary measures by the population.


Figure : Time behavior of the average opinion O of a system of L = 1000 agents for totally non-alarmist, neutral and highly alarmist institutional information, and for the baseline without institutional information. Completely balanced population: initial opinions, trust towards the institution, tendency to speak and risk sensitivity randomly assigned with uniform distributions.

Figure : Behavior of the final average opinion O_fin as a function of the institutional information for a system of L = 1000 agents. Completely balanced population: initial opinions, trust towards the institution, tendency to speak and risk sensitivity are randomly assigned with uniform distributions. Linear fitting parameters: intercept 0.33, slope 0.43.

Figure : Histogram of the distribution of the final agents' opinions concerning the risk on the complete graph after a single run, for totally balanced systems of L = 1000 agents, with institutional information equal to . (left) and . (right).

Figure : Behavior of the final average opinion O_fin as a function of the unbalance probability P_T of the trust towards the institution, for systems of L = 1000 agents, balanced risk sensitiveness, and two different values of the institutional information. Linear fitting parameters: a) I = 0.20, intercept 0.54 and slope −0.21; b) I = 0.80, intercept 0.70 and slope −0.03.

Figure : Behavior of the final average opinion O_fin as a function of the unbalance probability P_R of the risk sensitiveness, for systems of L = 1000 agents, balanced trust towards the institution, and two different values of the institutional information. Linear fitting parameters: a) I = 0.20, intercept 0.12 and slope 0.56; b) I = 0.80, intercept 0.43 and slope 0.50.

Figure : Behavior of the final average opinion O_fin as a function of the institutional information for systems of L = 1000 agents. Totally balanced populations. In particular: a) one-dimensional ring; b) Erdös-Rényi network; c) Watts-Strogatz small-world network; d) real e-mail network. Linear fitting parameters: a) intercept 0.31, slope 0.46; b) intercept 0.31, slope 0.45; c) intercept 0.30, slope 0.46; d) intercept 0.30, slope 0.47.

Figure : Final average opinion O_fin on the risk as a function of the institutional information I, complete graph (L = 1000 agents). Left: with media effect; Right: without media effect.

Figure : Final average opinion O_fin on the risk as a function of the institutional information I, Facebook network (L = 4038 agents). Left: with media effect; Right: without media effect.

Figure : Final average opinion O_fin on the risk as a function of the institutional information I, University of Tarragona e-mail network (L = 1133 agents). Left: with media effect; Right: without media effect.

Figure : Histogram of the distribution of the final agents' opinions about the risk on the complete graph after a single run, L = 1000. Balanced model, institutional information I = 0.50. Left: with media effect; Right: without media effect.