A review on trust propagation and opinion dynamics in social networks and group decision making frameworks

On-line platforms leverage the communication capabilities of the Internet to develop large-scale influence networks in which the quality of interactions can be evaluated based on trust and reputation. So far, this technology is best known for building trust and harnessing cooperation in on-line marketplaces, such as Amazon (www.amazon.com) and eBay (www.ebay.es). However, these mechanisms are poised to have a broader impact on a wide range of scenarios, from large-scale decision making procedures, such as those implied in e-democracy, to trust-based recommendations in e-health contexts, or influence and performance assessment in e-marketing and e-learning systems. This contribution surveys the progress in understanding the new possibilities and challenges that trust and reputation systems pose. To do so, it discusses trust, reputation and influence, which are important measures in network-based communication mechanisms to support the worthiness of information, products, services, opinions and recommendations. The existing mechanisms to estimate and propagate trust and reputation in distributed networked scenarios, and how these measures can be integrated in decision making to reach consensus among the agents, are also analysed.


Introduction
Virtual interactions between people and services without any previous real world relationship have experienced an exponential increase with the availability of interactive on-line sites, including the so-called social networks. These interconnected platforms present very diverse functionalities: from the well known social networks Facebook ( www.facebook.com ), Instagram ( www.instagram.com ) or Twitter ( www.twitter.com ), where people share pictures and thoughts with their friends and followers, to e-commerce platforms such as Amazon or eBay; crowd-sourcing platforms to share knowledge and expertise, such as Wikipedia ( www.wikipedia.com ), Slashdot ( www.slashdot.com ), or Quora ( www.quora.com ); and on-line facility sharing networks such as UBER ( www.uber.com ) and Blablacar ( www.blablacar.com ) for cars or Airbnb ( www.airbnb.com/ ) for accommodation. In spite of the diversity of the available on-line communities, all of them share the common characteristic of having a vast amount of users interacting with each other under a virtual identity. These emerging social media channels permit users to build various explicit or implicit social relationships, and to leverage the network as a worldwide showcase to disseminate and share products, services, information, opinions and recommendations.
In these digital media scenarios, the evaluation of the credibility of information constitutes a more challenging problem than in conventional media, because of their inherently anonymous, open nature, characterised by a lack of strong governance structures [23]. In fact, this anonymity offers a favourable environment for malicious users to spread wrong information, viruses, or even corrupted files in the case of Peer to Peer (P2P) networks [26]. Therefore, it is necessary to use mechanisms that allow users to choose the peers with whom to interact, as well as to effectively identify and isolate the malicious ones.
The ideal solution would be to develop a web of reputation and trust, either at a local level (individual websites, for example) or across the whole web, that would enable users to express and propagate trust in others to the entire network, allowing other users to assess the quality of the information or service provided even without a prior interaction with the agent in question. The ultimate goal, here, is to estimate, for each agent, a reputation level or score. In this sense, reputation can be understood as a predictor of future behaviour based on previous interactions; that is, an agent will be considered highly regarded if it has consistently performed satisfactorily in the past, under the assumption that it can be trusted to perform as expected in the future as well. Therefore, reputation and trust based management will ultimately minimise the impact of poor and dishonest agents and encourage reliable and trustworthy behaviours [17,22,30], which, in turn, makes reputation a key mechanism for social governance on the Internet [49].
When it comes to estimating reputation and propagating trust, we can recognise two main differences between traditional and on-line environments: (i) traditional indicators that allow the estimation of trust and reputation, observed in the physical world, are missing in on-line environments, and so electronic substitutes are required; (ii) in the physical world, communicating and sharing information related to trust and reputation is relatively difficult, since this knowledge is constrained to local communities. On the contrary, in the case of IT systems, the Internet can be leveraged to design efficient ways to calculate and propagate this information on a worldwide scale [30].
Some examples of these on-line reputation networks are included in e-commerce platforms such as Amazon and eBay, or even in accommodation booking platforms such as Airbnb, where both hosts and guests are rated with the aim of providing a global reputation score publicly available to all users. In this regard, it has been observed that a reliable reputation system increases users' trust in the Web [9]. In fact, for the case of e-auction systems several studies confirm that reputation mechanisms benefit both sellers and buyers [27]. More concretely, for the case of eBay it has been concluded that part of its commercial success can be attributed to its reputation mechanism, eBay's Feedback Forum, which also allows users to rate dishonest behaviour negatively [19].
On the other hand, trust has also been considered an important factor influencing decision making and consensus reaching between multiple users in group decision making (GDM) scenarios [5,33,38,39,41]. These procedures allow agents to negotiate in order to arrive at mutually acceptable agreements [3,32,46], and so trust here can be used to spread experts' opinions and to provide recommendations [44,45,47]. A recent survey on social network based consensus approaches [11] points out that trust based GDM approaches are still at an early stage and lack the necessary tools to dynamically calculate inter-agent trust and influence.
The aim of this article is to survey the main mechanisms to generate and propagate trust and reputation in social networks, and to point out several open questions concerning the integration of trust and reputation based measures and opinion dynamics procedures in the context of decision making approaches. To do so, this contribution is organised as follows: Section 2 analyses the main characteristics of on-line social communities. Section 3 introduces the concepts of trust and reputation, and the differences between them; the main existing procedures to estimate reputation and to propagate trust are also explained in this section. Section 4 focuses on trust based GDM approaches, while the main approaches to carry out opinion spreading in social networks are reviewed in Section 5. Section 6 points out the main challenges and research opportunities in using the analysed trust and reputation based systems, as influence measures, to foster decision making processes and recommendation mechanisms in complex social network scenarios with uncertain knowledge. Finally, conclusions are presented in Section 7.

Social networks
A social network can be considered as a set of people (or even groups of people) that participate and interact sharing different kinds of information with the purpose of friendship, marketing or business exchange. Social network modelling refers to the analysis of the different structures in the network to understand the underlying pattern that may either facilitate or impede the knowledge creation in this type of interconnected communities [21] .

Characteristics of real world social networks
A multi-person social network has some specific characteristics, when compared with a random graph of nodes, that have to be considered in order to properly understand opinion dynamics, trust propagation and influence. In the following, some of the most important ones are pointed out:

Small world network
The two main structural properties that define this type of network are a high clustering coefficient and an average path length that scales with the logarithm of the number of nodes. The clustering coefficient is also known as transitivity, that is, a friend of a friend is likely to be my friend, while the average path length is the average, over all pairs of nodes, of the minimum number of edges to traverse to move from node A to node B [21].

Scale free network
This implies that a few nodes present a very high number of connections (degree) whereas the majority of them present very few connections. The degree distribution of this type of network has no characteristic scale.
In a social network, it is key to identify which agents or individuals exert the highest influence in the network, and which are the nodes that receive this impact. The eigenvector centrality proposed by Bonacich and Lloyd [2] has been extensively adopted as a measure of the relative importance of an individual in a social influence network. In this way, centrality scores are given to all the nodes in the network based on the premise that "the centrality or status of an individual is a function of the status of those who choose him"; for example, being chosen by someone powerful makes one more powerful in the eyes of the others. Thus, a node having a high eigenvector centrality means that it is connected to other nodes with high eigenvector centrality as well. This measure can be mathematically formalised as follows: Definition 1 (Centrality [2]). Given an adjacency matrix A = (a_ij), where element a_ij represents the degree of influence of node i towards node j, and letting v = (v_1, ..., v_i, ..., v_n) be the unknown vector of centrality scores for each node, then

v_i = v_1 a_1i + v_2 a_2i + ... + v_n a_ni,

which can be expressed as the following eigenvector equation:

λ v = A^T v,

where the centrality scores are given by the eigenvector v associated with the largest eigenvalue λ. Notice that this eigenvector based centrality assessment may not be applicable to asymmetric networks. For these cases, a generalisation of this measure, denominated α-centrality, that allows every individual some status that does not depend only on his or her connections to others, has been proposed in [2]. This assumes, for example, that the popularity of each student in a class depends not only on her internal connections with her fellow students within the class but also on her independent external evaluation by others, such as her teachers. Therefore, given the vector e that represents these external (exogenous) sources of status or information, the above expression can be extended as follows:

v = α A^T v + e.
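As an illustration, eigenvector centrality can be computed numerically by power iteration, and α-centrality by solving the linear system v = (I − αA^T)^(−1) e. The following Python sketch is illustrative rather than taken from [2]; the example network and the function names are hypothetical:

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=1000):
    """Power iteration for v_i = sum_j a_ji v_j, i.e. the dominant
    eigenvector of A^T, normalised to unit length."""
    n = A.shape[0]
    v = np.ones(n) / n
    v_new = v
    for _ in range(max_iter):
        v_new = A.T @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    return v_new

def alpha_centrality(A, e, alpha=0.1):
    """alpha-centrality: v = alpha * A^T v + e, solved directly as
    v = (I - alpha * A^T)^{-1} e (alpha must keep the system invertible)."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

# Hypothetical 4-node influence network: node 0 is central,
# nodes 1 and 2 form a triangle with it, node 3 is peripheral.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
v = eigenvector_centrality(A)
x = alpha_centrality(A, np.ones(4), alpha=0.1)
```

The power iteration converges because the example graph is connected and non-bipartite; for directed (asymmetric) networks, α-centrality with an exogenous status vector e is the safer choice, as noted above.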

Trust and reputation systems
Trust and reputation management encompasses several disciplines, such as data collection and storage, modelling, communication, evaluation and reputation safeguards [30]. Therefore, various research initiatives have been carried out in different sectors, ranging from psychology, sociology and politics to economics, computer science and marketing. From the perspective of computer science, examples of existing applications that make use of reputation and/or trust approaches include peer-to-peer (P2P) networks, e-commerce, e-marketing, multi-agent systems, web search engines and GDM scenarios.
Manifestations of trust are almost obvious in our daily routine; nevertheless, finding an exact definition is challenging, since trust can be represented in many different forms, leading to an ambiguous concept. In [22], the following definition of trust is provided: "trust is the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible." The authors claim that this relatively vague definition encompasses the following aspects: "(i) dependence on the party that is trusted; (ii) reliability of the trusted entity; (iii) utility, in that a positive utility will result from a positive outcome, and negative utility from a negative outcome; (iv) risk attitude, which implies that the trusting party is willing to accept a possible risk." On the other hand, according to the Concise Oxford Dictionary, "reputation is what is generally said or believed about a person's or thing's character or standing." This definition is aligned with the one given by social network researchers, which states that "reputation is a quantity measure derived from the underlying social network which is globally visible to all members of the network" [13]. Therefore, reputation can be understood as the perception that an agent creates through past actions about its intentions and norms at a global level. Reputation can be assessed in relation to one individual or to a whole group. For example, group reputation can be obtained as the average of all its individual members' reputation values. Indeed, it has been observed that the fact that an individual belongs to a given group has an impact on his/her reputation, depending on the given group's reputation.
Notice that the concept of reputation is closely linked to that of trustworthiness, since dependence and reliability, which are key implications in the definition of trust, can be assessed through a person's reputation, for example, based on the evaluations given by the other members of the system. Therefore, trust can be established through the use of reputation; that is, one may trust someone who has a good reputation. However, the difference between trust and reputation resides in that trust systems take as input general and subjective measures of trust (reliability) on a pairwise basis, whereas reputation systems consider information about objective events, such as specific transactions [22].
Trust manifestations in traditional systems include subjective terms of friendship, long term knowledge or even intuition, which are not available in on-line systems. Thus, electronic measures are required to computerise these abstract concepts. Conversely, in the physical world, trust and reputation systems are confined to small local communities, and so the proper use of IT technologies will allow the effective development of worldwide systems to exchange and collect trust based knowledge. Some challenges arise in this regard: (i) finding effective ways to model and recognise trust and reputation in the on-line world, i.e. to identify the available cues and develop mechanisms to fuse them; (ii) designing mechanisms to collect and propagate these measures in a scalable, secure, and robust way. According to Kamvar et al. [26], there are six main characteristics that any on-line trust and reputation system should address:
1. Self-policing. The system should rely on the information given by the users of the network and not on some central authority; therefore, ratings given by users about current interactions should be collected and distributed.
2. Durability in time of the entities. After an interaction, it is normally assumed that a posterior interaction will take place in the future.
3. Anonymity. Reputation and trust should be linked to a virtual ID.
4. No profit to newcomers. Reputation is built by consistent good behaviour; there are no advantages for new members.
5. Minimal overhead. The computation of trust and reputation values should not constitute a significant computational burden on the whole system.
6. Robustness to malicious collectives. Users trying to abuse the trust system should be immediately recognised and blocked in the system.
In the following the main characteristics of the trust and reputation systems are analysed. To do so, in Section 3.1 the main reputation network architectures are presented. In Section 3.2 , the procedures to compute reputation are outlined. In Section 3.3 , the principal approaches to carry out trust propagation between peers are studied. Finally in Section 3.4 , an overview of various methods that use both reputation and trust are presented.

Reputation network architectures
In a reputation system, once a transaction between two agents is completed, the agents are required to rate the quality of the transaction (service). The architecture of these systems determines the way in which the ratings and reputation scores are collected, stored and shared between the members of the system. There exist two main types of reputation network architectures (see Fig. 1): centralised and distributed.
In a centralised reputation system ( Fig. 1 (a)) a central authority collects all the ratings and constantly updates each agent's reputation score as a function of the ratings the agent receives. This type of system requires (i) a centralised communication protocol in charge of keeping the central authority updated with all the ratings; and (ii) a reputation calculation method for the central authority to estimate and update the reputation of each agent. In contrast, in a distributed reputation system ( Fig. 1 (b)) each agent individually collects and combines the ratings from the other agents. That is, an agent A, who wants to transact with another target agent B, has to request ratings from the other community members who have directly interacted with agent B. Consequently, given the distributed nature of the information, obtaining the ratings from all interactions with a given agent may be too expensive (time consuming), and so only a subset of the interactions, usually from the relying agent's network, is considered to calculate the reputation score. This type of system requires (i) a distributed communication protocol to allow agents to get information from the other agents they are considering transacting with; and (ii) a reputation computation method to estimate and update the reputation given the values provided by other agents (neighbours). A well known example of the distributed architecture is P2P networks, in which each agent acts as both client and server. It is noted that these networks may introduce a security threat, since they could be used to propagate malicious software or to bypass firewalls. Therefore, the role of reputation in this particular case is crucial to determine which nodes in the network are most reliable and which ones should be avoided.

Reputation calculation
In the following the most popular mechanisms to compute reputation are outlined.

Counting
This simple technique consists of subtracting the summation of the negative ratings from the summation of the positive ones. This is the technique proposed by eBay [34]. An alternative related approach is Amazon's, based on a weighted average of the ratings taking into consideration factors such as rater trustworthiness, distance between ratings and current scores [36].
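Both counting schemes can be sketched in a few lines of Python. The function names are illustrative, and the weighting in the second function stands in for the proprietary factors mentioned above:

```python
def feedback_score(ratings):
    """eBay-style counting: number of positive ratings minus
    number of negative ratings (ratings encoded as +1 / -1 / 0)."""
    positives = sum(1 for r in ratings if r > 0)
    negatives = sum(1 for r in ratings if r < 0)
    return positives - negatives

def weighted_average_score(ratings, weights):
    """Amazon-style weighted average: each rating is weighted, e.g. by
    the rater's trustworthiness (the weighting policy is an assumption)."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)
```

For instance, a seller with three positive and one negative rating obtains a feedback score of 2, while a weighted average lets a highly trusted rater's 5-star review count for more than an untrusted rater's.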

Probabilistic models
These models use previous binary ratings, positive or negative, as input to estimate the probability of a future transaction being positive or negative [18,22]. The β-probability distribution is a family of continuous probability distributions that has been used in Bayesian analysis to describe initial knowledge concerning the probability of success. Its density relies on the gamma function Γ in the following way:

f(p | α, β) = [Γ(α + β) / (Γ(α) Γ(β))] · p^(α−1) · (1 − p)^(β−1), 0 ≤ p ≤ 1,

where, in the reputation framework of interest in this paper, α and β are determined by the amount of positive and negative ratings, respectively, and the reputation score can be taken as the expected value E[p] = α/(α + β).

Fuzzy models
These use fuzzy numbers or linguistic ratings, modelled as fuzzy sets with membership functions describing the extent to which an agent can be considered trustworthy or not. Examples of this approach are the Regret system proposed by Sabater and Sierra [35] and the trust based GDM methodologies [45-47] that will be discussed in more detail in the next section.

Trust propagation approaches
In a social network, where the agents have expressed their trust in other agents, there might not be a direct trust relationship between every pair of agents of the network. The goal of a trust propagation system is to estimate unknown trust values between pairs of agents using the known and available trust values (see Fig. 2). Thus, given a network of n agents, who may have expressed some of their levels of trust (and/or distrust), and T = (t_ij) being the matrix of trust values, where t_ij (∈ [0, 1]) represents agent i's trust value in agent j, it may be the case that some of the elements of matrix T are unknown. Several authors have proposed to estimate the trust value between two agents with no previous interaction by means of models that rely on some kind of transitivity property of trust: an iterative transitivity based aggregation is carried out along the different paths in the network that indirectly connect both agents, until the estimated trust scores become stable for all agents. These models are based on the premise that a user is more likely to trust the statements/recommendations/advice coming from a trusted user. In this way, trust may propagate (with appropriate mitigation) through the network [15]. These models are known as flow reputation systems [17,22]. In what follows, a review of the two most representative mechanisms is presented.  Guha et al.'s model [15] This approach estimates the missing trust values by carrying out atomic propagation of trust in four different ways (the first three are depicted in Fig. 3 ): • Direct propagation: Given that t_ij = 1, meaning that i trusts j, and t_jk = 1, indicating that j trusts k, it is concluded that i trusts k by means of an atomic direct propagation, where trust propagates directly along an edge. • Transpose trust: Given that i trusts j, then j might present some level of trust towards i. 
• Co-citation: In this case the agent i 1 trusts j 1 and j 2 , and agent i 2 trusts j 2 . The co-citation propagation assumes that agent i 2 may trust agent j 1 as well. • Trust coupling: Given that i trusts j then this trust propagates to k when j and k trust agents in common.
The above four types of trust propagation can be combined into a single matrix C_{B,W}, based on a belief matrix, B, and a weight vector, W = (w_1, w_2, w_3, w_4), as follows:

C_{B,W} = w_1 B + w_2 B^T + w_3 B^T B + w_4 B B^T,

where the four terms correspond, respectively, to direct propagation, transpose trust, co-citation and trust coupling. With this approach, distrust can also be propagated. With respect to trust propagation, it is obvious that if i trusts j, and j trusts k, then i might have a "positive" predisposition towards k based on this knowledge. However, with respect to distrust, this transitivity might not hold, which stems from the complexity of trust and distrust representing people's multidimensional utility functions. To overcome this issue, the authors propose two corresponding algebraic notions of distrust propagation: (1) the first one, known as multiplicative distrust propagation, is based on the premise that the enemy of your enemy may be your friend; and (2) the second one, denominated additive distrust propagation, is based on the notion that if your enemy does not trust someone, you should not trust that person either.
In order to propagate both trust and distrust, Guha et al. propose three different schemes that vary in the way the belief matrix, B, and the matrix of propagation after k steps, P(k), are generated, by taking into account or not the distrust values: • Trust only. The distrust values are completely ignored in this case, so only trust scores are propagated.
• One-step distrust. In this case additive distrust propagation is assumed, that is, if a user does not trust someone, all the judgements of that person are discarded, and so distrust propagates only one step whereas trust propagates recursively.
• Propagated distrust. In this case it is assumed that both trust and distrust are propagated together, and so they are processed as the two ends of a continuum.
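A minimal sketch of this propagation machinery, assuming a combined matrix of the form C_{B,W} = w_1 B + w_2 B^T + w_3 B^T B + w_4 B B^T; the pairing of weights to the four atomic propagations (direct, transpose, co-citation, coupling) follows the order in which they were described and is an assumption:

```python
import numpy as np

def combined_propagation_matrix(B, w):
    """Combine the four atomic propagations into one matrix:
    w1 * B        (direct propagation)
    w2 * B^T      (transpose trust)
    w3 * B^T B    (co-citation)
    w4 * B B^T    (trust coupling)."""
    w1, w2, w3, w4 = w
    return w1 * B + w2 * B.T + w3 * (B.T @ B) + w4 * (B @ B.T)

def propagate(B, w, k):
    """Beliefs reachable after k atomic propagation steps: C^k."""
    C = combined_propagation_matrix(B, w)
    return np.linalg.matrix_power(C, k)

# Hypothetical belief matrix: agent 0 trusts 1, agent 1 trusts 2.
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
# With direct propagation only, two steps let trust flow from 0 to 2.
P2 = propagate(B, (1.0, 0.0, 0.0, 0.0), 2)
```

In practice the result is interpreted after rounding or thresholding, since repeated multiplication inflates raw magnitudes.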
Kamvar et al.'s approach [26] This approach is also based on the notion of transitivity, but without distrust propagation. In addition, in contrast to the pairwise approach used in Guha et al.'s model [15], this approach calculates a global measure of trust for each node with the following twofold aim: (i) to isolate malicious agents from the network, encouraging agents to interact with reputable ones; and (ii) to motivate the agents to interact by rewarding the reputable ones.
This approach works as follows: each time agent i interacts with agent j, the transaction is rated as positive or negative, and a local trust value s_ij is obtained by accumulating the ratings of the individual transactions between them. These local values are then normalised as c_ij = max(s_ij, 0) / Σ_j max(s_ij, 0), so that they lie in [0, 1] and sum to one for each agent. Following this normalisation, the aggregation is carried out by asking peer i's acquaintances about their opinions on other peers, which are weighted by the trust peer i places on them. In this way, the trust value that peer i gives to peer k, based on the questioning of peer i's friends, is computed as t_ik = Σ_j c_ij c_jk.
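The normalisation and aggregation steps above can be sketched in matrix form, iterating t ← C^T t until the global trust vector stabilises; this is a simplified reading of the scheme, omitting the pre-trusted peers and other safeguards of the full algorithm:

```python
import numpy as np

def normalise_local_trust(S):
    """c_ij = max(s_ij, 0) / sum_j max(s_ij, 0); rows without any
    positive rating fall back to a uniform distribution."""
    S = np.maximum(np.asarray(S, dtype=float), 0.0)
    sums = S.sum(axis=1, keepdims=True)
    n = S.shape[0]
    return np.where(sums > 0, S / np.where(sums > 0, sums, 1.0), 1.0 / n)

def global_trust(S, tol=1e-10, max_iter=1000):
    """Repeatedly aggregate acquaintances' opinions, t <- C^T t,
    starting from a uniform trust vector, until the scores are stable."""
    C = normalise_local_trust(S)
    n = C.shape[0]
    t = np.ones(n) / n
    t_new = t
    for _ in range(max_iter):
        t_new = C.T @ t
        if np.linalg.norm(t_new - t, 1) < tol:
            break
        t = t_new
    return t_new

# Hypothetical ratings: peers 0 and 1 rate each other positively;
# peer 2 rates others but receives no positive ratings (e.g. malicious).
S = np.array([[0, 1, 0],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
t = global_trust(S)
```

In this toy example the global trust of peer 2 converges to zero, illustrating how agents that never earn positive ratings are isolated.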

Trust and reputation distributed approaches
In this section we present some examples of systems that combine both trust and reputation.
• RateWeb [30] is a decentralised and unstructured approach that calculates trust between web services. In order to select the appropriate service, the trusting entity queries the community for a list of suitable service providers. This set of eligible service providers is returned together with a list of past entities that have used each service. In this case, each agent stores a personal perception of the services it has interacted with, and the reputation of each service based on this feedback is calculated as a credibility weighted average of the individual trust values:

Rep(i) = ( Σ_{j ∈ L} λ_f · Cr_j · t_ij ) / Σ_{j ∈ L} Cr_j,

where L denotes the set of trusting agents that have interacted with service i; t_ij represents the pairwise trust value that agent j has in i; Cr_j ∈ [0, 1] is the credibility of each trusting entity, as viewed by the inquiring entity; and λ_f ∈ [0, 1] is a time decay factor for trust. • R2Trust [37] is an example of a fully distributed system in which the reputation of an agent in the network is calculated by aggregating the direct interactions of this entity as well as by obtaining recommendations. The value of each recommendation is weighted in the aggregation using local pairwise trust values, which are probabilistic ratings between zero and one calculated as a function of past interactions. These trust values are calculated using the social relationships, with the particularity that the inherent risk in these relationships is included. These values are then aggregated and accumulated to give an entity a reputation value, using a credibility score weighting mechanism to filter out untrustworthy second-hand opinions. The main advantage of this approach is that, thanks to the risk assessment, it is able to react quickly when there is an irregular variation in the behaviour of a given peer. In this case the reputation is calculated as a time weighted aggregation of the per-period local trust values:

R_ij = Σ_{k=1}^{n} λ_k · e_ij^{t_k},

where λ_k = ρ^{n−k} is the decay factor of time period k, with 0 ≤ λ_k ≤ λ_{k+1} ≤ 1, 1 ≤ k ≤ n, and e_ij^{t_k} denotes the local trust value of entity i in entity j over the time period t_k. 
• GRAft [17] is a distributed reputation system whose main peculiarity is that it uses both explicit reputation information (such as scores and ratings) and implicit information (such as the in-degree and out-degree of a node in a social network). This reputation information is gathered and stored in the form of XML based profiles. The system does not process this information itself; instead, it proposes logical policies to generate trust based on the users' profiles. • Another decentralised graph-based reputation model has been proposed in [49], which makes use of social context, including the behavioural activities and social relationships of the opinion contributors, to build trust relationships even in scenarios with scarce first-hand information. In order to attenuate the subjectivity of opinions and the dynamics of behaviours, a set of critical trust factors is pointed out. For example, a deception filtering approach able to recognise malicious agents is included to discard bad-mouthing opinions, together with personalised direct distrust propagation. This system develops a multi-graph based social network in which the users are characterised and interconnected by means of the following criteria: (i) the context of each performed activity; (ii) the values associated with each activity and relationship; (iii) all the performed activities; and (iv) all the social relationships of the users.
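As an illustration of the time-decay weighting used by systems such as R2Trust, the following sketch aggregates per-period local trust values with geometrically increasing weights for more recent periods. The normalised weighted average is an assumption, since the exact aggregation details vary between systems:

```python
def decayed_reputation(trust_history, decay=0.8):
    """Aggregate per-period local trust values e_ij^{t_k}, k = 1..n,
    with weights lambda_k = decay^(n - k), so that
    0 <= lambda_k <= lambda_{k+1} <= 1 and recent periods weigh most.
    Normalising by the weight sum keeps the result in [0, 1]."""
    n = len(trust_history)
    weights = [decay ** (n - k) for k in range(1, n + 1)]
    total = sum(w * e for w, e in zip(weights, trust_history))
    return total / sum(weights)
```

For instance, an agent that behaved badly early on (trust 0.0) but well recently (trust 1.0) recovers a reputation above 0.5, while a sudden drop in recent behaviour pulls the score down quickly, matching the fast-reaction property described above.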

Trust based social network decision making approaches
Group decision making is an example of a scenario in which multiple agents interact to obtain a solution. Nowadays there is an emergent trend that considers this process in the context of a large scale social network, with multiple agents negotiating their opinions by means of feedback in order to reach a consensus solution [4,40,50]. Generally, GDM processes are composed of four main stages, as depicted in Fig. 4.

Agents preference elicitation
At this first stage the experts provide their opinions. In many approaches the use of fuzzy sets is considered in order to handle uncertainty in the experts' opinions. As explained later, if trust is included in the procedure, then a mechanism to propagate trust between the members taking part in the process is developed. Moreover, there could be cases where some experts may provide incomplete information [41] , and so a process to estimate the missing information may be included taking into account the trust between the experts.

Opinion aggregation
At this stage the opinions coming from the different agents are fused. Here trust may play a key role, since the opinions of the agents may be weighted according to their associated trust values.

Consensus calculation
At this stage, the level of similarity between the agents' opinions is computed in order to assess whether an acceptable level of group agreement has been reached.

Feedback
At this stage personalised feedback is produced and provided to all or some members of the network to consider in a negotiation process, with the aim of bringing the experts' opinions closer and, consequently, facilitating the reaching of group consensus.
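The consensus calculation stage of the pipeline above can be sketched as follows, assuming opinions expressed as single values in [0, 1] and similarity measured as 1 − |o_i − o_j|; real approaches typically operate on fuzzy preference relations, so this is a deliberate simplification:

```python
def consensus_level(opinions):
    """Group consensus as the average pairwise similarity between
    opinions in [0, 1], with sim(i, j) = 1 - |o_i - o_j|."""
    n = len(opinions)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1.0 - abs(opinions[i] - opinions[j])
            pairs += 1
    return total / pairs

def consensus_reached(opinions, threshold=0.85):
    """Decide whether the feedback/negotiation loop can stop; the
    threshold value is illustrative and application dependent."""
    return consensus_level(opinions) >= threshold
```

When consensus is not reached, the feedback stage would target the experts whose opinions deviate most from the group, nudging them towards the collective position before the next round.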
Recently, trust and reputation have been considered key elements in group decision negotiation processes. Fig. 5 shows a scheme that summarises the stages where trust may play a role in these scenarios. In the following, an analysis of the main approaches that use trust in group decision making scenarios is presented from three different perspectives: (i) trust propagation; (ii) trust based opinion fusion; and (iii) trust modelling.

Trust propagation in GDM
In [44] a social network based group decision making approach is presented in which the information about trust and distrust is jointly propagated among the members of the network by means of a t-norm operator, following the methodology presented in [42], where the t-norm and the t-conorm operators were used to propagate trust and distrust, respectively. Specifically, a trust propagation mechanism based on the transitivity property of the t-norm operator is proposed, which can be formally expressed as follows:

P((t_1, d_1), (t_2, d_2), ..., (t_n, d_n)) = (TN(t_1, t_2, ..., t_n), TN(t_1, t_2, ..., t_{n−1}, d_n)),

where TN represents a t-norm operator and the pairs λ = (t, d) are referred to as trust functions (TFs). It is argued in [47] that using two different operators to independently propagate this information might not be optimal, since trust and distrust are not independent. Moreover, the authors remark that the t-norm tends to favour the minimum value in the aggregation whereas the t-conorm favours maximum values, and so this predictable behaviour supposes a vulnerability of the system in the presence of malicious users. For this reason, in [47] the propagation of both trust and distrust is unified through the implementation of a uninorm operator, which generalises both the t-norm and the t-conorm operators. In general, a uninorm operator behaves like a t-norm when all values are below the identity element; when all the values are above the identity element it behaves like a t-conorm; otherwise it behaves like a compensatory (symmetric mean) operator.
The type of uninorm operator used in [47] is the well known and-like representable cross ratio uninorm which, according to [7], is the most convenient for handling inconsistent information appropriately. When estimating the trust between two distant nodes in a social network, if more than one indirect path between them exists, then it is proposed to use the shortest one, that is, the one with the minimum number of intermediate nodes. For example, given the scenario represented in Fig. 6, to estimate the trust and distrust values between E1 and E3 the following premises are followed:
• Trust propagation: in a complete trust/distrust scenario, it is expected that when expert E1 fully trusts E2 and expert E2 fully trusts E3, then E1 will fully trust E3.
• Distrust propagation: expert E1 will fully distrust E3 when expert E2 fully distrusts E3 and expert E1 fully trusts E2.
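A minimal sketch of uninorm-based propagation, assuming the cross ratio uninorm U(x, y) = xy / (xy + (1 − x)(1 − y)) with identity element 0.5; the function names and numeric values are illustrative, not taken from [47]:

```python
from functools import reduce

def cross_ratio_uninorm(x: float, y: float) -> float:
    """Cross ratio uninorm with identity element 0.5.

    Behaves like a t-norm when both values are below 0.5, like a
    t-conorm when both are above it, and compensates otherwise.
    U(0, 1) is undefined; 0.0 is returned here by convention.
    """
    num = x * y
    den = x * y + (1.0 - x) * (1.0 - y)
    return 0.0 if den == 0.0 else num / den

def propagate_path(values):
    """Fold the uninorm along a chain of trust values (shortest path)."""
    return reduce(cross_ratio_uninorm, values)

# E1 -> E2 -> E3: two links above the identity reinforce each other,
# whereas mixing values above and below 0.5 is compensated.
print(propagate_path([0.8, 0.8]))
```

Note that, unlike the t-norm, two high trust values reinforce each other here (0.8 and 0.8 propagate to roughly 0.94 rather than shrinking towards the minimum).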

Trust based opinion fusion
In decision making scenarios it is usually necessary to aggregate the individual opinions before arriving at a collective solution. In the recent literature on fuzzy decision making, Yager's Ordered Weighted Averaging (OWA) operator [48] has been widely used. This operator allocates different importance degrees to the aggregated elements based on their ordering with respect to a certain criterion of interest, a characteristic that has been shown to be very effective against malicious behaviour [10].
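As an illustration of why rank-based weighting resists manipulation, the following sketch applies an OWA operator with trimming weights; the weights and rating values are made up for the example:

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weights attach to rank positions,
    not to particular sources, after sorting values in descending order."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Trimming weights ignore the extremes, so a single unfair rating is
# blunted regardless of which source produced it.
ratings = [0.9, 0.85, 0.8, 0.1]            # 0.1 could be a malicious score
print(owa(ratings, [0.0, 0.5, 0.5, 0.0]))
```

Because the zero weights sit at the rank positions rather than at fixed sources, an attacker cannot know in advance whether their rating will be discarded.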
In [47], the properties of the cross ratio uninorm sum of TFs and of the uninorm scalar multiplication of a TF by a natural number are used to formally define the uninorm trust weighted average (UTWA) operator, which uses trust as a measure to calculate both the weights and the ordering of the values in an OWA aggregation. Recently, in [46], the authors have proposed an extension of this approach that takes advantage of all the possible paths to estimate trust and distrust, fusing all of them by means of an OWA operator that takes into account the risk attitude of the members of the network. Moreover, this approach uses trust to provide recommendations to the experts to estimate their incomplete preferences. On the other hand, in [45] the set of experts is represented by means of a directed graph in which the edges represent the trust among the given experts. In this case, in order to assess the importance of each expert in the aggregation, the authors propose to calculate the in-degree centrality of each node in the network; this value, when normalised, is employed to calculate the weights and to induce the ordering of the values in an IOWA aggregation.
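The in-degree centrality based weighting of [45] can be illustrated as follows. The toy trust graph and function names are our own, and for simplicity the final step uses a plain weighted average of opinions rather than the full IOWA machinery:

```python
from collections import Counter

def indegree_weights(experts, trust_edges):
    """Normalised in-degree centrality: an edge (i, j) means 'i trusts j',
    so an expert trusted by many peers receives a larger weight."""
    indeg = Counter(j for _, j in trust_edges)
    total = sum(indeg[e] for e in experts)
    return {e: indeg[e] / total for e in experts}

edges = [("E1", "E3"), ("E2", "E3"), ("E3", "E1"), ("E2", "E1")]
w = indegree_weights(["E1", "E2", "E3"], edges)

# Weighted fusion of the experts' (illustrative) opinions.
opinions = {"E1": 0.7, "E2": 0.4, "E3": 0.9}
collective = sum(w[e] * opinions[e] for e in w)
print(w, collective)
```

Here E2 trusts others but is trusted by nobody, so its opinion receives zero weight in the fusion.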

Trust modeling
In most of the contributions analysed so far, trust has been modelled by means of a crisp or binary relation, that is, only two possible states are assumed: 'trusting' and 'not trusting'. However, in real life situations, trust can be interpreted as a "gradual concept", with people trusting someone 'high', 'middle' and/or 'low' [42]. In [45], the concept of distributed linguistic trust is defined to establish the trust relationships within a group of users, while in [29] trust is modelled as an interval-valued trust function, with definitions provided for the interval-valued trust score, the interval-valued knowledge degree, and a ranking order relation for interval-valued trust functions, with the aim of achieving more flexibility in the expression of opinions so as to include trust, distrust, hesitancy and conflict. This last approach propagates trust between experts following the premise that decision makers may trust opinions coming from trusted experts in their vicinity.

Table 1. Summary of the trust and reputation approaches analysed.

Reference  Year  Architecture  Trust/distrust      Trust computation  Local/global  Application
[36]       2000  Centralised   Trust               Counting           Global        e-commerce
[35]       2001  Centralised   Trust/reputation    Fuzzy              Global        General
[34]       2002  Centralised   Trust               Counting           Global        e-commerce
[26]       2003  Distributed   Trust               Counting           Global        P2P
[15]       2004  Distributed   Trust and distrust  Counting           Pairwise      General
[18]       2009  Distributed   Trust               Probabilistic      Local         Security
[30]       2009  Distributed   Reputation          Multiple sources   Global        Web services
[42]       2009  -             Trust/distrust      t-norm/t-conorm    Pairwise      Rec. systems
[37]       2011  Distributed   Reputation          Past interaction   Pairwise      General
[17]       2015  Distributed   Reputation          Explicit/implicit  Global        General
[49]       2015  Distributed   Reputation          Social context     Global        e-commerce
[44]       2015  -             Trust/distrust      t-norm             Pairwise      GDM
[47]       2016  -             Trust/distrust      Uninorm            Pairwise      GDM
[29]       2017  -             Interval trust      -                  Pairwise      GDM
[46]       2018  -             Trust/distrust      Uninorm            Pairwise      GDM
[45]       2018  -             Linguistic trust    -                  Pairwise      GDM
Therefore, the concept of trust degree is defined based on the distance between two experts, without taking into account other aspects such as the reputation of the experts or the interactions between them [31]; this trust degree is subsequently used in a recommendation mechanism to increase the consensus. Table 1 presents, in chronological order, the main contributions analysed in Sections 3 and 4, together with their main characteristics.

Opinion dynamics
Opinion dynamics can be regarded, in the context of social networks, as the way in which the network is created and evolves, making individuals' attitudes, opinions and behaviours toward particular objects change by taking into consideration the displayed attitudes, opinions, and behaviours of other individuals toward the same objects [8,12,40]. Opinion formation and evolution are closely linked with trust between the members of the network: the opinions of individuals who trust each other are related, and, in turn, the trust that an individual may place in another one is conditioned by the opinions of other trusted individuals. Thus, a social influence network can be defined as a social structure composed of members dealing with a common issue [14,20,24,25]. These networks can be mathematically formulated as follows:

Definition 3. Social influence network: A directed influence network is an ordered triple G_S = <N, T, W>, comprising a set of nodes N connected by a set of ties T with a set of influence weights W attached to T, where w_ij represents the influence exerted on agent j by agent i and W = [w_ij].

In the literature we can find two main mathematical models for opinion dynamics, the DeGroot model [8] and its generalisation proposed by Friedkin and Johnsen [14].
DeGroot model [8]. This model considers the opinion evolution of the individuals as a weighted average of their opinions at the previous time state, using the following recursive mathematical formulation:

y(t + 1) = W · y(t)

where y(t) is a real-valued vector representing the individuals' opinions at time t.
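A minimal simulation of the DeGroot recursion; the row-stochastic influence matrix below is invented for illustration:

```python
def degroot_step(W, y):
    """One DeGroot update y(t+1) = W y(t), in pure Python."""
    return [sum(w_ij * y_j for w_ij, y_j in zip(row, y)) for row in W]

W = [[0.6, 0.3, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]    # each row sums to 1 (row-stochastic)
y = [1.0, 0.0, 0.5]      # illustrative initial opinions

for _ in range(100):
    y = degroot_step(W, y)
print(y)  # the three opinions approach a common consensus value
```

Since W is row-stochastic and its influence graph is strongly connected, repeated averaging drives all opinions to the same limit, i.e. consensus.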
Friedkin and Johnsen (FJ) model [14]. This model, which generalises the DeGroot model, also considers how individual opinions evolve with respect to their initial opinions. It is supported by the work reported in [20], which empirically proves the assumption that individuals update their opinions via convex combinations of their own and others' displayed opinions, based on weights that are automatically generated by individuals in their responses to the displayed opinions of other individuals. The mathematical formulation of the FJ model is

y(t + 1) = A · W · y(t) + (I − A) · y(0)

According to the FJ model, when opinion formation reaches an equilibrium (consensus or a deadlock), the final opinions can be predicted by combining the following three elements:
1. y(t) is a column vector representing the opinions of the members of the network at time state t, with y(0) representing their initial opinions.
2. A is a diagonal matrix whose entries a_ii represent each individual's susceptibility to interpersonal influence, and I is the identity matrix.
3. W = [w_ij]_{m×m} is a row-stochastic matrix representing the actors' relative importance (row elements sum to 1) on others' preferences, including themselves, i.e. w_ij represents the direct importance that actor i gives to the opinion held by actor j.
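The FJ recursion can be sketched analogously to the DeGroot case; the weights and susceptibilities below are illustrative only:

```python
def fj_step(W, a, y, y0):
    """One Friedkin-Johnsen update y(t+1) = A W y(t) + (I - A) y(0),
    with the diagonal of A given as the list a of susceptibilities."""
    wy = [sum(w * yj for w, yj in zip(row, y)) for row in W]
    return [a_i * wy_i + (1 - a_i) * y0_i
            for a_i, wy_i, y0_i in zip(a, wy, y0)]

W = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
a = [1.0, 0.5, 1.0]      # agent 2 is partially stubborn (a_ii < 1)
y0 = [1.0, 0.0, 0.0]
y = list(y0)

for _ in range(200):
    y = fj_step(W, a, y, y0)
print(y)  # an equilibrium is reached, but not necessarily consensus
```

A stubborn agent (a_ii < 1) keeps pulling towards its initial opinion, which is why the FJ model can settle in a deadlock rather than consensus.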
Particular cases of the FJ model are the so called bounded confidence models. According to these models, each agent only interacts with the agents who hold similar opinions. This similarity of opinions can be seen as a confidence level between agents. Therefore, in these models, the similarity between agents as well as their initial opinions determine the neighbourhood of opinions with which each agent is likely to interact at every instant. The two main bounded confidence-based models are the Hegselmann-Krause (HK) model [16] and the Deffuant-Weisbuch (DW) model [43], which consist of three main steps:
Step 1: each agent's confidence set (neighbourhood) is determined: given a confidence level ε > 0, ∀i: determine j ∈ {1, ..., n} such that |y_i(t) − y_j(t)| ≤ ε;
Step 2: each agent's degrees of influence with respect to a given one are computed (W = [w_ij]_{m×m}); and
Step 3: the opinion of each agent is updated.
• The HK model updates each agent's opinion synchronously as the average of the opinions in its confidence set:

y_i(t + 1) = (1 / |N_i(t)|) Σ_{j ∈ N_i(t)} y_j(t), with N_i(t) = {j : |y_i(t) − y_j(t)| ≤ ε}

• The DW model updates, at each instant, the opinions of a randomly selected pair of agents i and j satisfying |y_i(t) − y_j(t)| ≤ ε:

y_i(t + 1) = y_i(t) + λ (y_j(t) − y_i(t)),  y_j(t + 1) = y_j(t) + λ (y_i(t) − y_j(t))

where λ is a convergence parameter.
The main difference between these two models is that the DW model adopts an asynchronous opinion updating process, whereas the HK model adopts a synchronous updating process.
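A compact sketch of the synchronous HK update; the confidence level and initial opinions are illustrative:

```python
def hk_step(y, eps):
    """One synchronous Hegselmann-Krause update: each agent averages the
    opinions within its confidence level eps (itself included)."""
    new = []
    for yi in y:
        neighbours = [yj for yj in y if abs(yi - yj) <= eps]
        new.append(sum(neighbours) / len(neighbours))
    return new

y = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
for _ in range(20):
    y = hk_step(y, eps=0.25)
print(y)  # two opinion clusters form: the extremes never interact
```

Because the gap between the two emerging clusters exceeds ε, they never merge, which is the typical fragmentation behaviour of bounded confidence models.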

Discussion and open research challenges
Motivated by the analysis and observations carried out in this research survey, the following research challenges are highlighted:

Identifying sources of trust and reputation in the digital world susceptible of being explicitly and/or implicitly estimated
The majority of the available commercial systems rely on their users providing explicit feedback regarding their degree of satisfaction with the interactions with other users [17]. This approach has the advantage of allowing direct feedback to be obtained in a simple way. Nevertheless, direct rating suffers from several drawbacks, such as the low motivation of users to rate, unfair and biased ratings, and ballot box stuffing (more ratings than the legitimately expected number of ratings) [17,22]. In on-line interconnected systems, though, there is much more information available that can be used to extract implicit information about the trust and reputation between users. For example, two individuals that interact regularly are likely to trust each other. In this regard, behaviour scoring mechanisms could be designed and developed to assess, in an automatic way, the reputation of agents. Thus, the following two challenges deserve future research efforts: (1) recognising patterns of behaviour related to the reputation level; and (2) merging global behaviour evaluation with pairwise interactions between users to assess peer to peer trust.
Similarity measures could also be useful to assess trust between users. For example, in opinion dynamics models, it has been proved that users tend to consider and trust more those users with opinions similar to their own [16,24,25,40,45]. The challenge in this case resides in the characterisation of similarity measures that capture the relevant features for two users to be considered similar, and thereby to trust each other.

Propagation of trust
In order to estimate the trust value between two agents that have not previously interacted, the flow model has been proposed, which relies on the transitivity property of trust and the existence of at least one indirect path in the local network connecting both agents. An important challenge here is the assessment of the degree of coherence of the estimated trust, i.e. according to the known trust values of an agent, is it pertinent to use the transitivity property to estimate the missing trust values? Such a measure of coherence may be used a priori to evaluate whether the transitivity property is adequate for propagating trust.

Dealing with malicious users
In reputation systems, a common malicious behaviour is associated with unfair ratings. In other words, when demanding feedback from users about how an interaction went, some of the users may act in a malicious way with a given purpose, for example, to manipulate the score towards the benefit of certain entities. Another common malicious behaviour is the so called 'ballot box stuffing', which consists in getting more ratings than the legitimately expected number of ratings. The main approaches to detect and minimise these fraudulent behaviours are:
• Endogenous discounting. These mechanisms take advantage of the statistical properties of the ratings to allocate less importance to, or even exclude, the ratings that presumably may be unfair.
• Exogenous discounting. These approaches are based on the premise that raters with low reputation usually provide unfair valuations, which can be weighted down accordingly.
• Providing ratings just after the transaction has been completed. This is good practice to avoid ballot box stuffing.
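A hypothetical sketch of endogenous discounting: ratings that deviate strongly from the sample median are down-weighted before averaging, so a burst of unfair scores loses influence. The threshold and discount factor are arbitrary illustrative choices, not taken from any of the surveyed systems:

```python
from statistics import median

def discounted_mean(ratings, threshold=0.3, discount=0.1):
    """Average the ratings after down-weighting statistical outliers,
    i.e. ratings further than `threshold` from the sample median."""
    m = median(ratings)
    weights = [1.0 if abs(r - m) <= threshold else discount for r in ratings]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

honest = [0.8, 0.85, 0.75, 0.9]
attack = honest + [0.0, 0.0]           # two unfair low ratings injected
print(discounted_mean(attack))          # stays close to the honest mean
print(sum(attack) / len(attack))        # the plain mean is dragged down
```

The median anchors the filter because, unlike the mean, it is itself robust to a minority of unfair ratings.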
The challenge in this case resides in developing mechanisms to flag users with malicious behaviour. In this sense, techniques based on game theory may play a key role [6], together with the use of OWA-based operators [48], which have been proved to mitigate malicious behaviour in information aggregation processes better than the weighted average operator [10].

Reputation and trust in GDM systems
As mentioned before, GDM scenarios can be considered a specific case of social network in which the users interact to reach a decision, and where consensus is desired. It is usual for a GDM process to include a negotiation process in which the agents are asked to provide recommendations to other agents. In this scenario the following three research challenges are identified:
• Modelling trust in GDM frameworks. In dealing with trust, the existing decision making approaches assume that pairwise trust values are given, and if there are unknown trust values (not provided by the experts) these are estimated using a transitivity based trust propagation procedure. In order to avoid malicious behaviour, and also to free users from having to rate their peers, other mechanisms of trust calculation are necessary, which could be based, for example, on measures of centrality [13]. An initial effort in this sense has been carried out in [40,45], where trust has been modelled based on the concept of proximity of the experts' opinions [31].
• Interpersonal influence, trust and reputation aggregation for recommendations. Within this context it is key to leverage the power of the trust based network in decision making processes, using the trust between agents as a measure of influence, not only in e-commerce scenarios but also in e-democracy, e-learning and e-health platforms [1,28]. With the objective of deploying this idea, we identify two main open questions: (i) How to aggregate both influence and trust in a decision process. In this sense, there are some initial efforts where the trust based aggregation of the experts' opinions is carried out using a trust induced IOWA operator [44,45]. A further endeavour in this context is to extend the opinion dynamics and propagation approaches by including trust assessment, and to analyse how the propagation of both influence and trust could be done.
(ii) Integrating trust in complex scenarios where users are heterogeneous and both reputation and trust may be in conflict, as might happen in an e-health recommendation network comprising users with different profiles and relevance levels, such as doctors and patients.
• Reputation based expert selection. In a trust network of peers there may be a large corpus of experts, not all of whom will take part in the decision; therefore it is necessary to choose who will be involved, for which reputation could be used. This would also avoid disclosing sensitive information to experts whose reputation is low, as applied in the so called search phase of P2P networks.

Conclusion
Nowadays, on-line services and communities play a prominent role in people's lives, from communication and shopping to advice seeking for decision making or recommendations. Nevertheless, the inherent anonymity and open architecture of these systems make it difficult for users to build trust relations with their peers in the network. It is in this context where trust and reputation mechanisms play an increasingly important role in facilitating user interactions. Therefore, the rich literature growing around these mechanisms for on-line communities and commercial applications is not surprising. In this contribution we have analysed these approaches and presented a survey of the main trust propagation and reputation schemes. Moreover, opinion dynamics models, which allow the identification of both leaders and followers in a social network environment, are currently being investigated with the aim of identifying new opportunities together with both trust and influence propagation. From this study, we have found that the main areas of application of trust and reputation systems are e-commerce and P2P networks, and that they are increasingly used in GDM models. However, these mechanisms are poised to have a larger impact on a wide range of scenarios, ranging from large scale decision making procedures, such as the ones implied in e-democracy, to trust based recommendations in e-health scenarios or influence and performance assessment in e-marketing and e-learning systems. In this regard, we have pointed out various research challenges that deserve further attention: (i) identifying new implicit and explicit sources of trust and reputation; (ii) trust and influence propagation in decision making approaches; (iii) trust based missing information estimation; (iv) dealing with malicious users via game theory methodologies; and (v) exploiting trust to carry out expert selection.
To conclude, it is important to remark that there is no single solution that can be applied in all contexts. Therefore, the challenge when developing a new system is to tailor it to the specific constraints imposed by each application and to the type of information that can be used for the ratings, striking a balance between reliability, security and user acceptability.