Probabilistic Trust Evaluation with Inaccurate Reputation Reports

We are concerned with the problem of trust evaluation in the generic context of large-scale open-ended systems. In such systems, truster agents have to interact with trustee peers to achieve their goals, while the trustees may not behave as required in practice. The truster therefore has to predict the behaviors of potential trustees, based on past interaction experience, in order to identify reliable ones. Due to the size of the system, there is often little or no past interaction between the truster and the trustee, in which case the truster must resort to third-party agents, termed advisors here, to inquire about the reputation of the trustee. The problem is complicated by the possibility that the advisors may deliberately provide inaccurate and even misleading reputation reports to the truster. To address this, we develop techniques, based on the Bayesian formalism, that take account of inaccurate reputations in modeling the behaviors of the trustee. The core of the techniques is a proposed notion, termed the Advisor-to-Truster relevance measure, based on which incorrect reputation reports are rectified for use in the trust evaluation process. The benefit of the proposed techniques is verified by simulation experiments.


Introduction
Nowadays, many computational systems, such as peer-to-peer networks, e-commerce, and the Grid, are moving toward open, large-scale, dynamic, and distributed architectures [1][2][3]. In such open-ended systems, an agent often has to interact with other peers to achieve its goal, while the partner agents may be malicious and thus not behave as required [4][5][6][7][8][9]. The new features of these systems, for example, scalability, mobility, autonomy, ubiquity, incomplete information, and global connectivity, imply that traditional security mechanisms are no longer adequate to identify malicious agents or control their behaviors [9,10]. One of the alternatives currently being investigated is an approach based on the notion of trust. The basic idea is to provide a quantitative evaluation of trust in each possible participating agent using the history of its behavior [5][6][7][8][9]. The trust value here is a number expressing the level of trustworthiness, and this view is known as computational trust. Based on the output of this trust evaluation process, the truster agent can choose its interaction partners.
Due to the size of the system, there is often little or no past interaction between the truster and the trustee, which hinders the truster from evaluating the trustee's trustworthiness precisely enough. In this case, the truster can resort to third-party agents, termed advisors here, in order to collect more historical observations of the trustee's behavior [7][8][9]. The advisors are expected to provide honest reports about the trustee's behavior to the truster, while, in practice, some advisors may deliberately provide inaccurate and even misleading reports. Thus, an efficient trust model that takes account of possible inaccuracies in the advisors' reputation reports is required.
To this end, we develop techniques to take account of the aforementioned issues in modeling the behaviors of the trustee. The core of the techniques is a proposed notion, termed the Advisor-to-Truster relevance (ATTR) measure, based on which incorrect reputation reports are rectified for use in the trust evaluation process.
The remainder of the paper is organized as follows. In Section 2, we formalise the general notion of probabilistic trust by introducing the typical beta trust model, and the issues related to inaccurate reputations are stated precisely in a mathematical manner. In Section 3, we propose the ATTR measure and present the trust evaluation method using that measure. In Section 4, we test the performance of the proposed method via simulation experiments. Section 5 concludes the paper.

Beta Trust Model
In this section, we define the basic notions and mathematically formulate the problem illustrated in the Introduction. We use the same notation as in [9] and focus on the perspective of probabilistic trust. The task is then to build probabilistic models of agents' behaviors using the outcomes of historical interactions. Using these models, a truster tr can estimate the probability of particular outcomes of the next interaction with a trustee te. Such a probability defines the trust of the truster tr in the trustee te. This notion of trust mimics the trusting relationship between human beings illustrated in [11].
The crucial part of any probabilistic trust model is a behavior model, which is used to estimate the probabilities of the outcomes of the next interaction. Assume that the outcomes are either success or failure; a beta trust model is introduced in [8] and then followed by other works, for example, in [6,7,12]. The idea is to model the behavior of a trustee te by a beta probability distribution over the possible outcomes, that is, success or failure, of an interaction with te. Given a sequence of outcomes h = o_1 ⋯ o_n, the parameters of this beta distribution can be estimated by Bayesian techniques [8].
The outcomes are binary here with the beta trust model. We therefore focus on the single probability θ_te that an interaction with the given trustee te will be successful. Under the assumption of fixed θ_te, a sequence of outcomes h = o_1 ⋯ o_n is a sequence of Bernoulli trials, and the number of successful outcomes in h is probabilistically distributed according to a binomial distribution. It has been recognized that the beta probability density function (pdf)

f(θ_te | α, β) = (Γ(α + β) / (Γ(α) Γ(β))) θ_te^{α−1} (1 − θ_te)^{β−1},

indexed by the parameters α and β, where Γ is the gamma function, is a conjugate prior to the binomial distribution [13]. That is, if f(θ_te | α_pr, β_pr) is chosen as the a priori pdf of θ_te, then, given a sequence h of outcomes, the resulting a posteriori pdf of θ_te is f(θ_te | α_post, β_post), the beta pdf whose parameters are related to the a priori ones and the outcome sequence h by

α_post = α_pr + ♯s(h),   β_post = β_pr + ♯f(h),

where ♯s(h) and ♯f(h) denote the numbers of successful and unsuccessful interactions in h, respectively.
Here the estimate for θ_te, the probability of a successful interaction, is naturally evaluated as the expected value of θ_te under its a posteriori pdf. Using the properties of the beta pdf, this expected value is

E[θ_te] = α_post / (α_post + β_post).

Observe that a uniform pdf, which assigns equal likelihood to all values of θ_te in the range [0, 1], is represented exactly by a beta distribution with parameters α = 1 and β = 1. Taking the uniform pdf as the a priori pdf for θ_te, which indicates an "unbiased" prior belief about θ_te (no value is more likely than another), the parameters of the a posteriori pdf are related to the sequence h of outcomes by

α_post = ♯s(h) + 1,   β_post = ♯f(h) + 1,

and the beta estimate for θ_te is therefore (♯s(h) + 1) / (♯s(h) + ♯f(h) + 2).
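As a concrete illustration, the posterior update and the resulting trust estimate can be sketched as follows (the helper names are ours, not the paper's; the uniform prior α = β = 1 is assumed):

```python
# A minimal sketch of the beta trust estimate: each successful outcome
# increments alpha and each failure increments beta; the trust value is
# the posterior mean alpha / (alpha + beta).
def beta_update(alpha, beta, successes, failures):
    """Bayesian update of the beta parameters given observed outcome counts."""
    return alpha + successes, beta + failures

def beta_estimate(alpha, beta):
    """Posterior mean of the beta distribution, used as the trust value."""
    return alpha / (alpha + beta)

# Example: 8 successes and 2 failures observed, starting from the uniform prior.
a, b = beta_update(1, 1, successes=8, failures=2)
print(beta_estimate(a, b))  # (8 + 1) / (8 + 2 + 2) = 0.75
```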
As mentioned in the Introduction, the truster tr has to consider reputation reports provided by other peers, termed advisors here, about the trustee te under consideration to enhance the trust evaluation process, especially when there is little or no past interaction experience with the trustee. The beta trust model encompasses a mechanism for handling reputation reports. It treats each interaction with the trustee as a Bernoulli trial regardless of the interacting partner. Given the representation of te's behavior by the beta probability distribution (parametrised by θ_te), the sequence h in the beta estimate above is therefore correctly seen as the sequence of the outcomes of all historical interactions with te, regardless of its partners in these interactions. This allows a reputation report from an advisor ad to be formulated as the pair

(♯s(h_ad), ♯f(h_ad)),

where ♯s(h_ad) (resp., ♯f(h_ad)) is the count of successful (resp., unsuccessful) interaction outcomes in the sequence h_ad of personal interactions between the advisor ad and the trustee te. With a set of reputation reports (provided by different advisors), a truster can evaluate the ingredients of the beta estimate as

♯s(h) = ♯s(h_tr) + Σ_ad ♯s(h_ad),   ♯f(h) = ♯f(h_tr) + Σ_ad ♯f(h_ad),

where h_tr denotes the truster's own interaction history with te and the subscript ad indexes the advisors that have interacted with the trustee in the past. Following the beta model for trust and reputation, the Dirichlet trust model is introduced in [5] and also followed by [10]. This model generalises the beta trust model such that the outcome of an interaction is not restricted to be binary (success or failure) but instead takes a value from a set of discrete rating levels, for example, {very bad, bad, average, good, excellent}. Both the beta trust model and the Dirichlet trust model process the reputation reports provided by the advisors in the same way as the outcomes of the truster's direct interactions with the trustee.
In other words, they do not take account of the possibility that the reputation reports, that is, the reported counts ♯s(h_ad) and ♯f(h_ad), may be inaccurate and even misleading. To this end, we develop techniques to handle possible inaccuracies in the advisors' reputation reports, which are presented in the following section.
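The plain aggregation scheme just described can be sketched as follows (function names are illustrative). Note that it pools the reported counts unmodified, which is exactly the vulnerability the next section addresses:

```python
# Sketch of the plain beta-model aggregation: the truster pools its own
# outcome counts with the advisors' reported counts, treating all of them
# as equally trustworthy (no inaccuracy handling yet).
def aggregate_reports(own_counts, reports):
    """own_counts and each report are (successes, failures) pairs."""
    s = own_counts[0] + sum(r[0] for r in reports)
    f = own_counts[1] + sum(r[1] for r in reports)
    return s, f

def beta_trust(successes, failures):
    """Beta estimate under the uniform prior."""
    return (successes + 1) / (successes + failures + 2)

# Truster saw 2 successes / 1 failure; two advisors report (5, 0) and (3, 2).
s, f = aggregate_reports((2, 1), [(5, 0), (3, 2)])
print(beta_trust(s, f))  # (10 + 1) / (10 + 3 + 2) ≈ 0.733
```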

Handling Inaccurate Reputations Using Relevance Information
As mentioned above, the advisor peers may provide inaccurate and even misleading reputation reports to the truster about the trustee, while many existing trust models, for example, the beta trust model and the Dirichlet trust model, do not take this into account. In this section, the notion of an Advisor-to-Truster relevance (ATTR) measure is proposed, based on which we develop techniques to handle inaccurate reputation reports in modeling the trust of the truster in the trustee.

Advisor-to-Truster Relevance (ATTR) Measure.
The ATTR measure quantifies the extent to which the reputation report provided by an advisor ad is relevant to the trust evaluation outcome given by the truster tr. We introduce two different approaches to defining the ATTR measure. One is based on the cosine similarity measure, which has found wide application in the field of information retrieval [14][15][16], and the other is developed by the authors.
To begin with, suppose that we have a trustee set {te_1, . . . , te_K}, each member of which has had interactions with both tr and ad in the past. Represent the reputation report of ad on the trustee te_k, k ∈ {1, . . . , K}, as

(♯s(h_ad,k), ♯f(h_ad,k)),

where h_ad,k denotes the sequence of past interactions between ad and te_k. Then the probability that the advisor ad will have a successful interaction with te_k is naturally estimated as

p_ad→te_k = (♯s(h_ad,k) + 1) / (♯s(h_ad,k) + ♯f(h_ad,k) + 2).

We estimate the probability that the truster tr will have a successful interaction with te_k, denoted by p_tr→te_k, in a similar way. We then construct two corresponding vectors, p⃗_ad and p⃗_tr, in the same space of probabilities for ad and tr, respectively; that is,

p⃗_ad ≜ [p_ad→te_1, . . . , p_ad→te_K],   p⃗_tr ≜ [p_tr→te_1, . . . , p_tr→te_K].

Now we introduce two metrics of the ATTR measure, namely, the cosine ATTR measure and the adjusted cosine ATTR measure, in the following.
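The construction of these probability vectors can be sketched as follows, assuming each component is the uniform-prior beta estimate computed from one agent's (successes, failures) counts for the corresponding training trustee (helper names are ours):

```python
# Sketch of building the probability vectors used by the ATTR measure.
def beta_trust(successes, failures):
    """Uniform-prior beta estimate of the success probability."""
    return (successes + 1) / (successes + failures + 2)

def probability_vector(reports):
    """reports: list of (successes, failures) pairs, one per training trustee."""
    return [beta_trust(s, f) for s, f in reports]

# One vector per agent, over the same three training trustees.
p_ad = probability_vector([(8, 2), (1, 9), (5, 5)])
p_tr = probability_vector([(7, 3), (2, 8), (4, 6)])
print(p_ad)  # [0.75, 0.1666..., 0.5]
```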

The Cosine ATTR Measure.
Here the ATTR measure is computed as the cosine of the angle between the corresponding vectors p⃗_ad and p⃗_tr:

R_cos(ad, tr) = (p⃗_ad ⋅ p⃗_tr) / (‖p⃗_ad‖ ‖p⃗_tr‖).

Observe that this cosine measure is a dot product scaled by the vector magnitudes and, because all components are nonnegative probabilities, it is normalized between 0 and 1. For any pair of vectors, the cosine measure depends only on the included angle between them, regardless of the vectors' magnitudes.
To further corroborate the above statement, consider two example cases. In the former, p⃗_ad = [0.1, 0.1, 0.1, 0.1, 0.1] and p⃗_tr = [0.9, 0.9, 0.9, 0.9, 0.9]. In the latter, p⃗_ad = p⃗_tr = [0.9, 0.9, 0.9, 0.9, 0.9]. In the former case, the physical meaning of the reputation report given by the advisor ad is the exact reverse of that of the truster tr; such an advisor is likely a malicious agent that deliberately reports reversed counts of successful and failing interactions to the truster. In the latter case, the reputation report given by the advisor ad coincides with the truster's own view. Yet the cosine measure yields the same relevance value, R_cos(ad, tr) = 1, for both cases. This result corroborates the statement that the cosine measure depends only on the included angle between the two vectors, regardless of magnitude. However, this is not a characteristic we want, since, using this measure, we would treat a markedly malicious agent in the same way as an ordinary honest peer. Clearly, making full use of the reputation data requires a smarter relevance measure. To this end, we propose the adjusted cosine measure in the following subsection.
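The two example cases can be checked directly with a short sketch of the cosine measure (function name is ours): because every component is nonnegative, the reversed advisor and the agreeing advisor receive the same score.

```python
import math

# Sketch of the cosine ATTR measure and its blind spot: an advisor whose
# probabilities are the exact reverse of the truster's is parallel to the
# truster's vector, so it scores the same as an advisor who agrees perfectly.
def cosine_attr(p_ad, p_tr):
    dot = sum(a * t for a, t in zip(p_ad, p_tr))
    norm = math.sqrt(sum(a * a for a in p_ad)) * math.sqrt(sum(t * t for t in p_tr))
    return dot / norm

truster = [0.9] * 5
print(cosine_attr([0.1] * 5, truster))  # ≈ 1.0 for the deceptive (reversed) advisor
print(cosine_attr([0.9] * 5, truster))  # ≈ 1.0 for the honest advisor as well
```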

Adjusted Cosine ATTR Measure.
To deal with the aforementioned counterintuitive property of the cosine ATTR measure, we propose an adjusted cosine ATTR measure, defined as

R_adj(ad, tr) = Σ_k g(p_ad→te_k) g(p_tr→te_k) / (sqrt(Σ_k g(p_ad→te_k)²) sqrt(Σ_k g(p_tr→te_k)²) + exp(−99)),

where g(p) ≜ p − 0.5. The term exp(−99) in the denominator avoids a zero-valued denominator. Compared with the cosine measure, each probability p is substituted by g(p) = p − 0.5. Since 0.5 is the naturally neutral probability value, a positive (resp., negative) value of p − 0.5 suggests a tendency for the outcome of the next interaction to be success (resp., failure), and the magnitude of p − 0.5 indicates the strength of that tendency. If all p's take the value 0.5, indicating that the advisor ad reports reputations in a totally random manner, then no information about the trustee's trustworthiness can be drawn from its reputation report; in this case we expect a zero-valued ATTR measure, and the adjusted measure indeed yields zero, conforming to our intuitive understanding of relevance in this special case. Now reconsider the example cases presented above: p⃗_ad = [0.1, 0.1, 0.1, 0.1, 0.1] and p⃗_tr = [0.9, 0.9, 0.9, 0.9, 0.9] in the first case, and p⃗_ad = p⃗_tr = [0.9, 0.9, 0.9, 0.9, 0.9] in the second. The adjusted cosine ATTR measure gives R_adj(ad, tr) = −1 and R_adj(ad, tr) = 1 for the former and latter cases, respectively. The sign of the relevance value thus clearly discriminates these two correlated but totally different cases. In other words, compared with the cosine ATTR measure, the new measure provides more critical information on the relevance relationship between the advisor ad and the truster tr.
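A sketch of the adjusted measure makes the three cases above concrete (function name is ours): re-centring the probabilities at the neutral value 0.5 lets the sign separate reversed advisors from agreeing ones, and an all-0.5 advisor scores zero.

```python
import math

# Sketch of the adjusted cosine ATTR measure: probabilities are shifted by
# g(p) = p - 0.5 before the cosine is taken, so a reversed advisor scores
# -1 rather than +1. The exp(-99) guard keeps the denominator nonzero.
def adjusted_cosine_attr(p_ad, p_tr):
    g_ad = [p - 0.5 for p in p_ad]
    g_tr = [p - 0.5 for p in p_tr]
    dot = sum(a * t for a, t in zip(g_ad, g_tr))
    norm = math.sqrt(sum(a * a for a in g_ad)) * math.sqrt(sum(t * t for t in g_tr))
    return dot / (norm + math.exp(-99))

truster = [0.9] * 5
print(adjusted_cosine_attr([0.1] * 5, truster))  # ≈ -1: reversed (deceptive) advisor
print(adjusted_cosine_attr([0.9] * 5, truster))  # ≈ +1: agreeing (honest) advisor
print(adjusted_cosine_attr([0.5] * 5, truster))  # 0.0: uninformative advisor
```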

Trust Evaluation Using ATTR.
Here, we present our approach to trust evaluation using the proposed notion of ATTR measure.
The assumption behind the approach is that a given malicious advisor ad is likely to provide a reputation report about a new trustee te_{K+1} with an error pattern similar to those of the reports it provided in the past about other trustees te_1, . . . , te_K. For example, if a given advisor ad provided reversed counts of successful and failing interactions about te_1, . . . , te_K, then, under this assumption, it is very likely that this advisor will also provide reversed counts of successful and failing interactions about the trustee te_{K+1}.
We present the proposed approach within the framework of the beta trust model introduced in Section 2. To make the description as clear as possible, we rewrite the beta pdf here:

f(θ_te | α, β) = (Γ(α + β) / (Γ(α) Γ(β))) θ_te^{α−1} (1 − θ_te)^{β−1}.

Suppose that, just before seeing the reputation report provided by the advisor ad, the a priori pdf of θ_te is f(θ_te | α_pr, β_pr). Then, upon receiving the reputation report (♯s(h_ad), ♯f(h_ad)) from the advisor ad about the trustee te, the approach updates the a posteriori pdf to f(θ_te | α_post, β_post) with

α_post = α_pr + |R| ♯s(h_ad),   β_post = β_pr + |R| ♯f(h_ad)   if R ≥ 0,
α_post = α_pr + |R| ♯f(h_ad),   β_post = β_pr + |R| ♯s(h_ad)   if R < 0,

where R denotes the ATTR measure of ad with respect to tr. According to the Bayesian inference mechanism [17], the a posteriori pdf becomes the a priori pdf when new observations, that is, new reputation reports, arrive. In this approach, the sign of the ATTR measure determines whether the counts of successful and failing outcomes in the reputation report are reversed, and its magnitude determines the extent to which the reported counts are discounted, when updating the a posteriori pdf.
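The rectified update can be sketched as follows, assuming the rule described above (a negative ATTR value swaps the reported counts and the magnitude |R| discounts them); the function name is ours:

```python
# Sketch of the ATTR-weighted beta update: the sign of the ATTR value
# decides whether the reported success/failure counts are swapped, and
# its magnitude discounts them before the usual beta update.
def attr_update(alpha, beta, successes, failures, attr):
    weight = abs(attr)
    if attr < 0:  # deceptive advisor: treat reported successes as failures
        successes, failures = failures, successes
    return alpha + weight * successes, beta + weight * failures

# A fully deceptive advisor (ATTR = -1) reports 2 successes and 8 failures;
# the rectified update counts 8 successes and 2 failures at full weight.
a, b = attr_update(1.0, 1.0, successes=2, failures=8, attr=-1.0)
print(a, b)            # 9.0 3.0
print(a / (a + b))     # posterior-mean trust estimate: 0.75
```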

Experiments
In this section, we design simulated experiments to examine how effectively we could cope with inaccurate reputation reports using the proposed notion of ATTR measure. We compare two implementations of the ATTR measure, based on the cosine measure and the adjusted cosine measure described in Section 3, respectively. The beta trust modeling approach described in Section 2 is included as the benchmark for performance comparison.
The objective here is to demonstrate that the proposed ATTR measure works. A comparative study of our method against other related methods, for example, TRAVOS [7] and BLADE [18], is certainly interesting but has not yet been performed and is beyond the intention here.
We first consider the ability of our methods to deal with deceptive advisors. A deceptive advisor reports reversed counts of successful and failing interactions with a trustee; that is, if the true counts of successful and failing interactions are s and f, respectively, then it reports the counts of successful and failing interactions to be f and s, respectively. We simulated 20 deceptive advisors, each sharing a common training set of trustees with the truster. Each training trustee has interacted with both the truster and the advisor in the past, and the number of past interactions is randomly drawn from a uniform discrete distribution over the range [5, 20]. The number of trustees in each common training set is fixed at 10.
We consider the process of aggregating the 20 deceptive advisors' reports one by one. This process is initialized by a truster that knows nothing about the behavior of the potential trustee, which we model by a beta pdf with parameters α = 1 and β = 1. The true probability θ_te that an interaction with the trustee te under consideration will be successful is 0.9. The difference between the mean of the predicted θ_te and the true value 0.9 is termed the mean error here. As shown in Figure 1, using the adjusted cosine ATTR measure, the mean error converges to 0 quickly. In contrast, for the cosine measure and the traditional beta trust model, the mean error grows much larger as the advisors' reputation reports are aggregated. Figure 2 graphs the change in the mean error of each method as the percentage of deceptive advisors grows; this result is obtained by a Monte Carlo simulation consisting of 100 independent runs of each method at each specific percentage value.
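A simplified version of this deception experiment can be sketched as follows. The setup is ours and deliberately reduced (fixed seed, one target trustee, helper names illustrative): each deceptive advisor's ATTR is computed from beta estimates over the shared training set, and its reversed report on the target trustee is rectified before the beta update.

```python
import math
import random

# Simplified sketch of the deception experiment: 20 deceptive advisors share
# a training set of 10 trustees with the truster and report reversed counts
# everywhere; each report on the target trustee is rectified using the
# advisor's adjusted cosine ATTR before the beta update.
random.seed(0)

def beta_trust(s, f):
    return (s + 1) / (s + f + 2)

def adjusted_cosine_attr(p_ad, p_tr):
    g_ad = [p - 0.5 for p in p_ad]
    g_tr = [p - 0.5 for p in p_tr]
    dot = sum(a * t for a, t in zip(g_ad, g_tr))
    norm = math.sqrt(sum(a * a for a in g_ad)) * math.sqrt(sum(t * t for t in g_tr))
    return dot / (norm + math.exp(-99))

def interact(theta, n):
    """Simulate n Bernoulli interactions; return (successes, failures)."""
    s = sum(random.random() < theta for _ in range(n))
    return s, n - s

theta_true = 0.9  # true success probability of the target trustee
training_thetas = [random.random() for _ in range(10)]
# The truster's own estimates over the shared training set.
p_tr = [beta_trust(*interact(t, random.randint(5, 20))) for t in training_thetas]

alpha, beta = 1.0, 1.0  # uniform prior: the truster knows nothing yet
for _ in range(20):
    # Deceptive advisor: its reported counts are reversed on every trustee.
    p_ad = [beta_trust(*reversed(interact(t, random.randint(5, 20))))
            for t in training_thetas]
    attr = adjusted_cosine_attr(p_ad, p_tr)
    s, f = interact(theta_true, random.randint(5, 20))
    s, f = f, s              # the report on the target trustee is reversed too
    if attr < 0:             # rectify: swap the counts back ...
        s, f = f, s
    w = abs(attr)            # ... and discount by the ATTR magnitude
    alpha, beta = alpha + w * s, beta + w * f

print(abs(alpha / (alpha + beta) - theta_true))  # mean error: close to 0
```

In this reduced setting the advisors' ATTR values come out strongly negative, so the reversed reports are swapped back and the estimate converges toward the true value, mirroring the behavior reported in Figure 1.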
It is shown that, using the proposed adjusted cosine ATTR measure, we can cope with deceptive reputation reports very well. Next we evaluate whether the ATTR measure affects the efficiency of the trust evaluation process when the reputation reports are all accurate, namely, when they are all provided by honest advisors. We use the same setting as for the above deception experiment, except that the advisors now honestly report the true interaction data to the truster. The simulation result is shown in Figure 3, which reveals that all the involved methods perform equally well in this case. We can approximately deem that, if the reputation reports are accurate, the ATTR-based method automatically degenerates into the traditional beta trust model approach.
The same experimental setup is used to examine how totally random reputation reports affect the trust evaluation methods under consideration. A totally random reputation report is one in which the counts of successful and failing interactions are determined in a totally random manner, regardless of the truth. In the simulation, the count of successful interactions is drawn from a uniform discrete distribution over [0, the total number of interactions], where the total number of interactions is itself a random variable drawn from a uniform discrete distribution over the range [5, 20]. As shown in Figure 4, none of the involved methods performs well enough when totally random reputation reports are aggregated. Figure 5 graphs the change in the mean error of each method as the percentage of abnormal advisors producing totally randomly generated reputation reports grows. As before, this result is obtained by a Monte Carlo simulation consisting of 100 independent runs of each method at each specific percentage value. The mean error of each method grows as the percentage of abnormal advisors grows, while the adjusted cosine ATTR measure based method performs better than the others. Specifically, as this percentage approaches 50%, the advantage of the adjusted cosine ATTR measure based method over the others becomes remarkable.

Conclusions
In this paper, we proposed techniques to cope with the problem of trust evaluation involving inaccurate reputation reports for large scale open-ended systems. The basic idea is to utilize the relevance information between the truster agent and the advisor agents. We proposed a notion termed ATTR measure to quantify such relevance information and developed two implementations of the ATTR measure. We demonstrated that one of the proposed implementations, namely, the adjusted cosine ATTR measure, could provide promising solutions to complex trust evaluation problems involving inaccurate reputation reports.
Through simulation experiments, it is shown that the proposed techniques perform remarkably well in dealing with deceptive advisors and perform better than the other methods involved in dealing with inaccurate, totally randomly generated reputation reports.
The notion of ATTR measure is proposed here within the framework of beta trust model. It is also feasible to use this notion and develop corresponding techniques based on other probabilistic trust models, for example, the Dirichlet trust model [5,10], which is a generalization of the beta trust model, in order to cope with inaccurate reputation reports.