New Techniques for Limiting Misinformation Propagation

This paper focuses on limiting misinformation propagation in networks. Its first contribution is the notion of vaccinated observers: nodes enriched with additional power. Vaccination consists of locally adding a plugin or asking for the help of a trusted third party, called a trusted authority; the plugin or the authority is able to detect whether received information is misinformation. Vaccinated observers must stop forwarding detected misinformation. Based on this notion, two algorithms for limiting misinformation are proposed. The second contribution of the paper is an algorithm based on moving observers for locating a strong adversary diffusion source. This algorithm selects a random subset of nodes as observers for a random period $\Delta$, so the observer subset may change over time in a randomized manner. Consequently, a strong adversary diffusion source cannot acquire global knowledge of the observers' positions; with such knowledge, it could make its localization by the observers far more complicated, even impossible. The third contribution is an algorithm for stopping misinformation propagation based on a punishment strategy. This algorithm has a very simple design and assumes that an authority or a mechanism $A$ is available, able to detect whether received information is misinformation. If a node $n_i$ receives information $m$ from its neighbor $n_j$ and $m$ is detected, by $n_i$ via the authority $A$, as misinformation, then $n_j$ is punished for a period $pp$ ($pp$ stands for punishment period). If the node $n_j$ repeats this action $n$ times, the punishment period increases to $n \cdot pp$. The punishment in this algorithm is to stop forwarding the information received from a punished node $n_j$. Simulation results show that the proposed techniques are both efficient and accurate in locating the diffusion source; consequently, misinformation propagation is limited.


I. INTRODUCTION

A. CONTEXT AND MOTIVATION
In recent years, the spread of misinformation has become a major concern and a very challenging problem. Misinformation refers to false or inaccurate information that is spread intentionally or unintentionally. False information can be shared easily and quickly through social networks and other online platforms, with serious consequences for individuals and communities. Influencing public opinion and policy and causing harm to individuals who believe false information are examples of these consequences.
Limiting misinformation propagation is a critical issue that requires promoting the spread of accurate and reliable information while minimizing the spread of false or misleading information. To achieve this goal, several techniques and strategies have been proposed, including locating the diffusion source [1], fact-checking [2], education [3], collaboration [4], transparency [5], and empowering credible sources [6]. These strategies may work together to ensure that accurate information is verified, disseminated, and easily accessible, while false or misleading information is identified and corrected.
From a technical point of view, locating the diffusion source that causes misinformation is one of the most studied techniques [1], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. The diffusion source may be a source of rumors in social networks [20], [21], [22], a spreader of viruses and worms in computer networks [8], or an origin of infectious diseases [23], [24], [25]. Locating and isolating the diffusion sources of a perturbation is an important step for limiting risks and protecting networks. Most works on locating the diffusion source are based on observation of the network. The earliest studies assume that all nodes participate in the observation of the network, which is very complex and impracticable, especially in large-scale networks [7], [8], [26]. To overcome this issue, the observation is limited to only a subset of nodes, called observers [1], [7], [8], [9], [10], [27], [28], [29]. The system models considered by most works on locating the diffusion source have four main drawbacks:
• Global infection models: most works assume that all nodes (including observers) become infected [1], [28]. In other words, every infected node transmits the received misinformation to its neighbors until all nodes are infected. In reality, it has been shown that, even for the most dangerous infectious diseases, the spread is limited to a portion of the network (low infection rate); a real example is provided in [30] (see footnote 1).
• No reaction from observers to an infection: observers are not enriched with any additional power. They cannot stop the re-transmission of an infection and may themselves be infected.
• No assumptions about the power of the diffusion source: as only a portion of the network may be infected, and as observers are selected in a static manner, a strong adversary diffusion source may never be located. This is because a strong adversary can acquire global knowledge of the observer positions; with this knowledge, the diffusion source avoids infecting observers so that it is never located.
• No clear punishment strategies for nodes or users who forward misinformation: some punishment strategies are used by social networks such as Facebook and Twitter; generally, they consist of the central authority blocking users for a period of time.
Generally, most works that deal with source localization consider two main aspects: i) developing new mathematical models to achieve more accurate source localization, or ii) finding the best strategies for observer placement inside networks to reduce computation and enhance source identification.

Footnote 1: https://snap.stanford.edu/data/higgs-twitter.html. An announcement on Twitter (2012) shows that the misinformation propagated through only 56% of the network (256491 of 456626 nodes).
B. RELATED WORK
For locating the diffusion source, the earliest works [7], [8], and [26] assume that all nodes participate in the observation of the network. In [1] and [9], the number of observers is reduced to a subset of nodes. Specifically, in [1], observers collect information about the infection time and the rumor spreader; based on the collected information, a global estimator is calculated for locating the diffusion source. To reduce computation time and obtain a more efficient estimator, Hongjue [9] has proposed a low-complexity algorithm using the Spearman centrality concept to locate the initial source. In [31], the proposed model aims to localize the source under a more general propagation model (unlike previous works, which needed to know which propagation model was used). The related works cited above target a single diffusion source. In [32], [33], and [48], algorithms have been proposed for locating multiple sources and estimating their number. In [31], techniques for locating single or multiple diffusion sources have been surveyed.

C. CONTRIBUTION
For limiting misinformation propagation, this paper provides three major contributions:
• Limiting Misinformation by Vaccinated Observers: For this, we introduce the notion of the vaccinated observer, a node enriched with additional power. This power may be a plugin [34] or an authority able to check whether a received piece of information m is correct. In other words, vaccination consists of locally adding a plugin or asking for the help of a trusted third party, called an authority; the plugin or the authority can detect whether the received information is misinformation.
Vaccinated observers must stop forwarding detected misinformation. Based on vaccinated observers, an algorithm for limiting misinformation is proposed. However, simulation results show that vaccination limits the misinformation propagation to only a small part of the network, but decreases the accuracy of identifying the source that initiated the misinformation. To overcome this drawback, an isolation-based algorithm is proposed.
• Limiting Misinformation by Moving Observers: For locating a strong adversary diffusion source, we propose an algorithm based on moving observers. This algorithm selects a random subset of nodes as observers for a random period $\Delta$, so the observer subset may change over time in a randomized manner (a minimal selection sketch follows this list). Consequently, it becomes far more complicated, even impossible, for a strong adversary diffusion source to acquire global knowledge of observer positions in order to avoid being located.
• Limiting Misinformation by Punishment: For this, we propose an algorithm for stopping misinformation propagation based on a punishment strategy. This algorithm has a very simple design and assumes that an authority or a mechanism $A$, able to detect whether received information is misinformation, is available. If a node $n_i$ receives information $m$ from its neighbor $n_j$ and $m$ is detected, by $n_i$ via the authority $A$, as misinformation, then $n_j$ is punished for a period punish_period. If the node $n_j$ repeats this action count times, the punishment period increases to count * punish_period. The punishment in this algorithm is to not forward the information received from a punished node $n_j$. This means that nodes that spread misinformation over the network will be punished for a very long time period. Consequently, misinformation propagation will be limited over the network.
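Referring to the moving-observers contribution above, the following minimal Python sketch re-draws a random observer subset for each random active period. It only illustrates the selection mechanism, not the full algorithm of figure 4; the function names, the observer rate, and the range of the period $\Delta$ are our own illustrative assumptions.

```python
import random

def select_observers(nodes, rate):
    """Draw a fresh random observer subset O_new."""
    k = max(1, int(rate * len(nodes)))
    return set(random.sample(list(nodes), k))

def moving_observation(nodes, rate=0.1, horizon=100):
    """Re-select the observer subset every random period Delta, so an
    adversary cannot learn a stable set of observer positions."""
    t = 0
    while t < horizon:
        delta = random.randint(1, 10)      # random active period (assumed range)
        observers = select_observers(nodes, rate)
        # ... during [t, t + delta), only `observers` monitor and report ...
        t += delta
```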

D. ROADMAP
The paper is composed of six sections. Section II presents the system model and the related assumptions. Section III presents three techniques for limiting misinformation propagation: the first proposes two algorithms, based on vaccinated observers, for locating the diffusion source; the second proposes an algorithm, based on moving observers, for locating the diffusion source; the third proposes an algorithm based on a punishment strategy. Section IV presents simulation results. Section V discusses the efficiency and accuracy of the proposed techniques. Section VI concludes the paper.

II. SYSTEM MODEL

A. NETWORK MODEL AND DEFINITIONS
The underlying network is modeled by an undirected graph G = (V, E), where V is a finite set of nodes and E is the set of edges connecting these nodes. N_i is the set of neighbors of node n_i. The source that initiates the diffusion (s ∈ V) is unknown. Note that in some other contexts, the real structure of the network may be unknown (e.g., epidemic propagation) [23], [24]. The concepts and terminologies used in this paper are defined as follows:

Definition 1 (An Observer Node):
An observer is a node that is able to check whether received information is misinformation. It also records the time at which the misinformation was received and the identity of its sender.
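As a small illustration of this model, the following Python sketch builds an undirected graph with the NetworkX package (the same package used in the experiments of Section IV) and derives the neighbor sets N_i; the edge list is an arbitrary example, not data from the paper.

```python
import networkx as nx

# Undirected graph G = (V, E); the diffusion source s in V is unknown.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)])  # illustrative edges

# N[i]: the set N_i of neighbors of node n_i.
N = {n: set(G.neighbors(n)) for n in G.nodes}
```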

Definition 2 (A Weak Adversary Diffusion Source):
A weak adversary diffusion source is a node that spreads misinformation without having any knowledge about the observers. This means that it has no information about the identities and/or positions of the observers, and thus cannot take any evasive measures within the network.

Definition 3 (A Strong Adversary Diffusion Source):
A strong adversary diffusion source is a node that has full knowledge of the observers, such as their identities and positions. Therefore, it has the ability to avoid being located by the observers and can take measures to remain unidentified.

Definition 4 (A Vaccinated Node):
A vaccinated node is able to check whether received information is misinformation, and it stops forwarding detected misinformation to its neighbors.

B. PROPAGATION MODEL
The propagation model used in this paper is the Susceptible-Infected (SI) model, a variant of the epidemiology-inspired Susceptible-Infected-Recovered (SIR) model [70], [71], [72]. The SIR model has three classes of nodes: i) susceptible nodes, capable of being infected; ii) infected nodes, which can further spread the virus; and iii) recovered nodes, which are cured and can no longer become infected. In contrast, the considered SI model has only two classes of nodes: i) susceptible (not infected) and ii) infected. Initially, every node is susceptible and can be infected by one of its infected neighbors during the propagation process. Once a node has been infected, it remains infected.
Using the notion of time, the propagation process is modeled as follows. The source node s initiates the infection at time t (where both s and t are unknown). At time t, only s is infected, while all the remaining nodes in the network are not yet infected. Then, each neighbor n ∈ N_s is infected by s at time t_n = t + θ_sn, where θ_sn is a random value associated with the propagation delay on edge e_sn. Once a neighbor n of s becomes infected, it reproduces the behavior of s: it re-transmits the infection to each of its non-infected neighbors n′ after a duration θ_nn′. This process continues until no further infection transmission is possible.
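This propagation process can be simulated with the short Python sketch below, which follows the description literally: the source is infected at time t, each edge carries a random delay θ, and a node's infection time is the earliest arrival over all transmission paths. The exponential distribution for θ is our own assumption, since the paper only requires θ to be random.

```python
import heapq
import random

def simulate_si(G, source, t0=0.0):
    """SI propagation on a NetworkX graph G: returns {node: infection time}."""
    infected_at = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, n = heapq.heappop(heap)
        if t > infected_at[n]:
            continue                         # stale queue entry
        for m in G.neighbors(n):
            theta = random.expovariate(1.0)  # random propagation delay theta_nm
            if t + theta < infected_at.get(m, float("inf")):
                infected_at[m] = t + theta
                heapq.heappush(heap, (t + theta, m))
    return infected_at

# Usage: times = simulate_si(G, source=0)
```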

C. OBSERVERS SUBSET
Only a subset of nodes, called observers, participates in the observation of the network. Each observer has to check whether the received information is misinformation or not, and reports its observations to the trusted authority.

D. A TRUSTED AUTHORITY
A trusted authority A is a node that executes the following tasks: i) collecting observations from observers, ii) checking the existence of misinformation in the received observations, and iii) locating the diffusion source based on the received observations. For example, in real applications such as social networks, the central trusted authority represents the management authority that has control over the whole network. For detecting misinformation at the trusted authority, solutions based on deep learning or data mining may be used [38], [49], [50], [51], [52], [56], [59], [69], [73]. Figure 1 presents an example of the observers-trusted-authority architecture. Note that all observers are directly connected with the trusted authority.
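The following minimal Python sketch captures the three tasks of the trusted authority described above. The class and method names are illustrative, and the misinformation detector is left as a pluggable function, since the paper delegates detection to deep-learning or data-mining solutions [38], [49], [50], [51], [52].

```python
class TrustedAuthority:
    """Sketch of the trusted authority A; all names are illustrative."""

    def __init__(self, detector):
        self.detector = detector   # pluggable misinformation detector
        self.observations = []     # (message, observer_id, sender_id, received_time)

    # i) collect observations from observers
    def collect(self, observation):
        self.observations.append(observation)

    # ii) check whether a piece of information is misinformation
    def check(self, message):
        return self.detector(message)   # True if `message` is misinformation

    # iii) locate the diffusion source from the received observations,
    # delegating to an estimator such as the sketch in Section II-E
    def localize_src(self, estimator):
        return estimator(self.observations)
```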

E. LOCATING THE DIFFUSION SOURCE
For locating the diffusion source, our experiments use the estimator formula of [9].
This formula works as follows: the trusted authority A collects the set of k observations, carrying the infection times $t_i$ and the propagation delays between any two neighbors $n_i$ and $n_j$. It then calculates, for each candidate node, an estimator value $p_i$ based on the geodesic distance $d_i$; the node with the maximum value of $p_i$ is declared the diffusion source (the exact formula is given in [9]).
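Since the exact formula of [9] is not reproduced here, the Python sketch below shows only the general shape of such an estimator: it scores every candidate node by the Spearman correlation between the observers' infection times $t_i$ and their geodesic distances $d_i$ from the candidate, and returns the candidate with the maximum score $p_i$. This is an illustrative stand-in under our own assumptions, not the estimator of [9] itself.

```python
import math
import networkx as nx
from scipy.stats import spearmanr

def localize_source(G, observations):
    """observations: list of (observer, infection_time). Assumes G connected."""
    observers = [o for o, _ in observations]
    times = [t for _, t in observations]
    best, best_score = None, -math.inf
    for c in G.nodes:
        # geodesic distances d_i from the candidate c to every observer
        d = [nx.shortest_path_length(G, c, o) for o in observers]
        p, _ = spearmanr(d, times)        # score p_i for candidate c
        if not math.isnan(p) and p > best_score:
            best, best_score = c, p
    return best
```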

F. NOTATIONS
For convenience of presentation, Table 1 compiles the notation and variables used by the proposed algorithms presented in the following sections.

III. THE PROPOSED TECHNIQUES FOR LIMITING MISINFORMATION PROPAGATION

A. LIMITING MISINFORMATION PROPAGATION BY VACCINATED OBSERVERS
The vaccination concept has been proposed in [34] to stop the spread of fake news on social networks. In this section, we introduce the vaccinated-observers concept and use it to propose two algorithms for locating the diffusion source. The first algorithm is basic, but its accuracy in locating the diffusion source is limited when the number of vaccinated observers is large, because vaccinated observers do not re-transmit the misinformation. To increase the accuracy, a second algorithm is proposed. This algorithm uses the isolation notion: the predicted diffusion source is isolated, even if it is not the real source, and the algorithm repeats the same operation until the real diffusion source is located. To our knowledge, this is the first solution in the literature for that problem.

1) A BASIC ALGORITHM
Figure 2 presents a basic algorithm for locating the diffusion source based on vaccinated observers. Vaccinated observers may be selected, by the trusted authority A, using different strategies such as randomization, high degree, or centrality. A vaccinated observer executes two parallel tasks. In the first task T1, if an observer o receives an information(m, n_j) message from a neighbor n_j, it sends an observation(information(m, n_j), o_i, n_j, received_time) message to the trusted authority A (lines 1 and 2). In the second task T2, when the observer o receives a verification(information(m, *), response) message from the authority A, if the message information(m, *) is not misinformation (response = False), then o sends this message to all its neighbors (lines 3 and 4). Consequently, as observers are vaccinated, if the message information(m, *) is misinformation (response = True), it will never be sent to any node. The trusted authority A executes task T3 of the algorithm. When A has received observation(information(m, *), *, *, *) messages from all observers and stored them in a local variable W (line 5), it invokes the function check(information(m, *)) to verify whether information(m, *) is misinformation; the result of this verification is stored in the local variable response (line 6). After this, the authority A sends a verification(information(m, *), response) message to all observers (line 7). Based on the messages stored in W, A invokes another function for locating the diffusion source (line 8).
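A minimal Python sketch of the observer side of this basic algorithm (tasks T1 and T2) is given below. It reuses the TrustedAuthority sketch of Section II-D; the method names and the message-passing style are our own illustrative assumptions, not the notation of figure 2.

```python
from time import time as now

class VaccinatedObserver:
    """Sketch of tasks T1 and T2 of the basic algorithm (figure 2)."""

    def __init__(self, oid, authority, neighbors):
        self.oid = oid
        self.authority = authority     # the trusted authority A
        self.neighbors = neighbors     # neighboring node objects

    # T1: on receiving information(m, n_j), report an observation to A.
    def on_information(self, m, sender, received_time):
        self.authority.collect((m, self.oid, sender, received_time))

    # T2: on A's verification(information(m, *), response), forward m only
    # if response is False; detected misinformation stops here (vaccination).
    def on_verification(self, m, response):
        if not response:
            for n in self.neighbors:
                n.on_information(m, self.oid, now())
```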

2) AN ISOLATION-BASED ALGORITHM
Figure 3 presents an algorithm for locating the diffusion source based on vaccinated observers and isolation. A vaccinated observer executes two parallel tasks T1 and T2; these are the same tasks as presented in Section III-A1. The trusted authority A executes task T3 of the algorithm. At line 5, A manages three local variables: isolated, i, and s_real. isolated is a set, initialized to ∅, that stores the identities of isolated sources. i, initialized to 0, is a counter of the predicted sources. s_real, initialized to ⊥, stores the identity of the real source. Lines 6, 7, and 8 are similar to lines 5, 6, and 7 of the algorithm of figure 2, respectively. At line 9, if the information(m, *) message is misinformation (response = True), then A invokes the function localizeSrc(W) to locate the predicted source (line 10). After this, the counter i is incremented. If i = 1, the predicted source found at line 10 is stored in the local variable s_real (line 11); this is the first source found by the algorithm. If i > 1 and s_real ≠ s_predicted, the previous s_real is removed from the set of isolated sources isolated, the new predicted source is added to isolated, and s_real is updated to the new predicted source (line 12).

B. LIMITING MISINFORMATION PROPAGATION BY MOVING OBSERVERS
Figure 4 presents an algorithm for locating the diffusion source based on moving observers; most of its lines are similar to those of the algorithm of figure 2. Note that the subset of observers O_new and the value of $\Delta$ ($\Delta$ is a period during which the selected observers are active) are randomly generated. This makes the task of a strong adversary diffusion source very difficult, even impossible: if O_new and $\Delta$ are generated in a randomized manner, it is impossible for a strong source to predict the subset O_new in advance, even if it has global knowledge about the network.

C. LIMITING MISINFORMATION PROPAGATION BY PUNISHMENT
Figure 5 presents an algorithm for limiting misinformation propagation based on a punishment strategy. A node n_i executes two parallel tasks. In the first task T1, at line 1, n_i manages two local variables, count_j and punished_i. count_j, initialized to 0, is a counter of the punishments of a neighbor n_j. punished_i is a set, initialized to ∅, that stores the identities of punished neighbors. If a node n_i receives an information(m, n_j) message from a neighbor n_j, it sends the same information(m, n_j) message to the trusted authority A (lines 2 and 3). At line 4, if the neighbor n_j is in the punished_i set and date ≥ punish_date_j (the punishment date of n_j has expired), then n_j is removed from the set punished_i. In the second task T2, when the node n_i receives a verification(information(m, n_j), response) message from the authority A, if the neighbor n_j is not in the punished_i set and the received information(m, *) message is misinformation (response = True), then n_j is added to the set of punished neighbors punished_i and the new punishment date is set to (count_j × punish_period) (lines 5 and 6). If the neighbor n_j is not in the punished_i set and the received information(m, *) message is not misinformation (response = False), then the received information(m, *) message is sent to all neighbors (line 7). The trusted authority A executes task T3 of the algorithm. When A receives an information(m, *) message from a node n, it invokes the function check(information(m, *)) to verify whether information(m, *) is misinformation; the result of this verification is stored in the local variable response (line 9). After this, the authority A sends a verification(information(m, *), response) message to the node n (line 10). Figure 6 describes, in a very simple way, an architecture for the proposed algorithm.
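The node side of this punishment algorithm can be sketched in a few lines of Python. The sketch mirrors tasks T1 and T2 as described above; the base value of punish_period, the method names, and the synchronous message passing are illustrative assumptions, not the notation of figure 5.

```python
class PunishingNode:
    """Sketch of tasks T1 and T2 of the punishment algorithm at node n_i."""

    PUNISH_PERIOD = 10                        # base punishment period pp (assumed)

    def __init__(self, nid, authority, neighbors):
        self.nid = nid
        self.authority = authority            # the trusted authority A
        self.neighbors = neighbors            # neighboring node objects
        self.count = {}                       # n_j -> number of detected offences
        self.punish_date = {}                 # n_j -> punishment expiry date

    # T1: submit the received message to A and lift expired punishments.
    def on_information(self, m, sender, date):
        # A invokes check(m) (task T3) and replies with its verdict.
        self.on_verification(m, sender, self.authority.check(m), date)
        if sender in self.punish_date and date >= self.punish_date[sender]:
            del self.punish_date[sender]

    # T2: on A's verdict, punish the sender or forward the message.
    def on_verification(self, m, sender, is_misinfo, date):
        if is_misinfo and sender not in self.punish_date:
            self.count[sender] = self.count.get(sender, 0) + 1
            # repeated offences lengthen the punishment: count_j * punish_period
            self.punish_date[sender] = date + self.count[sender] * self.PUNISH_PERIOD
        elif not is_misinfo and sender not in self.punish_date:
            for n in self.neighbors:
                n.on_information(m, self.nid, date)
```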

IV. SIMULATION RESULTS
This section describes the experiments we have conducted to assess both the efficiency and the accuracy of the proposed algorithms. The algorithms of figures 2, 3, 4, and 5 have been implemented using the Python NetworkX package on synthetic networks and a real dataset. We have conducted experiments on various synthetic networks composed of 600 nodes (i.e., scale-free (Barabási-Albert), small-world (Watts-Strogatz), and random (Erdos-Renyi) [74]). The real dataset, composed of 4039 nodes and 88234 edges, is extracted from the Facebook social network [30]. To simulate the misinformation propagation, we consider the SI model described in subsection II-B and the estimator formula for locating the diffusion source presented in subsection II-E.
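For concreteness, the synthetic networks can be generated with NetworkX as sketched below. The paper fixes only the size (600 nodes) and the three network families; the remaining generator parameters (attachment count, ring degree, rewiring and edge probabilities) are unspecified, so the values below are illustrative assumptions.

```python
import networkx as nx

n = 600                                        # network size used in the experiments
ba = nx.barabasi_albert_graph(n, m=3)          # scale-free (Barabási-Albert)
ws = nx.watts_strogatz_graph(n, k=6, p=0.1)    # small-world (Watts-Strogatz)
er = nx.erdos_renyi_graph(n, p=0.01)           # random (Erdos-Renyi)
```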
Evaluations are performed according to: i) the observer rate in the network (which varies from 1% to 50%), ii) vaccinated versus unvaccinated observers, and iii) the observer positions. To select observer positions, we use the following strategies: randomization, high degree, betweenness centrality, and closeness centrality. For performance assessment, we consider three metrics: the rate of global infection propagation in the network, the source-locating accuracy, and the distance error. The distance error evaluates the distance, in hops, of a faulty predicted source from the real source.

1) GLOBAL INFECTION PROPAGATION RATE
Figure 7 illustrates the global infection rate in the synthetic networks based on vaccinated observers. For limiting misinformation propagation, vaccination has little impact in the case of small-world (Watts-Strogatz) and random (Erdos-Renyi) networks, but it is very efficient in the case of scale-free (Barabási-Albert) networks. The manner in which observers are selected, and their rate, determine the efficiency of an algorithm for limiting misinformation propagation. Vaccinating high-degree nodes and central (betweenness) nodes provides more efficient results than vaccinating random nodes and central (closeness) nodes. When the rate of vaccinated nodes grows, misinformation propagation becomes very limited. For example, a vaccination rate of 5% allows a propagation of 90%, but with only 30%, the misinformation propagation becomes close to 0%. Note that, for unvaccinated observers, the global propagation rate is 100%. Figure 8 illustrates that vaccination significantly limits the global propagation in the case of the Facebook dataset. Selecting observers with betweenness centrality is the most efficient strategy to limit global propagation: for a vaccination rate that varies between 5% and 50%, the propagation is limited to 30% and 18%, respectively.

2) LOCATING SOURCE ACCURACY
Figure 9 shows that, in the case of the scale-free (Barabási-Albert) network, unvaccinated observers provide more accuracy for locating the diffusion source than vaccinated observers. This is due to the propagation-limitation effect of vaccination: some observers never receive the misinformation, because an important part of the network is not infected. Note that, in the case of unvaccinated observers, locating the diffusion source is very easy, because the whole network is infected. Figures 10 and 11 show that the simulation results with vaccinated and unvaccinated observers are almost similar in the case of random (Erdos-Renyi) and small-world (Watts-Strogatz) networks. Both cases show that it is very easy to locate the diffusion source when the observers are unvaccinated, because the whole network is infected. When observers are vaccinated, the problem of locating the diffusion source becomes more complex, due to the propagation-limitation effect of vaccination: when the vaccination rate increases, locating the diffusion source is less accurate. In the case of vaccinated observers, figure 12 shows that if the strategy used to select observers limits the misinformation propagation, then it is less accurate for locating the diffusion source. For example, betweenness centrality is more efficient for limiting misinformation propagation, but less accurate for locating the diffusion source.

3) DISTANCE ERROR
In the case of vaccinated observers, Table 2 shows that, regardless of the strategy used to select observers, the real source is not located, but the predicted source is close to the real one. The average distance error, i.e., the number of hops between the real source and the predicted source, varies between 1.6 and 3.2 for the random and closeness-centrality strategies, respectively. Note that, in the case of unvaccinated observers, the predicted source is the real one.
The case of the Facebook dataset is very similar, as shown in figure 13. If observers are vaccinated, the average distance error varies between 1 and 5 for the random and closeness-centrality strategies, respectively. When observers are unvaccinated, the average distance error varies between 1 and 2 for the random and high-degree strategies, respectively.
In Table 2, we have considered 20% vaccinated observers and launched 20 diffusion processes on the generated scale-free (Barabási-Albert) network composed of 600 nodes. The results depicted in this table show that the average distance error is less than 2 hops and that, in the worst case (when the source-detection algorithm fails), the estimated source is 3 hops away from the real source.

Table 3 presents the average number of rounds needed to locate the real source with the algorithm of figure 3. According to these results, the algorithm can find the exact source in a finite number of rounds. As previously shown in subsection IV-A1, Table 3 illustrates that vaccination and the positions of observers impact the limitation of misinformation propagation. In particular, for the scale-free network, central (betweenness) observers are the most efficient, limiting the propagation to 16%, while random vaccinated observers allow a global propagation of 81% of the network. Finally, note that when the propagation is low, more time (i.e., more rounds) is needed to localize the real source, and vice versa.

Figure 14 shows the accuracy of locating the misinformation source with moving observers versus static observers. For this, we have used a partial propagation (only 90% of the nodes are infected), 10% random observers, and 20 successive propagation tests (for each test, we try to locate the same source 20 times). Simulation results show that the algorithm based on moving observers is more efficient for locating the diffusion source than the one based on static observers. Over time, for every test, moving observers can easily find the source. In the case of static observers, the propagation source sometimes cannot be located; this is due to the positions of the selected static observers. If a static observer is close to the infection, the diffusion source will be located; otherwise, locating it is very complicated. Note that if the whole network is infected, both cases (moving and static) are equivalent. The randomized manner in which observers are selected makes the misinformation-propagation task of a strong adversary diffusion source more complicated, because it cannot predict the observer subset in advance in order to route its misinformation around them.

Figure 15 presents the global infection rate in the case of the Facebook network dataset under the punishment strategy used by our proposed algorithm. Without punishment, the misinformation propagation infects the whole network when the node infection rate is greater than 0.5. The node infection rate represents the rate of neighbors that may be infected by a node; for example, if the node infection rate is equal to 1, then all of its neighbors will be infected. If only a small fraction of nodes executes the proposed algorithm of figure 5, the misinformation propagation is considerably reduced. With a community of 2% random nodes punishing their neighbors, the global infection propagates to only 35% of the network. The propagation is very limited, even close to 0%, if only 10% of the nodes execute the proposed algorithm.

V. DISCUSSION
To summarize, the conducted experiments have shown that the impact of vaccination on locating the diffusion source and on misinformation propagation depends on two main parameters: the type of network and the observer-selection strategy. The results have also shown that propagation limitation through vaccination negatively impacts the accuracy of locating the diffusion source; that is, the accuracy decreases when the infection hits only a small portion of the network. This result reflects the logic of the source-locating estimators proposed in the literature, which are based on the number of observations detected by the community of observers: when the infection is partial, the low number of observations decreases the localization accuracy. Nevertheless, when the source cannot be correctly located due to partial propagation, the results have shown that the predicted sources are not far from the real ones.
To combat a strong adversary diffusion source, the manner in which observers are selected in all previous works needs to be changed. As the whole network is rarely infected, using static observers makes the problem of locating the diffusion source difficult, because observers may be selected from a part of the network that is never infected. Moreover, a strong adversary diffusion source can learn the positions of all static observers, which allows it to avoid them when propagating its misinformation. With moving, randomized observers, it is very difficult, even impossible, for a strong adversary diffusion source to obtain all observer positions.
For limiting misinformation propagation, the punishment strategy used by our proposed algorithm is very efficient. In contrast to the punishment strategy currently used by some social networks, in which the trusted authority punishes nodes or users that propagate misinformation, our algorithm decentralizes the punishment to the nodes. If only a small fraction of nodes executes our proposed algorithm, the misinformation propagation is considerably reduced.

VI. CONCLUSION AND FUTURE WORK
Considering the problem of limiting misinformation propagation, this paper has proposed new techniques to address it. The first technique introduces the notion of vaccinated observers. Based on this notion, a basic algorithm for locating the diffusion source has been proposed. Compared with the case of unvaccinated observers, this algorithm is very efficient for limiting misinformation, but its accuracy may decrease. To combine efficiency with higher accuracy, another algorithm, based on isolation, has been proposed: the predicted diffusion source is isolated, even if it is not the real source, and the algorithm repeats the same operation until the real diffusion source is located. The experiments conducted for this technique show that when the propagation is low, more time is needed to localize the real source, and vice versa. In other words, if the whole network is infected, as in the case of unvaccinated observers, then locating the diffusion source is very easy, but if only a part of the network is infected, the problem becomes more difficult.
The second technique introduces the notion of moving observers, based on which an algorithm has been proposed. In this algorithm, observers are selected in a randomized manner for a random period $\Delta$; in other words, the observer subset may change over time. This algorithm is efficient for locating the diffusion source in the case of a strong adversary, and it behaves like the algorithm of figure 2 in the case of a weak adversary. With it, it becomes very complicated, even impossible, for a strong adversary diffusion source to maintain real-time global knowledge of the observers and their positions.
The third technique proposes a punishment strategy, based on which an algorithm for stopping misinformation propagation has been proposed. This algorithm has a very simple design: each time a node repeats the action of forwarding misinformation, its punishment period increases. The punishment is to stop forwarding, for a time period, the information received from a punished node. Consequently, nodes that spread misinformation over the network will be punished for a very long time. Simulation results show that if only one-tenth of the nodes execute the proposed algorithm, misinformation propagation is very limited, even close to 0%. Finally, future work includes considering a more realistic propagation model, as well as the presence of strong adversary multi-sources.