ON THE CONTROL OF INFORMATION AND ITS OBJECTIVES: A MULTI-INFORMATION MODEL

In this article, we propose a new multi-information discrete-time model describing the dissemination of several pieces of information from one person to another, whether by word of mouth or in certain types of online environments such as Facebook, WhatsApp, and Twitter. First, we present the model and the different possible interactions between its compartments. Since there is always an objective behind the dissemination of information, in the modeling process we distinguish between pieces of information that share the same purpose and pieces of information that pursue opposite objectives, in order to study their mutual effect. To do this, we divide the entire target population into three groups for each piece of information and consider the possible transitions between these groups. We then suggest an optimal control strategy that helps to eliminate a targeted piece of information, and we use a discrete version of Pontryagin's Maximum Principle to characterize the optimal controls. Numerical simulations are carried out to illustrate the different effects and to show the efficiency of the proposed approach.


INTRODUCTION
Over the past forty years, traditional methods of studying social processes such as information diffusion, expert identification, or community detection have focused on relatively small groups [1]. However, the widespread proliferation of social websites such as Facebook, Twitter, Digg, Flickr, and YouTube has provided researchers with ample avenues to study such processes at very large scales [2]. For example, the social network Facebook currently features more than 350 million users, while Twitter receives approximately 17 thousand posts (tweets) per minute. Information overload has become a ubiquitous problem in modern society [3]. As the penetration of smartphones in societies increases, there is large growth in the use of different communication channels, a trend followed by the fast growth of online social networking services [4]. As a result, people become more and more accustomed to posting and sharing information with each other on the most popular social media technologies.
Social media platforms are increasingly being used as a tool for gathering information about, for example, societal issues, and for finding out about the latest developments during breaking news stories. This is possible because these platforms enable anyone with an internet-connected device to share their thoughts in real time or to post an update about an unfolding event they may be witnessing. Hence, social media has become a powerful tool for journalists but also for ordinary citizens [5]. However, social network users and microbloggers receive an endless flow of information, often at a rate far higher than their cognitive ability to process it [3]. The advent of social media and online social networking has led to a dramatic increase in the amount of information a user is exposed to, greatly increasing the chances of the user experiencing information overload. Microbloggers in particular complain of information overload to the greatest extent [6]. Surveys show that two thirds of Twitter users have felt that they receive too many posts, and over half have felt the need for a tool to filter out irrelevant posts [7].
Every day, millions of messages are created, commented on, and shared by people on networking websites; this provides valuable data for researchers and practitioners in many application domains, such as marketing, to inform decision-making. Distilling valuable social signals from the crowd's messages, however, is challenging due to heterogeneous and dynamic crowd behaviors [8]. The challenge is rooted in the analytical capability needed to discern anomalous information behaviors, such as the spreading of rumors or misinformation, which is suspect because of its uncertain origins. Social networking is a fruitful environment for the massive diffusion of unverified rumors and also allows for the rapid dissemination of conspiracy theories. Rumors have been recognized as one of the most important contributing factors to violence, prejudice, and discrimination [9].
The explosive use of social media in information dissemination and communication has also made it a popular platform for the spread of rumors, which can be easily propagated and received by a large number of users, resulting in catastrophic effects in the physical world in a very short period. It is a challenging, if not impossible, task to apply classical supervised learning methods to the early detection of rumors, since the labeling process is time-consuming and labor-intensive [10].
A rumor is a social phenomenon in which a remark spreads on a large scale in a short time through chains of communication; it runs through the whole evolutionary history of mankind [11]. Usually, it is dispersed by some people in order to achieve a specific purpose: slandering others, manufacturing momentum, diverting attention, causing panic, and so on [12]. Most rumors induce panic or economic loss during the accompanying unexpected events.
Emergencies cause serious negative impacts on people's lives in several ways: not only might the event itself lead to financial loss or personal injuries, but the accompanying rumors might also lead to panic and irrational behavior [13]. For example, the nuclear leakage in Japan caused a salt-buying frenzy in China. As the rumors spread, this frenzy swept the country and caused social panic in just a few days, together with an abnormal rise in the salt price, which had a negative effect on society and the economy [14].
The spreading of rumors, also known as rumor mongering, has long been examined by psychologists and sociologists, who observed that false information spreads via three stages: parturition (the rumor's genesis), diffusion (the rumor's re-transmission), and control (the rumor's decline) [15]. After a rumor goes through parturition and diffusion, it will eventually either die a natural death as interest wanes, or perish as a result of deliberate efforts to stop its propagation [16].
Information is similar to a virus in the way it spreads between individuals [17]. There are three similarities between epidemics and information. The first is the idea of infectivity, which is present in both processes even though the definitions differ. Viruses such as influenza [18,19], Ebola [20], and COVID-19 [21,22,23] are extremely contagious and easy to transmit; information is just as contagious, because all that is needed to infect an individual is to transmit the information to him or her. Once a piece of information is released, it seems that almost everybody will eventually know it, and the person who released it has caused "infections" of the information "virus". The second similarity is the idea that small changes have big effects on the population. In the case of the common cold, only a few coughs and sneezes can infect many people. The same holds for information: only a few people need to know it for it to disseminate rapidly. The final similarity is that major events happen in a short amount of time; the potential for an outbreak is present for both epidemics and information [24].
Two types of rumors circulate on social media. The first is long-standing false information that circulates for long periods of time without its veracity being established with certainty; these rumors provoke significant, ongoing interest despite the difficulty of establishing the actual truth. The other type is newly emerging rumors spawned during fast-paced events such as breaking news; these are generally rumors that have not been observed before, where reports are released piecemeal, often with an unverified status in their early stages [25].
Rumor is a potentially harmful social phenomenon that has been observed in all human societies at all times. Social networking sites provide a platform for the rapid interchange of information and hence for the rapid dissemination of unsubstantiated claims that are potentially harmful [26]. Therefore, it is necessary to develop mathematical models to analyze and predict the spreading of rumors as a function of time [27].
Information (rumor) models are, in principle, similar to biological epidemic models such as the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models used for modeling the spread of pathogens in a population [28]. The population is divided into three compartments (or classes): ignorants (those who do not have the information), spreaders or sharers (those who are spreading the information), and stiflers (those who have stopped spreading it). Spreaders are generated at some rate through ignorant-spreader contact dynamics, as in the biological epidemic models. However, the recovery process in rumor models differs from that in biological epidemic models [29].
In 1965, Daley and Kendall proposed a mathematical model to simulate the process of spreading a rumor, the so-called DK model. This model classifies the population into three different groups: the ignorant population, which has not yet heard the rumor; the spreading population, which spreads the rumor; and the stifler population, which hears the rumor and decides not to spread it [27]. The model supposes that a certain population consists of N individuals. One member initially learns a rumor from an outside source and starts telling it to other members, who continue spreading it. A knower becomes inactive once he encounters somebody already informed [30]. Afterwards, Maki and Thompson developed another classical model, the MK model, which focuses on the analysis of rumor spreading, based on mathematical theory, via direct contact between spreaders and others [31]. The basic version of the model assumes that a population represented by a graph is subdivided into three classes of individuals: ignorants, spreaders, and stiflers. A spreader tells the rumor to any of its (nearest) ignorant neighbors at rate one. At the same rate, a spreader becomes a stifler after contact with another (nearest-neighbor) spreader or a stifler [32].
Perceived source credibility of information has become an increasingly important variable to examine within social media, especially in terms of crisis and risk information. With the increasing amount of information available through newer channels, the gatekeeping function (the process through which content creators decide which stories will be covered and reported, and thus what information is released to consumers) seems to shift away from producers of content onto its consumers. Because information provided in newer channels often lacks professional gatekeepers to check content, and thus lacks some of the traditional markers used to determine source credibility, consumers become more responsible for making decisions about the credibility of online information. Therefore, in new media environments the gates are located not only with information providers but also with information consumers, who act as their own gatekeepers.
Therefore, gatewatchers fundamentally diffuse information by making sources known to others in the new media environment. Rather than publishing unique information, they make others' information known and add to it. This can be seen in environments such as Facebook when a user publishes a link and then comments on it [33].
Social media as a source of news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of fake news, that is, low-quality news with intentionally false information. The extensive spread of unverified information can have extremely negative impacts on individuals and society. Therefore, rumor detection on social media has recently become an emerging research area that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media inapplicable. First, rumors are intentionally written to mislead readers into believing false information, which makes detection based on news content alone difficult and nontrivial; auxiliary information, such as user social engagements on social media, must therefore be included to help make a determination. Second, exploiting this auxiliary information is challenging in itself, as users' social engagements with fake news produce data that are big, incomplete, unstructured, and noisy [34].
These similarities between the spread of epidemics and the spread of information have allowed researchers to use epidemiological models to model information dissemination and its impact on public opinion [35,36,37].
In this direction, and based on all these facts, we divide the population into three groups for each piece of information: ignorants (people who do not know the information), spreaders (people who are interested in the information and find pleasure in sharing it), and removed (people who consider the information irrelevant or incompatible with their profiles and therefore refuse to share it). We distinguish the pieces of information that share the same objective from those pursuing the opposite objective. Therefore, we propose a new multi-information model consisting of three compartments for each piece of information. In the next section, we present the mathematical model and its compartments.
Then, in Section 3, we introduce the optimal control approach that we propose in order to reduce the number of people who share the targeted information and increase the number of those who remove it. In Section 4, we simulate our results numerically to evaluate the effectiveness of this optimal control strategy in reducing the number of spreaders and increasing the number of removed individuals at an optimal cost.

PRESENTATION OF THE MODEL
Information is easily spread by all means: word of mouth, emails, phone calls, social networks, etc. With the help of all the advanced technologies that facilitate human communication, information spreads quickly. One of the most important factors in spreading information is the "Share" option that accompanies any status update, link, video, or image posted. Content viewers (for example, friends of the creator and subscribers) are allowed to share the post. On almost all social networks, if the content was originally posted publicly, anyone can view and share it [17].
We devise here a compartmental model to study the dissemination of p pieces of information in an online environment of N users (Facebook, WhatsApp, or Twitter groups or pages) through posting, sharing, and discussing [38]. In these online environments, when a user posts information (text, image, video ...), only his neighbors can see it and decide whether this information is worth sharing again or not. If the information is very interesting and some neighbors decide to share it, the author's neighbors can see it and also re-share it. After that, the influence of the information goes beyond the local scope of the author and it can be widely publicized on the network. On the other hand, if none of the original author's neighbors are attracted to this information, it will soon disappear and very few users will see it. At the same time, if neighbors see the message and do not immediately share it, they may gradually lose interest and refrain from sharing this information.
However, if a user notices that some information is being duplicated and shared by many of his neighbors, he will discuss it with his friends via chat tools or face to face, so that he can determine the relevance of this information and then decide whether or not to share it. When people debate a topic, they rely on a set of consistent information to validate their point of view, and thus persuade others who might have an opposing opinion. The aim of the discussion may not be to convince others of a dissenting opinion, but rather to persuade them not to publish more information that supports their point of view.
To incorporate all these considerations in our model, we assume that there are p pieces of information circulating on the internet; that is, J = {i_1, i_2, ..., i_p}, where J is the set of all these pieces of information. We often find several pieces of information that appear different, but whose publication serves the same goal. For example, information on the daily death toll from traffic accidents and information on the number of daily traffic violations recorded share the same goal: improving driving through respect for the law. We can also find information with the opposite purpose; for example, information on traffic jams at a certain time, or on the application of a quarantine from a certain time, may aim to create a state of panic among people and thus increase violations of traffic laws.
Therefore, we suppose that an information z ∈ J has the goal G_1 and an information x ∈ J has the goal G_2.
If G_1 = G_2, then z and x are said to be media-compatible information.
If G_1 opposes G_2, then z and x are said to be media-incompatible information.
If G_1 ≠ G_2 and G_1 does not oppose G_2, then z and x are said to be media-independent information.
For an information z ∈ J we define the following sets:
C(z) = {k ∈ J : k and z are media-compatible},
C̄(z) = {k ∈ J : k and z are media-incompatible},
N(z) = {k ∈ J : k and z are media-independent}.
Our model consists of three compartments for each information j: ignorants, sharers (or spreaders), and removed people. The term "ignorant" (I_j) denotes a person who does not yet know about the information j. The term "sharer" (S_j) denotes a person who is attracted by the information j and/or finds it funny or interesting, and then decides to share it. The term "removed" (R_j) denotes a person who has seen the information j and has decided not to share it, for example because of irrelevance or for other personal reasons. We keep the term "removed" from the classical SIR epidemiological model to denote individuals removed from the sharing system. All transmissions are modeled using the mass action principle, which accounts for the probability of transmission upon contact between the different compartments.
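The three sets above can be made concrete in code. The sketch below (all names and the goal catalog are hypothetical, not from the paper) classifies a small collection of information items by their goals:

```python
# Goal-based classification of information items.
# `goals` maps each item to its goal label; `opposed` lists pairs of
# mutually opposing goals. Names and data are illustrative only.

def compatible(goals, opposed, z):
    """C(z): items sharing the same goal as z (excluding z itself)."""
    return {k for k, g in goals.items() if k != z and g == goals[z]}

def incompatible(goals, opposed, z):
    """C-bar(z): items whose goal opposes the goal of z."""
    gz = goals[z]
    return {k for k, g in goals.items() if (gz, g) in opposed or (g, gz) in opposed}

def independent(goals, opposed, z):
    """N(z): items neither sharing nor opposing the goal of z."""
    others = set(goals) - {z}
    return others - compatible(goals, opposed, z) - incompatible(goals, opposed, z)

# Example: i1, i2 promote safe driving; i3, i4 pursue the opposite goal; i5 is unrelated.
goals = {"i1": "G1", "i2": "G1", "i3": "G2", "i4": "G2", "i5": "G3"}
opposed = {("G1", "G2")}
```

Partitioning J this way makes the sums over C(j) and C̄(j) in the model equations direct set iterations.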
Each information has the potential to be shared, but one may find some information not useful or not fitting the user's interests, in which case there is no need to share it. For example, if the information concerns public opinion (rising costs of education, election cheating, public safety...), the probability of sharing will be very high. Therefore, the potential relevance of the information is taken into account; it is defined based on the proportions of sharers. Let us denote the potential relevance of the information j by the average rate β_j. After a contact between an ignorant I_j of the information j and a sharer S_k of a media-compatible information k (where k ∈ C(j)), the ignorant I_j becomes a sharer S_j as soon as he/she shares the information, at the rate β_j I_j^i S_k^i / N_j^i. A sharer S_j of the information j, after a contact with a sharer S_k of a media-incompatible information k ∈ C̄(j), loses interest in sharing the information j and becomes a removed of the information j at the rate α_j S_j^i S_k^i / N_j^i. Note that 1/α_j represents the power of the information j to persuade people and the ease with which it is accepted: the smaller α_j, the greater the strength of the information j.
Any sharer S_j can lose interest in sharing and decide at any time not to share the information j anymore, for personal or other reasons; he then becomes a removed R_j at the rate γ_j S_j. All these interactions happen at the instant i, and N_j^i is the total population targeted by the information j at instant i, that is, N_j^i = I_j^i + S_j^i + R_j^i. We propose a discrete-time compartmental model describing the interactions between the different pieces of information, governed by the system of equations depicted in Fig. 1; the parameter descriptions can be found in Table 1.
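The displayed system itself is referenced via Fig. 1. As a sketch inferred from the three transition rates just described (compatible-contact sharing at rate β_j, incompatible-contact removal at rate α_j, and spontaneous removal at rate γ_j), not the authors' original display, the dynamics of each information j would take the form:

```latex
\begin{aligned}
I_j^{i+1} &= I_j^i - \sum_{k \in C(j)} \beta_j \frac{I_j^i S_k^i}{N_j^i},\\
S_j^{i+1} &= S_j^i + \sum_{k \in C(j)} \beta_j \frac{I_j^i S_k^i}{N_j^i}
            - \sum_{k \in \bar{C}(j)} \alpha_j \frac{S_j^i S_k^i}{N_j^i} - \gamma_j S_j^i,\\
R_j^{i+1} &= R_j^i + \sum_{k \in \bar{C}(j)} \alpha_j \frac{S_j^i S_k^i}{N_j^i} + \gamma_j S_j^i.
\end{aligned}
```

Each transfer term appears once with a minus sign and once with a plus sign, so N_j^{i+1} = N_j^i, consistent with a closed target population.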

THE OPTIMAL CONTROL PROBLEM
3.1. Presentation of the controls. The main goal here is to eradicate the information j. We introduce control functions describing the possible additional contacts between sharers of the information j and sharers of the other media-incompatible pieces of information k ∈ C̄(j). These controls represent the clarifications and documents that will be shared to prove that the information j is not true or has no importance, or anything else that will stop it from being published further.
Thus, the controlled system is given by the following equations, where S_j^0 > 0, I_j^0 > 0 and R_j^0 > 0 for all j ∈ J.
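As a numerical illustration of the controlled dynamics, the following sketch steps the three compartments of each information forward in time. It assumes, as a hypothetical reading of the control term, that each control u adds extra sharer-to-removed transitions proportional to S_j S_k / N_j for media-incompatible k; parameter values and the function shape are illustrative, not the paper's code:

```python
import numpy as np

def step(I, S, R, beta, alpha, gamma, C, Cbar, u=None):
    """One step of the (controlled) discrete-time multi-information model.
    I, S, R: arrays of length p; C[j] / Cbar[j]: index lists of media-compatible /
    media-incompatible information; u[j][k]: control intensities (assumed form)."""
    p = len(I)
    N = I + S + R
    I2, S2, R2 = I.copy(), S.copy(), R.copy()
    for j in range(p):
        # ignorant -> sharer via contacts with media-compatible sharers
        new_sharers = sum(beta[j] * I[j] * S[k] / N[j] for k in C[j])
        # sharer -> removed via contacts with media-incompatible sharers
        lost = sum(alpha[j] * S[j] * S[k] / N[j] for k in Cbar[j])
        if u is not None:  # assumed control: extra S_j-S_k contacts push sharers out
            lost += sum(u[j][k] * S[j] * S[k] / N[j] for k in Cbar[j])
        lost += gamma[j] * S[j]  # spontaneous loss of interest
        I2[j] = I[j] - new_sharers
        S2[j] = S[j] + new_sharers - lost
        R2[j] = R[j] + lost
    return I2, S2, R2
```

Iterating `step` from the initial conditions reproduces the forward sweep used later in the numerical scheme; switching `u` on should lower the sharer curve and raise the removed curve for the targeted information.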

3.2. Objective functional.
The main objective of this study is to use a variable feedback control function depending on the output of the system. We use an optimal control strategy to reduce the number of sharers and increase the number of removed individuals, at an optimal cost of applying the control. The problem is then to minimize the objective functional given below, where A_k > 0, α_S > 0, α_R > 0 are the weight constants of the control, the sharers, and the removed, respectively; u_j = (u_{1,j}, ..., u_{l,j}), with one control component for each media-incompatible information in C̄(j); and N is the final time of our control strategy.
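The displayed functional is not reproduced here. A form consistent with the stated weights and with the transversality conditions ζ_{1,N} = 0, ζ_{2,N} = α_S, ζ_{3,N} = −α_R of Section 3.4, assuming (as is common) a quadratic running cost in the controls, would be:

```latex
J(u_j) = \alpha_S \, S_j^{N} - \alpha_R \, R_j^{N}
       + \sum_{i=0}^{N-1} \sum_{k \in \bar{C}(j)} \frac{A_k}{2} \, u_{k,i}^{2}.
```

Minimizing J thus penalizes a large terminal sharer count and a large control effort, while rewarding a large terminal removed count through the negative α_R term.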
Our goal is to minimize the number of sharers, minimize the cost of applying the controls, and increase the number of removed individuals for the information j ∈ J. In other words, we are seeking an optimal control u_j^* such that J(u_j^*) = min {J(u_j) : u_j ∈ U_j}, where U_j is the control set defined by (8).
3.3. Sufficient conditions. Theorem 1. There exists an optimal control u_j^* ∈ U_j minimizing J subject to the control system (4)-(6) and the initial conditions.
3.4. Necessary conditions. By using a discrete version of Pontryagin's maximum principle [40,41,42,43], we derive necessary conditions for our optimal controls. For this purpose, we define the Hamiltonian H. Theorem 2. Given optimal controls u^* and solutions I^*, S^*, and R^*, there exist adjoint variables ζ_{k,i}, i = 0, ..., N − 1, k = 1, 2, 3, satisfying the adjoint equations, where ζ_{1,N} = 0, ζ_{2,N} = α_S, ζ_{3,N} = −α_R are the transversality conditions. Proof. Using the discrete version of Pontryagin's maximum principle [41,42], we obtain the adjoint equations. To obtain the optimality conditions, we take the variation of the Hamiltonian with respect to the controls and set it equal to zero; we then obtain the optimal control u_{k,j}. By the bounds of the control set U in definition (8), it is easy to obtain u_i^* in closed form.
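Under the assumption that each control u_{k,i} enters the sharer equation as an extra removal term −u_{k,i} S_j^i S_k^i / N_j^i (with the opposite sign in the removed equation), the stationarity condition ∂H/∂u_{k,i} = A_k u_{k,i} − (ζ_{2,i+1} − ζ_{3,i+1}) S_j^i S_k^i / N_j^i = 0, projected onto the bounds of U_j, gives a characterization of the familiar saturated form (an illustrative sketch under the stated assumption):

```latex
u_{k,i}^{*} = \min\!\left( u_k^{\max},\;
  \max\!\left( u_k^{\min},\;
  \frac{\left( \zeta_{2,i+1} - \zeta_{3,i+1} \right) S_j^i S_k^i}{A_k \, N_j^i}
  \right) \right).
```

The min/max projection simply clips the unconstrained stationary point to the admissible interval [u_k^min, u_k^max].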

NUMERICAL SIMULATIONS
We now present numerical simulations associated with the optimal control problem described above. We wrote code in MATLAB™ and simulated our results using data from Table 2. The optimality system is solved by an iterative discrete scheme that converges following an appropriate test, similar to the one used in the Forward-Backward Sweep Method (FBSM).
The state system with an initial guess is solved forward in time, and then the adjoint system is solved backward in time because of the transversality conditions. Afterward, we update the optimal control values using the values of the state and co-state variables obtained in the previous steps. Finally, we repeat these steps until a tolerance criterion is reached.
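The sweep just described can be illustrated on a deliberately simplified scalar problem. This is a generic FBSM skeleton, not the paper's MATLAB code; the dynamics, cost, and parameter values are all illustrative:

```python
import numpy as np

def fbsm_toy(x0=100.0, N=20, A=10.0, umax=0.9, tol=1e-6, max_iter=500):
    """Forward-Backward Sweep on a toy scalar problem (illustrative only):
    minimize sum_i (A/2) u_i^2 + x_i  subject to  x_{i+1} = (1 - u_i) x_i.
    The structure (forward states, backward adjoints with a transversality
    condition, projected control update, relaxed iteration until two
    successive controls agree) mirrors the scheme described in the text."""
    u = np.zeros(N)
    x = np.empty(N + 1)
    for _ in range(max_iter):
        # 1) forward sweep: solve the state equation with the current control
        x[0] = x0
        for i in range(N):
            x[i + 1] = (1.0 - u[i]) * x[i]
        # 2) backward sweep: adjoint equation with transversality zeta_N = 0
        zeta = np.empty(N + 1)
        zeta[N] = 0.0
        for i in range(N - 1, -1, -1):
            zeta[i] = 1.0 + zeta[i + 1] * (1.0 - u[i])
        # 3) stationarity A*u_i = zeta_{i+1} x_i, projected onto [0, umax]
        u_new = np.clip(zeta[1:] * x[:N] / A, 0.0, umax)
        # 4) relaxed update; stop when successive controls agree
        u_next = 0.5 * (u + u_new)
        if np.max(np.abs(u_next - u)) < tol:
            return u_next, x
        u = u_next
    return u, x
```

The relaxation in step 4 (averaging old and new controls) is the standard stabilization for the sweep; without it, the projected update can oscillate between iterations.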
In these simulations, we suppose that there are 4 different pieces of information. To achieve the optimal results obtained by following our control strategy, we suggest applying it within the first 24 hours of the information appearing, in order to bring forward the peak of sharers. When those concerned do not provide more explanatory information, people can be left feeling that something is wrong, which leads to a lot of gossip. In the case of government rumors, some information can build trust and strengthen social stability. In sub-figure (c), we can see the comparison between the number of removed individuals R_2 with and without the use of the optimal controls; the number of removed individuals for media-compatible information is insensitive to these optimal controls.

CONCLUSION
In this article, we proposed a multi-information discrete-time model describing the dissemination of several pieces of information within a population, whether shared by word of mouth, via video calls, or in certain types of online environments such as Facebook, WhatsApp, and Twitter. We presented the model and the different possible interactions between its compartments.
We considered all information sharing the same objective and information sharing the opposite objective in order to study their mutual effect. We divided the entire target population into three groups for each piece of information and considered the possible transitions between them. We suggested an optimal control strategy that helps to eliminate a targeted piece of information, characterized the optimal controls using a discrete version of Pontryagin's Maximum Principle, and illustrated the efficiency of the proposed approach through numerical simulations.

CONFLICT OF INTERESTS
The author(s) declare that there is no conflict of interests.