Mitigating On-Off Attacks in the Internet of Things Using a Distributed Trust Management Scheme

In the Internet of Things (IoT), physical objects are able to provide or request specific services. The purpose of this work is to identify malicious behavior of nodes and prevent possible On-Off attacks in a multiservice IoT. The proposed trust management model uses information generated from direct communication with the nodes to evaluate trust between nodes. This distributed approach allows nodes to be completely autonomous in making decisions about the behavior of other nodes. We perform network simulations using Contiki-OS to analyze the performance of the proposed trust model. Simulation results show effectiveness against On-Off attacks and also a good ability to recognize malicious nodes in the network.


Introduction
The term Internet of Things (IoT) arises from the need to establish heterogeneous environments where devices with varying processing capabilities can cooperate and communicate in an intelligent environment transparently to the user [1,2]. In IoT, heterogeneous devices are interconnected and in turn connected to the global Internet anytime and anywhere, allowing the availability of information in real time [1][2][3]. These objects are called smart objects and have the ability to perform daily tasks with minimal human intervention. IoT integrates a large number of everyday devices from heterogeneous network environments, bringing a great challenge to security and trust management [3,4]. These devices will often be exposed in public areas and communicate through wireless channels and are hence vulnerable to malicious attacks.
The trust management concept has been proposed in order to provide solutions to problems such as key management, authentication mechanisms, and secure routing [5]. The basic idea of trust management is to establish trust between two individual nodes. Trust management is a mechanism that also allows identifying malicious, selfish, and compromised nodes. It has been widely studied in many network environments such as peer-to-peer, grid, ad hoc, and wireless sensor networks [5][6][7]. The trust management mechanisms currently proposed in the literature do not meet all requirements for a functional implementation in the IoT context. There are only a few works focusing on the IoT [8].
The mobility of the nodes also plays an important role when designing such mechanisms and protocols. The limited processing, storage, and power of some objects in the IoT context, as well as the heterogeneity of the network and the different services to be offered under this new paradigm, makes the development of new trust management mechanisms necessary. In IoT, all information about the user will be available online; creating an efficient, lightweight trust management mechanism that fits the characteristics of each object is indispensable [8][9][10]. Also, trust management schemes are susceptible to attacks such as bad mouthing, selective attacks, and On-Off attacks, which have to be considered in the system [11]. One important characteristic of IoT not explored in trust systems for ad hoc and sensor networks is the multiservice approach [1,8,11]. In IoT environments, there are heterogeneous nodes which can provide different types of services. Each service demands different resources from each node.
Our work is based on [11], where the authors propose a centralized trust management scheme. The trust management in [11] is carried out by a previously configured trusted entity using a multiservice approach. In this work, however, we propose a distributed management model where the trust value is computed directly by any node, without the need for a central entity. Based on the assistance provided to a service, any node may infer the trust of other nodes. We propose a trust management scheme for the IoT that considers the past behavior of nodes in different cooperative services. We study the behavior of the proposed system in the presence of On-Off attacks. In these attacks, malicious nodes alternate between good and bad behavior, compromising the network if they remain trusted nodes.
The paper is organized as follows: in Section 2, we discuss some trust management concepts for IoT; our proposed model is presented in Section 3; Section 4 shows the simulation results and, finally, Section 5 gives the final considerations.

Trust Management for the Internet of Things
Trust in IoT is a term that involves the analysis of the behavior of the devices connected to the same network. A trust relationship between two devices helps influence the future behavior of their interactions. When devices trust each other, they prefer to share services and resources to a certain extent. Trust management allows the computation and analysis of trust among devices to make suitable decisions in order to establish efficient and reliable communication among devices [5,9,10]. Basically, trust management is the set of mechanisms used to evaluate, establish, maintain, and revoke trust between devices of the same or different networks within the IoT environment.
In the literature, it is possible to find a large number of definitions of trust. An overview of the definitions of the terms trust and reputation is presented in [5]. According to the authors, there is no definition which can satisfy the context and time dependencies and the dynamic nature of trust. One of the most cited definitions in the literature describes trust as "a person's expectation about the actions of others that affect the choice of the first person." We also find works where the notion of trust is related to reputation, with the assertion that the first (trust) is a derivation of the second (reputation). Reputation is defined as the opinion of one node about another and is built over time based on the node's behavior history. It can be reflected as positive, negative, or uncertain.
Some authors summarize the characteristics of trust and reputation in wireless systems in [5][6][7].
Trust Is Useful Only under Uncertainty. Trust is useful only in an environment characterized by uncertainty and where the participants need to depend on each other to achieve their goals.
Trust Is Context Sensitive. The trust relationship is based on how well the subject's capabilities suit the context in which the relationship exists.
Trust Is Subjective. The formation of an opinion about someone's trustworthiness depends not only on the behaviors of the subject but also on how these behaviors are perceived by the agent.

Trust Is Unidirectional. An agent's trust in a subject is based on the knowledge that it has about the subject but may not be reciprocated.
Trust May Not Be Transitive. Node A does not have to rely on node B just because node C that it trusts knows node B.
In [9], the author discusses some questions about trust in an IoT environment from a human perspective. It is discussed whether humans can trust devices in IoT, not whether devices can trust other devices. The proposal of [10] is based on the concept of communities formed by nodes in a spontaneous ad hoc network. The trust chain in a wireless network is created based on physical proximity, reflecting the way humans interact. In this scheme, devices establish their trust based on their initial communities or companies. In [12,13], Bao and Chen propose different metrics to be used in a trust management system, including a node's cooperation to provide a service and recommendations. The authors evaluate the proposed strategy in the presence of bad mouthing and good mouthing attacks. In [14], the authors propose a different approach for trust using a simple game approach which achieves Bayes equilibrium; the IoT network consists of RFID nodes. Chen et al. [15] propose a trust management scheme for IoT where the nodes report recommendations to their neighbors and the trust is computed at the constrained device. The work presented in [11] considers a context-aware and multiservice approach for IoT, where trust is evaluated based on direct and indirect observations (recommendations). In a multiservice IoT, nodes can provide different services, and each service has a different cost in terms of resource consumption of the nodes. The trust in [11] is not computed at individual nodes but in a centralized trust manager. The scheme is evaluated in the presence of bad mouthing, selective, and On-Off attacks. We believe the multiservice approach is a more realistic scenario for IoT. Our proposed scheme is inspired by [11], but we use a distributed approach instead of a centralized strategy.
All these trust management schemes are vulnerable to some types of attacks. The On-Off attack is of special interest because it exploits the vulnerabilities of trust management strategies [16,17]. By alternately behaving as a good node and as a bad node, the attacker may cause the scheme to consider the behavior of a bad node as a temporary error. Thus, the malicious node remains active in the network and is not detected. The On-Off attack has two states: the On state is the attack state; in the Off state the node behaves as a good node. The attacker switches between these two states. A higher ratio of the On state to the Off state makes the attack more effective but also makes it easier for a trust management scheme to detect the malicious behavior [17].
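The alternating behavior described above can be sketched as a tiny state machine. The following Python fragment is an illustrative sketch only, not part of the paper's implementation; the class and parameter names (OnOffAttacker, on_prob) are our own assumptions.

```python
import random

class OnOffAttacker:
    """Illustrative sketch of an On-Off attacker (names are ours, not the paper's).

    In the On state the node refuses requested services (attack);
    in the Off state it behaves like an honest node.
    """

    def __init__(self, on_prob=0.5):
        # Probability of being in the attack (On) state for a given request.
        self.on_prob = on_prob

    def provides_service(self):
        # True when the request is honored (Off state), False when attacking.
        attacking = random.random() < self.on_prob
        return not attacking
```

A node with a small on_prob spends most of its time behaving well, the stealthier case a trust scheme must still catch.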
Although some works consider a centralized scheme [11], most consider a distributed approach [12][13][14][15] or even a hybrid approach [10]. The choice between a centralized and a decentralized trust management scheme depends on the scenario considered.
Trust can be computed on demand (when a node needs to communicate with another node) or evaluated on a regular basis. The first benefits from a centralized approach and the second from a distributed approach. The authors in [11] state that trust information computed on a regular basis and disseminated in the network would result in transmission overhead, depleting the energy of the constrained nodes and affecting network performance. The trust information would also have to be stored in memory-constrained nodes. These arguments lead the authors in [11] to propose a centralized strategy with trust management servers responsible for the trust computational load. However, it is not possible to deploy such servers in all IoT scenarios. Ad hoc networks, for example, do not have a central entity and are an important part of the IoT. The cost to install and maintain such servers may also be unfeasible in many application scenarios. We think both centralized and distributed approaches can coexist. In this work, we investigate a multiservice approach for trust management, similar to [11], but with a distributed scheme.

Proposed Distributed Trust Management Scheme for a Multiservice IoT
For the design of a functional trust management model for the IoT environment, it is necessary to consider some factors and basic characteristics that influence the decision of whether or not to trust an entity. For the assessment of trust, we must take into account that there are levels of trust between entities. For example, an entity trusted for a particular task may not be reliable for other tasks. In an IoT environment with a service approach, nodes are required to provide services toward a common goal, but not all nodes have the same resource capabilities to provide every kind of service. For this reason, it is important to evaluate the trust level taking into account the current context and resource capabilities. Traditional schemes for ad hoc and sensor networks define trust levels for different nodes in a similar manner. Also, trust management schemes for sensor networks define a global score for any type of assisted service. We consider that a node may offer different types of services. A lightweight service and a more resource-consuming service have different impacts on the trust score, a more realistic scenario for IoT. Our proposal was developed with the objective of giving nodes the possibility to manage their own trust values with respect to the services provided by other nodes. We implemented a distributed mechanism where all devices have the same features, considering the constraints of the IoT environment. We propose a trust evaluation model which is able to identify the trustworthiness of a sensor node in order to filter out malicious behavior in the network. The proposed trust management scheme does not have a central entity that manages the communications or trust values between nodes; therefore, each node has an autonomous and independent behavior. The trust management model is divided into three phases: neighbor discovery, service request, and trust computation.
Phase 1 (initial communication for neighbor discovery). At the beginning of the lifetime of the network, all nodes start with an initial trust value equal to 0 (zero). We assigned this value under the assumption that all nodes are initially unknown and that only through communication will they discover whether other nodes can be trusted to maintain communications.
To fill the trust table with the initial trust value of 0 (zero), the neighbor discovery process is performed by all nodes. Neighbors announce their presence by periodically sending out announcement packets. These announcements are used to populate the neighbor table. The trust table is then filled with information about the nodes in the same transmission range, giving each neighbor an initial trust value of 0 (zero).
The value 0 can mean two things in our design: (1) ignorance of a node's behavior, that is, initial (unknown) trust, or (2) a neutral score in which accumulated rewards and punishments cancel each other out.
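Phase 1 can be sketched as a small trust table keyed by neighbor ID. This Python fragment is illustrative only; the names (TrustTable, on_announcement) are our own and do not come from the paper's Contiki implementation.

```python
class TrustTable:
    """Illustrative sketch of Phase 1 (names are ours, not the paper's):
    every newly discovered neighbor starts with trust 0, meaning unknown."""

    def __init__(self):
        self.trust = {}  # neighbor id -> trust value in [-1, 1]

    def on_announcement(self, neighbor_id):
        # Announcement packets populate the table; a neighbor that is
        # already known keeps its accumulated trust value.
        self.trust.setdefault(neighbor_id, 0.0)
```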
Phase 2 (service request to a neighbor). In our design, each node in the network is able to provide a different number of services. As envisioned in the IoT, smart objects will be able to establish relationships with other nodes based on cooperation for services. Each service has a fixed reward and punishment value applied each time it is provided or not to the neighbor who requested it. The following list shows the weight system per service used in our trust management model. The punishment for a node that does not provide a service is twice the reward for properly providing a service, thus discouraging bad behavior.

Weight System of Services
(i) Reward for providing the service: note = +w_s.
(ii) Punishment for not providing the service: note = −2 · w_s.
Here, the note is the score given to a node for providing (or not providing) a requested service and w_s is the weight assigned to each service, calculated by w_s = v_s · α, where v_s is a value assigned to a specific service and α is an adjust factor which varies in 0 < α < 1 and expresses the expected rapidity of change in the node's trust. If a high value is assigned to α, the trust will converge quickly. If a low value is assigned to α, the trust will converge slowly. The services are valued according to their processing cost in the network; each service has different energy and processing requirements on the node. Services that require more processing capacity have a high value of v_s, and services that do not require many resources have a low value of v_s.
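The weight system above can be expressed as two small helper functions. This is a sketch under the notation just introduced (weight w_s = v_s · α, punishment twice the reward); the function names are our own.

```python
def service_weight(v_s, alpha):
    """Weight of a service: w_s = v_s * alpha.

    v_s: value assigned to the service (higher for costlier services).
    alpha: adjust factor controlling how fast trust converges.
    """
    return v_s * alpha

def note(provided, w_s):
    # Reward +w_s when the service is provided; punishment -2*w_s otherwise,
    # so refusing a service costs twice as much as providing it earns.
    return w_s if provided else -2.0 * w_s
```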
Phase 3 (trust calculation of the neighbor). The trust value T that node A computes for node B is the sum of the notes of all requested services: T = Σ note_k, where note_k is the score assigned after the k-th requested service. Trust is a value ranging from 1 to −1, where 1 is the maximum trust score and −1 is the minimum (maximum distrust) score that a neighbor node can reach in the trust table. When node A requests a service from node B and node B successfully provides that service, node B is rewarded by node A with an increase of its trust value in the trust table of node A, depending on the service that was provided. The trust value of that node grows according to the system of weights shown above. Nodes that do not provide services on the network are punished by the node requesting the service: their trust value decreases each time they do not provide the requested service, again according to the system of weights shown above.
When services are not provided by the nodes, this is considered malicious behavior and is punished with a specific weight. In our trust model, malicious behavior is expressed in negative values; a trust value closer to −1 means high distrust in the node to provide any type of service. Conversely, the trust in a node is measured as a positive value from 0 to 1, and the closer the value is to 1, the more trustworthy the node is to provide any type of service in the network.
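Combining the weight system with the accumulation rule of Phase 3 gives a one-line trust update. The clamping of the accumulated sum to [−1, 1] is our assumption, inferred from the stated trust range; the paper does not spell out how the bounds are enforced.

```python
def update_trust(current, provided, w_s):
    """Add the note for the latest request and keep the sum inside [-1, 1].

    The clamping step is our assumption; the paper states the trust range
    but not the mechanism that enforces it.
    """
    delta = w_s if provided else -2.0 * w_s
    return max(-1.0, min(1.0, current + delta))
```

Starting from 0, repeated refusals drive the value toward −1 roughly twice as fast as cooperation drives it toward +1.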

Simulation Results
In this section, we give numerical results obtained by executing our proposed trust management model for IoT devices. We implemented the trust management scheme and created different simulation environments in the COOJA simulation platform included in Contiki-OS [18].
The experiments used the UDGM (Unit Disk Graph Medium) radio model. The UDGM radio model allows for the easiest setup of the simulation and is sufficient for our purpose. We use the ContikiMAC implementation as the Radio Duty Cycling (RDC) layer and CSMA (Carrier Sense Multiple Access) as the MAC layer. ContikiMAC is a duty cycling mechanism that allows nodes to keep their radio off most of the time (>99%) to save energy while still being able to relay multihop messages.
The simulated network consists of Tmote Sky nodes. These nodes are randomly distributed with a mote start delay of 1000 ms. Each node requests a service from a random neighbor at a random interval between 0 and 60 seconds. The nodes are capable of providing 3 types of services with the following values: v_1 = 0.1, v_2 = 0.05, and v_3 = 0.02. The value of the adjust factor α is 1 (one). We implemented and simulated 3 scenarios for the On-Off attack. In the On-Off attack, malicious nodes stop providing services that are offered on the network. This attack exploits the dynamic properties of trust through time-domain inconsistent behaviors. In each scenario, the number of malicious nodes is set to 10%, 20%, and 30% of the total number of nodes, respectively. Figure 1 shows the network topology for a scenario with 50 nodes, of which 10% are malicious nodes that perform the On-Off attack.
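As a rough plain-Python abstraction of this setup (COOJA/Contiki itself is not reproduced; the service values and adjust factor follow the text, while the request model and helper names are our own assumptions), one can sketch how a neighbor's trust evolves over repeated requests:

```python
import random

# Service values and adjust factor taken from the text; everything else
# is our own abstraction of the COOJA setup.
SERVICE_VALUES = [0.1, 0.05, 0.02]  # v_1, v_2, v_3
ALPHA = 1.0                          # adjust factor used in the simulations

def simulate_requests(n_requests, provides, seed=42):
    """Accumulate one neighbor's trust over n_requests random service requests.

    provides: callable returning True when the neighbor honors a request.
    """
    rng = random.Random(seed)
    trust = 0.0  # initial (unknown) trust from Phase 1
    for _ in range(n_requests):
        w_s = rng.choice(SERVICE_VALUES) * ALPHA
        delta = w_s if provides() else -2.0 * w_s
        trust = max(-1.0, min(1.0, trust + delta))  # clamp to the trust range
    return trust
```

A neighbor that always refuses is driven to −1, while one that always cooperates reaches +1, mirroring the trends in the trust curves analyzed in this section.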
For the analysis of the results, the selected node in this simulation scenario is the node with ID 36. Node 36 has a total of 17 neighbors (15 honest and 2 malicious, nodes 2 and 5). This node was chosen based on two criteria: the total number of neighbors and the number of malicious neighbors in its transmission range. The selected node represents the behavior of any well-behaved node in the network; selecting a different node would yield results similar to those presented in the paper. Figure 2 presents the average score assigned to the malicious nodes during the simulation time. As we can observe, all the malicious nodes in this scenario are detected by node 36 in our trust model from the beginning of the simulation.
The trust values of the malicious nodes in the trust table of node 36 decrease constantly until reaching the maximum negative score of distrust. In this example, the time to reach a trust value close to −1 is high (about 75 minutes for T = −0.65 or 130 minutes for T = −0.8). But even in the initial interactions the trust value is already decreasing and the nodes are able to detect an anomalous behavior. At no point does a malicious node hold a high trust value T that would make it be considered a trusted node in the network. The service request model used is one request per node per minute. In an environment with more interactions among nodes, the time to detect a malicious node will decrease.
Our motivation is to test whether the trust management scheme detects malicious activity, not the time needed to reach T = −1, which depends on the traffic generated in the network. This observation is valid for all results in this section.
Figure 3 shows node 32 (honest) and nodes 2 and 5 (malicious). The selected node takes an average of about 1.50 hrs to assign the maximum value of distrust to the misbehaving nodes and 1.39 hrs to assign the maximum trust to a well-behaved node. Figure 4 shows the network topology for a scenario with 50 nodes, of which 20% are malicious nodes that perform the On-Off attack. Node 32 is the selected node in this simulation scenario. Node 32 has a total of 29 neighbors; four of these neighbors (nodes 5, 6, 7, and 10) are malicious nodes in its transmission range.
As can be seen in Figure 5, when the average values are calculated for all malicious nodes, the score always remains negative until reaching the maximum value of distrust in the trust table of the node performing the behavior assessment. In this scenario, the rate of malicious nodes is increased and the number of neighbors with which node 32 has to communicate is also larger. The number of malicious nodes impacts the assignment of the maximum score of distrust: the time to reach a trust value of −0.8, for instance, is higher in this scenario. The node takes on average about 2.36 hrs to assign the maximum value of distrust to the malicious nodes and 1.40 hrs to assign the maximum trust to a well-behaved node. In Figure 6, node 20 is a well-behaved node and nodes 5, 6, 7, and 10 are malicious nodes. It shows that, on average, the minimum score a malicious node can obtain is −0.6, which represents malicious behavior on the network. As with the previous simulation scenario, the average time shown in Figure 6 is influenced by the number of neighbors with which node 32 has to communicate; that is, a larger number of neighbors involves more time to assign the maximum score to the nodes. Figure 7 shows the network topology for a scenario with 10 nodes, of which 30% are malicious nodes that perform the On-Off attack (nodes 1, 2, and 3). The neighbor relations of the good nodes are shown in Table 1.

Table 1: Neighbors of the good nodes.
Node 4: malicious neighbor 3; honest neighbors 5 and 10.
Node 5: malicious neighbors 2 and 3; honest neighbors 4 and 6.
Node 6: malicious neighbors 2 and 3; honest neighbors 5, 7, and 8.
Node 7: malicious neighbor 2; honest neighbors 6 and 8.
Node 8: malicious neighbors 2 and 3; honest neighbors 6 and 7.
Node 9: malicious neighbors 1, 2, and 3; honest neighbors: none.
Node 10: malicious neighbors: none; honest neighbor 4.

The setup of this scenario differs from the two previous simulation scenarios. We implemented it to monitor the variability of the score of the malicious nodes over a much larger time span than in the previous simulations. Figure 8 shows the trust relationships of the honest nodes with the malicious nodes in the same range and their respective trust scores for a period of 43 hours.
The proposed trust model performs well against the On-Off attack. The highest positive score that a malicious node obtains on average is 0.4, but this trust score falls very quickly until the node is listed again as malicious with the maximum distrust. Figure 9 shows the average time to assign the maximum value of distrust to the malicious nodes. As we can observe, the tendency of the score is to fall into negative values until reaching the maximum value of distrust (−1).
The results obtained from the simulation scenarios using the On-Off attack show that a factor influencing the detection of malicious nodes is the number of neighbors with which the node has to communicate. A large number of nodes communicating with each other creates a delay in the assignment of the maximum score of distrust to the malicious nodes.

Conclusions
In this paper, we implemented On-Off attacks to study the effectiveness of a trust management mechanism. The proposed model assigns positive trust scores to the cooperating nodes and negative trust scores to the malicious nodes. The trust assessment is performed using direct observations. The detection of malicious behavior in On-Off attacks depends on the number of malicious nodes, the position of the node, and the volume of traffic in its transmission range. The simulation results show the proper operation of our trust management model on resource-constrained nodes in the IoT environment. Future work may consider other attacks to evaluate the performance of our trust model, such as selective attacks and bad mouthing attacks. Also, the trust of each node can be computed using not only direct observations but also recommendations from neighbors.