Cross-Layer Design of an Energy-Efficient Cluster Formation Algorithm with Carrier-Sensing Multiple Access for Wireless Sensor Networks

A new energy-efficient scheme for data transmission in a wireless sensor network (WSN) is proposed, having in mind a typical application including a sink, which periodically triggers the WSN, and nodes uniformly distributed over a specified area. Routing, medium access control (MAC), physical, energy, and propagation aspects are jointly taken into account through simulation; however, the protocol design is based on some analytical considerations reported in the appendix. Information routing is based on a clustered self-organized structure; a carrier-sensing multiple access (CSMA) protocol is chosen at the MAC layer. Two different scenarios are examined, characterized by different channel fading rates. Four versions of our protocol are presented, suitably oriented to the two different scenarios; two of them implement a cross-layer (CL) approach, where MAC parameters influence both the network and physical layers. Performance is measured in terms of network lifetime (related to energy efficiency) and packet loss rate (related to network availability). The paper discusses the rationale behind the selection of MAC protocols for WSNs and provides a complete model characterization spanning from the network layer to the propagation channel. The advantages of the CL approach are shown with respect to an algorithm belonging to the well-known class of low-energy adaptive clustering hierarchy (LEACH) protocols.


INTRODUCTION
Wireless sensor networks (WSNs) are composed of low-cost low-energy nodes, whose battery is normally not replaced during network lifetime. Nodes sense the environment and are equipped with radio transceivers which allow them to act as both transmitters and route-and-forward devices.
Typical applications include a sink, which periodically triggers the WSN, and a large number of nodes deployed without detailed planning in a given area.
The characteristics of WSNs and their applications make energy conservation and self-organization primary goals with respect to per-node fairness and latency [1,2,3,4]. As a result, the main performance figure in these cases is network lifetime, that is, the time elapsing between network deployment and the moment when the percentage of nodes still active falls below a given threshold which depends on the application. Accordingly, many self-organizing and energy-efficient protocols have been recently developed for data transmission in WSNs [5,6,7,8,9,10,11,12,13].
The cross-layer design (CLD) paradigm seems to be a promising solution to the conflict between the requirements of large scale and long lifetime and the constraints of limited node resources and low battery capacity [14]. Two different CL approaches exist: the first considers a layered structure of protocols, with vertical entities providing exchange of data between all layers; the second, instead, considers a protocol structure where the different layers cannot be distinguished. The former approach is simpler, as it keeps the existing protocol layer structure and provides additional exchange of information between layers via a single vertical entity [15]. In this approach, it is important to identify traditionally hidden interdependencies among layers and to find the relevant metrics that capture these dependencies and must be exchanged among layers to optimally adapt to network dynamics. Some CL works are based on this approach, but most of them focus on the interactions between two layers only and consider, mainly, the performance in terms of network lifetime. In [16], the authors develop CL interactions between the MAC and network layers to achieve energy conservation; in particular, the MAC layer provides the network layer with information pertaining to the successful reception of packets, and the network layer, in turn, chooses the route that minimizes the error probability. In [17], a cluster design method that allows the evaluation of the optimum number of clusters to realize power saving and coverage is developed; to this end, a dynamic adjustment of the number of clusters is proposed.
Our approach follows the former, layered CL design: a suitable interplay between the MAC and routing protocols, and between the physical and MAC layers, is introduced; moreover, performance is evaluated both in terms of energy efficiency and in terms of packet loss.
A routing protocol architecture that provides good results in terms of energy efficiency for WSNs is low-energy adaptive clustering hierarchy (LEACH) [9,10]. LEACH includes a distributed cluster formation technique, which enables self-organization of large numbers of nodes with one node per cluster acting as cluster head (CH), and algorithms for adapting clusters and rotating CH roles to evenly distribute the energy load among all nodes. The nodes forward their data to the sink through the CH according to a two-hop strategy. Starting from the basic idea of LEACH, in [18], a new routing strategy, denoted as LEACH B, is proposed and the performance shows improvements in terms of network lifetime in a large range of situations.
As far as MAC aspects are concerned, two main families of protocols can be considered: those based on collision-free strategies and those relying on suitable retransmission techniques to overcome the potential collisions caused by uncoordinated transmissions. The proper selection of the family of MAC protocols is a critical issue for energy efficiency.
In the original proposal of LEACH [9,10], a time division multiple access (TDMA) schedule is defined by the CHs to ensure that there are no collisions among data messages. However, this centralized control at the CH requires the transmission of control packets, which makes the protocol complex; moreover, this overhead creates energy inefficiency. In [19], a self-organization protocol for WSNs called self-organizing medium access control for sensor networks (SMACS) is proposed: each node maintains a TDMA-like frame in which it schedules different time slots to communicate with its known neighbors. A different approach, though still based on coordinated actions to avoid packet collisions, can be found in sensor-MAC (S-MAC) [20], which sets the radio in sleeping mode during the transmissions of other nodes. The contention mechanism is the same as in IEEE 802.11, using request-to-send (RTS) and clear-to-send (CTS) packets.
When dealing with collision-prone MAC techniques, carrier-sensing multiple access (CSMA) is a usual choice in WSNs [21]. The advantage here is that no extra signalling to schedule transmissions and coordinate data flows is required; on the other hand, collisions might occur, and suitable backoff algorithms are needed to recover data.
An OMNeT++ platform [22] is used in this paper to simulate a WSN composed of several tens of nodes randomly and uniformly distributed over a square area, accounting for routing, MAC, physical, energy, and propagation aspects. In particular, we propose a novel cluster formation algorithm, which we name LEACH B+; it introduces the possibility for nodes to transmit to the sink using a direct path, when this is energetically efficient, and is based on a new CH election algorithm which significantly improves network lifetime. We also introduce a time division between the data transmissions of the different phases of the algorithm, which reduces the packet loss rate. Moreover, we employ a CSMA protocol based on IEEE 802.11 [23]. If collisions are reduced by suitably dimensioning the average cluster size, this choice leads to high energy efficiency. A relevant energy waste in CSMA protocols is due to idle listening, which occurs when the node is sensing the channel to check whether packets are being sent. To avoid this energy loss, an ON/OFF modality, which consists in periodically turning the radio components off and on, can be implemented, as usual in WSNs [21].
We apply the CL paradigm to the design of a protocol for WSNs where MAC and routing (i.e., cluster formation) aspects are jointly considered and optimized: the decisions to be taken for cluster formation rely on parameters extracted from the MAC; also, some physical layer parameters (like transmit power) are based on MAC layer protocol status.
We consider two different scenarios, in which the propagation channel fluctuations vary at different rates; it is shown that the protocol design can take advantage of the knowledge of the fading rate.
We study the network lifetime and the packet loss rate for the two different scenarios and we make a comparison between the protocols with and without the CL paradigm.
The paper is organized as follows. Since in WSNs the protocol choices are application-specific, Section 2 describes the reference scenario and application, and discusses the choice of the MAC protocol. Section 3 presents the LEACH B+ routing protocol, with details on the CH election and cluster formation algorithms when no CLD is considered, for the two different scenarios. Then, in Section 4, the MAC strategy is presented. Sections 5 and 6 are devoted to the description of the physical and energy aspects, respectively. The CL approach and its impact on the cluster formation algorithms presented in Section 3.2 are discussed in Section 7. Simulation results are reported in Section 8, and the conclusions are drawn in the final section. The appendix presents the new CH election algorithm proposed in this paper, which shows a very good performance improvement with respect to previously published protocols; its description is placed in the appendix to keep the paper readable.

Reference scenario
The reference scenario we assume consists of N_TOT sensors randomly and uniformly distributed over a square area (having side M) and a sink located at a given distance d from the center of the square, as shown in Figure 1. The network must be able to deliver the information detected by the nodes to the sink, which periodically (every T_R seconds) broadcasts a short packet that we call "start" and waits for the replies from the nodes. We denote by "round" the period of time between two successive start packets sent by the sink. During each round, all sensors should send their information to the sink.
The wireless channel is assumed to be characterized by random fluctuations that are modeled as Gaussian distributed on a logarithmic scale. A distance-dependent path loss is also considered. The model is motivated by the presence, in many WSN deployments, of obstacles (ground, foliage, cars, human bodies, depending on the application).

Reference application and motivation for the choice of LEACH and CSMA
This work, though presenting ideas, approaches, and results which are much more general, has been inspired by a specific application: the monitoring of a car parking area, where nodes sense the presence of cars and interact to communicate with a sink, which provides cars entering the parking area with information about the best way to reach the closest free slot. Other specific applications that can be considered are based, for example, on the estimation of a target multidimensional process, such as seismic waves through acoustic sensor arrays, ground temperature variations in a small volcanic site, or the structural monitoring of buildings, by means of samples captured by nodes randomly and uniformly distributed. Samples are then transmitted to a sink with a self-organizing and distributed routing strategy. As for network aspects, routing algorithms for WSNs can be classified into three categories: multihop flat, hierarchical, and location-based [24]. In the first category, each node plays the same role and sensors collaborate to perform the sensing task. The second category, instead, refers to protocols where sensors are organized in clusters and particular tasks are assigned to cluster heads; thus, not all nodes have the same role in the network [25,26]. Finally, in the third kind of protocols, sensors exploit the knowledge of their position in the network, obtained, for example, through GPS. Multihop flat protocols may suffer from scalability issues, whereas hierarchical protocols (unless the number of levels of the hierarchy is unlimited) can be applied only where the maximum distance between nodes and the sink is not too large. We will set values of d and M not larger than 100 m, so cluster-based algorithms like those belonging to the LEACH family represent a good choice.
Concerning the MAC, the selection of a protocol belonging to the collision-free or the collision-prone family requires a comparison between the time T_R elapsing between two start packets and the coherence time of the environment T_coh, which measures how slowly or quickly the channel attenuation fluctuates.
In fact, when T_coh is much larger than T_R, a suitable scheduling of transmissions, which requires extra signalling between nodes, can be kept fixed for many rounds, thus reducing the impact on network lifetime of the related energy waste. On the other hand, if this condition does not hold, the channel tends to be independent in different rounds, and a collision-free protocol which tries to schedule transmissions in order to avoid collisions becomes energy inefficient, since the extra signalling to manage the scheduling is required at each round.
The application we consider is characterized by values of T_R which are larger than, or of the same order as, T_coh, and the natural choice in this case is CSMA.
In particular, we will consider two different cases: the first with T_coh ≪ T_R (scenario 1) and the second with T_coh ≃ T_R (scenario 2). More precisely, in the former case, the channel fluctuations are completely uncorrelated at each round, whereas, in the second scenario, we assume a block-fading model, where the random variables characterizing the propagation channel remain constant for two subsequent rounds, and then change according to a memoryless process.
The following assumptions concerning the application are also made.
(i) Nodes and the sink are static (no mobility).
(ii) Nodes do not know their position in the area.
(iii) Each node is aware of the sink position with respect to a given reference coordinate system; in particular (as described in the appendix), the sink includes information about its position in the trigger, so that nodes are aware of it.
(iv) Each node can use power control to vary its transmit power.

THE ROUTING PROTOCOL-LEACH B+
We propose a new routing strategy which combines LEACH B [18] with a simple single-path routing protocol, including direct transmission to the final sink, without passing through CH nodes, when this is energetically efficient. Moreover, a new CH election algorithm is proposed. Two different versions of our new algorithm are suitably designed for scenarios 1 and 2; we name them LEACH B+ v1 and LEACH B+ v2, respectively.
In case of LEACH B+ v1, a clustering protocol based on two phases, performed whenever nodes receive the start packet from the sink, is designed.
(1) Setup. Clusters are formed according to a two-step procedure: a distributed self-election algorithm is run by the nodes to elect the cluster heads (CHs); then each CH broadcasts a packet announcing its role, and the nodes that did not elect themselves as CHs either select the cluster to join or decide to transmit directly to the sink. Details are given below.
(2) Transmission. Each non-CH node belonging to a given cluster transmits its packet to the respective CH, which, in turn, sends all packets received from the cluster, plus the one it generated, to the remote sink. Alternatively, nodes transmit directly to the sink.
In LEACH B+ v2, instead, the first phase is performed once every two rounds: nodes which elected themselves as CHs remain CHs for the following round, so the CH election algorithm is not carried out at every round (except when no CHs were elected; in that case, the CH election algorithm is performed at the subsequent round, too). With this strategy, CH nodes have to transmit the initial broadcast packet only once every two rounds, since the information about which sensors are CHs remains unchanged for two rounds. As we will see in Section 8, this version reduces energy consumption.
All other aspects of LEACH B+, which will be described in this section and in Sections 4-6, do not change between the two versions (namely, v1 and v2).
In this paper, we also introduce a subdivision of the time axis into three periods, one for each phase of the algorithm (taking into account that the first phase is, in turn, divided into two steps), to reduce collisions between packets (see Figure 2).
(1) T_CF: during this period, the start packet and the CHs' broadcast packets are sent.
(2) T_IC: non-CH nodes send their packets to the CHs.
(3) T_TS: transmissions toward the sink take place.

Cluster-head selection algorithm
LEACH B+ forms clusters by using a distributed algorithm, where nodes make autonomous decisions without any centralized control. When a node receives the start packet, it decides whether or not to become a CH for the current round. This algorithm elects a number of CHs equal, on average, to N. Being a CH is much more energy intensive than being a non-CH node; therefore, LEACH incorporates a randomized rotation of the CH role among sensors, to avoid draining the battery of a particular set of sensors in the network [10]. By ensuring that all nodes become CHs the same number of times, each node is a CH, on average, once every N_TOT/N rounds. The rationale behind the determination of the value of N is described in the appendix through a suitable analytical formulation.
To do this, we consider an indicator function C_p(i) determining whether or not node p, at the ith round, has been a CH in the most recent R* = ⌊N_TOT/N⌋ − 1 rounds (i.e., C_p(i) = 0 if node p has been a CH and 1 otherwise), where ⌊x⌋ stands for the largest integer less than or equal to x. The decision whether to become a CH is made by node p by choosing a random number between 0 and 1: if the number is less than a threshold T_p(i), the node becomes a CH. The threshold is set as in (1), where R is a counter incremented at each round and reset to zero whenever it reaches R* or the node becomes a CH, while N_p is initially set equal to N. In the appendix, N is evaluated in a more realistic way than in LEACH B. Therefore, according to (1), the mechanism which allows the rotation of the CH role is the following: every node starts with C_p(i) = 1, so it can become a CH; when a node elects itself CH, C_p(i) is set to zero and the node cannot become a CH for R* rounds; after that, C_p(i) is set back to one, so the node can become a CH again with a probability that grows with i; if, instead, a node does not elect itself CH for R* consecutive rounds, it is forced to be a CH for the current round by setting T_p(i) = 1. In conventional LEACH [10], N is a fixed value determined a priori. In LEACH B+, we propose a new adaptive strategy to choose the CH election frequency, varying N for each node according to the energy the node dissipated the last time it assumed the CH role. As shown in [18], this strategy improves network lifetime.
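As an illustration, the rotation mechanism above can be sketched as follows; since expression (1) is not reproduced in this excerpt, the classical LEACH threshold with p = N/N_TOT is used as a stand-in, and all function names are ours:

```python
import random

def ch_threshold(Cp, R, Rstar, p):
    """Sketch of the CH-election threshold T_p(i).

    Cp:    indicator (0 if the node was CH in the last Rstar rounds, else 1)
    R:     rounds elapsed since the counter was last reset
    Rstar: forced-election horizon; p: average CH fraction N/N_TOT.
    """
    if R >= Rstar:
        return 1.0  # forced election after Rstar rounds without being CH
    # Classical LEACH threshold, used as a stand-in for the paper's (1):
    # the probability grows as R advances within the rotation period.
    return Cp * p / (1.0 - p * (R % round(1.0 / p)))

def elect(Cp, R, Rstar, p, rng=random.random):
    # The node draws a uniform number in [0, 1) and compares it to T_p(i).
    return rng() < ch_threshold(Cp, R, Rstar, p)
```

Note how a node with Cp = 0 (recently a CH) can never be elected until its counter expires, while the threshold of an eligible node grows monotonically toward 1 within the rotation period.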
If we consider an average situation, each CH has to send N_TOT/(N + 1) packets to the final sink (as we will see below, the (N + 1)th cluster is formed by the nodes that choose to transmit to the sink via a direct link), with an energy consumption that depends on its position, plus the energy required to receive N_TOT/(N + 1) − 1 packets from the non-CHs belonging to its cluster. As explained in Section 5, we assume that the transmit power of each node (either CH or non-CH) is controlled adaptively in order to guarantee an adequate received power at the destination with the minimum required energy. Therefore, since the energy dissipated by each CH depends on its position with respect to the sink, we can evaluate the worst and best cases in terms of energy consumption, which are useful for our adaptive strategy, where (i) E_R is the energy spent to receive a packet (see Section 6); (ii) E_T-far and E_T-close are the energies spent to transmit a packet over two different transmission ranges: the distance D_max between the sink and the farthest point of the network, and the distance d − M/2 between the sink and the closest one.
Starting from the average of these energies, we fix two thresholds, E_CH-sup and E_CH-inf. If the energy dissipated by node p the last time it assumed the CH role is larger than E_CH-sup, the value of N used by node p, N_p, is decreased by 1, so that this node will have a smaller probability of becoming a CH in the next rounds. Conversely, if this energy is smaller than E_CH-inf, N_p is increased by 1. Finally, if the dissipated energy lies between the two thresholds, N_p does not change.
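The adaptive rule can be summarized in a few lines; the clamping of N_p to a minimum of 1 is our assumption, since the text does not state a lower bound:

```python
def update_Np(Np, e_last_ch, e_sup, e_inf, Np_min=1):
    """Adaptive CH-frequency rule: compare the energy the node spent
    the last time it was CH against the two thresholds E_CH-sup / E_CH-inf.
    Np_min is an assumed floor, not stated in the paper."""
    if e_last_ch > e_sup:
        return max(Np - 1, Np_min)   # the node becomes CH less often
    if e_last_ch < e_inf:
        return Np + 1                # the node becomes CH more often
    return Np                        # energy within the band: no change
```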
Particular attention must be paid to the CH election phase. In fact, the CH election should guarantee minimum energy consumption by means of the cluster-head rotation algorithm presented above. To assess the validity of the proposed algorithm, several simulations have been performed. As a result, we can state that in LEACH B+ the majority of CHs are located, on average, on a circumference centered at the sink with radius D_max/2, which is clearly an efficient condition from the energy consumption viewpoint.

Cluster formation algorithm
Concerning cluster formation, each node chooses its CH by evaluating the energy dissipated in the complete path between itself and the final sink, via the CH, for the transmission of its packet.
The start packet sent by the sink contains information about the power used for its transmission, so every receiving node can compute the loss between itself and the sink. The broadcast packet sent by each CH includes the value of the power used for this transmission and the loss estimated previously. Every time a non-CH node receives a broadcast packet, it estimates the total path loss toward each CH whose packet has been successfully detected and reads the loss between that CH and the sink. Each node then selects the path characterized by the smallest total path loss, also considering the possibility of transmitting its packet directly to the sink without passing through any CH.
Finally, if a non-CH node does not receive any broadcast packets correctly, it is forced to transmit directly to the sink.
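A minimal sketch of this route selection, under the simplifying assumption that the two-hop cost is approximated by the sum of the per-hop losses in dB (the tuple layout and function name are ours):

```python
def choose_route(loss_to_sink_dB, ch_offers):
    """Select the next hop for a non-CH node.

    loss_to_sink_dB: estimated loss toward the sink, or None if the
                     start packet could not be decoded.
    ch_offers: list of (ch_id, loss_node_to_ch_dB, loss_ch_to_sink_dB)
               for every CH whose broadcast was decoded.
    """
    if not ch_offers:
        # No broadcast decoded: forced direct transmission to the sink.
        return ("direct", None)
    ch_id, l1, l2 = min(ch_offers, key=lambda o: o[1] + o[2])
    if loss_to_sink_dB is not None and loss_to_sink_dB <= l1 + l2:
        return ("direct", None)      # the direct path is cheaper
    return ("cluster", ch_id)        # join the best CH's cluster
```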

THE MAC PROTOCOL PROPOSED
The access to the wireless channel is controlled through a CSMA protocol, whose mechanism is inspired by the IEEE 802.11 standard [23]. According to this protocol, each node, before transmitting, invokes a carrier-sensing mechanism to determine the busy/idle state of the channel. After the sensing phase, one of two situations may occur.
(1) Channel free: the node generates a random backoff period T_b as an additional deferral time before transmitting its packet.
(2) Channel busy: the algorithm differs for non-CH and CH nodes. A non-CH node stops sensing and moves to a sleeping state, where it remains until the end of the ongoing packet transmission; the node thus turns off and preserves energy. In fact, we assume that each transmitted packet contains a duration field indicating how long the remaining transmission will last, so when a node receives a packet destined to another node, it knows for how long it cannot transmit [20]. A CH node, instead, keeps listening, because it could receive packets from other nodes belonging to its cluster.
The duration of the carrier-sensing phase T_s is not fixed; it is considered random and given as follows, where:
(i) the distributed interframe space (DIFS) is the minimum sensing length, which we take equal to the data transmission time; assuming a negligible propagation delay, as is usually done for sensor networks [20], the data transmission time is the time during which the packet occupies the channel, given by the ratio between the packet size z and the bit rate R_b;
(ii) r is a random number drawn from a uniform distribution over the interval [0, 1).
The choice of a random sensing time [20] reduces the packet collision probability. There are two possible causes of collision: two or more nodes may select the same value of r, so that they end sensing at the same time and transmit simultaneously; or a node may be unable to perceive an ongoing communication on the channel and decide to transmit although the channel is busy (hidden node problem). In fact, having fixed a minimum received power P_Smin for successful channel sensing, a node which receives a packet with a power smaller than this value does not "hear" the transmitter. We assume a packet is captured by the receiver, even in case of packet collisions, if P_r0/(P_r1 + ··· + P_rN) ≥ α_0 (6), where (i) P_r0 is the power received from the useful signal; (ii) P_ri is the ith interference power; (iii) N is the number of colliding packets; (iv) α_0 is the capture threshold, which we set equal to 3 dB.
When condition (6) is not fulfilled, the packet is lost and the receiving node requests its retransmission. An acknowledgement mechanism is not provided in this algorithm, because the transmission and reception of acknowledgement packets would increase the energy spent. Thus, we only use retransmission requests, issued when nodes receive corrupted packets.
To minimize collisions during contention between multiple nodes, as mentioned above, we introduce a backoff algorithm, namely the exponential backoff adopted in the IEEE 802.11 MAC protocol [23]. According to this algorithm, once the sensing phase has ended and the channel is free, nodes do not transmit their packets immediately, but only after a random backoff time, where r_c is a random integer drawn from a uniform distribution over the interval [0, CW], and CW is the contention window value, that is, an integer within the range CW_min to CW_max (CW_min ≤ CW ≤ CW_max). We use the IEEE 802.11 standard values, CW_min = 7 and CW_max = 255. The contention window initially takes the value CW_min; then, in case of collision, CW is increased. There is thus an exponential growth of the contention window value up to CW_max, or until a packet is correctly received; in both cases, CW is reset to CW_min.

The performance of CSMA protocols is mainly affected by the hidden node problem and by the amount of data transmitted by nodes to the CHs. First of all, we point out that the random changing of the CHs can mitigate the hidden terminal problem: the clusters change, according to the cluster-head election algorithm defined above, at every round in LEACH B+ v1 or every two rounds in LEACH B+ v2. Therefore, if a node happens to be hidden during a round, this does not preclude the situation from changing in the following rounds. As far as the impact of the MAC protocol on network performance is concerned, we have analyzed its behavior for different packet sizes z. In particular, an increase of the packet size from 127 to 1016 bits corresponds, as expected, to a decrease of the network lifetime, due to the augmented number of collisions, and to a doubling of the packet loss rate.
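A sketch of the backoff bookkeeping, assuming the standard 802.11 doubling rule CW ← 2(CW + 1) − 1, which is consistent with the values 7 and 255 used here (the paper's exact update expression is not reproduced in this excerpt):

```python
import random

CW_MIN, CW_MAX = 7, 255  # IEEE 802.11 values adopted in the paper

def backoff_slots(cw, rng=random.randint):
    # r_c: random integer uniform over [0, CW] (bounds inclusive).
    return rng(0, cw)

def next_cw(cw, collided):
    """Contention window update after a transmission attempt."""
    if not collided:
        return CW_MIN                       # reset on success
    # Assumed 802.11 doubling rule: 7 -> 15 -> 31 -> ... -> 255.
    return min(2 * (cw + 1) - 1, CW_MAX)
```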

Transmission system
In this section, we describe the transceiver scheme adopted for each node, the radio propagation channel, and the power required for transmission. The block diagram of the transmitting and receiving chains considered in our analysis is reported in Figure 3. S and U are the source of bits and the final user, respectively. The block APP_T is composed of a coder, a modulator, and an up-converter; AMP represents the power amplifier for transmission, while APP_R is composed of a down-converter, a demodulator, and a decoder. Finally, the blocks A_T and A_R represent the attenuations due to the connections to the transmitting and receiving antennas, respectively, while G_T and G_R are the antenna gains.
As far as propagation is concerned, we assume a statistical channel characterized by a Gaussian distribution of the loss L = P_T/P_R when measured in dB, where P_T and P_R represent the generic transmit and receive powers, respectively. The logarithmic value of L has a mean depending on link distance, antenna gains, and so forth. More precisely, we assume the following expression for the loss at distance D, where (i) f_c (Hz) is the carrier frequency, c (m/s) is the speed of light, d_0 (m) is a reference distance, and α is the path loss exponent; (ii) G_ant is the overall antenna gain term; (iii) S is a Gaussian random variable with zero mean and variance σ².
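A possible reading of this propagation model in code; the numeric defaults (carrier frequency, path loss exponent, shadowing standard deviation) are illustrative and are not the Table 1 values:

```python
import math
import random

def path_loss_dB(D, fc=2.4e9, d0=1.0, alpha=3.0, G_ant_dB=0.0,
                 sigma=4.0, rng=random.gauss):
    """Log-distance path loss with lognormal shadowing (dB), a common
    form consistent with the model described in the text:
    free-space loss at the reference distance d0, plus a 10*alpha*log10
    distance term, minus antenna gains, plus zero-mean Gaussian S."""
    c = 3e8                                        # speed of light (m/s)
    fs_d0 = 20 * math.log10(4 * math.pi * d0 * fc / c)
    S = rng(0.0, sigma)                            # shadowing sample (dB)
    return fs_d0 + 10 * alpha * math.log10(D / d0) - G_ant_dB + S
```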
In this paper, we fix two power thresholds: the smallest one is the minimum receiver sensitivity P Smin and the other is the receiver sensitivity P Rmin . A packet is correctly detected whenever P R is larger than P Rmin and it is "heard" when P R is larger than P Smin .
As far as the transmission scheme is concerned, we assume binary phase-shift keying (BPSK) modulation with a BCH(127, 50, 13) code, that is, with packet length z = 127 and k = 50 information bits, able to correct up to t = 13 bit errors.

Packet error probability
Assuming a transmission scheme based on BPSK modulation, the two thresholds P_Rmin and P_Smin can be derived starting from the bit error probability, given by (12) [27], where E_b is the received energy per information bit, R_c = k/n = 0.394 is the coding rate, and the signal-to-noise ratio at the receiver input is defined in (13). In particular, N_0 is the one-sided power spectral density of the additive white Gaussian noise (AWGN), which depends on the noise figure F of the receiver as in (14), where K_B is Boltzmann's constant and T_0 = 290 K. Considering packets of z bits, the packet error probability then follows. Now, for a given value of P_ep, we can derive P_eb and then, from (12)-(14), the corresponding received power. In particular, by fixing a packet error probability P_ep = 10^-2, we derive the receiver sensitivity as a function of W_R, the signal-to-noise ratio needed to detect a packet. By fixing a signal-to-noise ratio equal to 3 dB, the minimum received power P_Smin required to "hear" a packet is derived. All the parameters involved in the derivation of these two power thresholds are reported in Table 1.
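The chain from bit error probability to packet error probability can be sketched as follows; the Q-function form of the coded-BPSK bit error probability is a standard reading of (12), not a verbatim transcription, and the packet error expression assumes, as is usual for a t-error-correcting code, that a packet fails when more than t of its z bits are in error:

```python
import math

def bit_error_prob(ebn0_dB, Rc=50 / 127):
    """BPSK over AWGN with code rate Rc:
    P_eb = Q(sqrt(2 * Rc * Eb/N0)), with Q(x) = 0.5*erfc(x/sqrt(2))."""
    ebn0 = 10 ** (ebn0_dB / 10.0)
    x = math.sqrt(2 * Rc * ebn0)
    return 0.5 * math.erfc(x / math.sqrt(2))

def packet_error_prob(p_eb, z=127, t=13):
    """A z-bit packet is lost when more than t bit errors occur
    (BCH(127, 50, 13): t = 13 correctable errors)."""
    ok = sum(math.comb(z, i) * p_eb**i * (1 - p_eb)**(z - i)
             for i in range(t + 1))
    return 1.0 - ok
```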
Having fixed the two aforementioned thresholds, the behavior of nodes when they receive the start packet is as follows.
(i) If P R < P Smin , the node cannot perceive the packet, and therefore it does not transmit its own packet for that round. (ii) If P Smin < P R < P Rmin , it perceives the start packet but it cannot compute the path loss between it and the sink, since the information about the transmit power used by the sink cannot be read. (iii) If P R > P Rmin , it can compute the loss.
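The three-way behavior above can be expressed compactly (function and label names are ours):

```python
def start_packet_action(p_r_dBm, p_smin_dBm, p_rmin_dBm):
    """Node reaction to the start packet, based on the received power
    against the two thresholds P_Smin < P_Rmin."""
    if p_r_dBm < p_smin_dBm:
        return "silent"         # packet not perceived: skip this round
    if p_r_dBm < p_rmin_dBm:
        return "heard_only"     # perceived, but the loss cannot be computed
    return "loss_computed"      # packet decoded: path loss can be estimated
```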

Power control
Now we consider the transmission power used in the different phases of the LEACH B+ algorithm.
The start packet is transmitted using a power given by (17), where the transmission range D_max is the distance between the sink and the point of the scenario farthest from it (see Figure 1), and M_f is a fade margin introduced to keep under control the probability of packet failure owing to the random fluctuations of the channel; it can be written as in (18), where P_OUT is the maximum outage probability, which depends on the type of transmission. The outage probability is the probability that packet reception fails. For the transmission of the start packet, we use P_OUT = P_OUTSN. The broadcast CH messages are transmitted with the power in (19), where M_f is given by (18) with P_OUT = P_OUTBr and d_broadcast is the area diagonal. As explained above, nodes do not know their position in the network, so they must behave as if they were in the worst case.
In both cases (start and broadcast packets), the received power at the maximum distance is given by

P_R = P_Rmin M_f. (20)

Note that, depending on the value of the margin M_f, some packets can be lost owing to the channel fluctuations.
During each round, we assume a stationary channel, so the losses between CHs and non-CHs do not change. With this assumption in mind, every node can transmit its packet to the CH using the minimum power that allows its correct reception. Therefore, the transmit power used by a generic non-CH node to send its packet to the relevant CH is

P_T = P_Rmin M_f L, (21)

where L is the path loss between the CH and the transmitting node. Finally, we consider the transmission power of the messages sent by the CHs to the sink, or by any nodes directly transmitting to the sink. If these nodes succeeded in computing the loss between themselves and the sink, by extracting from the start packet the information about its transmit power and measuring the received power level, their transmit power is set according to (21), where L, in this case, is the path loss between the transmitting node and the sink. If such a node was not able to estimate L, it transmits using the power level P_Tmax. In this case, M_f is given by (18) with P_OUT = P_OUTNS.
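Assuming the power-control rule above (transmit power equal to sensitivity times fade margin times path loss, all in linear units, with P_Tmax as a fallback when the loss is unknown), a minimal sketch is the following; the function names are ours.

```python
from typing import Optional

def tx_power_to_ch(p_rmin: float, m_f: float, path_loss: float) -> float:
    """Minimum power ensuring correct reception at the CH:
    sensitivity * fade margin * path loss (all linear units)."""
    return p_rmin * m_f * path_loss

def tx_power_to_sink(p_rmin: float, m_f: float,
                     path_loss: Optional[float], p_tmax: float) -> float:
    """Use the loss-based power when L could be estimated; otherwise
    fall back to the maximum level P_Tmax."""
    if path_loss is None:
        return p_tmax
    return min(p_rmin * m_f * path_loss, p_tmax)
```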
All parameter values not specified in the text of the paper are reported in Table 1.

ENERGY CHARACTERIZATION
The central problem for sensor networks is energy consumption. It is important to estimate the energy spent, during each round, by all nodes, when they transmit, receive, or sense the channel.

Transmission
The energy dissipated for the transmission of a packet of z bits depends on the value of the transmission power:

E_TX = (z/R_bc)(P_APPT + P_T/η_amp), (22)

where (see Figure 3)
(i) P_APPT includes the power dissipated in the baseband, oscillator, frequency synthesizer, mixer, filters, and so forth;
(ii) P_T/η_amp is the power dissipated within the power amplifier, where P_T is given by (17), (19), or (21), according to the specific case;
(iii) η_amp ≤ 1 is the transmitter amplifier efficiency;
(iv) R_bc = R_b/R_c is the coded bit rate.

Reception and Sensing
In the radio receiver model we use, there is no difference between the energy levels dissipated during reception and sensing [20]. The energy needed to keep the node on is given by

E_S = P_APPS T, (23)

where P_APPS represents the power dissipated during the sensing phase (see Table 1) and T is the time interval during which the node senses the channel. In particular, the energy consumed to receive a packet is

E_RX = (z/R_bc) P_APPR, (24)

where P_APPR represents the power dissipated during the receiving phase. Note that, if nodes do not know when the following start packet will arrive, energy consumption is high, because nodes must stay on between the end of a round and the beginning of the following one.
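A minimal sketch of these per-packet energy terms, following the transmission, sensing, and reception expressions above, with all quantities in linear SI units (the argument values in the usage below are illustrative, not Table 1 figures):

```python
def tx_energy(z: int, r_bc: float, p_appt: float,
              p_t: float, eta_amp: float) -> float:
    """E_TX = (z / R_bc) * (P_APPT + P_T / eta_amp):
    transmit-chain electronics plus power-amplifier dissipation."""
    return (z / r_bc) * (p_appt + p_t / eta_amp)

def rx_energy(z: int, r_bc: float, p_appr: float) -> float:
    """E_RX = (z / R_bc) * P_APPR: energy to receive one coded packet."""
    return (z / r_bc) * p_appr

def sensing_energy(p_apps: float, t: float) -> float:
    """E_S = P_APPS * T: energy to keep the node sensing for T seconds."""
    return p_apps * t

# illustrative usage: 1000-bit packet at 250 kbit/s coded rate
e_tx = tx_energy(1000, 250e3, 0.01, 0.001, 0.5)
```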
As shown in Section 8, we investigate performance in terms of network "lifetime." To extend the nodes' lifetime, we introduced the ON/OFF modality (Figure 4), in which, after the start packet's arrival, nodes stay on for a certain interval of time, denoted as T_ACT, and then turn off and on alternately until the following start. In particular, we have chosen (i) the duration of the ON phase equal to DIFS, and (ii) the duration of the OFF phase equal to 15 · DIFS, according to suitable considerations, not reported for the sake of conciseness.
To be sure that a start packet is detected by each node regardless of the ON/OFF mechanism, the sink must transmit sixteen sequential starting packets so that every node is able to receive at least one of these. Note that this requires that the sink has no energy consumption problems. Through this modality, we obtain a significant improvement of performance in terms of system lifetime.
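The ON/OFF mechanism can be checked with a toy model. The sketch below assumes a simplified detection rule (a start packet is detected if it begins inside an ON window) and normalizes time to DIFS units; under these assumptions, sixteen back-to-back start packets cover every possible node phase.

```python
import random

DIFS = 1.0            # normalized time unit; the ON phase lasts 1 DIFS
PERIOD = 16 * DIFS    # ON (1 DIFS) + OFF (15 DIFS)

def node_is_on(t: float) -> bool:
    """Node is ON during the first DIFS of every 16-DIFS period."""
    return (t % PERIOD) < DIFS

def detects_start(phase: float, n_starts: int = 16) -> bool:
    """Simplified model: the node catches the trigger if at least one of
    n_starts back-to-back start packets (one DIFS each) begins while the
    node is ON; 'phase' is the offset of the first start packet with
    respect to the node's ON/OFF schedule."""
    return any(node_is_on(phase + k * DIFS) for k in range(n_starts))

# with 16 consecutive start packets, every phase offset is covered
assert all(detects_start(random.uniform(0.0, PERIOD)) for _ in range(1000))
```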
As mentioned in Section 3, T ACT is divided in the three periods of duration T CF , T IC , and T TS .

Scenario 1-CLD v1
To improve network performance, we introduce a modified version of LEACH B+ v1, based on the CL paradigm, denoted as CLD v1, where interactions between physical and MAC layers and MAC and network layers are introduced.
For the interaction between the physical and MAC layers, a power control algorithm is proposed which accounts for the number of retransmissions required. As mentioned, when nodes, either CHs or non-CHs, do not know the loss between themselves and the sink, they transmit with a high power level (obtained by assuming that the node is at a distance D_max from the sink). Since in this case nodes waste a lot of energy, we impose that they transmit to the sink using a power equal to P_Tmax/2, while they use P_Tmax when they receive a retransmission request from the sink. In this way, the MAC layer affects the physical layer, namely, the transmit power algorithm.
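This unknown-loss power rule can be sketched as follows (the function name is ours):

```python
def unknown_loss_tx_power(p_tmax: float, is_retransmission: bool) -> float:
    """CLD v1 rule when the path loss to the sink is unknown: first
    attempt at P_Tmax / 2, full P_Tmax only after the sink has
    requested a retransmission."""
    return p_tmax if is_retransmission else p_tmax / 2.0
```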
Concerning the CL interactions between the MAC and network layers, we use, once again, the number of retransmissions requested to influence the CH election algorithm for the following rounds. In Section 3.1, we stated that the value of N used by a node p, N_p, is decreased by 1 when the energy dissipated by the node the last time it assumed the role of CH is larger than E_CH-sup, and it is increased by 1 when the energy spent is less than E_CH-inf. A possible CL interaction to reduce the energy waste consists in increasing and decreasing N_p by considering not only the energy dissipated, but also the number of retransmissions requested by the sink to a CH in the last round it assumed the role of CH. In particular, N_p is increased when the energy spent is low and the node has received fewer than 2 retransmission requests from the sink; conversely, N_p is decreased when the CH has dissipated a lot of energy and has received more than 3 retransmission requests. By increasing N_p, the probability that the node will be CH in the next rounds increases; in this way, this opportunity is given only to nodes that are in a good location with respect to the sink, both in terms of energy expense and in terms of collisions.
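A minimal sketch of this cross-layer N_p update (the thresholds are the E_CH-sup and E_CH-inf of Section 3.1; the function name is ours):

```python
def update_election_parameter(n_p: int, energy_spent: float,
                              retx_requests: int,
                              e_ch_inf: float, e_ch_sup: float) -> int:
    """Increase N_p only when the last CH round was cheap (below E_CH-inf)
    AND saw fewer than 2 retransmission requests; decrease it when the
    round was expensive (above E_CH-sup) AND saw more than 3 requests."""
    if energy_spent < e_ch_inf and retx_requests < 2:
        return n_p + 1
    if energy_spent > e_ch_sup and retx_requests > 3:
        return n_p - 1
    return n_p
```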

Scenario 2-CLD v2
In this case, as stated previously, we assume that the loss between two nodes remains unchanged for two rounds; a suitable protocol design can take advantage of this. We define here a new version of LEACH B+, namely CLD v2, which includes all the techniques already introduced in CLD v1 plus some additional features: the information about the retransmission requests obtained in the first of the two rounds is used in the second round to change the structure of the cluster. In the first round, in fact, every non-CH node records the value of the loss between itself and the sink and the total losses between itself and the sink passing through the CHs. At the beginning of the second round, if it has received one or more retransmission requests, it changes the cluster to which it belongs: it chooses the CH, or possibly the sink, corresponding to the smallest loss, avoiding the previously considered CH. No adaptive strategy is performed between the second and the third rounds because, since the channel changes, in the third round new CH nodes are elected and new clusters are formed. Moreover, when a non-CH node belonging to a certain cluster receives a retransmission request from its CH, to reduce packet losses, it transmits its packet directly to the sink, without passing through the CH. Thus, nodes can change the cluster they belong to according to the number of retransmissions that occurred within the cluster. However, direct transmissions to the sink are very energy expensive, in particular for the nodes farthest from the sink, so this CL protocol, even if advantageous in terms of packet loss rate, is expected to worsen network lifetime.
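The second-round cluster re-selection can be sketched as follows; the dictionary layout (candidate CHs plus the key 'sink', each mapped to the recorded loss) is an assumed data structure, not taken from the paper.

```python
def choose_destination(losses: dict, previous_ch: str,
                       got_retx_request: bool) -> str:
    """CLD v2 rule: at the second round, a non-CH that received one or
    more retransmission requests re-attaches to the CH (or the sink)
    with the smallest recorded loss, excluding its previous CH."""
    if not got_retx_request:
        return previous_ch
    candidates = {k: v for k, v in losses.items() if k != previous_ch}
    return min(candidates, key=candidates.get)

# illustrative losses recorded during the first round
losses = {'ch1': 5.0, 'ch2': 3.0, 'sink': 4.0}
```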

NUMERICAL RESULTS
We show the performance results obtained by means of a simulator implemented on the OMNeT++ platform [22]. All simulation parameters, related to a network with M = d = 100 m, are reported in Table 1. All values of time intervals are normalized with respect to T_R; so, for example, t_ACT is equal to T_ACT/T_R, and so forth.

Improvement with respect to LEACH B
First of all, in Table 2, we compare the round at which the first node expires for LEACH B [18] and for the new LEACH B+ v1. In Table 2, as well as in the following figures, the number of rounds is normalized with respect to the energy initially equipping the sensors.

Scenario 1
In this section, we illustrate a comparison between the performance obtained in scenario 1 with the LEACH B+ v1 protocol and with CLD v1 (i.e., without or with CL approach implemented, resp.).
In Figure 5, we compare the network lifetime of the two protocols, considering a network of N_TOT = 30 nodes. In particular, we show the number of nodes still alive as a function of time, expressed in terms of number of rounds. The figure shows that the CL approach allows an increase of network lifetime. In Figure 6, we show the round at which the first node expires, as a function of N_TOT; this parameter increases with N_TOT. As can be noticed, the improvement due to the CL approach is maintained as N_TOT (i.e., the density of nodes) varies. Now, we consider the packet losses, whose causes are the following.
(1) Fading: when P R < P Rmin , the packet is lost; the margin M f is set in order to control the packet loss probability on each link, but the total packet loss rate in the network is different, as it is a combination of the events on the different links.
(2) Collisions: notwithstanding the use of a retransmission mechanism, some packets could be lost. In fact, when a node transmits, it is not able to perceive a packet directed to itself, so it cannot ask for retransmission.
In Figure 7, we show the packet loss rate as a function of N_TOT for the two protocols. The losses increase with N_TOT owing to the larger traffic. As we can see, the two protocols have about the same packet loss rate, so we can conclude that CLD v1 improves network lifetime without increasing the packet loss rate.
Finally, in Figure 8, we show the round at which the first node expires as a function of

β = P_APPS / P_APPR, (25)

to show that there is a strong dependence between network lifetime and the power spent in the sensing state. In fact, in our protocol, the time during which sensors are in the sensing state is long, so if in this state they spend the same energy as in the receiving state (β = 1), their life will be much shorter.

Scenario 2
This section is dedicated to the comparison between LEACH B+ v2 and CLD v2. Concerning network lifetime (see Figure 9), LEACH B+ v2 performs better than v1 because, in the former case, CH nodes have to transmit half as many broadcast packets as in the latter. However, when we introduce the CL strategy described in Section 7.2, network lifetime decreases, owing to the increased number of direct transmissions to the sink, which are very expensive. This protocol, however, allows a significant decrease of the packet loss rate (see Figure 10) with respect to both LEACH B+ v1 and v2. So, in this scenario, the proposed CL approach, accounting for the MAC protocol status at the network level, provides advantages in terms of loss rate at the expense of energy efficiency.

CONCLUSIONS
In this paper, a CSMA-based WSN composed of several tens of nodes uniformly distributed over a square area has been analyzed by means of simulations taking into account the complete stack of layers. We proposed four different versions of LEACH B+, a new protocol presented here which outperforms the other algorithms belonging to the same class (LEACH) previously presented in the literature. LEACH B+ is a hybrid protocol which allows nodes to use a single- or two-hop path towards the sink according to energy-related considerations. Moreover, the distributed algorithm for cluster-head self-election has been suitably designed starting from some novel analytical descriptions of the average energy spent at each round; this model is reported in the appendix to ease the reading of the paper.
We introduced the CL paradigm, which is shown here to improve performance. In particular, we focused on two different scenarios, characterized by two different values of the ratio between T R and T coh . The two different CL approaches, derived from the two scenarios, allow the improvement of the network lifetime (scenario 1) or the packet loss rate (scenario 2).
The paper jointly takes routing, MAC, physical, energy, and propagation aspects into account, which makes the description of the model rather complex. Owing to the many parameters of the model, the results shown represent a sample among the many obtained by the authors; the conclusions drawn, however, were found to be general and applicable to other sets of input parameters.

A. COMPUTATION OF N
N is chosen in order to minimize the total transmission energy, which is the sum of the energies dissipated by each node, CH and non-CH, in a round.
We assume that there are N_TOT nodes distributed uniformly in an M × M region. If N nodes become CHs, there are N + 1 clusters, because we also consider the cluster formed by the nodes which transmit directly to the sink. For the purpose of determining N, we assume that all N + 1 clusters are equally loaded, so every cluster contains on average N_TOT/(N + 1) nodes (one CH and N_TOT/(N + 1) − 1 non-CH nodes).
The total energy spent in a round is given on average by

E_TOT = E_CH · N + E_non-CH→CH · N_non-CH→CH + E_non-CH→S · N_non-CH→S, (A.1)

where
(i) E_CH is the energy dissipated by each CH;
(ii) E_non-CH→CH is the energy dissipated by each non-CH node which chooses a CH to transmit to, and N_non-CH→CH = N (N_TOT/(N + 1) − 1) is the total average number of non-CH nodes which transmit to a CH;
(iii) E_non-CH→S is the energy dissipated by each non-CH node which chooses to transmit to the sink, and N_non-CH→S = N_TOT/(N + 1) is the average number of non-CH nodes which transmit directly to the sink.
Each CH dissipates energy to send the broadcast packet, to transmit its own packet and the packets of the other nodes to the final sink, and to sense the channel (the energy spent to receive packets can be neglected). We assume an average situation where shadowing is not considered, and we suppose that there are no collisions in the system; thus, we do not consider the energy dissipated for retransmissions. Hence, the transmission energy dissipated by a CH at a given round, on average, can be written as

E_CH = (E_APPT + b d_broadcast^α) + (N_TOT/(N + 1)) (E_APPT + b d_CH-S^α) + P_APPS T,

where d_CH-S is the distance between the CH and the external sink, d_broadcast is the distance between the CH and the farthest point of the observed area, E_APPT = z P_APPT/R_bc is the energy spent by the block APP_T during a packet transmission, and b is a constant that takes into account the transmission parameters f_c, G_ant, P_Rmin, and so forth, according to (17). Finally, T is the sensing period, which is set to T_ACT. So, contrary to the LEACH B protocol, we also take into consideration the energy spent for sensing, obtaining a more realistic evaluation of N.
Each non-CH node only has to transmit its packet to the CH or to the sink, so the energy dissipated in each round is

E_non-CH→CH = E_APPT + b d_p-CH^α,
E_non-CH→S = E_APPT + b d_p-S^α,

where d_p-CH is the distance between node p and its CH, and d_p-S is assumed equal to D_max, which represents the worst case. In many practical scenarios, the energy spent in the block APP_T in (A.4), (A.5), and (A.6) can be neglected, that is, E_APPT ≪ b D^α for every distance D. As developed in [10], the expected squared distance from a general node p to the CH is given on average by

E[d_p-CH²] = M²/(2π(N + 1)).

Therefore, for each round, the total transmission energy dissipated in the network is given on average by (A.9); however, considering that N ≪ N_TOT and 1 ≪ N, (A.9) can be approximated as in (A.10), where K is a term that does not depend on N. At this point, the optimum number of CHs can be evaluated easily by setting to zero the derivative of E_TOT with respect to N. We obtain (A.11), which can be solved only numerically. Note that each node can determine its own optimum number of CHs, because the total number of nodes in the network, the path loss exponent, the network size, the distance considered in the transmission of broadcast packets, and T_ACT are contained in the trigger transmitted by the sink, so they are known by the nodes. Note also that N does not depend on the distance between the CH and the final sink, so that distance need not be known.
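Since the optimizing N can only be found numerically, a brute-force search over the integer candidates is a simple alternative to root-finding on the derivative. The energy function below is a toy stand-in with the right qualitative shape (a cost increasing with N plus an intra-cluster cost decaying as 1/(N + 1)), not the paper's E_TOT.

```python
def optimal_n(e_tot, n_max: int) -> int:
    """Return the integer N in [1, n_max] minimizing the per-round
    energy E_TOT(N); a brute-force alternative to solving
    dE_TOT/dN = 0 numerically."""
    return min(range(1, n_max + 1), key=e_tot)

# toy shape: linear CH cost plus an intra-cluster cost decaying as 1/(N+1)
toy_e_tot = lambda n: 5.0 * n + 90.0 / (n + 1)
```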