An Energy-Efficient Coverage Enhancement Strategy for Wireless Sensor Networks Based on a Dynamic Partition Algorithm for Cellular Grids and an Improved Vampire Bat Optimizer

Sensor nodes perform their missions based on effective, invariable coverage of events, which is commonly guaranteed by deterministically redeploying sensor nodes that frequently deviate from their optimum sites. Reaching the optimal coverage effect at the lowest cost is a primary goal of wireless sensor networks. In this paper, by splicing the sensing area optimally with cellular grids, the best deployment locations for sensors and the minimum number of sensors required are derived. The optimization problem of coverage rate and energy consumption is converted into a task assignment problem, and a dynamic partition algorithm for cellular grids is proposed to improve the coverage effect when the number of sensors is variable. Furthermore, to solve the multi-objective problem of reducing and balancing the energy cost of sensors, the vampire bat optimizer is improved by introducing virtual bats and virtual preys, which also solves the asymmetric assignment problem that arises when the number of cellular grids is not equal to that of sensors. Simulation results indicate that, compared to three other popular coverage-enhancement algorithms, our strategy notably balances the residual energy of sensors during redeployment. Additionally, the total energy cost of sensor nodes and the coverage rate are optimized, and the strategy shows superior robustness when the number of nodes changes.


Introduction
With the rapid development of wireless communication technology, embedded computing technology, sensor technology, and microelectronic technology, wireless sensor networks (WSNs), which bring low-power, low-cost, distributed, and self-organizing features to information perception, have emerged. They have greatly changed the way humans interact with nature and established a bridge between the information world and the physical world [1]. As a kind of self-organized network formed by low-power microsensor nodes with the abilities of sensing, data processing, storage, and wireless communication, WSNs, called one of the most influential technologies of the 21st century, have become one of the most popular research fields due to their wide matching algorithm of bipartite graph [33,34]. The above algorithms based on task assignment all ignore the optimization of reducing the maximum energy cost of sensors and balancing the residual energy, which are exactly the keys affecting the life-cycle of WSNs.
The related work reveals that the derivative algorithms of VFA and GWO have relatively superior performance in solving the problem of sensor coverage enhancement, but they have limitations such as an upper limit on coverage optimization ability and difficulty in optimizing mobile energy consumption; hence, a new strategy is proposed in this paper and compared with VFA, VFPSO, and LGWO, which are the most representative algorithms. The central contributions of this paper are as follows: (1) We present a stacking strategy based on cellular grid (SSBCG) for splicing the two-dimensional sensing area optimally when its length and width are given. (2) A cellular grids dynamic partitioning algorithm (CGDPA) is proposed to dynamically adjust the size of the cellular grid based on the actual number of sensor nodes, optimizing the coverage effect when the number of sensors changes. (3) The optimization problem of coverage enhancement and energy consumption is converted into a task-distributing problem of assigning cellular grids to sensor nodes. We improve the vampire bat optimizer, introduced and discussed in [35], into IVBO by introducing virtual bats and virtual preys to solve the asymmetric competition problem, addressing not only the multi-objective problem of minimizing and balancing the energy cost of nodes, but also the asymmetric assignment problem that arises when the number of cellular grids is not equal to that of sensor nodes. (4) Simulation experiments are performed with MATLAB, the proposed strategy is compared with VFA, VFPSO, and LGWO, and the reasons for their performance differences in the energy cost of nodes and the final coverage rate are revealed and discussed.
The structure of the paper is as follows. The related concepts of the two-dimensional deterministic coverage problem are described in Section 3, mainly including the energy consumption model, the coverage model, and the mathematical optimization model of deterministic coverage. In Section 4, a stacking strategy based on cellular grid, an improved vampire bat optimizer for the asymmetric assignment problem, and a cellular grids dynamic partitioning algorithm are presented to solve the problem of deterministic coverage enhancement when the number of sensor nodes changes. Simulation analysis and discussions of the reasons causing the performance differences of VFPSO, VFA, LGWO, and the proposed strategy are given in Sections 5 and 6, respectively. Ultimately, we summarize the main contributions of this paper in Section 7.

Two-Dimensional Coverage Model of Sensors
The sensing area Ω can be symbolized by a two-dimensional area comprising K discrete points. Denote the sensing range of a sensor node in Ω as Θ, whose radius is the perceived range of the node. Let S_i represent the i-th sensor node with location (x_{S_i}, y_{S_i}), let G_j represent the centroid of the j-th discrete point with coordinates (x_{G_j}, y_{G_j}), and denote the sensing radius of all sensors and the distance between S_i and G_j as R_S and d_{i,j}, respectively. G_j can be monitored by S_i once the condition d_{i,j} ≤ R_S is met, so under the binary perception model the probability that G_j is covered by S_i is p_{i,j} = 1 if d_{i,j} ≤ R_S and p_{i,j} = 0 otherwise. Given that G_j may be covered by several sensors simultaneously, G_j is successfully covered once it is monitored by at least one sensor, and its coverage probability is p_j = 1 − ∏_{S_i ∈ S_α} (1 − p_{i,j}), where S_α is the set of sensors that cover G_j. Accordingly, the coverage rate (CR) of Ω can be calculated by CR = ∑_{j=1}^{K} p_j / K.
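The binary coverage model and the coverage rate above can be sketched as follows (a minimal Python illustration, although the paper's own simulations use MATLAB; the function name and the (x, y)-tuple representation of sensors and points are assumptions of this sketch):

```python
import math

def coverage_rate(sensors, points, r_s):
    """Binary-disc coverage model: point G_j is covered by sensor S_i
    with probability 1 iff d_ij <= R_S; p_j combines all sensors via
    p_j = 1 - prod(1 - p_ij), and CR = sum_j p_j / K."""
    covered = 0.0
    for gx, gy in points:
        p_j = 1.0
        for sx, sy in sensors:
            p_ij = 1.0 if math.hypot(sx - gx, sy - gy) <= r_s else 0.0
            p_j *= (1.0 - p_ij)      # accumulate prod(1 - p_ij)
        covered += 1.0 - p_j         # p_j = 1 - prod(1 - p_ij)
    return covered / len(points)     # CR over K discrete points
```

For a single sensor at the origin with R_S = 1, a point at the origin is covered and a distant point is not, so two such points give CR = 0.5.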

Coverage Enhancement
Sensor nodes perform missions based on the effective invariable coverage of events, which is commonly guaranteed by deterministically redeploying sensor nodes that frequently deviate from the optimum sites. Reaching the optimal coverage effect at the lowest cost is one of the primary goals of WSNs. Assuming that all sensors have the same perceived radius and can acquire location information and reach any position in Ω, a coverage enhancement strategy can be regarded as the most efficient once it minimizes the number of sensors while fully optimizing the coverage of Ω, which is equivalent to seeking the polygon that stacks Ω most efficiently [36]. The optimal coverage pattern is the regular hexagon with the sensing radius R_S as its side length [37,38].

Energy Consumption During Coverage Enhancement
The failure time of the first node to die is often considered an important indicator of the life-cycle of WSNs. The movement of sensor nodes and the signal transmission between them are the two main sources of energy consumption during redeployment, and for mobile sensors the proportion of the former is much higher than that of the latter, so the moving distance is the decisive factor for a mobile WSN. Consequently, in addition to coverage enhancement, the optimization of energy cost is also essential for mobile WSNs, and it is equivalent to optimizing the moving distances of the sensors.
After moving to G_j, the residual energy of S_i is defined as Er_i = Eo_i − e·d_{i,j}, where Eo_i is S_i's initial energy, e is its energy cost per meter of movement, and N_S is the number of sensors. The redeployment problem can be transformed into a task-distributing problem of assigning N_C mobile destinations to N_S sensor nodes; Figure 1 shows its bipartite graph model, in which the weight of the edge <S_i, G_j> is d_{i,j}, and the distance matrix can be presented as D_{N_S×N_C} = [d_{i,j}]_{N_S×N_C}. The objective function can be defined as min(w_1 f_1 + w_2 f_2), where x_{i,j} = 1 if S_i is assigned to G_j and x_{i,j} = 0 otherwise, and the constraint condition is that each sensor is assigned to at most one destination and each destination to at most one sensor: ∑_j x_{i,j} ≤ 1 and ∑_i x_{i,j} ≤ 1. Here f_1 and f_2 are the cost functions of the total energy cost (TEC) and the unbalance of residual energy (URE) of sensors during movement, and w_1 and w_2 are the weights of f_1 and f_2, respectively. Figure 1. Bipartite graph model for the task-distributing problem of assigning N_C mobile destinations to N_S sensor nodes. S_1, S_2, ..., S_{N_S} and G_1, G_2, ..., G_{N_C} represent the set of N_S sensors and N_C grid points, respectively. The weight of the edge <S_i, G_j> denotes the distance between S_i and G_j.
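The energy model and weighted objective can be sketched as follows (Python for illustration, though the paper's simulations use MATLAB; the uniform initial energy Eo, the use of a population standard deviation for the URE term f_2, and all names are assumptions of this sketch):

```python
import math
import statistics

def objective(sensors, grids, assign, e=1.0, eo=100.0, w1=0.5, w2=0.5):
    """Evaluate w1*f1 + w2*f2 for an assignment assign[i] = j.
    f1: total energy cost (TEC) of all moves, e * sum of distances.
    f2: spread of residual energy Er_i = Eo_i - e*d_ij, used here as a
    stand-in for the URE term (the exact form of f2 is an assumption)."""
    dists = [math.hypot(sensors[i][0] - grids[j][0],
                        sensors[i][1] - grids[j][1])
             for i, j in enumerate(assign)]
    f1 = e * sum(dists)                      # total energy cost (TEC)
    residual = [eo - e * d for d in dists]   # Er_i = Eo_i - e*d_ij
    f2 = statistics.pstdev(residual)         # unbalance of residual energy
    return w1 * f1 + w2 * f2
```

When every sensor is already at its destination, both terms vanish and the objective is zero.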

Stacking Strategy Based on Cellular Grid
With the intention of maximizing the sensing range of all sensors, it is necessary to specify the deployment location of each sensor, hence a stacking strategy based on cellular grids (SSBCG) is proposed.
As the arrow shows in Figure 2, the perceived radius of the sensor nodes is represented as the radius of the circumcircle of the cellular grid, which is recorded as R_C. The relationship between R_C and the stacking intervals ∆x and ∆y, which are twice the lengths of |BL_2| and |BL_4|, satisfies ∆x = 3R_C and ∆y = √3·R_C, respectively. The coordinates of cellular grids B and D, which are the reference grids of the first and second types of cellular grids, are (R_C/2, 0) and (2R_C, √3·R_C/2), respectively.

When using cellular grids with a radius of R_C to seamlessly stack Ω with L and W as its length and width, all cellular grids can be classified into two categories according to their coordinates. The first type of grid is based on the reference grid (R_C/2, 0) and is extended in the horizontal and vertical directions with ∆x and ∆y as the stacking intervals, respectively. Accordingly, the centroids and locations of the first type of cellular grids are denoted as (R_C/2 + n_1∆x, n_2∆y), n_i ∈ {1, 2, ..., N_i}, where N_1 and N_2 are the minimum numbers of the first type of cellular grids required along axes X and Y, respectively, when using the first type of cellular grids to stack Ω seamlessly. Due to the coordinates of point B, the distances from the centroid of cellular grid B to the edges of Ω along axes X and Y are L − R_C/2 and W, hence the numbers of remaining cellular grids required along axes X and Y can be expressed as (L − R_C/2)/∆x and W/∆y.
The numbers of all cellular grids required along axes X and Y are therefore ⌊(L − R_C/2)/∆x⌋ + 1 and ⌊W/∆y⌋ + 1, respectively, since cellular grid B must also be included; the number of sensor nodes is economized by rounding the results down. Analogously, with the same stacking intervals as the first type of cellular grids, and regarding the centroid (2R_C, √3·R_C/2) as the reference grid, the centroids and locations of the second type of cellular grids are denoted as (2R_C + n_3∆x, √3·R_C/2 + n_4∆y), n_i ∈ {1, 2, ..., N_i}, where N_3 and N_4 are the minimum numbers of the second type of cellular grids required along axes X and Y, respectively, when using the second type of cellular grids to stack Ω seamlessly. The minimum number of cellular grids required to stack Ω seamlessly, given by Equations (5)-(7), is then the total over both types. By using cellular grids with a radius of R_C = 5.25 m to stack an Ω of 50 m × 50 m seamlessly, the minimum number of cellular grids calculated by Equations (5)-(7) is 42. The stacking effect is shown in Figure 3: the blue cellular grids are the first type of grids, based on the 1st grid, and the remaining cellular grids are the second type, based on the 7th grid.
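The enumeration of the two grid families can be sketched in Python (the paper's simulations use MATLAB). The inclusion test below, keeping a centroid while its hexagon (circumradius R_C, apothem √3·R_C/2) still reaches into the L × W area, is this sketch's interpretation of "seamless stacking", not the paper's exact Equations (5)-(7):

```python
import math

def cellular_grid_centroids(length, width, r_c):
    """Enumerate centroids of the two cellular-grid families:
    type 1 anchored at (R_C/2, 0), type 2 at (2R_C, sqrt(3)R_C/2),
    both stepped by dx = 3R_C and dy = sqrt(3)R_C."""
    dx, dy = 3.0 * r_c, math.sqrt(3.0) * r_c
    apothem = dy / 2.0                   # vertical half-extent of a hexagon
    centroids = []
    for x0, y0 in ((r_c / 2.0, 0.0), (2.0 * r_c, apothem)):
        x = x0
        while x - r_c < length:          # hexagon still overlaps in X
            y = y0
            while y - apothem < width:   # hexagon still overlaps in Y
                centroids.append((x, y))
                y += dy
            x += dx
    return centroids
```

For the 50 m × 50 m area with R_C = 5.25 m, this enumeration yields 42 centroids (24 of the first type and 18 of the second), matching the example count in the text.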


Improved Vampire Bat Optimizer
Based on the SSBCG proposed in Section 4.1, the coverage effect can be enhanced by deploying N sensor nodes to the centroids of the N cellular grids. However, how the moving destination of each sensor node should be allocated has not been resolved, and this allocation determines the movement trajectories of the nodes and hence the energy cost during redeployment. Inspired by the vampire bat's egoism and altruism, we introduced and discussed the vampire bat optimizer (VBO) in [35], aiming at maximizing the benefits of the whole generation of vampire bats while balancing the health status of individuals. Eventually, VBO is used for reducing the TEC, MEC, and URE of sensors during deployment.
However, VBO can only solve the symmetric assignment problem, namely the coverage enhancement problem in which the number of sensor nodes is equal to that of mobile destinations; VBO is not applicable once they differ. For the case that the number of cellular grids is not equal to that of sensor nodes, we propose an improved VBO (IVBO) that introduces virtual bats and virtual preys to handle the asymmetry of the competition process. IVBO solves not only the multi-objective problem of minimizing the TEC and URE of sensor nodes, but also the asymmetric assignment problem that arises when the assignment matrix is not square. It proceeds in the following steps.

Seeking the Favorite Prey
As unique blood-eating mammals, vampire bats vary widely in their feeding habits. Whether they are interested in a prey depends not only on their own taste and hunger degree, but also on the blood volume and hunting risk of the target prey. Assume that the numbers of bats and preys are N_b and N_p, respectively, and that they are not necessarily equal. For convenience, the j-th prey and the i-th bat are denoted as p_j and b_i, respectively; the risk rate of p_j, the gene of b_i, and the interest of b_i in p_j are denoted as r^t_j, g^t_i, and I^t_{i,j}, respectively. Before the whole generation of vampire bats starts hunting, the N_b bats calculate the income of the intended hunt according to their interest in the preys and the risk of capturing the N_p preys. The benefit of b_i capturing p_j during the t-th round of the hunt can be characterized by B^t_{i,j} = I^t_{i,j} − r^t_j, and the benefit matrix is defined as B^t_{N_b,N_p} = [B^t_{i,j}]_{N_b×N_p}. If N_b is greater than N_p, the competition problem is transformed into a symmetric assignment problem by adding N_b − N_p virtual preys, which is equivalent to appending N_b − N_p columns of zero elements to the right of matrix B^t_{N_b,N_p} and expanding it to an N_b × N_b square matrix, as shown in Equation (9); namely, N_p bats eventually capture preys and suck blood while the remaining N_b − N_p bats starve until the second part of IVBO. If N_p is greater than N_b, the competition problem is transformed into a symmetric assignment problem by adding N_p − N_b virtual bats, which is equivalent to adding N_p − N_b rows of zero elements to the top of matrix B^t_{N_b,N_p} and expanding it to an N_p × N_p square matrix, as shown in Equation (10); namely, N_b bats eventually capture preys and suck blood while the remaining N_p − N_b preys escape and survive. The reason for augmenting B^t with zero elements is that we accomplish symmetric assignment by adding virtual preys or bats to the competition process, but these preys and bats cannot exist in reality: during the competition, the N_b − N_p virtual preys become the best preys for N_b − N_p bats once N_b is greater than N_p, and the N_p − N_b virtual bats also participate in competing for preys once N_p is greater than N_b; however, a real bat gains no advantage from a virtual prey, and a real prey is not captured by a virtual bat after the assignment ends. If non-zero values were added instead of zero elements in the weighted maximum matching problem of an asymmetric bipartite graph whose elements of B^t_{N_b,N_p} are all positive, some virtual bats or preys would actually be assigned to real preys or bats once an added value exceeded the minimum element of B^t_{N_b,N_p}, which would obviously affect the assignment result. Similarly, for a weighted minimum matching problem of an asymmetric bipartite graph, a very large number (greater than the largest element of the efficiency matrix) should be used instead of zero to achieve the same purpose.
Each bat then explores its most favorite prey and prepares to compete for it, which is equivalent to seeking the largest element of each row of B^t_{N,N}. For instance, the favorite prey of bat b_i can be determined by p^t_{best for b_i} = arg max_{j∈{1,2,...,N}} B^t_{i,j}.
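The zero-padding of Equations (9)-(10) can be sketched as follows (Python for illustration; the function name and list-of-lists matrix representation are assumptions of this sketch):

```python
def pad_to_square(benefit):
    """Pad a rectangular benefit matrix with zero rows (virtual bats,
    added on top as in Equation (10)) or zero columns (virtual preys,
    appended on the right as in Equation (9)) so that the asymmetric
    assignment problem becomes symmetric."""
    rows, cols = len(benefit), len(benefit[0])
    n = max(rows, cols)
    padded = [[0.0] * n for _ in range(n)]
    offset = n - rows            # real bats occupy the bottom rows
    for i in range(rows):
        for j in range(cols):
            padded[offset + i][j] = benefit[i][j]
    return padded
```

A 1 × 3 benefit matrix (one bat, three preys) is expanded to 3 × 3 by stacking two rows of virtual bats on top.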

Predation Competition
Since the favorite preys of vampire bats are likely to conflict, namely multiple bats often compete for the same prey (the phenomenon of predation conflict), the bats have to start a predation competition. Participating in predatory competition for the favorite prey is one of the biological characteristics of vampire bats, famous as egoism.
The preys robbed by bats are denoted as Φ^t_{prey}. Taking p_α in Φ^t_{prey} as an example, all bats participating in robbing p_α are represented as Φ^t_{bat}. The updating formula of b_i's interest value for p_α is I^{t+1}_{i,α} = I^t_{i,α} − (ϕ^t_1 − ϕ^t_2 + ε), where ϕ^t_1 and ϕ^t_2 are the maximum and secondary benefits of the bats in Φ^t_{bat} hunting p_α, respectively. In order to prevent the update of I^t_{i,α} from failing when ϕ^t_1 and ϕ^t_2 are equal, ε is added to ensure that the update process runs smoothly.
Predation conflicts no longer occur once each bat has its own favorite prey, and the vampire bats begin to suck blood from their favorite preys. The amount of blood sucked by b_i is denoted as z_i. At this point the benefits of the whole generation of vampire bats have been maximized, and we call the process from Sections 4.2.1 to 4.2.2 the first part of IVBO.
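The competition resembles an auction: a contested prey becomes progressively less attractive by the margin ϕ_1 − ϕ_2 + ε until every bat holds a distinct prey. The following Python sketch renders this first part of IVBO as a classic auction loop, under the assumption that the benefit update acts like an auction price rise (names and the price mechanism are this sketch's interpretation):

```python
def compete(benefit, eps=1e-6):
    """Auction-style resolution of predation conflicts on a square
    benefit matrix: each unassigned bat bids for its favorite prey;
    the winner raises that prey's 'risk' (price) by phi1 - phi2 + eps,
    lowering everyone else's interest. Returns owner[j] = bat index."""
    n = len(benefit)
    price = [0.0] * n
    owner = [None] * n
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - price[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        phi1 = values[j_best]
        phi2 = max((v for j, v in enumerate(values) if j != j_best),
                   default=phi1)
        price[j_best] += phi1 - phi2 + eps   # make the win costly to contest
        if owner[j_best] is not None:
            unassigned.append(owner[j_best])  # displaced bat re-bids
        owner[j_best] = i
    return owner
```

On a 2 × 2 benefit matrix where each bat strongly prefers a different prey, the loop assigns each bat its preferred prey.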

Back Feeding
As discussed in Section 4.2.1, a few bats will fail to hunt when there are more bats than preys. More generally, not every bat can draw enough blood to sustain life, since predatory competition occurs frequently, and vampire bats starve to death if they cannot suck enough blood for three consecutive nights [39]. Nevertheless, the lives of most vampire bats do not end there: sharing extra food with hungry vampire bats according to kinship is another biological characteristic of vampire bats [40], famous as altruism.
After the end of predation, vampire bats look for a hungry bat to back-feed according to kinship. For example, b_j will get the excess blood from b_i once the latter is full, the former is hungry, and their genes are similar enough, which is called the back-feeding condition and is shown in Equation (12). The differences in kinship and starvation between b_i and b_j are measured by τ_1 and τ_2, respectively. Given that more than one vampire bat may meet the back-feeding condition, we regard b_j as the optimal transfusion target for b_i if its attribute value, a weighted combination of τ_1 and τ_2, is the largest, where the weights for the differences in kinship and starvation between b_i and b_j are denoted as w_1 and w_2.
The back-feeding process ends once no bats need back feeding; the blood absorption of the whole bat population is then effectively balanced, and so are the benefits of each bat. We call the process in Section 4.2.3 the second part of IVBO.

Cellular Grids Dynamic Partitioning Algorithm
The residual energy of nodes and the total energy cost, along with the coverage effect, can be optimized by the SSBCG and IVBO proposed in Sections 4.1 and 4.2 only for a specific number of sensor nodes, which leads to mediocre performance when the number of sensors changes. In order to enhance the robustness of the proposed strategy, a cellular grids dynamic partitioning algorithm (CGDPA) is proposed to dynamically adjust the size of the cellular grid based on the actual number of sensor nodes.
Regarding the sensor's perceived radius R_S as the radius R_C of the cellular grid, the minimum number N_ub of cellular grids required to stack Ω seamlessly can be calculated by Equations (5)-(7); this is also the minimum number of sensors required to cover Ω completely. As shown in Figure 4a, this corresponds to regarding the sensing area of the sensors as the circumscribed circle of the cellular grids and denoting the radius of the cellular grids as R_C = R_lb; the resulting minimum number of cellular grids, denoted N_ub, follows from Equation (7) once the length and width of Ω and the sensing radius of the sensor are given.
If the number of sensor nodes equals N_ub, the optimal coverage effect shown in Figure 5a can be achieved. If the number of sensor nodes is more than N_ub, the coverage rate of Ω can be increased to 100% even though the remaining N_S − N_ub sensors are redundant, as shown in Figure 5b. However, if the number of sensor nodes is less than N_ub, then on the one hand N_ub − N_S cellular grids become monitoring blind areas, and on the other hand the overlap of the sensing areas of the N_S sensors is redundant, which finally leads to the inferior monitoring effect shown in Figure 6a.
In order to adapt to the circumstance that the number of sensor nodes is not equal to N_ub, the cellular grids dynamic partitioning algorithm (CGDPA) dynamically adjusts the radius of the cellular grids according to the actual number of sensor nodes.
Specifically, the radius of the cellular grid is dynamically adjusted by Equation (14), where R_C is the radius of the adjusted cellular grid (the dependent variable); R_ub and R_lb are the radii of the cellular grid for the two cases shown in Figure 4, where the sensing area of the sensor node is the inscribed and circumscribed circle of the cellular grid, respectively; N_lb and N_ub are the numbers of cellular grids calculated by Equations (5)-(7) when R_ub and R_lb are regarded as the radius of the cellular grid, respectively; and N_S is the actual number of sensor nodes (the only independent variable).
The size of the cellular grid is adjusted in the range shown in Figure 4 according to the number of sensor nodes. When the condition N_lb < N_S < N_ub is satisfied, as shown in Figure 6b, R_C increases as N_S decreases, the distance between sensor nodes increases, and the redundant coverage area is effectively reduced. When the critical condition N_S = N_ub is met, the optimal coverage effect is achieved without adjusting the size of the cellular grid. When N_S > N_ub, the coverage rate can be increased to 100% even though a few sensors are redundant, hence there is no need to adjust the cellular grid size.
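The adjustment of Equation (14) is not reproduced in the extracted text; the following Python sketch assumes a simple linear interpolation of the radius between R_lb (at N_ub nodes) and R_ub (at N_lb nodes), which reproduces the qualitative behavior described above (R_C grows as N_S drops below N_ub, and stays at R_lb otherwise):

```python
def adjust_radius(n_s, n_lb, n_ub, r_lb, r_ub):
    """CGDPA-style radius adjustment (sketch; the interpolation form
    is an assumption, not the paper's Equation (14)).
    n_s: actual number of sensors; n_lb/n_ub: grid counts obtained with
    radii r_ub/r_lb, respectively."""
    if n_s >= n_ub:
        return r_lb                       # enough nodes: keep optimal cells
    n_s = max(n_s, n_lb)                  # clamp at the lower bound
    frac = (n_ub - n_s) / (n_ub - n_lb)   # 0 at N_ub, 1 at N_lb
    return r_lb + frac * (r_ub - r_lb)
```

With N_lb = 20, N_ub = 42, R_lb = 5.25, R_ub = 7.0, the radius stays at 5.25 for N_S ≥ 42 and grows toward 7.0 as N_S falls to 20.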

Energy-Efficient Coverage Enhancement Strategy for WSNs
The SSBCG, IVBO, and CGDPA proposed in Sections 4.1-4.3 are combined into an energy-efficient coverage enhancement strategy for WSNs, called improved vampire bat optimizer based on dynamic cellular grids (IVBODCG); its flowchart is shown in Figure 7. After the number, sensing radius, and positions of the sensors and the size of the monitoring area are initialized, the first step is to calculate the radius of the cellular grid suitable for the actual number of sensors, divide the monitoring area into cellular grids according to SSBCG, and then calculate the distance matrix and expand it based on the relationship between the actual numbers of nodes and cellular grids, which corresponds to Section 4.4.1. The second step, which corresponds to Section 4.4.2, is to find the best mobile destination for the sensors by regarding the centroids of the cellular grids as their mobile destinations. In the third step, the sensor nodes compete for the mobile destinations until the best mobile destinations of the sensors no longer conflict, which corresponds to Section 4.4.3. The fourth step calculates the mobile tasks of the sensors after the competition ends, then judges whether there are any exchangeable mobile tasks according to the theorem of task exchange and selects the most suitable sensor; it corresponds to Section 4.4.4. The final redeployment task is carried out once there is no longer any exchangeable task. The detailed process of each step is as follows.

Calculating the Radius of Cellular Grids and the Distance Matrix
N C and R C can be calculated by Equations (5)-(7) and (14), respectively, based on the relationship between the actual number of nodes and N ub . The distance matrix D N S ×N ub can be calculated by Equation (2). Then we regard the sensors and cellular grids as vampire bats and as prey, respectively.
The number of bats equals that of preys when N_S = N_ub, in which case D_{N_S×N_ub} is a square matrix and the problem can be treated as a symmetric assignment problem. When N_S ≠ N_ub, D_{N_S×N_ub} is no longer square and the problem becomes an asymmetric assignment problem; it can be transformed into a symmetric one by adding N_S − N_ub virtual cellular grids when N_S > N_ub, or N_ub − N_S virtual sensors when N_S < N_ub, which is equivalent to appending N_S − N_ub columns of zero elements to D_{N_S×N_ub} when N_S > N_ub, or adding N_ub − N_S rows of zero elements when N_S < N_ub, extending it to a square matrix.
IVBO, which at this stage considers only the predation competition of vampire bats, effectively solves the maximum matching problem; however, reducing the total energy cost of nodes requires finding the minimum. Therefore, the benefit matrix should be calculated by B_{N×N} = −D_{N×N}. We then proceed to Section 4.4.2.

Seeking the Optimal Moving Destination for Sensors
We traverse all sensors and cellular grids to seek the optimal moving destination for each sensor, defining G_j as the best moving destination for S_i if G_j is closest to S_i. The optimal moving destination for S_i on the t-th iteration is calculated by G^t_{best for S_i} = arg max_{j∈{1,2,...,N}} B^t_{i,j}, where B^t_{i,j} is the benefit of S_i when it moves to G_j on the t-th iteration.
We then determine whether the optimal moving destinations of the sensors conflict. If there is no conflict, the moving task matrix X^t_{N×N} can be calculated from the distance matrix D_{N×N} obtained in Section 4.4.1, where the meaning of x^t_{i,j} ∈ X^t_{N×N} is shown in Equation (4); we then proceed to Section 4.4.4. If there is a conflict, we proceed to Section 4.4.3.

Competition
When multiple sensor nodes compete for the same cellular grid, we denote the set of conflicting cellular grids contested by multiple nodes as the popular grid set Φ^t_grids. Taking G^t_α in Φ^t_grids as an example, the sensors competing for G^t_α are denoted Φ^t_sensors. The updating formula for S_i's benefit in moving to G^t_α involves φ^t_1 and φ^t_2, the largest and second-largest benefits of the nodes in Φ^t_sensors moving to G^t_α, respectively. To prevent the update of B^t_{i,α} from failing when φ^t_1 equals φ^t_2, we add ε so that the update proceeds smoothly. All popular grids and the conflicting sensors are traversed to update the benefit matrix B^t_{N×N}. We then return to Section 4.4.2.
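The conflict-resolution step can be sketched in an auction-algorithm style: the benefit of the contested grid is lowered by the gap between φ_1 and φ_2 plus ε, so that only the strongest contender keeps it. This is our reading of the mechanism, not the paper's exact update formula, and the function name is illustrative:

```python
def resolve_competition(B, grid, contenders, eps=1e-6):
    """Sketch of the popular-grid update: phi1 and phi2 are the largest and
    second-largest benefits among the contending sensors; eps guarantees
    progress when phi1 == phi2. All contenders' benefits for the contested
    grid are reduced by (phi1 - phi2 + eps), so losing sensors switch to
    other grids on the next iteration."""
    benefits = sorted((B[i][grid] for i in contenders), reverse=True)
    phi1, phi2 = benefits[0], benefits[1]
    price_rise = phi1 - phi2 + eps
    for i in contenders:
        B[i][grid] -= price_rise
    return B

# Sensors 0 and 1 both want grid 0; after the update sensor 1 prefers grid 1.
B = [[5.0, 1.0], [4.0, 3.0]]
resolve_competition(B, grid=0, contenders=[0, 1], eps=0.5)
```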

Exchanging the Moving Tasks
So far, the benefits of all sensors have been maximized, but the differences among them have not been minimized. We traverse all nodes to exchange their moving tasks based on the theorem of task exchange, and then proceed to Section 4.4.5:
• Condition of task exchange: given sensors S_i and S_m with G_j and G_n as their moving destinations, the task-exchange condition is given by the corresponding inequality.
• Lemma of task exchange: the benefits of S_i and S_m can be balanced by Equation (17) once they satisfy the condition of task exchange.
• Theorem of task exchange: n exchange schemes can be found once n sensors simultaneously satisfy the condition of task exchange, based on the lemma of task exchange. We define S_ξ as the optimal exchangeable sensor for S_i once it has the largest fitness value among the n sensors, calculated by Equation (18), where μ_1 and μ_2 are the weights of the sum and the difference of their benefits, respectively.
For S_i and its moving destination G_j shown in Figure 8, both S_{m1} and S_{m2} satisfy the condition of task exchange. The fitness values of S_{m1} and S_{m2} can be calculated by Equation (18); the former is larger than the latter, so we exchange the moving tasks of S_i and S_{m1} rather than those of S_i and S_{m2}.
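The partner-selection step can be sketched as follows. The fitness form below (reward the combined benefit after the swap, penalize the remaining imbalance, weighted by μ_1 and μ_2) is one plausible reading of Equation (18), not its verified form; all names here are illustrative:

```python
def exchange_fitness(b_i_new, b_m_new, mu1=1.0, mu2=1.0):
    """Illustrative fitness for a candidate task exchange: mu1 weights the
    sum of the two post-swap benefits, mu2 weights their difference."""
    return mu1 * (b_i_new + b_m_new) - mu2 * abs(b_i_new - b_m_new)

def best_exchange_partner(candidates):
    """Pick the partner S_xi with the largest fitness among all sensors
    satisfying the exchange condition.
    candidates: list of (sensor_id, b_i_new, b_m_new) tuples."""
    return max(candidates, key=lambda c: exchange_fitness(c[1], c[2]))[0]

# S_m1 yields balanced benefits (4, 4); S_m2 yields unbalanced ones (5, 1).
partner = best_exchange_partner([("Sm1", 4.0, 4.0), ("Sm2", 5.0, 1.0)])
```

With equal weights, the balanced swap with S_m1 wins, mirroring the Figure 8 example.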


Taking a Ω of 50 m × 50 m as an example, when calculating the movement scheme of 42 sensors with a perceived radius of 5.25 m, the comparison before and after the task exchange is shown in Figure 9: the distant moving tasks marked by red arrows in Figure 9a are balanced in Figure 9b, as marked by green arrows.

Redeployment
The energy-efficient redeployment can be completed by moving sensors according to the matrix Task_{N×N}.

Parameter Setting
Simulations were performed with MATLAB R2019a on a computer with a 2.7 GHz CPU and 8 GB of memory to evaluate the performance of our proposed strategy IVBODCG; the MATLAB code is detailed in the supplementary file. We compared VFA, VFPSO, LGWO and IVBODCG in terms of final coverage rate (FCR), total energy cost (TEC) of all sensors, uniformity of residual energy (URE) and maximum energy cost (MEC) of sensors under the same experimental conditions, which are shown in Table 1.
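The energy-side metrics can be computed along these lines. This is a hedged sketch: the exact definitions follow the paper's equations, the per-metre energy constant is a placeholder, and taking URE as the standard deviation of residual energy is one common convention:

```python
import statistics

def energy_metrics(initial_energy, moves, energy_per_metre=1.0):
    """Compute TEC, MEC and URE from per-sensor moving distances (metres).
    moves: total distance moved by each sensor.
    TEC: total energy cost of all sensors.
    MEC: maximum energy cost of a single sensor.
    URE: spread (population std. dev.) of residual energy."""
    costs = [energy_per_metre * d for d in moves]
    residual = [initial_energy - c for c in costs]
    tec = sum(costs)
    mec = max(costs)
    ure = statistics.pstdev(residual)
    return tec, mec, ure

tec, mec, ure = energy_metrics(100.0, [10.0, 20.0, 30.0])
```

A smaller URE means the nodes' residual energies are more uniform, which is the sense in which IVBODCG "balances" energy in the comparisons below.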

Simulation Results
The differences in the final locations, moving trajectories, and coverage effects of the sensor nodes under the four algorithms are presented in Figure 10.

Panels (a1) to (d1) show the same initial locations of the nodes for all algorithms, for fairness of comparison. The figure intuitively shows that our proposed strategy IVBODCG is superior to VFA, VFPSO and LGWO in terms of the actual moving distance, the final coverage effect, and the uniformity of the moving distances of the sensors.
IVBODCG reaches an FCR of 100%, while LGWO, VFA and VFPSO reach only 92.70%, 93.56% and 95.44%, as compared in Figure 11a. Since there are obvious differences in FCR by the end of redeployment, the TEC is compared in Figure 11b at the point where all four algorithms reach a coverage rate of 92.70%. The TEC of VFPSO, LGWO and VFA is 1.8306 × 10^4, 2.8991 × 10^4 and 2.2132 × 10^4 Joules, respectively, when their coverage rates all reach 92.70%; these are worse than that of IVBODCG by 49.39%, 136.58% and 80.61%, and they consume even more energy than IVBODCG needs to achieve full coverage. In addition, IVBODCG achieved an FCR of 100% with a TEC of 1.6148 × 10^4 Joules, whereas with the same energy cost the coverage rates of LGWO, VFA and VFPSO reached only 80.6%, 85.8% and 89.2%, respectively.
Figure 10 (caption): IVBODCG after 28 rounds of movement. The circular areas with the solid points at the center are the perceived ranges of the sensors. Panels (a1) to (d1) are the initial positions for the four algorithms, and (a3) to (d3) are their final coverage effects. The 53 hollow and solid points in (a2) to (d2) are the final and initial locations of the 53 nodes, respectively; the lines connecting them are the actual moving trajectories of the sensors.
The energy cost of each sensor for IVBODCG, VFPSO, VFA and LGWO is compared in Figure 12. LGWO has the worst performance in terms of TEC; VFPSO and VFA are slightly better than LGWO, and IVBODCG is clearly better than all of them. In addition, the URE and MEC of the nodes of IVBODCG are effectively optimized compared with the other three algorithms, because those algorithms care only about the coverage effect and the convergence speed while ignoring the optimization of energy cost.
The differences in MEC among the four algorithms are presented in Figure 13a. The node with the maximum energy cost in IVBODCG, VFPSO, VFA and LGWO reaches its optimal location after 13, 17, 29 and 33 rounds of deployment, consuming 590.83, 829.08, 1406.32 and 1533.47 Joules of energy, respectively. In addition, the URE of IVBODCG, VFPSO, VFA and LGWO is 144.51, 183.29, 261.56 and 350.69 Joules after the final rounds of movement, as shown in Figure 13b; that is, VFPSO, LGWO and VFA perform worse than IVBODCG by 26.84%, 142.6% and 81.00%, respectively. Our strategy is, however, worse than the other three algorithms during the first 10 rounds of movement. URE reflects the differences in the distances moved by the sensors: since the actual moving speed per step is the same for every node, a larger URE value indicates a greater difference in the moving distances of the nodes.
Obviously, not all sensor nodes driven by each algorithm have reached their destinations by the end of the 10th round of movement. The URE of IVBODCG is higher during the first 10 rounds because a large number of its sensor nodes reach their best destinations early while the remaining nodes are still moving, whereas the sensor nodes of the other three algorithms have not yet arrived at their destinations and are all still moving at a speed of 1 m per step. This is consistent with the phenomenon shown in Figure 10(d2): compared with VFPSO, VFA and LGWO, our proposed strategy IVBODCG has a large number of sensor nodes with short moving distances.
By initializing different positions for the sensors, 200 independent simulation experiments were performed to evaluate the stability and reliability of IVBODCG. The initial positions of the sensors were randomly generated, and the other parameters were the same in every experiment. The FCR comparison of IVBODCG, VFA, VFPSO and LGWO is shown in Figure 14a: LGWO is the worst, floating around 92%; VFA is close to 93%, slightly better than LGWO; our proposed strategy reaches an FCR of 100% in every single experiment, while VFPSO only reaches 95%. Figure 14b compares the TEC of the four algorithms after completing the moving task, which is consistent with the experimental result presented in Figure 12.
VFPSO, LGWO and VFA fluctuate around 2.0 × 10^4, 2.8 × 10^4 and 2.5 × 10^4 Joules, respectively, while IVBODCG is close to 1.6 × 10^4 Joules. Figure 14c,d compare the performance differences in MEC and URE, and our proposed strategy is the best among them.
The mean values of the 200 simulation experiments are presented in Table 2. IVBODCG balances the URE of sensors by 48.36%, 41.51% and 24.73%, and reduces the MEC of nodes by 48.66%, 41.98% and 24.94%, compared with LGWO, VFA and VFPSO, respectively. Besides, it reduces the TEC of nodes by 42.03%, 34.73% and 18.25%, and also performs better in terms of FCR. Given that conclusions based on means alone can be misleading, and that it is hard to establish a statistically significant difference between approaches without an indication of error, the statistical differences among the four algorithms on each performance indicator are presented in Figure 15 and Table 3.

Considering that the actual number of sensors may differ from the optimal number of 53, a vertical comparison is made by varying the number of sensors in order to test the universality of our proposed strategy. Figure 16 compares the performance of the four algorithms when the number of sensors varies from 30 to 80. Figure 16a shows that the FCR of all four algorithms increases with the number of sensors; our strategy is superior to LGWO and VFA for the same number of sensors, and the FCR of IVBODCG reaches 100% when the number of sensors exceeds 53. Figure 16b shows that the TEC of LGWO, VFA and VFPSO increases obviously with the number of sensors, while that of our strategy increases only slightly before the number of nodes reaches 53 and remains lower than that of LGWO, VFA and VFPSO for the same number of sensors.
Figure 16c,d indicate that the MEC and URE of sensors for LGWO and VFA gradually increase with the number of sensors, and those of VFPSO rise slightly, while those of IVBODCG remain constant and even gradually decrease once the number of sensors exceeds 53; IVBODCG also performs better for the same number of sensors.

Discussion
As derivative algorithms of VFA, the strategies proposed in [8][9][10][11] share the basic principle of filling unmonitored areas and separating overlapping nodes. During each iteration, the virtual movement of a sensor is influenced by the threshold of the repulsion between nodes, the threshold of the attraction from grid points to sensors, and the single-step moving speed. These thresholds and the virtual single-step speed therefore have a significant impact on FCR; the optimal coverage effect cannot be obtained if the best parameters are not found. In general, VFA-based approaches involve a number of parameters that decide the magnitude of the forces and prevent sensors from oscillating, which means it is usually difficult to find suitable values for these parameters in different cases. In addition, similar to other swarm-intelligence methods, PSO and GWO save computation time by sacrificing solution accuracy, so both the convergence speed and the converged value are poor for large-scale problems such as optimizing the coverage effect. Conversely, the performance of IVBO is insensitive to its parameters because it transforms the coverage enhancement problem into a task-assignment problem, which guarantees an optimal coverage rate after the movements. This is why the FCR of VFA, LGWO and VFPSO is lower than that of IVBODCG in the 200 simulation experiments, as shown in Figures 14 and 15 and Tables 2 and 3; even though the parameters of the former are adjusted repeatedly, the effect cannot be improved significantly. Surprisingly, the TEC, MEC and URE of IVBODCG are almost constant or only slightly increasing when the number of sensors is less than 53, and decrease rather than increase as the number of sensors grows beyond 53, as shown in Figure 16b-d, which differs from the other three algorithms.
This behavior is precisely due to the virtual grids and virtual sensors introduced in Section 4.4.1. The TEC, MEC and URE of sensors are almost constant or only slightly increasing when the number of sensors is less than 53 because of the game-theoretic mechanism of IVBO: it can always find a scheme that optimizes the moving distance and, regardless of the number of sensors, assigns sensor nodes to nearby cellular grids. Analogously, when the number of sensors exceeds 53 and continues to increase, the number of cellular grids produced by CGDPA and SSBCG stays at 53 instead of increasing, so the number of sensors becomes greater than the number of cellular grids and the assignment problem becomes asymmetric. The virtual grids transform it into a symmetric assignment problem; some sensor nodes correspond to these virtual grid points and are assigned only within IVBO rather than actually moving during redeployment. IVBO therefore obtains the option of picking 53 nodes out of all the sensor nodes to move, rather than moving all of them. As the number of sensors continues to increase, sensors that are extremely close to cellular grids can predictably be found for redeployment, so the moving distance, and with it the TEC, MEC and URE, is optimized.

The virtual movements of the nodes in each iteration, and likewise their moving paths and moving energy consumption, are affected by the defects of VFA and its derivative algorithms, which means it is difficult for VFA and VFPSO, not to mention LGWO, to account for the optimization of the TEC, MEC and URE of sensors. The task-distribution model of IVBO, in contrast, makes its first part optimize TEC on the basis of a full coverage rate. Therefore, IVBODCG has superior performance in terms of energy cost, as shown in Figure 11 and Table 2.
The first part of IVBO can minimize the TEC of sensors while ensuring the best coverage, just as it can optimize the benefits of a generation of vampire bats. However, the significance of IVBO is far greater. The reverse blood-transfusion process in its second part guarantees a balanced volume of blood sucked by the vampire bats, which means the moving task of a node with a long moving distance is exchanged with that of another sensor. Thus, IVBODCG is undoubtedly superior to VFA, VFPSO and LGWO in terms of the MEC and URE of sensors. The energy-efficient coverage enhancement problem is merely one of the numerous applications of IVBO, which may perform better than general integer-programming methods when solving a large category of task-assignment problems with equilibrium objectives.

Conclusions
In this paper, by using cellular grids to tile the sensing area seamlessly, the optimization problem of coverage enhancement and energy consumption is converted into a task-distribution problem. In addition, CGDPA is proposed to improve the coverage effect for different numbers of sensors. Furthermore, IVBO is presented to tackle the asymmetric competition problem by introducing virtual bats and virtual prey, which solves not only the multi-objective problem of minimizing and balancing the energy consumption of sensor nodes, but also the asymmetric assignment problem when the number of sensor nodes is not equal to that of cellular grids. We combine the proposed SSBCG, CGDPA and IVBO into an energy-efficient coverage enhancement strategy, IVBODCG, for WSNs. Simulation results show that, compared with three classical algorithms, the proposed strategy improves performance in terms of FCR, TEC, MEC and URE, and it also has superior robustness when the number of nodes changes. However, there are limitations: the identical perception radius of all sensor nodes and the disk coverage model are too simplistic and idealized for realistic applications, and other coverage models, such as confident-information coverage and data-fusion-based coverage models, which define coverage from the viewpoint of reconstruction and estimation, should be considered. Additionally, real experiments with IVBO in WSNs, instead of theoretical simulations, will be our research focus in future work.