A Novel Coverage Enhancement Algorithm for Image Sensor Networks

The need for diverse environmental information introduces multimedia data into wireless sensor networks. The characteristics of most multimedia information, such as large amounts of data and high quality-of-service requirements, greatly affect traditional wireless sensor networks and give rise to various new research areas. This paper focuses on multimedia image sensor networks and proposes FVPTR (fuzzy image recognition and virtual-potential-field-based paired tangent point repulsion), a method to enhance the perspective coverage of the network. The approach uses fuzzy image recognition to process the boundary nodes. For nonboundary nodes, based on potential field theory, it adopts a paired tangent point repulsion mechanism, which attempts to obtain optimal network sensing coverage through multiple pairings between one current node and several target nodes. Combined with FVPTR, algorithms such as LRBA, MBAA, and the mixed superposition algorithm are put forward for single- or multiple-time adjustment by rotation of the direction angle. Simulation results and various comparisons show that the three-time pairing method enhances network coverage well.


Introduction
Wireless sensor networks (WSNs) enjoy wide application in traditional fields such as industry, agriculture, the military, and environmental monitoring. WSNs also show their strengths in areas such as the smart household, health care, and transportation. With the emergence of new applications, users' needs for diverse environmental information are increasing, and multimedia information is being introduced into wireless sensor networks. The characteristics of most multimedia information, such as large amounts of data and high quality-of-service requirements, greatly affect the traditional techniques of WSNs and meanwhile give rise to new research topics. Deployment and coverage are two typical issues, which not only reflect the networks' ability to perceive the physical world but also directly determine the quality of network services [1].
Numerous studies have focused on coverage issues [2][3][4][5][6][7]. Jing and Alhussein [5] presented a coverage model of target points for directional sensors, proposing the LPI algorithm and comparing it with the CGA and DGA algorithms. Judging from the simulation results, it did not achieve significant coverage enhancement and was a preliminary study on directional sensors. Tao et al. [8] converted the coverage-enhancing problem into a virtual-potential-field-based problem of uniformly distributing centroid points, which ignored the effect of border nodes. Mohamed and Hossien [6] proposed the PCP protocol based on omnidirectional image sensors and derived several models, among which the exponential model is an inspiration for our method. Zou and Krishnendu [9] used the VFA algorithm to generate mobile paths for sensor nodes based on a virtual potential field and artificially changed the location of each node according to the calculated trajectory; nonetheless, this is almost impossible to achieve in large-scale deployments. The image sensor is a typical directional sensor. This paper discusses the scenario of image sensors stochastically deployed in a limited area and uses a paired tangent point repulsion method, based on fuzzy image recognition and virtual potential field theory, to improve the coverage of image sensor networks.

Fuzzy Image Recognition.
Oztarak et al. [10] demonstrated that image sensors can perceive a "video event" once it enters the field of view (FOV) [5,11]. They used a joint fuzzy processing method with micro-SEBM (structural and event based multimodal) to compose the mobile trace of a "video event" in an image captured by nodes and demonstrated that image sensors can identify the "video event." The specific location of the "video event" is then found by scanning the current image and is expressed by MBR (minimum bounding rectangle) information, which can be recorded for further operation.
The sampling frequency and data volume of most surveillance monitors and household cameras cannot be afforded by wireless multimedia sensor networks (WMSNs). A specific image sensor node developed according to actual demand is used as the hardware to test the fuzzy image recognition method in the following experiments.
The node shown in Figure 1 is equipped with an ATmega128 processor, an OV7620 image processing chip, and a CC2420 communication chip. It can work in two modes, 24-bit color and 8-bit grayscale, and can sample images at three pixel resolutions: 88 * 72, 160 * 120, and 320 * 240.
A flashlight with an AA battery is used as the point light source in this experiment. The node, working in 24-bit color mode at a resolution of 160 * 120, takes photographs of the point light source, as shown in Figure 1.
The image sensor node scans the photo data. Suppose it first detects the point light source at the Xth line and the Yth column; it then records the column values in an array named D until the light source disappears from the scan line. The same operation is repeated on subsequent lines, for example the (X + K)th line, until the node finishes scanning the whole image. After computing the average of D, the average column value (ACV) of the "video event" in the current image is obtained. The specific application of ACV will be described in Section 4.1.

Directional Sensing Model.
In Figure 2, the effective sensing area of a directional sensor is the fan-shaped region OAB, which can rotate around the point O. The perceptive radius of the directional sensor is R, which will be described in Section 2.3. C is a unit vector that begins at O, points to the center of the fan-shaped region, and represents the direction of the sensor's effective sensing area, named the perceptive direction. By adjusting it, the circular area within R can be completely covered. 2α denotes the sensor's FOV, which is approximately π/3 for the specific image sensor node in this paper according to actual tests.
At a discrete moment, whether a point P is covered by a directional node can be determined by the following two conditions: (1) the Euclidean distance |OP| between P and the node is no larger than R; (2) the angle between the vector from O to P and the perceptive direction C is no larger than α. If a point P meets the two conditions simultaneously, it is covered.
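The two coverage conditions above can be sketched as a short check; this is an illustrative implementation, with all function and parameter names our own rather than the paper's:

```python
import math

def is_covered(px, py, ox, oy, cx, cy, r, alpha):
    """Check whether point P = (px, py) lies in the fan-shaped sensing
    area of a directional sensor at O = (ox, oy) with unit perceptive
    direction C = (cx, cy), clear perceptive radius r, and half-FOV alpha."""
    dx, dy = px - ox, py - oy
    dist = math.hypot(dx, dy)
    if dist > r:                      # condition 1: |OP| <= R
        return False
    if dist == 0:                     # the sensor's own position counts as covered
        return True
    cos_phi = (dx * cx + dy * cy) / dist
    cos_phi = max(-1.0, min(1.0, cos_phi))
    return math.acos(cos_phi) <= alpha   # condition 2: angle(OP, C) <= alpha

# a sensor at the origin facing +x with R = 60 and FOV = pi/3
print(is_covered(30, 0, 0, 0, 1, 0, 60, math.pi / 6))   # True
print(is_covered(0, 30, 0, 0, 1, 0, 60, math.pi / 6))   # False (outside the FOV)
```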
There are two concepts that should be distinguished well.
(1) "Effective sensing area": the fan-shaped region OAB shown in Figure 2, which is quantified as an area.
(2) "FOV": the sensor's field of view, which is the angle of the effective sensing area and is measured as an angle.

Two perception radii are also distinguished.
(1) "Radius of clear perception": within this radius, an image sensor node can clearly identify any object entering its FOV. The range of the clear perceptive radius is [0, R]. The perceptive ability of an image sensor remains constant within the clear perceptive radius, as shown in Figure 3.
(2) "Radius of fuzzy perception": within this range, an image sensor node cannot clearly identify an object entering its FOV. The requirements of environmental monitoring cannot be well met, but the node can still respond to a special "video event"; a strong light source is a typical case. The range of the fuzzy perceptive radius is [R, ∞). As depicted in Figure 3, the node's sensing capacity declines exponentially within the fuzzy perceptive range. Nevertheless, the node's response to a special "video event" is not greatly affected.
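The two-radius perception model of Figure 3 can be sketched as a piecewise function. The exponential decay rate below is an assumed illustrative parameter, not a value given in the paper:

```python
import math

def sensing_capacity(d, r=60.0, lam=0.05):
    """Sketch of the perception model in Figure 3: capacity is constant
    within the clear perceptive radius r and declines exponentially in
    the fuzzy perceptive range [r, inf). The decay rate `lam` is an
    assumption for illustration only."""
    if d <= r:
        return 1.0                       # clear perception: full capacity
    return math.exp(-lam * (d - r))      # fuzzy perception: exponential decline

print(sensing_capacity(30))              # 1.0
print(round(sensing_capacity(90), 3))    # 0.223
```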
Consider the following case: an image sensor is at the starting point O, and a strong light source is set a great distance away, as shown in Figure 4. The light source lies within the node's fuzzy perceptive range, so the sensor cannot clearly identify targets near the light source; that is, it cannot monitor the environment there. But the "video event" of the strong light source can still be perceived, just as in the experiment described in Section 2.1.
The angle between C and the vector from O to the light source is denoted ϕ. The above case applies when ϕ is roughly in the range [−π/6, π/6]. In this situation, an image sensor node can find the specific location of the "video event" by scanning the currently captured image. Inspired by the micro-SEBM model [10], we propose a video event search algorithm using ACV, which is discussed in Section 4.1.
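The ACV scan from Section 2.1 can be sketched as follows. The brightness threshold and the toy image are assumptions for illustration; a real node would work on raw sensor lines rather than a Python list:

```python
def average_column_value(image, threshold=200):
    """Sketch of the ACV computation: scan the image line by line,
    record the column of every pixel bright enough to belong to the
    "video event" in the array D, and average the recorded columns.
    `image` is a row-major list of rows of grayscale values."""
    d = []                                # the array D from the text
    for row in image:
        for col, value in enumerate(row):
            if value >= threshold:
                d.append(col)
    if not d:
        return None                       # no "video event" in this image
    return sum(d) / len(d)                # ACV of the event

# a 4 x 8 toy "image" with a bright spot around columns 5-6
img = [[0] * 8 for _ in range(4)]
img[1][5] = img[1][6] = img[2][5] = 255
print(average_column_value(img))          # (5 + 6 + 5) / 3
```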

Virtual Potential Field.
The introduction of the virtual potential field into WSNs originates from its application in obstacle avoidance [12,13]. Owing to its simplicity and real-time character, the virtual potential field has been introduced into the coverage problem in WMSNs [8]. In a virtual potential field, each node can be considered a virtual charge that suffers virtual forces from nearby nodes and tends to move toward the regions of lower node density in the network.
Tao et al. [8] noted that the centroid of each sensor node's effective sensing area rotates around the node under the virtual forces from neighboring nodes. The resultant force from all neighboring nodes within effective communication range must be taken into account, which inevitably increases the difficulty of force analysis and the complexity of the algorithm. Therefore, before FVPTR is executed, a pairing process between two adjacent nodes is carried out. The proposed method ignores the influence of most nearby nodes and only adjusts the perceptive direction angles of the two paired nodes according to the virtual force between them, reducing the computational complexity as far as possible. Moreover, the simulation results show that FVPTR indeed enhances the perspective coverage of the network.

Framework
The fuzzy image recognition and virtual potential field methods are specifically introduced in this section.

Initialization.
Image sensor nodes are randomly and uniformly distributed in the monitored region. Typical coverage-enhancing algorithms often assume that nodes can estimate their locations. Equipping every node with GPS would add considerable cost, so localization is typically performed by estimating distances between neighboring nodes. In this paper, we focus on applications in which localization is unnecessary and possibly infeasible, so we assume that nodes are unaware of their locations.
A strong light source, which causes the "video event," is set at the center of each boundary of the region. Unfortunately, the coverage of the network is unsatisfactory after initial deployment, and there exist large "overlapped regions." An "overlapped region" is a region covered by the effective sensing areas of two or more nodes at the same time.
In fact, enhancing the coverage and reducing the "overlapped regions" are two aspects of the same issue in this paper.

Virtual Force.
The potential field is modeled on the electrostatic field in physics. Each node can be seen as a point charge with the same energy; in other words, the nodes are identical, carrying the same type of charge with the same equivalent quantity of electricity. Since like charges repel and unlike charges attract, it is reasonable to suppose that the virtual force in this potential field is a repulsion between two homogeneous nodes. However, this repulsion differs from the general repulsion between charges.
In Figure 5 there are two image sensor nodes, Q_i and Q_j, whose clear perceptive radius is R. The two circles of radius R centered at Q_i and Q_j are called the clear perceptive circles.
Not only is the data quantity of WMSNs larger than that of ordinary WSNs, but the integrity of multimedia data is also more important than that of scalar sensing data. To ensure communication quality while avoiding frequent data loss or interruption, reducing the actual distance between nodes is a valid approach.
The effective communication radius of a node, denoted R_C, is twice the node's clear perceptive radius R.
The Euclidean distance between Q_i and Q_j is defined as D_ij = |Q_iQ_j|. When D_ij is beyond R_C, the repulsion force between the two adjacent nodes is so tiny that it tends to zero. In this situation, the two nearby nodes exist independently in their respective virtual potential fields and ignore the repulsion between them, as shown in Figure 5.
A pair of "paired nodes" consists of two nearby nodes Q_i and Q_j with D_ij < R_C, where Q_j is the nearby node whose distance to Q_i is shorter than that of all other unpaired nodes.
Suppose Q_k is the nearest node to Q_i but has already paired with another node; then it cannot become the paired node of Q_i. This means that Q_i must search for the unpaired node with the shortest distance to Q_i.
As shown in Figure 6, the repulsion between two nodes must be considered when D_ij is shorter than R_C and can be defined as

|F_ji| = k_R / D_ij²,    (2)

where k_R is the repulsion coefficient, set to the constant 1 [8].
F_ji is the repulsion exerted by Q_j on Q_i, and F_ij the repulsion exerted by Q_i on Q_j; the two forces satisfy F_ij = −F_ji. To move the point of application without changing the direction or magnitude of the force, we suppose that the force on Q_i can be shifted along the border of the fan-shaped region to the tangent point, which is equivalent to actually moving it. The force at the tangent point can be decomposed into two components: F_jiT along the tangent direction and F_jiC along the normal of the tangent line. Under the influence of F_jiT, the fan-shaped region tends to rotate around Q_i. The perceptive directions of the nodes follow a uniform distribution after initial deployment, so some special situations may arise: for example, when θ_i equals π or zero, F_jiT is zero and the perceptive direction need not be adjusted. Nevertheless, when the angles θ of both paired nodes are zero, the worst coverage condition inevitably occurs.
In practice, the RSSI (received signal strength indicator) value [14,15] can be used to determine the nearest node to the current node. The two paired nodes exchange messages to inform each other of their distance and perceptive direction information. If θ_i and θ_j are both zero, the worst coverage situation is judged to have occurred; in that case a small random perturbation is added, so that the FOV of such nodes can still be adjusted to further increase the coverage.
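The pairing step described above can be sketched greedily. This is an assumed simplification: Euclidean distance stands in for the RSSI-derived distance, and the sweep order and function names are ours:

```python
import math

def pair_nodes(positions, rc):
    """Greedy sketch of the pairing step: each unpaired node is matched
    with its nearest unpaired neighbour within the effective
    communication radius rc (in practice the distance would be inferred
    from RSSI rather than known coordinates)."""
    paired = {}
    for i in range(len(positions)):
        if i in paired:
            continue
        best, best_d = None, rc
        for j in range(len(positions)):
            if j == i or j in paired:
                continue
            d = math.dist(positions[i], positions[j])
            if d < best_d:                 # nearest unpaired node within rc
                best, best_d = j, d
        if best is not None:
            paired[i] = best
            paired[best] = i
    return paired

pts = [(0, 0), (10, 0), (100, 100), (105, 100), (300, 300)]
print(pair_nodes(pts, rc=120))   # {0: 1, 1: 0, 2: 3, 3: 2}; node 4 stays unpaired
```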

Perceptive Direction Adjustment.
Because of the virtual force between two paired nodes, determining the magnitude of the angular adjustment is a challenge in the coverage problem. We propose two calculation methods: one is a linear-relation-based algorithm and the other a mechanism-based approximate algorithm.

Linear-Relation-Based Algorithm (LRBA).
LRBA is used to calculate the angular magnitude, denoted Δϕ, that needs to be adjusted. Figure 7 shows the paradigm of LRBA.
Consider a cumulative quantity C_i whose value accumulates under F_ji during the rotation from the state θ_i = π/2 to the state θ_i = 0; meanwhile, Δϕ changes by π/2. Under the influence of F_ji · sin θ_i, Δϕ can therefore be depicted by (3), which generates

Δϕ = (π/2) · sin θ_i.    (4)

The range of θ_i is [0, π], so sin θ_i lies in [0, 1], and the range of Δϕ to be adjusted is [0, π/2] for each image sensor node in the network. This conclusion fits the actual demand.

Since F_jiC changes all the time, the FOV cannot rotate at a constant speed. Obviously, the larger F_jiT is, the faster the rotational speed. In fact, the change of θ_i from zero to π does not affect the value of F_ji according to (2); however, D_ij has a significant impact on F_ji. In the range [0, π/2], F_jiT gradually becomes larger, while in [π/2, π] it gradually becomes smaller. In (5), R is the clear perceptive radius of the image sensor node, ω is the angular velocity of rotation of the effective sensing area, and Δt represents the time for one adjustment. The relationship between Δϕ and ω is described by (6), and (7) can be derived by combining (2), (5), and (6). Both θ_i and Δt influence the value of Δϕ; their relation is shown in Figure 8. Without constraints, the range of Δϕ is so wide that it would directly aggravate the energy consumption of adjusting the perceptive direction. We therefore assume that Δt is a fixed value for any one adjustment in our proposed methods, regardless of whether the rotation angle is very large or very small, and (7) simplifies to (8).
In (8), Δϕ is affected by D_ij, R, and Δt. D_ij and R are both constant after initial deployment, so the value of Δϕ is directly decided by Δt. As described above, the time for any one adjustment is the same, so the impact of the micro variables can be ignored and k_2 can simply be normalized without affecting the relationship between Δϕ and θ_i shown in Figure 9, giving

Δϕ = k_2 · (sin θ_i)^{1/2}.    (8)

The range of (sin θ_i)^{1/2} is [0, 1], which is feasible for adjusting the perceptive direction of sensor nodes. Equations (4) and (8) are consistent with the needs of practical application and will be elaborated in Section 4.2.
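The two adjustment rules can be sketched directly from (4) and (8), with k_2 normalized to 1 as in the text; the function names are ours:

```python
import math

def delta_phi_lrba(theta):
    """Equation (4): linear-relation-based adjustment angle."""
    return (math.pi / 2) * math.sin(theta)

def delta_phi_mbaa(theta, k2=1.0):
    """Equation (8): mechanism-based approximate adjustment angle,
    with the coefficient k2 normalized to 1 as in the text."""
    return k2 * math.sqrt(math.sin(theta))

theta = math.pi / 2
print(delta_phi_lrba(theta))   # pi/2: the largest possible adjustment
print(delta_phi_mbaa(theta))   # 1.0
```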

Detailed Steps.
The FVPTR method proposed in this paper follows the steps below, with the premise that the network is connected.
(I) Deploy N sensors randomly in the monitored region. The sink broadcasts a hop-explored packet. After receiving it, each sensor node records the hop information, adds one to the current hop value, then rewrites the packet and sends it out. Finally, each sensor node sends a packet back to the sink to report its hop information. The sink then selects the sensor nodes with the largest hop value, records them as boundary nodes, and sends the "video event" searching command to them.

(II) Sensor nodes that receive the video event searching command start the video event searching algorithm, described in Algorithm 1. Its parameters are as follows: γ is the rotation value; ε is a given tiny value; β is a value larger than ε (we assume β = 10ε); M is defined as half of the pixel-width value of the current image. A current captured image is scanned by the node.

(III) Nodes that do not receive the video event searching command start the virtual-potential-based paired repulsion algorithm, as in Algorithm 2.
(IV) Each nonboundary node adjusts its perceptive direction in terms of Δϕ.

4.2. Remark.
The result of step I effectively distinguishes boundary nodes from nonboundary nodes. It is the foundation and prerequisite for adjusting the perceptive directions of the two different kinds of nodes.
Step II elaborates the whole process of the video event searching algorithm. It is important for initialization, since actual testing should be done to make sure that image sensor nodes remain sensitive to a "video event" such as a strong light source without responding to general events.
In step III, if the current node receives no confirmation-paired packet from the target node within the given time slot τ, it searches for another nearby node within R_C for pairing. When θ_i and θ_j are both π, it is unnecessary to adjust the nodes' perceptive directions. If the current node cannot find any target node to complete the pairing process, it waits for the subsequent execution rounds.
From the perspective of optimizing overall coverage, two improved algorithms based on (4) and (8) are further examined in Section 5.

Single-Time Adjustment.
The value of the FOV is π/3, in line with the actual test in Section 2.2, so α is π/6. N nodes are distributed in a square region of 500 m * 500 m, with perceptive directions uniformly distributed in [0, 2π]. Different equations are used to calculate Δϕ for each node. All simulation data listed below are averages over 100 test runs.
The simulation results for LRBA, MBAA, and the mixed superposition algorithm are shown in Table 1. The Δϕ of the mixed superposition algorithm can be calculated as

Δϕ = (π/2) · sin θ_i + k_2 · (sin θ_i)^{1/2},

that is, the sum of (4) and (8). The enhancement of the mixed superposition algorithm is better than that of the two other methods; owing to its stochastic character, however, the mixed superposition algorithm reduces both the computational complexity and the degree of enhancement.
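A sketch of the mixed superposition rule, under the assumption that the combination is the plain sum of (4) and (8) with k_2 normalized to 1:

```python
import math

def delta_phi_superposition(theta, k2=1.0):
    """Mixed superposition sketch: the adjustment angles of LRBA (4)
    and MBAA (8) are added together. The exact combination rule is an
    assumption reconstructed from the surrounding text."""
    return (math.pi / 2) * math.sin(theta) + k2 * math.sqrt(math.sin(theta))

print(round(delta_phi_superposition(math.pi / 2), 4))   # 2.5708, i.e. pi/2 + 1
```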
Begin:
a. Each current node searches for other nodes within its range R_C and chooses the one with the maximum RSSI value (inversely proportional to distance), marking it as the target node.
b. A request-paired packet is sent from the current node to the target node.
If the target node is idle {
It responds to the request, marks the sender as its paired node, and sends a confirmation-paired packet back to the current node; the target node thus becomes the paired node of the current node; GOTO c;
}
Else {
The target node is in the process of pairing with other nodes; the current node waits for the time slot τ, then selects another target node with the second-largest RSSI value and continues the pairing process with reference to a.
}
Repeat the above procedure until the current node finds that all its neighbors are paired with others or the time domain is over.
c. Information is exchanged between paired nodes.
If the worst condition of coverage occurs {
A NUMBER is generated randomly in the range [−1, +1] by the node that sent the request-paired packet.
If NUMBER > 0, the perceptive direction is adjusted clockwise by π/6; Else it rotates counterclockwise by π/6.
The NUMBER value is inverted and sent to the paired node, which then does the same operation: If NUMBER > 0, the perceptive direction is adjusted clockwise by π/6; Else it rotates counterclockwise by π/6.
} //end if
d. Force analysis between the two paired nodes: each node calculates θ, then computes Δϕ according to (4) or (8), respectively.
End.

Algorithm 2
With any of these algorithms, single-time adjustment fails to gain an enhancement of more than 6%, which cannot meet actual demands.

Multiple-Time Adjustment.
After the first successful pairing, each node starts the second pairing: it ignores the node paired with it the first time, searches for another node within R_C, and performs the same adjustment operations. Repeated tests show that the best coverage enhancement occurs after adjusting three times. Equation (4) is used for the first adjustment and (8) for the second. Finally, the mixed subtraction (not the mixed superposition) algorithm is applied in the third adjustment, using the absolute value of the difference between (4) and (8):

Δϕ = |(π/2) · sin θ_i − k_2 · (sin θ_i)^{1/2}|.

It is significant for the three-time adjustment to use different algorithms. In the second pairing, the current node searches for the node with the shortest distance from it that is nonetheless longer than the distance of the first paired node, and similarly in the third adjustment. According to (2), the farther the distance between two paired nodes, the smaller the force between them; and the smaller the force, the smaller Δϕ. So the farther the distance, the smaller the adjusting angle. The average values of Δϕ for each algorithm can therefore be ordered: LRBA in the first adjustment > MBAA in the second adjustment > mixed subtraction in the third adjustment.
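The three-round schedule can be sketched as a single dispatch function, with k_2 normalized to 1 as elsewhere; the function name and round numbering are ours:

```python
import math

def delta_phi_schedule(theta, round_no, k2=1.0):
    """Sketch of the three-time adjustment: round 1 uses LRBA (4),
    round 2 uses MBAA (8), and round 3 uses the mixed subtraction
    |(4) - (8)|."""
    lrba = (math.pi / 2) * math.sin(theta)       # equation (4)
    mbaa = k2 * math.sqrt(math.sin(theta))       # equation (8)
    if round_no == 1:
        return lrba
    if round_no == 2:
        return mbaa
    return abs(lrba - mbaa)                      # mixed subtraction

theta = math.pi / 3
for r in (1, 2, 3):
    print(r, round(delta_phi_schedule(theta, r), 4))
```

For this θ the three rounds reproduce the ordering stated above: the first adjustment is the largest and the mixed subtraction the smallest.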
We examine the effect of the number of adjustments. With α = π/6, R = 60 m, and N set to 100, 200, and 300 respectively, Figure 10 shows that the coverage increases roughly linearly with the number of adjustments until it reaches 3 and then saturates above 3. So we set the number of adjustments to 3 in the following simulations.
The boundary nodes are colored green, while nonboundary nodes are blue. The coverage of the network is evidently enhanced, as shown in Figure 11(d). The detailed values are listed in Table 2.
With an initial-deployment coverage of 72.13%, averaged over 100 tests, Table 3 displays the comparative results of adjusting five times with the parameters α = π/4, R = 60 m, and N = 200. From Tables 2 and 3, after three adjustments the coverage improvement is nearly twice that of single-time adjustment. In actual tests there is only a small difference between the results of three-time adjustment and adjusting more times; however, an unsatisfactory decline in coverage occurs after adjusting more than ten times. Because of the random character of the algorithms, the more pairings there are, the harder the final outcome of adjustment is to control.

Coverage Enhancement versus Different Parameters
Case 1 (changes in the value of "α"). As shown in Figure 12, the enhancement reaches its optimum at α = π/4 for all four algorithms, and the worst situation occurs at α = π/3. The changes generated by varying α show roughly the same Z-shaped trend for the different algorithms in Figure 12.

Case 2 (changes in the value of "N"). As shown in Figure 13, the enhancement achieved by the different algorithms can be sorted as three-time pairing > mixed superposition > linear relation > mechanical approximation. The following conclusions can be drawn from the data in Figure 13.
(a) Different algorithms have different peak positions. The peaks of three-time pairing and mixed superposition appear in the vicinity of 100 nodes, while those of the two others occur in the vicinity of 150 nodes.
(b) In the vicinity of 200 nodes, the effect of each algorithm slows to a certain degree.
(c) When the number of nodes exceeds 250, the coverage improvement of each algorithm gradually declines.
The reasonable explanations for these phenomena are as follows.
Situation A. The average adjustment angles of three-time pairing and mixed superposition are larger than those of the two other algorithms. Meanwhile, the number of nodes required to reach the coverage peak is smaller for three-time pairing and mixed superposition than for linear relation and mechanical approximation. This confirms that there is a relationship between the enhancement peak and the angles to be adjusted.
Situation B. In the vicinity of 200 nodes, the ratio NR/NC (network redundancy divided by network coverage) reaches its minimum. In other words, the network configuration resources are utilized most adequately when the number of nodes is about 200, so the outcome of adjustment is not remarkable in this situation.
Situation C. As the number of nodes in the limited area keeps increasing, the coverage enhancement gradually becomes saturated.
Case 3 (changes in the value of "R"). Figure 14 shows the condition that N is 100, α is π/6, and the value of R gradually increases along the horizontal axis.
The following conclusions can be drawn. (A) The peaks of three-time pairing and mixed superposition occur in the vicinity of 60 m, while those of linear relation and mechanical approximation occur in the vicinity of 80 m.
(B) The enhancement of all four algorithms follows the same trend: rising first and then falling after reaching the peak.
(C) Except in the vicinity of 80 m and 90 m, the enhancement of the algorithms can be sorted as three-time pairing > mixed superposition > linear relation > mechanical approximation.
The trend of changes in Figure 14 can be understood from several aspects.
Situation A. As in Figure 13, the peaks of the different algorithms emerge at different positions. From a macroscopic point of view, the impact of the average adjusted angle on coverage enhancement explains why the peak positions of linear relation and mechanical approximation lag behind the two other algorithms.
Situation B. When the radius of the nodes tends to zero, coverage analysis cannot be established and coverage enhancement cannot be realized; when the radius tends to infinity, complete coverage is achieved right after initial deployment. In practice, the enhancement is conspicuous only within a certain range of radii.
Situation C. As shown in Figure 14, ignoring the effect of the peak values, the three-time pairing algorithm shows its advantage mainly because it takes the impacts of other neighboring nodes into account.

Conclusion
In this paper, based on the virtual potential field, paired tangent point repulsion for nonboundary sensor nodes and fuzzy image recognition for boundary sensor nodes realize the enhancement of perspective coverage, together with LRBA, MBAA, and the mixed superposition algorithm for rotation angle adjustment. Simulation experiments based on these algorithms show that the three-time adjustment method achieves better performance than single-time adjustment and a more satisfactory cost-efficacy ratio than adjusting more times.
However, defects remain in the execution of FVPTR; for example, some nodes cannot find a paired partner, such as those with only one neighbor or the node left over when the total number of nodes is odd. This will be the emphasis of further research, and coverage issues in video and audio sensor networks will be addressed in future work.