Adaptive Localization in Wireless Sensor Network through Bayesian Compressive Sensing

The estimation of target locations in wireless sensor networks is addressed within the Bayesian compressive sensing (BCS) framework. BCS can estimate not only target locations but also the noise variance of the environment. Furthermore, we propose an adaptive iteration BCS localization (AIBCSL) algorithm, which builds on BCS and chooses measurement sensors adaptively according to the environment given only an initial value, whereas other frameworks require prior knowledge such as the number of targets to choose measurements. AIBCSL assumes that the environmental noise variance is identical across the area of interest over a short period of time and changes the number of measurements until a termination condition is reached. To suppress noise, we refine the estimate with an energy threshold strategy (ETS), which exploits the fact that the energy a noise source deposits on a single grid cell is much lower than that of a true signal. Finally, multisnapshot BCS (MT-BCS) is explained and shown to yield good results at low SNR levels.


Introduction
Wireless sensor networks (WSNs) have enjoyed tremendous popularity in the last decades [1, 2], and many research topics have grown around them. Estimating the location of a target in a wireless sensor network is one of the hottest [3]. Localization in most self-organizing networks has been discussed extensively in both single-target [4] and multitarget [5] environments, although in some settings intended merely for data aggregation or system management, such as networks based on the industrial wireless communication protocol WIA-PA [6], target locations are already known to the sources. Wireless network localization often appears in smart applications and tracking systems such as road traffic monitoring, health monitoring, and military target tracking. State-of-the-art surveys give the interested reader several effective approaches proposed in the last decades for localization in WSNs. The measurement techniques in wireless sensor network localization can be classified into three categories [3]: angle-of-arrival (AOA) estimation, distance-related measurements, and RSS profiling techniques.
However, conventional methods perform poorly in multitarget situations.
In this paper, we introduce a novel method for wireless sensor network localization. We assume that the number of targets is far smaller than the number of grid cells used to represent their locations, so the target locations can be treated as a sparse signal. We solve the localization problem from the Bayesian compressive sensing (BCS) standpoint. BCS originates from conventional compressive sensing (CS) but employs a Bayesian formalism to estimate the underlying signal. CS theory has been applied to localization and target counting in [7]; however, to our knowledge BCS has not previously been applied to wireless sensor network localization. BCS has several merits over conventional CS: (i) it provides not only the estimate of the sparse signal but also its uncertainty (known as error bars) and (ii) it provides an estimate of the posterior density of the additive noise encountered when implementing the compressive measurements. BCS also compares favorably with conventional CS algorithms such as OMP, L1-norm minimization, and LASSO. Another attractive advantage is that the error-bar information makes BCS well suited for developing adaptive methods. This paper analyzes the basis of BCS and then introduces an adaptive iteration BCS method. Adaptive iteration BCS improves localization accuracy over plain BCS, as shown in the simulation section. In addition, we propose an energy threshold strategy for postprocessing, which yields good results.
The original work on CS is [8]. CS is a sampling theory that can reconstruct a sparse signal from a small number of measurements [9]. Many recovery methods exist, including L1-norm minimization [8], OMP [10], and LASSO [11]. Reference [7] applied CS to target counting and localization in wireless sensor networks and was the first paper to rigorously justify the validity of that problem formulation. Since CS was put forward, many branches have emerged, such as distributed CS and Bayesian CS. Reference [12] uses distributed CS for wireless sensor network localization. Bayesian compressive sensing was proposed by Ji and Xue in 2008 [13]; it interprets the CS problem from a Bayesian standpoint and, to our knowledge, has not yet been applied to wireless sensor networks. Reference [14] proposed an adaptive Bayesian compressed sensing algorithm and applied it to wireless networks. Reference [15] performed DOA estimation with Bayesian inference in an antenna array, directly linking measurements with signal directions so that an energy method could be applied. Reference [16] used a hierarchical form of the Laplace prior to model the sparsity of the unknown signal, which provides a way to estimate locations by BCS in this paper. More broadly, CS is popular in multitarget localization [17].
The organization of this paper is as follows. In Section 2, we first explain the basis of BCS and multisnapshot BCS; the localization model and the adaptive iteration BCS algorithm are presented in the second part of Section 2. Section 3 presents our simulation work, and Section 4 concludes with the merits and application scenarios of our method.

Bayesian Compressive Sensing.
Compressive sensing (CS) theory states that if a signal has a sparse representation in a certain basis, it can be recovered from fewer measurements than the Nyquist sampling theorem requires. A signal $x \in \mathbb{R}^N$ is represented by an $N \times N$ transform basis $\Psi$ and a sparse vector $s$:
$$x = \Psi s. \quad (1)$$
If $s$ has only $K$ nonzero entries ($\|s\|_0 = K$) with $K \ll N$, the signal is called $K$-sparse. Assume we obtain $M$ measurements $y \in \mathbb{R}^M$; $y$ and $s$ are related linearly:
$$y = \Phi x = \Phi \Psi s = \Theta s, \quad (2)$$
where $\Phi$ is an $M \times N$ measurement matrix and $\Theta = \Phi\Psi$ is an $M \times N$ projection matrix. Taking zero-mean Gaussian noise $n \sim \mathcal{N}(0, \sigma^2)$ into consideration,
$$y = \Theta s + n. \quad (3)$$
Equation (3) relates the sparse signal $s$ to the measurements $y$. If we can recover the $N$-dimensional $s$ from the $M$-dimensional $y$ with $M < N$, memory and computation costs are reduced.
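The measurement model above can be sketched numerically. This is a minimal, illustrative setup (all dimensions and the Gaussian choice of projection are assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 30, 3        # grid size, measurements, sparsity (illustrative)

# K-sparse signal s: only K of the N entries are nonzero
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.uniform(1.0, 2.0, size=K)

# Random Gaussian projection playing the role of Theta = Phi * Psi (M x N)
Theta = rng.standard_normal((M, N)) / np.sqrt(M)

# Noisy compressive measurements, Eq. (3): y = Theta s + n
sigma = 0.01
y = Theta @ s + sigma * rng.standard_normal(M)
```

With $M = 30 < N = 100$, recovering $s$ from $y$ is exactly the underdetermined problem CS addresses.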

Single-Snapshot BCS-Based Sparse Signal Estimation (ST-BCS).
Bayesian compressive sensing (BCS) recovers $s$ from a Bayesian estimation standpoint. Because $n$ is zero-mean Gaussian, the likelihood of $y$ given $s$ is
$$p(y \mid s, \sigma^2) = (2\pi\sigma^2)^{-M/2} \exp\!\left(-\frac{\|y - \Theta s\|_2^2}{2\sigma^2}\right). \quad (4)$$
According to Bayesian estimation, the CS problem becomes a prior-constrained optimization problem, the prior condition being that $s$ is sparse ($\|s\|_0 = K$). The prior is modeled as [16]
$$p(s \mid \alpha) = \prod_{i=1}^{N} \mathcal{N}(s_i \mid 0, \alpha_i^{-1}), \quad (5)$$
where $\mathcal{N}(s_i \mid 0, \alpha_i^{-1})$ denotes a Gaussian with zero mean and variance $\alpha_i^{-1}$, and $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_N]$ are hyperparameters at the service of the prior, which control the sparsity level of $s$.
BCS estimates $s$ and even the noise variance from only a few measurements by maximizing the posterior probability; that is, $\hat{s} = \arg\max p([s, \sigma^2, \alpha] \mid y)$. The posterior factors as
$$p(s, \alpha, \sigma^2 \mid y) = p(s \mid y, \alpha, \sigma^2)\, p(\alpha, \sigma^2 \mid y). \quad (6)$$
Here $p(s \mid y, \alpha, \sigma^2)$ provides the estimate of $s$, and $p([\alpha, \sigma^2] \mid y)$ gives the noise estimate. Our target is to maximize the first term [16], which is a multivariate Gaussian [16]:
$$p(s \mid y, \alpha, \sigma^2) = \mathcal{N}(s \mid \mu, \Sigma), \quad (7)$$
with mean and covariance
$$\mu = \sigma^{-2} \Sigma \Theta^{T} y, \qquad \Sigma = \left(\sigma^{-2} \Theta^{T}\Theta + \Lambda\right)^{-1}, \quad (8)$$
where $\Lambda = \mathrm{diag}(\alpha_1, \ldots, \alpha_N)$. Thus we must first estimate $\alpha$ and $\Sigma$ to accomplish the task above. Writing $\alpha_0 = 1/\sigma^2$, we use maximum likelihood:
$$p(y \mid \alpha, \alpha_0) = \mathcal{N}(y \mid 0, C), \qquad C = \sigma^2 I + \Theta \Lambda^{-1} \Theta^{T}, \quad (9)$$
where $I$ is the identity matrix. Maximizing $p(y \mid \alpha_0, \alpha)$ with respect to $\alpha_0$ and $\alpha$ yields the iterative relationships
$$\alpha_i^{\text{new}} = \frac{\gamma_i}{\mu_i^2}, \qquad \gamma_i = 1 - \alpha_i \Sigma_{ii}, \qquad \left(\sigma^2\right)^{\text{new}} = \frac{\|y - \Theta\mu\|_2^2}{M - \sum_i \gamma_i}, \quad (10)$$
where $\Sigma_{ii}$ is the $i$th diagonal element of $\Sigma$. Given initial values of $\alpha_0$ and $\alpha$, update $\Sigma$ and $\mu$ by (8) and the hyperparameters by (10) until convergence.
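The alternation between the posterior statistics of (8) and the updates of (10) can be sketched as follows. This is a relevance-vector-machine-style sketch; the initialization and the numerical safeguards (clipping, caps) are our own assumptions, not the paper's specification:

```python
import numpy as np

def st_bcs(Theta, y, n_iter=200, alpha_cap=1e8):
    """Sketch of the ST-BCS fixed-point iteration: alternate the posterior
    statistics of Eq. (8) with the hyperparameter/noise updates of Eq. (10)."""
    M, N = Theta.shape
    alpha = np.ones(N)                 # sparsity hyperparameters alpha_i
    beta = 1.0 / np.var(y)             # beta = 1/sigma^2, rough initialization
    for _ in range(n_iter):
        # Posterior covariance and mean, Eq. (8)
        Sigma = np.linalg.inv(beta * (Theta.T @ Theta) + np.diag(alpha))
        mu = beta * Sigma @ Theta.T @ y
        # Hyperparameter and noise-variance updates, Eq. (10)
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-10, 1.0)
        alpha = np.minimum(gamma / (mu**2 + 1e-12), alpha_cap)
        resid = y - Theta @ mu
        beta = max(M - gamma.sum(), 1e-3) / (resid @ resid + 1e-12)
    return mu, Sigma, 1.0 / beta       # estimate, covariance, noise variance
```

Grid cells judged irrelevant have their $\alpha_i$ driven toward the cap, which shrinks the corresponding $\mu_i$ to zero; the returned $1/\beta$ is the noise-variance estimate that the adaptive algorithm later exploits.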

Multisnapshot BCS-Based Sparse Signal Estimation (MT-BCS).
To suppress noise, we take into account the correlation of multiple signals over a short period of time. Samples are taken at $T$ different times,
$$y^{(t)} = \Theta s^{(t)} + n^{(t)}, \qquad t = 1, \ldots, T,$$
with the hyperparameters $\alpha$ shared across all snapshots. The hyperparameters can then be iterated by updates analogous to (10), with $\mu_i^2$ replaced by its average over the $T$ snapshot posterior means. Similarly, but not identically to ST-BCS, the MT-BCS estimate of $s$ is obtained from the per-snapshot posterior means $\mu^{(t)}$.
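A multisnapshot variant with shared hyperparameters can be sketched as below. This sketch follows the common multi-measurement-vector style of sparse Bayesian learning (averaging $\mu_i^2$ across snapshots); the paper's exact MT-BCS update equations are not reproduced here:

```python
import numpy as np

def mt_bcs(Theta, Y, n_iter=200, alpha_cap=1e8):
    """Sketch of multisnapshot BCS: the snapshots in Y (M x T) share the
    hyperparameters alpha, so evidence from all snapshots is pooled."""
    M, N = Theta.shape
    T = Y.shape[1]
    alpha = np.ones(N)
    beta = 1.0 / np.var(Y)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * (Theta.T @ Theta) + np.diag(alpha))
        Mu = beta * Sigma @ Theta.T @ Y            # N x T posterior means
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-10, 1.0)
        # mu_i^2 of Eq. (10) replaced by its average over the T snapshots
        alpha = np.minimum(gamma / (np.mean(Mu**2, axis=1) + 1e-12), alpha_cap)
        resid = Y - Theta @ Mu
        beta = max(T * M - T * gamma.sum(), 1e-3) / (np.sum(resid**2) + 1e-12)
    return Mu.mean(axis=1), 1.0 / beta             # pooled estimate, noise var
```

Pooling the snapshots makes the support estimate more robust to noise, which is why MT-BCS helps most at low SNR.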

Localization Model.
First we divide the localization area into $N$ grid cells arranged in a rectangular lattice and assign each cell an index from 1 to $N$ as in Figure 1; the coordinate of a cell is calculated from its index. Assume the targets (also known as sources) are distributed sparsely over the area, so that they can be represented by a sparse vector $s = [s_1, s_2, \ldots, s_N]^{T}$, where $s_i$ represents the energy of the target in the $i$th grid cell; if there is no target in the $i$th cell, $s_i = 0$.
The energy attenuation [3] from the $j$th grid cell to the $k$th grid cell is
$$P_{jk} = P_j\, g_{jk} \left(\frac{d_0}{d_{jk}}\right)^{\beta},$$
where $P_j$ is the signal intensity in the $j$th cell, $d_0$ is a very short reference distance, and $d_{jk}$ is the Euclidean distance from the $j$th cell to the $k$th cell. $g_{jk}$ is the Rayleigh fading factor from $j$ to $k$, which depends on the specific environment, and $\beta$ is the distance (path-loss) factor, often chosen in $[2.0, 5.0]$.
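The grid indexing and the attenuation model can be sketched as follows. The row-major indexing convention and the choice of fading factor $g = 1$ are simplifying assumptions for illustration:

```python
import numpy as np

def grid_coords(W, H, cell=1.0):
    """Centers of an H-by-W grid, indexed 1..N row-major as in Figure 1
    (the row-major convention is an assumption)."""
    idx = np.arange(W * H)
    return np.stack([(idx % W + 0.5) * cell, (idx // W + 0.5) * cell], axis=1)

def attenuation_matrix(sensor_xy, grid_xy, d0=0.1, beta=3.0):
    """Energy attenuation from every grid cell to every sensor:
    received = transmitted * (d0/d)^beta for d > d0. The Rayleigh fading
    factor g is set to 1 here for simplicity."""
    d = np.linalg.norm(sensor_xy[:, None, :] - grid_xy[None, :, :], axis=2)
    d = np.maximum(d, d0)          # clamp distances inside the reference radius
    return (d0 / d) ** beta        # rows: sensors, columns: grid cells
```

Stacking one such row per measurement sensor produces the projection matrix $\Theta$ that links the sparse target vector $s$ to the received-energy measurements.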

BCS Based Adaptive Localization.
Compared with conventional CS methods, BCS provides not only the sparse estimate but also an estimate of the noise. In this section, we develop an adaptive BCS algorithm that chooses measurements adaptively for target localization in a wireless sensor network. The algorithm assumes that the noise in the area of interest is consistent, meaning that the noise variance is identical everywhere in the area over a short period of time. The number of measurements changes based on the noise-variance feedback from the BCS estimate, until the inconsistency among the variances becomes small.
At the beginning, we choose $m_1$ sensors randomly (an initial value) to acquire signals from the grid area of interest. The BCS-estimated noise variances of those sensors are $\sigma^{(1)2} = \{\sigma^{(1)2}_1, \sigma^{(1)2}_2, \ldots, \sigma^{(1)2}_{K(1)}\}$, where $K(1)$ means the signal is $K(1)$-sparse at the first iteration. We then define a constraint condition, and sensors are added at each subsequent iteration as long as that condition is not satisfied. A new sensor is chosen in one of two ways: (i) if there is no sensor in a grid cell localized by BCS at the last iteration, we add a measurement in that cell; (ii) if such cells already contain sensors, we randomly choose another cell that has not yet been measured. Let the Bayesian estimation error at the $k$th iteration be $\sigma^{(k)2} = \{\sigma^{(k)2}_1, \ldots, \sigma^{(k)2}_{K(k)}\}$, where $K(k)$ means the signal is $K(k)$-sparse. Two iteration termination (constraint) conditions are used in this paper. The first compares $\epsilon_1$, whose expression is given in (20), with a threshold $\eta_1$; $\epsilon_1$ represents the convergence rate to some degree, requiring that the mean and variance of the noise-variance estimates change little as the process converges. Second, since the Gaussian noise is assumed identical everywhere in the area of interest, it is reasonable to let $\epsilon_2 = \mathrm{var}(\sigma^{(k)2})$ and require it to be small. The combination of $\epsilon_1$ and $\epsilon_2$ forms the stopping rule, and Algorithm 1 gives the whole adaptive procedure.
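The control flow of Algorithm 1 can be sketched as below. The callables `estimate` and `add_sensor`, and the exact form of the $\epsilon_1$ check, are illustrative assumptions standing in for the BCS estimation step and the sensor-placement rules described above:

```python
import numpy as np

def aibcsl_loop(estimate, add_sensor, m_init, eta1=5e-3, eta2=5e-3, max_iter=50):
    """Control-flow sketch of AIBCSL. `estimate(m)` is assumed to run BCS with
    m sensors and return the per-sensor noise-variance estimates; `add_sensor(m)`
    returns the enlarged measurement count. epsilon_1 checks that the mean noise
    variance stops changing between iterations; epsilon_2 checks that the
    variances are mutually consistent (identical-noise assumption)."""
    m = m_init
    prev = estimate(m)
    cur = prev
    for _ in range(max_iter):
        m = add_sensor(m)                          # grow the sensor set
        cur = estimate(m)
        eps1 = abs(np.mean(cur) - np.mean(prev))   # convergence-rate check
        eps2 = np.var(cur)                         # identical-noise check
        if eps1 < eta1 and eps2 < eta2:
            break
        prev = cur
    return m, cur
```

The loop terminates as soon as both thresholds are met, so the final `m` is the number of measurements the environment actually required.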

Strategy to Determine Targets.
According to (19), $s$ is a $K$-sparse vector. BCS localization returns the energy estimate at every location, and random noise increases the error rate. In this section, we assume that the energy of a noise source is much smaller than that of a true target, so target locations can be refined further by an energy threshold strategy (ETS) [15]. More specifically, the entries of the estimated sparse signal are sorted by their energy $|s_i|^2$, $i = 1, \ldots, N$, from high to low, and only the strongest entries, together accounting for a fraction $\eta$ of the total energy, are retained as targets.
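The energy threshold strategy can be sketched as follows. Since the original passage is truncated, the cumulative-energy-fraction rule below (retain the strongest entries until a fraction $\eta$ of total energy is covered) is our reading of the strategy, consistent with the $\eta = 0.99$ setting used in the simulations:

```python
import numpy as np

def energy_threshold(s_hat, eta=0.99):
    """Energy threshold strategy (ETS) sketch: sort |s_i|^2 from high to low,
    keep the smallest set of entries whose cumulative energy reaches a
    fraction eta of the total, and zero out the rest as noise."""
    e = np.abs(s_hat) ** 2
    order = np.argsort(e)[::-1]                    # indices, high energy first
    frac = np.cumsum(e[order]) / (e.sum() + 1e-30)
    keep = order[: np.searchsorted(frac, eta) + 1]
    out = np.zeros_like(s_hat)
    out[keep] = s_hat[keep]
    return out
```

Because a noise source concentrates far less energy on a single grid cell than a true target, its entries fall below the cumulative threshold and are suppressed.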

Results and Discussion
Because the estimated target number $\hat{K}$ is not always identical to $K$, we also take the target number error (TNE), the deviation of the estimated target number from the true one, as a performance criterion. Simulations are executed under noise, with the signal-to-noise ratio defined in (24).

3.1. BCS Simulation. Compared to conventional methods such as L1-norm [18], OMP [19], and LASSO [19], BCS is superior under certain circumstances. We use the CVX toolbox to implement the L1-norm method and SPAMS to implement OMP and LASSO in our experiments. In our simulations, each measurement is corrupted by Gaussian noise according to (24). The energy threshold strategy is applied with $\eta = 0.99$ in this section unless otherwise specified. All conventional methods are given the true target number at initialization in all of our simulation tasks.

Figure 2 shows the single-target results at different SNR levels: BCS outperforms OMP and performs comparably to L1-norm and LASSO. Figure 3 shows the MLE of multitarget localization by BCS and conventional CS methods at different SNR levels, with $K = 5$ true targets; since the target number of all conventional methods is set to 5, their TNE is 0, and only BCS must estimate it. Figure 3 shows that the Bayesian method outperforms all of the conventional methods to some degree in the multitarget situation.

Figure 4 shows the TNE at different SNR levels for BCS in both single-target and multitarget cases. In the single-target ($K = 1$) case, TNE decreases monotonically as SNR increases. In the multitarget case, most TNE values are below 0.6 with little variation. We set $\hat{K} = K$ for all conventional CS methods, so their TNE is zero.

Figure 5 examines the MLE under different numbers of measurements. It demonstrates directly that BCS is better than the other methods, having lower MLE, and also shows that MLE decreases as the number of measurements increases.
Figure 6 gives the TNE of the BCS method as the number of measurements increases. TNE trends downward with more measurements, so localization accuracy can be improved by adding measurements until our demand is satisfied.
The comparison between the energy-threshold-strategy version and the unoptimized version is depicted in Figure 7. After ETS optimization, both TNE and MLE are smaller than in the unoptimized version, declining by about 50%, which proves that ETS is effective in improving localization accuracy.

AIBCSL Simulation.
The adaptive iteration BCS localization algorithm chooses the number of sensors adaptively without user participation except for an initial value. The termination condition used here is $\epsilon_1 < \eta_1$ and $\epsilon_2 < \eta_2$, where $\eta_1 = 5 \times 10^{-3}$ and $\eta_2 = 5 \times 10^{-3}$. The maximum number of AIBCSL iterations is 50 in all simulations in this section, and each test is repeated 100 times. The energy threshold strategy is applied with $\eta = 0.99$. Figure 8 shows that AIBCSL yields lower MLE and TNE than BCS; it gives nearly perfect results (close to zero) when SNR exceeds 20 dB. Figure 9 visualizes how the number of measurements adapts to the environment, with initial value 20 for $K = 1$ and 100 for $K = 5$. As SNR increases, the number of measurements decreases slowly, and it differs substantially between $K = 5$ and $K = 1$. Figure 10 shows the iteration process of AIBCSL for different target numbers: $\epsilon_1$ and $\epsilon_2$ evolve over the iterations, and at convergence both approach zero, whereupon the iteration terminates and the localization result is returned.
As the number of targets changes, AIBCSL chooses measurements adaptively. Figure 11 shows the relationship between target number and measurements: the number of measurements at convergence is linear in the target number, which means AIBCSL can determine the measurements from the environment alone, without any advance knowledge of the number of targets. Figure 12 gives a snapshot of the simulation process; there, the MLE of AIBCSL is better than those of OMP, L1-norm, and LASSO.

Multisnapshot Simulation.
In this simulation we combine the multisnapshot approach with the adaptive localization algorithm proposed in Section 2, called MT-AIBCSL for short in this paper. The termination condition is again $\epsilon_1 < \eta_1$ and $\epsilon_2 < \eta_2$, where $\eta_1 = 5 \times 10^{-3}$ and $\eta_2 = 5 \times 10^{-3}$. The number of snapshots is $T = 2$. The initial number of measurements is 20 for $K = 1$ and 100 for $K = 5$, and all results are averaged over 100 runs. Tests are executed at different SNR levels, and Figure 13 shows that MT-AIBCSL reduces MLE and TNE relative to AIBCSL, especially in the low-SNR region. This means MT-AIBCSL is genuinely useful for reducing noise interference.
Because MT-AIBCSL takes full advantage of time-domain correlation, it suits environments that do not vary over a short period of time. A sensible choice is to apply AIBCSL when real-time requirements dominate or time-domain correlation is low, and MT-AIBCSL otherwise.

Conclusions
In this paper, BCS is introduced to wireless sensor network localization and its results are assessed against conventional CS. We then proposed AIBCSL and MT-AIBCSL to improve localization accuracy on the basis of BCS. AIBCSL assumes that the noise variances are identical over the localization area of interest within a short period of time, and simulation results proved the method effective in noisy environments. The advantages and limitations of AIBCSL and MT-AIBCSL have been analyzed in comparison with conventional CS methods. The proposed adaptive methods have demonstrated the following merits.
(i) Target locations and the noise variance are estimated at the same time. (ii) AIBCSL achieves lower MLE and TNE than BCS and conventional methods, and MT-AIBCSL outperforms AIBCSL at low SNR levels. (iii) There is no need for the user to know the number of targets ahead of time; only an initial number of measurements is required by AIBCSL and MT-AIBCSL.
Beyond that, the ETS provides a performance improvement of almost 50% error decline, and is well suited to the BCS framework and the multitarget localization situation.