Article

Target Location Method Based on Compressed Sensing in Hidden Semi Markov Model

1 College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China
2 Business School, University of Shanghai for Science and Technology, Shanghai 200093, China
3 School of Engineering, Huzhou University, Huzhou 313000, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(11), 1715; https://doi.org/10.3390/electronics11111715
Submission received: 15 May 2022 / Accepted: 26 May 2022 / Published: 27 May 2022
(This article belongs to the Section Microwave and Wireless Communications)

Abstract

A compressive-sensing-based target localization method built on the hidden semi-Markov model (HsMM) is proposed to address the unpredictability and multipath effects of the Received Signal Strength (RSS) in indoor localization. By combining the HsMM with a compressive sensing algorithm, the method achieves both coarse and precise positioning. Firstly, the hidden semi-Markov model is introduced to complete the coarse positioning of the target, and a parameter training method is proposed. Secondly, the Davies-Bouldin Index and the Calinski-Harabasz Index, based on the Euclidean distance and on the connection distance proposed herein, are introduced. Then, on the basis of coarse positioning, a precise positioning method based on compressive sensing is proposed, in which two screening-matrix constructions, a Gaussian matrix and a deterministic matrix, are given. Finally, the coarse-positioning performance of the Hidden Markov Model (HMM) and of the HsMM is verified with experimental data, the compressive sensing algorithm is evaluated with both screening matrices, and the effectiveness of the proposed algorithm is experimentally verified.


1. Introduction

Location information plays a crucial role in location-based services and their applications [1,2,3]. In outdoor environments, accurate location information can easily be obtained from the Global Positioning System (GPS) [4]. In indoor environments, however, GPS signals are blocked by buildings [5] and become weak or even undetectable, so GPS positioning cannot meet indoor positioning requirements. The development of devices such as smartphones has made indoor positioning possible, but due to the complexity of indoor environments it is usually difficult to achieve the desired accuracy in most applications [6,7]. Designing a high-accuracy positioning system is therefore a key challenge. At present, several technologies are available for indoor positioning, such as ultra-wideband [8], ultrasonic, Bluetooth [9] and Wi-Fi. Most of these technologies rely on the following measurements: Angle of Arrival (AOA) [10], Time Difference of Arrival (TDOA) [11], Time of Arrival (TOA) [12,13,14] and Received Signal Strength Indicator (RSSI) [15,16,17,18].
In recent years, Received Signal Strength (RSS)-based positioning algorithms have been widely studied and applied as a cost-effective indoor positioning solution [19,20,21]. Compared with the TOA or TDOA measurements used in ultra-wideband systems, the RSS can easily be obtained via RFID [22], Wi-Fi or Bluetooth integrated mobile devices without any additional hardware. However, RSS data are difficult to predict in practice and usually exhibit large fluctuations [23]. Factors such as multipath effects and non-line-of-sight propagation make RSS-based positioning very sensitive to environmental changes; processing such signals is therefore a difficult challenge [24].
In RSS-based positioning problems, auxiliary prior information can be used for modeling, e.g., with the Hidden Markov Model (HMM) [24,25]. Recent studies have shown that using the HMM for positioning is effective. However, the HMM cannot reflect the dwell time between state transitions, so the existing framework can be improved [26,27] by using the hidden semi-Markov model (HsMM). The HsMM provides a flexible state dwell time framework and is used in many signal-processing-related applications [28]. In the HsMM, the dwell time in each state is variable, and this flexible dwell-time-based modeling can better solve the positioning problem.
The selection and use of anchor nodes in precise positioning is also an important task in this study, since using measurements from poorly performing anchor nodes can lead to large positioning errors. The compressive sensing technique matches the measured signal strengths with the fingerprint database while providing a new framework for compressible signals. After the anchor nodes are filtered to reduce the measurement dimensionality, compressive sensing can reconstruct sparse signals precisely with high probability by solving an L1-minimization problem, thereby realizing the location estimation of the target.
In this paper, a target positioning method based on compressive sensing under the hidden semi-Markov model is proposed, which first completes coarse target positioning by constructing a hidden semi-Markov model, and then achieves precise target positioning analysis using the compressive sensing technique. The innovative points of this paper are listed as follows.
(1) A coarse positioning algorithm based on the hidden semi-Markov model is proposed, and a parameter training method is proposed. The method is able to achieve area-level coarse positioning of targets in a large-area environment.
(2) Davies-Bouldin and Calinski-Harabasz indexes based on the Euclidean distance are introduced, and a method for determining the indexes based on the connection distance is proposed.
(3) On the basis of coarse positioning, a precise target positioning algorithm based on compressive sensing is proposed, and two screening-matrix construction methods based on a Gaussian matrix and a deterministic matrix are proposed.

2. System Model

In this paper, the system model under discussion is constructed in a two-dimensional environment. Suppose there are $N$ anchor nodes, which transmit information via RSS. The position of the $i$-th anchor node is denoted as $s_i = [x_i, y_i]^T$, where $x_i$ and $y_i$ are its positions on the x-axis and y-axis, respectively. The position of the target is denoted as $x = [x, y]^T$. The measured value between the $i$-th anchor node and the target is denoted as:
$$ r_i = r_0 - 10\gamma \lg\frac{d_{r,i}}{d_0} + n_i, $$
where $r_i$ is the received signal power, $d_{r,i}$ is the real distance between the $i$-th anchor and the target at time $k$, $r_0$ represents the received signal strength at the reference distance $d_0$, $n_i$ is the measurement noise, which follows a zero-mean Gaussian distribution, i.e., $n_i \sim \mathcal{N}(0, \sigma_i^2)$, and $\gamma$ is the path loss exponent.
In the process of localization, the target may receive signals from a large number of anchor nodes. In order to reduce the processing time at the target node, we only consider whether the target can effectively receive the signal of each anchor node. Therefore, the variable $\tilde{O}$ is defined as the connection vector of the measured target. Suppose $\tilde{O}$ takes $N_p$ different values, the $i$-th of which is denoted $\check{O}_i$. In addition, $o_k$ denotes the index corresponding to the connection vector value at time $k$.
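To make the measurement model concrete, the following Python sketch simulates the log-distance RSS model above and builds a binary connection vector for one target position. All numerical values (anchor layout, $r_0$, $d_0$, $\gamma$, noise level, reception threshold) are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the paper).
N = 20                            # number of anchor nodes
r0, d0, gamma = -30.0, 1.0, 2.5   # reference power [dBm], reference distance [m], path-loss exponent
sigma = 2.0                       # std of the zero-mean Gaussian measurement noise [dB]
tau_th = -70.0                    # reception threshold [dBm] (assumed)

anchors = rng.uniform([0, 0], [70, 25], size=(N, 2))   # s_i = [x_i, y_i]^T
target = np.array([35.0, 12.0])                        # x = [x, y]^T

def rss_measurement(target, anchors):
    """RSS between the target and every anchor: r_i = r0 - 10*gamma*lg(d_i/d0) + n_i."""
    d = np.linalg.norm(anchors - target, axis=1)
    d = np.maximum(d, d0)                              # avoid the log of distances below d0
    return r0 - 10.0 * gamma * np.log10(d / d0) + rng.normal(0.0, sigma, size=len(anchors))

r = rss_measurement(target, anchors)
# Connection vector: 1 if the anchor's signal is received above the threshold, else 0.
O = (r >= tau_th).astype(int)
print("RSS:", np.round(r, 1))
print("connection vector:", O)
```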

3. Coarse Localization Based on an Explicit-Duration Hidden Markov Model

In practice, the joint probability associated with the observed sequence usually decays exponentially as the sequence length increases, so a direct computer implementation of model training will encounter serious underflow problems. Consequently, the concept of the posterior probability is used to redefine the Forward-Backward (FB) variables, which effectively solves the underflow problem.
Let $s_1, s_2, \ldots, s_M$ be the states of a semi-Markov chain, whose values correspond to the index of the area where the target is located. The initial distribution $\pi_m$ represents the probability of the target being in each area at the initial time. If the initial position of the target is known, the probability of the corresponding area is 1 and that of all other areas is 0; otherwise, the probability of each area can be set to $1/M$. The transition probability matrix $a_{mn}$ denotes the probability of the target moving from one area to another. Let $q_k$ denote the state of the semi-Markov chain at time $k$, where $k \in \{1, 2, \ldots, K\}$, and let $o_k$ represent the observation at time $k$. The duration probability $p_m(d)$ represents the probability that the dwell time of the target in area $m$ is $d$. A maximum dwell time is usually set, i.e., $\max(d) = D$.
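The following sketch shows one possible initialization of the HsMM parameters described above when the initial position is unknown (uniform $\pi$), with a row-stochastic transition matrix and duration distributions truncated at $D$; the values of $M$ and $D$ are placeholders, not the ones used later in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
M, D = 13, 20            # number of areas (states) and maximum dwell time (placeholders)

pi = np.full(M, 1.0 / M)                  # unknown initial position: uniform over areas

A = rng.random((M, M))                    # transition probabilities a_{mn}
np.fill_diagonal(A, 0.0)                  # in an explicit-duration model, staying is handled by p_m(d)
A /= A.sum(axis=1, keepdims=True)         # each row sums to 1

P_dur = rng.random((M, D))                # duration probabilities p_m(d), d = 1..D
P_dur /= P_dur.sum(axis=1, keepdims=True)
```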

3.1. Forward Algorithm

In the initialization phase, all parameters of the forward algorithm are expressed as follows
$$ \alpha_{1|0}(m,d) = \pi_m\, p_m(d), \qquad b_m^*(o_1) = \frac{b_m(o_1)}{\sum_{m,d} \alpha_{1|0}(m,d)\, b_m(o_1)}, \qquad E_1(m) = \alpha_{1|0}(m,1)\, b_m^*(o_1), \qquad S_1(m) = \sum_n E_1(n)\, a_{nm}, $$
where $\alpha_{1|0}(m,d)$ is the probability that the state at time 1 is $s_m$ with duration $d$, given the observation sequence up to time 0; $\pi_m$ is the probability that the initial area of the target is $m$; $p_m(d)$ is the probability that the target stays in area $m$ for $d$ time steps; $b_m^*(o_1)$ is the observation probability $b_m(o_1)$ normalized by the predicted probability of $o_1$; $b_m(o_1)$ is the probability that the measured signal connectivity of the target in area $m$ is $o_1$; $E_1(m)$ and $S_1(m)$ are the conditional probabilities of a state ending at time 1 and of a state starting at time 2, respectively, given $o_1$; and $a_{mn}$ is the probability of the target moving from area $m$ to area $n$.
After the parameters are initialized, the recursion of the parameters is completed by the following formula:
$$ \alpha_{t|t-1}(m,d) = S_{t-1}(m)\, p_m(d) + b_m^*(o_{t-1})\, \alpha_{t-1|t-2}(m,d+1), \qquad b_m^*(o_t) = \frac{b_m(o_t)}{\sum_{m,d} \alpha_{t|t-1}(m,d)\, b_m(o_t)}, \qquad E_t(m) = \alpha_{t|t-1}(m,1)\, b_m^*(o_t), \qquad S_t(m) = \sum_n E_t(n)\, a_{nm}, $$
where $\alpha_{t|t-1}(m,d)$ is the probability that the state at time $t$ is $s_m$ with duration $d$, given the observation sequence up to time $t-1$; $b_m^*(o_t)$ is the normalized observation probability in state $m$ for observation $o_t$; and $E_t(m)$ and $S_t(m)$ are the conditional probabilities of a state ending at time $t$ and of a state starting at time $t+1$, respectively, given $o_{1:t}$.
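A minimal sketch of the normalized forward pass, implementing the initialization and recursion above with 0-based time indexing; the discrete observation matrix B (one row per area, one column per possible connection-vector value) and the observation index sequence obs are assumed inputs.

```python
import numpy as np

def hsmm_forward(pi, A, P_dur, B, obs):
    """Normalized forward pass of the explicit-duration model.
    pi: (M,), A: (M, M) a_{mn}, P_dur: (M, D) p_m(d), B: (M, K) b_m(v_k), obs: (T,) indices."""
    M, D = P_dur.shape
    T = len(obs)
    alpha = np.zeros((T, M, D))          # alpha_{t|t-1}(m, d)
    b_star = np.zeros((T, M))            # normalized observation ratios b*_m(o_t)
    E = np.zeros((T, M))                 # state-ending probabilities
    S = np.zeros((T, M))                 # state-starting probabilities

    # Initialization (t = 0 here corresponds to t = 1 in the text).
    alpha[0] = pi[:, None] * P_dur
    b_star[0] = B[:, obs[0]] / np.sum(alpha[0] * B[:, obs[0]][:, None])
    E[0] = alpha[0, :, 0] * b_star[0]
    S[0] = E[0] @ A                      # S_1(m) = sum_n E_1(n) a_{nm}

    for t in range(1, T):
        # alpha_{t|t-1}(m, d) = S_{t-1}(m) p_m(d) + b*_m(o_{t-1}) alpha_{t-1|t-2}(m, d+1)
        shifted = np.zeros((M, D))
        shifted[:, :D - 1] = alpha[t - 1, :, 1:]
        alpha[t] = S[t - 1][:, None] * P_dur + b_star[t - 1][:, None] * shifted
        b_star[t] = B[:, obs[t]] / np.sum(alpha[t] * B[:, obs[t]][:, None])
        E[t] = alpha[t, :, 0] * b_star[t]
        S[t] = E[t] @ A
    return alpha, b_star, E, S
```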

3.2. Backward Algorithm

Similarly, the initial parameters of the backward algorithm are as follows
$$ \beta_T(m,d) = b_m^*(o_T), \qquad E_T^*(m) = \sum_d p_m(d)\, \beta_T(m,d), \qquad S_T^*(m) = \sum_n a_{mn}\, E_T^*(n), $$
where $\beta_T(m,d)$ denotes the backward variable, defined as the ratio of the smoothed probability $\alpha_{T|T}(m,d)$ to the predicted one $\alpha_{T|T-1}(m,d)$; $E_T^*(m)$ and $S_T^*(m)$ are the conditional probabilities of a state ending at time $T$ and of a state starting at time $T-1$, respectively, given $o_T$.
After the parameters are initialized, the recursion of the parameters is completed by the following formula
$$ \beta_t(m,d) = \begin{cases} S_{t+1}^*(m)\, b_m^*(o_t), & d = 1, \\ \beta_{t+1}(m,d-1)\, b_m^*(o_t), & d > 1, \end{cases} \qquad E_t^*(m) = \sum_d p_m(d)\, \beta_t(m,d), \qquad S_t^*(m) = \sum_n a_{mn}\, E_t^*(n), $$
where $\beta_t(m,d)$ denotes the backward variable, defined as the ratio of the smoothed probability $\alpha_{t|T}(m,d)$ to the predicted one $\alpha_{t|t-1}(m,d)$; $E_t^*(m)$ and $S_t^*(m)$ are the conditional probabilities of a state ending at time $t$ and of a state starting at time $t-1$, respectively, given $o_{t:T}$.
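A corresponding sketch of the backward pass, reusing the normalized ratios b* produced by the forward pass above.

```python
import numpy as np

def hsmm_backward(A, P_dur, b_star):
    """Normalized backward pass of the explicit-duration model.
    b_star: (T, M) normalized observation ratios from the forward pass."""
    T, M = b_star.shape
    D = P_dur.shape[1]
    beta = np.zeros((T, M, D))
    E_star = np.zeros((T, M))
    S_star = np.zeros((T, M))

    # Initialization at the last time step: beta_T(m, d) = b*_m(o_T) for every d.
    beta[T - 1] = b_star[T - 1][:, None]
    E_star[T - 1] = np.sum(P_dur * beta[T - 1], axis=1)
    S_star[T - 1] = A @ E_star[T - 1]           # S*_T(m) = sum_n a_{mn} E*_T(n)

    for t in range(T - 2, -1, -1):
        beta[t, :, 0] = S_star[t + 1] * b_star[t]                      # case d = 1
        beta[t, :, 1:] = beta[t + 1, :, :D - 1] * b_star[t][:, None]   # case d > 1
        E_star[t] = np.sum(P_dur * beta[t], axis=1)
        S_star[t] = A @ E_star[t]
    return beta, E_star, S_star
```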

3.3. Parameter Re-Estimation

When the model parameters are not given, we need to train the model from the given observations: the parameters are estimated initially and then re-estimated repeatedly until convergence. This training and updating process is called parameter re-estimation. The model parameters can be calculated by the following formulas:
$$ \tilde{a}_{mn} = \frac{1}{N_a} \sum_{t=2}^{T} E_{t-1}(m)\, a_{mn}\, E_t^*(n), \qquad \tilde{p}_m(d) = \frac{1}{N_p} \sum_{t=2}^{T} S_{t-1}(m)\, p_m(d)\, \beta_t(m,d), \qquad \tilde{\pi}_m = \frac{1}{N_\pi}\, \pi_m\, E_1^*(m), \qquad \tilde{b}_m(v_k) = \frac{1}{N_b} \sum_{t=1}^{T} \sum_{d=1}^{D} S_{t-1}(m)\, p_m(d)\, \beta_t(m,d) \sum_{i=0}^{d-1} \delta(o_{t+i}, v_k), $$
where
$$ \delta(o_{t+i}, v_k) = \begin{cases} 1, & o_{t+i} = v_k, \\ 0, & \text{otherwise}. \end{cases} $$
Normalizing the above equation, we can obtain:
$$ \hat{a}_{mn} = \frac{\tilde{a}_{mn}}{\sum_{i=1}^{M} \tilde{a}_{mi}}, \qquad \hat{p}_m(d) = \frac{\tilde{p}_m(d)}{\sum_{i=1}^{D} \tilde{p}_m(i)}, \qquad \hat{\pi}_m = \frac{\tilde{\pi}_m}{\sum_{i=1}^{M} \tilde{\pi}_i}, \qquad \hat{b}_m(v_k) = \frac{\tilde{b}_m(v_k)}{\sum_{i=1}^{K} \tilde{b}_m(v_i)}. $$
The parameters of the HsMM model can be obtained according to formula (7).
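As a small illustration of the normalization step above, the following sketch scales the unnormalized re-estimates (assumed to have been accumulated from the forward-backward quantities) into proper probability distributions.

```python
import numpy as np

def normalize_hsmm_estimates(A_tilde, P_tilde, pi_tilde, B_tilde):
    """Normalization of the re-estimated quantities: each row (or the whole vector)
    is scaled to sum to one, yielding valid transition, duration, initial and
    observation distributions."""
    A_hat = A_tilde / A_tilde.sum(axis=1, keepdims=True)
    P_hat = P_tilde / P_tilde.sum(axis=1, keepdims=True)
    pi_hat = pi_tilde / pi_tilde.sum()
    B_hat = B_tilde / B_tilde.sum(axis=1, keepdims=True)
    return A_hat, P_hat, pi_hat, B_hat
```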

3.4. Selection of Training Parameters

In order to improve the discrimination between classifications, appropriate parameters need to be selected for training, which include the number of classifications M and maximum dwell time D. For the selection of parameters, two indexes are used in this paper: Davies-Bouldin Index (DBI) and Calinski-Harabasz Index (CHI) [29].
DBI is a performance index representing the ratio of the degree of aggregation within clusters to the degree of separation between clusters, where the degree of aggregation within the $i$-th cluster can be expressed as:
$$ S_i = \frac{1}{\tilde{N}_i} \sum_{j=1}^{\tilde{N}_i} \left\| \tilde{p}_{i,j} - \check{c}_i \right\|. $$
The distance between two clusters can be expressed as:
$$ d_{ij} = \left\| \check{c}_i - \check{c}_j \right\|, $$
where $\check{c}_i$ denotes the clustering center of the $i$-th cluster.
Thus, DBI can be expressed as:
$$ D_B = \frac{1}{M} \sum_{i=1}^{M} R_i, $$
where $R_i = \max_{j \neq i} \dfrac{S_i + S_j}{d_{ij}}$. The objective is to minimize the DBI to achieve proper clustering.
CHI can be expressed as
$$ C_H = \frac{\sum_{i=1}^{M} \tilde{N}_i \left\| \check{c}_i - \check{c} \right\|^2}{M-1} \bigg/ \frac{\sum_{i=1}^{M} \sum_{j=1}^{\tilde{N}_i} \left\| \tilde{p}_{i,j} - \check{c}_i \right\|^2}{\tilde{N} - M}, $$
where $\check{c}$ represents the center of all the sample points and $\tilde{N}$ is the total number of sample points.
The purpose of finding the best clustering is to minimize the distance within each cluster and maximize the distance between different clusters. In the course of coarse localization, the type of distance can be defined from two aspects.
On the one hand, the distance is defined as the physical distance, that is, the Euclidean distance between two sample points. Therefore, DBI and CHI can be easily obtained by (10) and (11).
On the other hand, the distance can be defined from the measured features. It is called the connection distance, which is expressed as follows:
$$ \hat{d}_{j,k} = \left\| \hat{O}_j - \hat{O}_k \right\|, $$
where $\hat{O}_j$ represents the connection vector of the $j$-th sample point and $\|\cdot\|$ denotes the 2-norm. Therefore, $\tilde{p}_{i,j}$, $\check{c}_i$ and $\check{c}$ can be replaced by
$$ \tilde{p}_{i,j} = \hat{O}_{i,j}, \qquad \check{c}_i = \frac{1}{\tilde{N}_i} \sum_{j=1}^{\tilde{N}_i} \hat{O}_{i,j}, \qquad \check{c} = \frac{1}{\tilde{N}} \sum_{i=1}^{M} \sum_{j=1}^{\tilde{N}_i} \hat{O}_{i,j}. $$
Remark 1.
Since all elements in $\hat{O}_j$ are 0 or 1, $\hat{d}_{j,k}$ must be a non-negative integer. However, the cluster center vector and the average distance are obtained by averaging over sample points, so their elements may not be integers.
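The two indexes can be computed directly from the definitions above; the sketch below accepts either physical coordinates (Euclidean distance) or binary connection vectors (connection distance), since both cases reduce to the same second-norm expressions.

```python
import numpy as np

def dbi_chi(points, labels):
    """Davies-Bouldin and Calinski-Harabasz indexes from the definitions above.
    points: (N, F) coordinates or connection vectors; labels: (N,) cluster labels."""
    classes = np.unique(labels)
    M = len(classes)
    N_total = len(points)
    centers = np.array([points[labels == c].mean(axis=0) for c in classes])
    global_center = points.mean(axis=0)

    # Within-cluster scatter S_i and cluster sizes N~_i.
    S = np.array([np.mean(np.linalg.norm(points[labels == c] - centers[i], axis=1))
                  for i, c in enumerate(classes)])
    sizes = np.array([np.sum(labels == c) for c in classes])

    # Davies-Bouldin: average of the worst (S_i + S_j) / d_ij ratio over clusters.
    DB = 0.0
    for i in range(M):
        ratios = [(S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(M) if j != i]
        DB += max(ratios)
    DB /= M

    # Calinski-Harabasz: between-cluster dispersion over within-cluster dispersion.
    between = np.sum(sizes * np.linalg.norm(centers - global_center, axis=1) ** 2) / (M - 1)
    within = np.sum([np.sum(np.linalg.norm(points[labels == c] - centers[i], axis=1) ** 2)
                     for i, c in enumerate(classes)]) / (N_total - M)
    CH = between / within
    return DB, CH
```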

3.5. State Prediction

Two methods are usually used for prediction with the HsMM: the maximum a posteriori (MAP) algorithm [30] and the Viterbi algorithm [31]. The MAP algorithm calculates the most likely state at each moment, that is, the region where the target is most likely to be located at each moment. The Viterbi algorithm solves the hidden semi-Markov model prediction problem through dynamic programming, i.e., dynamic programming is used to find the path with the highest probability, where a path corresponds to a state sequence.
By state prediction, the region of the target can be obtained at every time step of the observation sequence. Once the region of the target is known, fine target localization can be achieved using compressed sensing.
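A minimal sketch of the MAP prediction, assuming the forward and backward arrays computed above: because the backward variable is defined as the ratio of the smoothed probability to the predicted one, their product summed over durations gives the posterior over areas at each time step.

```python
import numpy as np

def map_states(alpha, beta):
    """MAP area prediction: alpha * beta is the smoothed probability of (area, remaining
    duration); summing over durations and taking the argmax selects the most likely
    area at every time step."""
    smoothed = np.sum(alpha * beta, axis=2)           # (T, M) posterior over areas
    smoothed /= smoothed.sum(axis=1, keepdims=True)   # renormalize for numerical safety
    return np.argmax(smoothed, axis=1)
```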

4. Fine Localization Based on Compressed Sensing

When the data set is known, the location index of the target is sparse, because in the discrete-time series the position of the target is unique at any given time. In the ideal situation, the target is at one of the locations in the data set, and the corresponding location index is $\eta = [0, 0, 1, 0, \ldots, 0]^T$, whose sparsity is 1. The measurement vector at each time can be expressed as follows:
$$ y = \Phi \Psi \eta + \varepsilon, $$
where $\Phi$ denotes the observation matrix, $\Psi$ is the basis matrix, $\varepsilon$ represents the measurement noise and $y$ is the observation vector. In the process of fine localization, only the measured values of some anchor nodes are used, so the length of $y$ is usually much smaller than the number of anchor nodes. As a consequence, (16) is an underdetermined system of linear equations with infinitely many solutions. However, compressed sensing theory shows that if the signal $\eta$ is sparse and certain conditions are satisfied, $\eta$ can be reconstructed accurately.
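For illustration, the sparse index $\eta$ can be recovered with a greedy orthogonal matching pursuit, used here as a simple stand-in for the L1 minimization mentioned above; this is not necessarily the solver used by the authors.

```python
import numpy as np

def omp(A, y, k=1):
    """Greedy recovery of a k-sparse eta from y ~ A @ eta, with A = Phi @ Psi.
    At each step, pick the column most correlated with the residual, then refit
    by least squares on the selected support."""
    residual = y.astype(float)
    support = []
    eta = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    eta[support] = coeffs
    return eta

# The estimated position is the fingerprint location with the largest coefficient,
# e.g. position_index = np.argmax(np.abs(omp(Phi @ Psi, y, k=1))).
```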
In (16), $\Psi$ is known and can be obtained from the data set; it is expressed as
$$ \Psi = \begin{bmatrix} r_{1,1} & r_{1,2} & \cdots & r_{1,j} & \cdots & r_{1,\tilde{N}_i} \\ r_{2,1} & r_{2,2} & \cdots & r_{2,j} & \cdots & r_{2,\tilde{N}_i} \\ \vdots & \vdots & & \vdots & & \vdots \\ r_{l,1} & r_{l,2} & \cdots & r_{l,j} & \cdots & r_{l,\tilde{N}_i} \\ \vdots & \vdots & & \vdots & & \vdots \\ r_{N,1} & r_{N,2} & \cdots & r_{N,j} & \cdots & r_{N,\tilde{N}_i} \end{bmatrix}, $$
where $r_{l,j}$ represents the RSS measurement between the $j$-th sample point and the $l$-th anchor node. To use the data of only some anchor nodes as measurements, the data must be filtered by $\Phi$; consequently, $\Phi$ can also be called the filter matrix.
In order to ensure that the signal can be reconstructed, $\Phi$ needs to satisfy the Restricted Isometry Property (RIP). Therefore, the design of the matrix $\Phi$ is crucial in the localization process. There are usually two ways to construct $\Phi$. The first is to construct a random matrix, such as a Gaussian random matrix [32] or a Bernoulli random matrix [33]; such random matrices have been proven to satisfy the RIP condition with high probability in a statistical sense. If $\Phi$ is chosen as a Gaussian random matrix, it can be expressed as follows:
$$ \Phi = [\Omega]_{\bar{M} \times N}, $$
where each entry satisfies $\Omega \sim \mathcal{N}(0, 1/\bar{M})$.
The other method is to construct a deterministic matrix, which screens the available anchor nodes through the observation matrix to achieve fine localization. Define an $\bar{M} \times 1$ filter vector $\zeta$, which is a sequence composed of the indexes of the anchor nodes used for localization, and each element of the vector satisfies $\zeta_j \in [1, \tilde{N}_i]$. The matrix $\Phi$ can then be obtained as
$$ \Phi = [e_{\zeta_1}, e_{\zeta_2}, \ldots, e_{\zeta_j}, \ldots, e_{\zeta_{\bar{M}}}]^T, $$
where $e_j$ is a column vector of length $N$ whose $j$-th element is 1 and whose other elements are 0.
After the deterministic matrix is constructed, it also needs to satisfy the RIP. Intuitively, the RIP requires that the matrix $\Phi \Psi$ does not project two different $k$-sparse signals onto the same sampling set, which theoretically guarantees the accuracy and uniqueness of the reconstructed signal. In fact, verifying whether a matrix satisfies the RIP requires enumerating all combinations of its columns, which is an NP-hard problem and difficult to realize numerically. Baraniuk showed that the observation matrix $\Phi$ can also satisfy the conditions for compressed-sensing signal reconstruction when the row vectors $\phi_i^T$ of $\Phi$ are incoherent with the column vectors $\psi_j$ of the basis matrix $\Psi$ [34]. Therefore, the observation matrix can be optimized with the basis matrix fixed, and the easily computed coherence criterion is generally adopted as the evaluation criterion of the observation matrix.
Definition 1.
Assuming that $\Phi$ and $\Psi$ are the observation matrix and the basis matrix, respectively, the coherence coefficient of $\Phi$ and $\Psi$ is defined as
$$ \mu(\Phi, \Psi) = \max_{1 \le i \le \bar{M},\, 1 \le j \le \tilde{N}_i} \frac{\left| \langle \phi_i, \psi_j \rangle \right|}{\|\phi_i\| \cdot \|\psi_j\|}, $$
where $\phi_i$ is the vector composed of the elements of the $i$-th row of $\Phi$ and $\psi_j$ is the vector composed of the elements of the $j$-th column of $\Psi$.
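A direct sketch of this coherence coefficient, computing the largest normalized inner product between the rows of Phi and the columns of Psi:

```python
import numpy as np

def coherence(Phi, Psi):
    """Coherence coefficient mu(Phi, Psi) of Definition 1."""
    rows = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)   # phi_i / ||phi_i||
    cols = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)   # psi_j / ||psi_j||
    return np.max(np.abs(rows @ cols))
```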
According to the propagation characteristics of RSS, when the target is far away from an anchor node, the interference of noise with the measurement is more severe. Therefore, a received signal strength threshold $\zeta$ and a number of anchor nodes $\hat{n}$ can be set. Suppose that in the fine positioning stage the number of selected anchor nodes is $\check{n}$ and the vector formed by the indexes of the selected nodes is $\kappa$; the optimal anchor node selection problem can then be stated as follows:
$$ \min_{\kappa}\ \mu(\Phi, \Psi) \quad \text{s.t.} \quad \phi_i = e_{\kappa_i},\ 1 \le i \le \check{n}; \quad \check{r}_{\kappa_i} \ge \zeta,\ 1 \le i \le \check{n}; \quad \mathrm{length}(\kappa) = \hat{n}, $$
where $\check{r}_i$ denotes the signal strength received by the target from the $i$-th anchor node, $\kappa_i$ represents the $i$-th element of $\kappa$ and $\mathrm{length}(\cdot)$ denotes the length of a vector.
The threshold value $\zeta$ and the number of selected anchor nodes $\hat{n}$ determine the number of candidate node combinations, which in turn determines the computation time. Therefore, these two parameters usually require multiple experiments to obtain the best values.
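One possible way to solve the selection problem above is brute-force enumeration, sketched below; the exhaustive search over candidate combinations is an assumed search strategy, not a procedure stated in the paper. Because each row of the deterministic matrix is a unit basis vector, the coherence reduces to the largest absolute entry among the selected rows of the column-normalized fingerprint matrix.

```python
import itertools
import numpy as np

def select_anchors(Psi, r_meas, zeta, n_hat):
    """Exhaustive anchor selection: among anchors whose received strength is at least
    the threshold zeta, pick the n_hat-subset whose deterministic selection matrix
    is least coherent with the fingerprint matrix Psi."""
    N = Psi.shape[0]
    psi_cols = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)   # psi_j / ||psi_j||
    candidates = [i for i in range(N) if r_meas[i] >= zeta]       # RSS threshold constraint
    best_combo, best_mu = None, np.inf
    for combo in itertools.combinations(candidates, n_hat):
        # Rows of Phi are unit vectors e_{kappa_i}^T, so mu(Phi, Psi) is the largest
        # |entry| among the selected rows of the column-normalized Psi.
        mu = np.max(np.abs(psi_cols[list(combo), :]))
        if mu < best_mu:
            best_mu, best_combo = mu, combo
    return best_combo, best_mu
```

For example, select_anchors(Psi, r, zeta=-70.0, n_hat=8) would return the indexes of eight sufficiently strong anchors whose selection matrix has the smallest coherence with Psi; the runtime grows combinatorially with the size of the candidate set, which is why the threshold and the node count need to be tuned experimentally as noted above.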

5. Experimental Results and Analysis

5.1. Setting of the Experimental Environment

In order to verify the feasibility of the algorithm, an experiment was conducted in a 70 m × 25 m environment. Figure 1 shows the positions of the 100 anchor nodes in the experimental environment. Since the algorithm does not require the anchor node positions to be known, they are treated as unknown. The target moves around the environment to obtain 10,000 sample points. Given a threshold $\tau_{th}$, when the measured signal strength between the target and an anchor node is below $\tau_{th}$, the target is considered unable to receive measurements from that anchor node. In this experiment, the threshold $\tau_{th}$ is set to −40 dBm.

5.2. Selection of HsMM Parameters

In this experiment, the selection of the hidden semi-Markov model parameters is an important part of the algorithm system. In this section, the selected value of $M$ is 13 and the range of $D$ is $[1, 40]$. For each value of $D$, the corresponding DBI and CHI can be obtained according to formulas (10) and (11). The values of $D$ corresponding to the maximum CHI and the minimum DBI are shown in Table 1 and Table 2, respectively.
According to Table 1 and Table 2, it can be seen that for CHI and DBI based on the Euclidean distance, the values of D are respectively 37 and 40. From these indexes, it can be known that only when the upper limit of the dwell time is set to be relatively large can the differentiation between clusters and the closeness of elements within clusters be well reflected. In the case of CHI and DBI based on the connection distance, the values of D are 20 for both indexes. It can be concluded that the maximum dwell time set to 20 can meet the optimal classification requirement.

5.3. Results of the Coarse Positioning Algorithm

When the parameters $M$ and $D$ of the hidden semi-Markov model have been determined, the model needs to be trained using samples. In this experiment, a signal is considered not to be received properly when its strength is below −40 dBm.
The parameters in Table 1 and Table 2 are used to train the HsMM. Then, the classification of all the training samples can be calculated. In this section, the test samples are evaluated using the MAP method. To verify the performance of the coarse positioning algorithm, the following determination rule is used: the classification of a test sample is considered correct when it is consistent with the classification of its nearest training sample. The accuracy of state estimation obtained experimentally for the different distance types and models is shown in Table 3.
As seen from Table 3, the accuracy of the estimation based on the HMM model is 93.988 % , which is significantly lower than the estimation accuracy of the HsMM model. Therefore, it can be seen that the HsMM model has stronger applicability and reliability compared to the HMM model. In addition, there are small differences in the estimation accuracy of the HsMM model based on different distance types and indexes: the estimation accuracy based on the Euclidean distance/CHI is 96.579 % , the estimation accuracy based on Euclidean distance/DBI is the lowest at 96.192 % , and the highest estimation accuracy is based on the connection distance/CHI and DBI at 97.395 % .
Figure 2 represents the comparison of the real and estimated states of the HMM-based test samples. From the figure, it can be seen that the real states of the test samples show a brief jump under some time series, as shown in state 11 in the figure. However, the estimation using HMM has failed to identify them and classified them as state 3, exhibiting an obvious bias. Frequent switching of states may affect the accuracy to a certain extent.
Figure 3, Figure 4 and Figure 5 show the comparison of the real and estimated states of the HsMM-based test samples under different values of $D$. From the figures and tables, it can be seen that the optimal state estimation is achieved when $D$ is 20. From Figure 3 and Figure 5, it can be seen that there are state estimation errors at the initial moment, but the estimation accuracy of Figure 3 is slightly higher than that of Figure 5 in the subsequent state estimation. The sequences with state estimation errors usually occur at the times of state switching, during which a small number of sequences show some state estimation bias, i.e., they switch to the new state too early or too late. On the whole, however, during state switches the sample points are usually located at the boundaries of different classes, and their positioning accuracy does not deviate much even if they are assigned to classes on different sides of a boundary. In addition, the stability of the estimation is higher than that of the HMM method.

5.4. Results of the Fine Positioning Algorithm

This section describes the experimental results of the precise positioning algorithm. The experiment verifies the performance of the algorithm in terms of the number of different anchor nodes screened, the selection of different observation matrices, and the number of classifications, respectively. In this section, the number of region classifications for coarse positioning is set to 13 and 17, and then in the precise positioning stage, the performance is tested by constructing two screening matrices, deterministic matrix and Gaussian matrix, respectively, as shown in Figure 6.
Figure 6 shows the performance comparison for different numbers of classifications and different numbers of screened anchor nodes. From the figure, it can be seen that the performance is worst when the number of screened anchor nodes is 4, and the overall performance gradually improves as the number of screened anchor nodes increases. When the number of screened anchor nodes is 8–10, each algorithm reaches its optimal performance. On the whole, the BP neural network algorithm based on Levenberg-Marquardt (BPNN-LM) performs better than the BP neural network algorithm based on Bayesian regularization (BPNN-BR), but worse than the other algorithms. The random-anchor-nodes algorithm randomly selects anchor nodes for localization estimation after the coarse localization stage, and the experimental results show that its overall performance is worse than that of the proposed algorithm. As the number of anchor nodes increases further, the error grows steadily, because setting too many anchor nodes for screening leads to the selection of more anchor nodes with poor measurement performance, which increases the error. Overall, the method using the deterministic matrix is superior to the other methods, because the deterministic matrix is able to screen the anchor nodes with better measurement performance, whereas the Gaussian-matrix-based algorithm only compresses the dimensionality of the measurement through a linear combination without any real screening effect. In addition, because the Gaussian matrix is a random matrix, the results may fluctuate to a certain degree due to randomness in the calculation process.

6. Conclusions

In this paper, a compressive-sensing-based target positioning method under the hidden semi-Markov model is proposed. Firstly, the hidden semi-Markov model is introduced to complete coarse target positioning. Secondly, the Davies-Bouldin Index and the Calinski-Harabasz Index based on the Euclidean distance and the connection distance are proposed. Then, on the basis of coarse positioning, a compressive-sensing-based precise positioning method is proposed, together with two screening matrices, a Gaussian matrix and a deterministic matrix. Finally, the coarse-positioning performance of the HMM and of the HsMM is verified with experimental data, and the performance of the compressive sensing algorithm based on the two screening matrices is verified. The experiments demonstrate the effectiveness of the proposed algorithm.
The advantage of this solution is that it does not require analysis and modeling for different environments, which is of great significance for applications in airports, hospitals, warehouses and other scenarios whose environmental characteristics are difficult to analyze. In future research, on the premise of meeting the specified accuracy requirements, the deployment planning of anchor nodes should also be considered, thereby reducing the energy consumption of the entire network.

Author Contributions

Conceptualization, X.T.; methodology, X.T. and G.W.; software, X.T.; validation, X.T., G.W. and J.W.; writing—original draft preparation, X.T., G.W. and J.W.; writing—review and editing, X.T., G.W. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 61873169 and the Natural Science Foundation of Zhejiang Province under Grant LQ22F030011.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Xu, Z.; Chen, L.; Chen, C.; Guan, X. Joint clustering and routing design for reliable and efficient data collection in large-scale wireless sensor networks. IEEE Internet Things J. 2015, 3, 520–532.
2. Alarfaj, M.; Su, Z.; Liu, R.; Al-Humam, A.; Liu, H. Image-tag-based indoor localization using end-to-end learning. Int. J. Distrib. Sens. Netw. 2021, 17, 2371.
3. He, W.; Lu, F.; Chen, J.; Ruan, Y.; Lu, T.; Zhang, Y. A Kernel-Based Node Localization in Anisotropic Wireless Sensor Network. Sci. Program. 2021, 2021, 9944358.
4. Kim Geok, T.; Zar Aung, K.; Sandar Aung, M.; Thu Soe, M.; Abdaziz, A.; Pao Liew, C.; Hossain, F.; Tso, C.P.; Yong, W.H. Review of indoor positioning: Radio wave technology. Appl. Sci. 2020, 11, 279.
5. Cai, S.; Liao, W.; Luo, C.; Li, M.; Huang, X.; Li, P. CRIL: An efficient online adaptive indoor localization system. IEEE Trans. Veh. Technol. 2016, 66, 4148–4160.
6. Abu-Shaban, Z.; Zhou, X.; Abhayapala, T.D. A novel TOA-based mobile localization technique under mixed LOS/NLOS conditions for cellular networks. IEEE Trans. Veh. Technol. 2016, 65, 8841–8853.
7. Cheng, L.; Li, Y.; Xue, M.; Wang, Y. An indoor localization algorithm based on modified joint probabilistic data association for wireless sensor network. IEEE Trans. Ind. Inform. 2020, 17, 63–72.
8. Zhang, Y.; Duan, L. A phase-difference-of-arrival assisted ultra-wideband positioning method for elderly care. Measurement 2021, 170, 108689.
9. Poveda-García, M.; Gómez-Alcaraz, A.; Cañete-Rebenaque, D.; Martinez-Sala, A.S.; Gómez-Tornero, J.L. RSSI-based direction-of-departure estimation in Bluetooth low energy using an array of frequency-steered leaky-wave antennas. IEEE Access 2020, 8, 9380–9394.
10. Yang, C.; Shao, H.R. WiFi-based indoor positioning. IEEE Commun. Mag. 2015, 53, 150–157.
11. Elwischger, B.P.B.; Sauter, T. Efficient ambiguity resolution in wireless localization systems. IEEE Trans. Ind. Inform. 2017, 13, 888–897.
12. Wu, S.; Zhang, S.; Huang, D. A TOA-based localization algorithm with simultaneous NLOS mitigation and synchronization error elimination. IEEE Sens. Lett. 2019, 3, 1–4.
13. Tian, X.; Wei, G.; Wang, J.; Zhang, D. A localization and tracking approach in NLOS environment based on distance and angle probability model. Sensors 2019, 19, 4438.
14. Cheng, L.; Wang, Y.; Xue, M.; Bi, Y. An indoor robust localization algorithm based on data association technique. Sensors 2020, 20, 6598.
15. Wu, C.; Yang, Z.; Xiao, C. Automatic radio map adaptation for indoor localization using smartphones. IEEE Trans. Mob. Comput. 2017, 17, 517–528.
16. Li, Y.; Yan, K.; He, Z.; Li, Y.; Gao, Z.; Pei, L.; Chen, R.; El-Sheimy, N. Cost-effective localization using RSS from single wireless access point. IEEE Trans. Instrum. Meas. 2019, 69, 1860–1870.
17. Kulas, L. RSS-based DoA estimation using ESPAR antennas and interpolated radiation patterns. IEEE Antennas Wirel. Propag. Lett. 2017, 17, 25–28.
18. Tazawa, R.; Honma, N.; Miura, A.; Minamizawa, H. RSSI-based localization using wireless beacon with three-element array. IEICE Trans. Commun. 2018, 101, 400–408.
19. Kaltiokallio, O.; Hostettler, R.; Patwari, N. A novel Bayesian filter for RSS-based device-free localization and tracking. IEEE Trans. Mob. Comput. 2019, 20, 780–795.
20. Maddio, S.; Cidronali, A.; Manes, G. RSSI/DoA based positioning systems for wireless sensor network. In New Approach of Indoor and Outdoor Localization Systems; Elbahhar, F.B., Rivenq, A., Eds.; IntechOpen: Rijeka, Croatia, 2012.
21. Honma, N.; Tazawa, R.; Kikuchi, K.; Miura, A.; Sugawara, Y.; Minamizawa, H. Indoor-positioning using RSSI: DOD-based technique versus RSSI-ranging technique. In Proceedings of the Eighth International Conference on Indoor Positioning and Indoor Navigation, WIP171, Sapporo, Japan, 18–21 September 2017.
22. Gilmartinez, A.; Poveda-Garcia, M.; Canete-Rebenaque, D.; Gomez-Tornero, J.L. Frequency-scanned monopulse antenna for RSSI-based direction finding of UHF RFID tags. IEEE Antennas Wirel. Propag. Lett. 2022, 21, 158–162.
23. Patwari, N.; Ash, J.N.; Kyperountas, S.; Hero, A.O.; Moses, R.L.; Correal, N.S. Locating the nodes: Cooperative localization in wireless sensor networks. IEEE Signal Process. Mag. 2005, 22, 54–69.
24. Cao, B.; Wang, S.; Ge, S.; Liu, W. Improving positioning accuracy of UWB in complicated underground NLOS scenario using calibration, VBUKF, and WCA. IEEE Trans. Instrum. Meas. 2020, 70, 1–13.
25. Haeberlen, A.; Flannery, E.; Ladd, A.M.; Rudys, A.; Wallach, D.S.; Kavraki, L.E. Practical robust localization over large-scale 802.11 wireless networks. In Proceedings of the 10th Annual International Conference on Mobile Computing and Networking, Philadelphia, PA, USA, 26 September–1 October 2004; pp. 70–84.
26. Sun, S.; Wang, X.; Moran, B.; Rowe, W.S. A hidden semi-Markov model for indoor radio source localization using received signal strength. Signal Process. 2020, 166, 107230.
27. Yu, S.Z.; Kobayashi, H. Practical implementation of an efficient forward-backward algorithm for an explicit-duration hidden Markov model. IEEE Trans. Signal Process. 2006, 54, 1947–1951.
28. Yu, S.Z. Hidden semi-Markov models. Artif. Intell. 2010, 174, 215–243.
29. Maulik, U.; Bandyopadhyay, S. Performance evaluation of some clustering algorithms and validity indices. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1650–1654.
30. Nagata, T.; Mori, H.; Nose, T. Dimensional paralinguistic information control based on multiple-regression HSMM for spontaneous dialogue speech synthesis with robust parameter estimation. Speech Commun. 2017, 88, 137–148.
31. Wan, Y.; Si, Y.W. A hidden semi-Markov model for chart pattern matching in financial time series. Soft Comput. 2018, 22, 6525–6544.
32. Juan, Z.-Y.; Yu, Z.-B.; Ning, C.-S. The design of adaptive measurement matrix in compressed sensing. Signal Process. 2012, 28, 1635–1641.
33. Lu, W.; Li, W.; Kpalma, K.; Ronsin, J. Compressed sensing performance of random Bernoulli matrices with high compression ratio. IEEE Signal Process. Lett. 2014, 22, 1074–1078.
34. Baraniuk, R.G. Compressive sensing (lecture notes). IEEE Signal Process. Mag. 2007, 24, 118–121.
Figure 1. Coordinate positions of 100 anchor nodes in the experimental environment.
Figure 2. Comparison diagram of real state and estimated state of test samples based on HMM.
Figure 3. Comparison diagram of real state and estimated state of test samples based on HsMM when parameter D = 20.
Figure 4. Comparison diagram of real state and estimated state of test samples based on HsMM when parameter D = 37.
Figure 5. Comparison diagram of real state and estimated state of test samples based on HsMM when parameter D = 40.
Figure 6. Performance comparison for different numbers of classifications and different numbers of screened anchor nodes.
Table 1. The optimal parameter D in CHI.
Type of Distance | D | Maximum of CHI
Euclidean distance | 37 | 4.712 × 10^4
Connection distance | 20 | 1.311 × 10^4
Table 2. The optimal parameter D in DBI.
Type of Distance | D | Minimum of DBI
Euclidean distance | 40 | 2.5
Connection distance | 20 | 1.97
Table 3. Accuracy of state estimation in different models and index types.
Types of Models and Indexes | Classification Accuracy
HMM | 93.988%
HsMM based on Euclidean distance/CHI | 96.579%
HsMM based on connection distance/CHI | 97.395%
HsMM based on Euclidean distance/DBI | 96.192%
HsMM based on connection distance/DBI | 97.395%

