1 Introduction

Source localization based on location-bearing information gathered at spatially separated sensors [18] plays a pivotal role in many science and engineering areas, such as cellular networks [15], the Internet of Things [31], and wireless sensor networks [24]. Time-of-arrival (TOA), defined as the one-way travel time of the signal between the emitting source and a sensor, is perhaps the most popular measurement model; it underpins positioning in numerous communication technologies, including ZigBee [5], radio-frequency identification (RFID) [3], ultra-wideband (UWB) [16], and ultrasound [9], and is the main focus herein.

A challenging issue in this context is that, owing to the obstruction of signal transmissions between the source and sensors, non-line-of-sight (NLOS) propagation is generally unavoidable in real-world scenarios (e.g., urban canyons and indoor locales). The NLOS error in a contaminated TOA appears as a positive bias caused by the additional propagation delay, indicating that special attention must be paid to alleviating its adverse impact on positioning accuracy. While studies of TOA-based localization under NLOS conditions date back more than a decade and a half [7], NLOS mitigation schemes subject to relatively few specific assumptions about the errors have only lately been investigated in the literature [4, 7, 14, 19–21, 23, 25–27, 32].

The first branch of these methods takes a so-called estimation-based strategy to alleviate the adverse impact of NLOS conditions on positioning accuracy. For instance, as the primary contribution of [23], the authors propose to replace the multiple NLOS bias errors with a single balancing parameter to be estimated, based on which the effects of NLOS propagation are partially mitigated. Convex relaxation techniques [2], including second-order cone programming (SOCP) and semidefinite programming (SDP), are then employed to tackle the non-convex formulation. The tactic of jointly estimating the source location and a balancing parameter is later reused in [19], except that the solving process is organized in a two-step weighted least squares (LS) manner, where the unconstrained minimization problem in each step, by construction, falls into the computationally simpler generalized trust region subproblem (GTRS) framework [1] and can thus be solved exactly. Apart from these, in [21], a set of bias-like terms is treated as optimization variables in addition to those for the source position. The authors then discard the constraints between these new variables and the NLOS errors and put forward a distinct SDP estimator to eliminate the non-convexity of the established nonlinear LS problem.

Instead of precisely setting the NLOS-error-related optimization variables, one may model the uncertainties robustly using a less sensitive worst-case criterion [4, 20, 23, 32], i.e., seeking the parameter values that perform best under the worst admissible realization of the uncertainties [2]. The essence of this scheme is to exploit predetermined upper bounds on the NLOS errors, which are more readily ascertainable than their distribution/statistics or the path status [23]. Specifically, a robust SDP method built upon the S-procedure [2] is developed in [23], whereas approximations that do not rely on the S-procedure are made in [32] and [20], boiling down to a robust SOCP method and a bisection-based robust GTRS solution, respectively.

Seeking complementarity between the two aforementioned categories of methodologies, a more recent work [4] regards the NLOS error in a TOA measurement as the superposition of a balancing parameter and a new variable to which robustness is conferred. Bearing a close resemblance to [23], the S-procedure is followed to eliminate the maximization part of the cumbersome minimax problem, whereupon semidefinite relaxation is conducted to yield a tractable convex program. To boost the resilience of TOA-based localization systems, there are also frequently chosen alternatives to the worst-case formulation that are less heavily dependent on prior knowledge of NLOS information, e.g., the recursive Bayesian approaches with robust statistics in [14], the determination of probability-density-function model parameters for non-Gaussian distributions in [26, 27], and the robust multidimensional similarity analysis (RMDSA) in [25], which borrows ideas from outlier-resistant low-rank matrix completion, to name just a few.

Robust statistics-based schemes usually dispense with the requirement for a priori noise/error information and therefore fit well with practical localization applications. This stands in contrast to the majority of existing work, e.g., [4, 7, 20, 21, 23, 32], which relies more or less on prior knowledge of the noise variances/error bounds, in addition to the TOA-based range measurements and sensor positions. Motivated by its \(\ell _0\)-like insensitivity to grossly biased samples and its widespread use in non-Gaussian signal processing, including robust low-rank tensor recovery [29] and robust radar target localization [10], the correntropy measure [11], essentially a Welsch M-estimator-based cost function, is herein utilized to achieve a higher degree of resistance to NLOS errors. The half-quadratic (HQ) theory [13] is then exploited to convert the resulting maximum correntropy criterion (MCC) estimation problem into a sequence of quadratic optimization tasks [2], after which the computationally attractive GTRS technique becomes applicable. It is noteworthy that our MCC-induced robustification is imposed on the squared-range (SR) [1] rather than the range measurement model. This, as we show in Sect. 3, makes the development of the HQ algorithm more tractable. Furthermore, our localization approach does not require any prior information other than the TOA-based range measurements and sensor positions.

The remainder of this paper is organized as follows. Section 2 justifies our use of the noise/error mixture model and the correntropy measure, and formulates the robust estimation problem. Section 3 details the derivation process and important properties of the proposed algorithm. Section 4 presents numerical results. Finally, conclusions are drawn in Sect. 5.

2 Preliminaries and Problem Formulation

Consider \(L \ge d+1\) sensors and a single source in the d-dimensional space (\(d = 2\) or 3). Denoting the known position of the ith sensor and the unknown source location by \(\varvec{x}_i \in \mathbb {R}^d\) (for \(i = 1,\ldots ,L\)) and \(\varvec{x} \in \mathbb {R}^d\), respectively, the TOA-based range measurement between the ith sensor and the source is modeled as \(r_i = {\Vert \varvec{x} - \varvec{x}_i\Vert }_2 + e_i\), where \({\Vert \cdot \Vert }_2\) stands for the \(\ell _2\)-norm and \(e_i\) is the error in the ranging observation \(r_i\) under possible NLOS propagation conditions, following a mixture model of Gaussian and non-Gaussian distributions. In this mixture model, the relatively low-level Gaussian term represents the measurement noise due to thermal disturbance at the sensor, whereas the non-Gaussian counterpart stands for the NLOS bias error in the corresponding source-sensor path. Notably, similar noise/error modeling schemes have been widely reported in the literature on TOA-based source localization under NLOS propagation [7]. As recent efforts tend to perform error mitigation using as little NLOS information as possible, it has become increasingly common to keep the NLOS bias error term general (i.e., not to assume any specific non-Gaussian distribution) in the derivation of robust location estimators [4, 19–21, 23, 25, 32]. Depending on the distributions applied to generate the NLOS errors in simulations, these studies can be classified into exponential [21] and uniform [4, 19–21, 23, 25, 32] ones.

In this paper, we adopt the aforesaid robust localization setting, in which no prior knowledge about the statistics of the NLOS bias errors or the error status is available to the algorithm in the problem-solving stage. By convention, the only information we assume is that the non-Gaussian error term in \(e_i\) (in the NLOS scenarios) is positive and possesses a bias-like feature, namely, its magnitude is much larger than that of the Gaussian noise. We simply follow the more frequently used uniform distribution to produce the non-Gaussian disturbance in \(e_i\) in our computer simulations. Note that there are also other noise/error modeling strategies among the related work discussed in Sect. 1, such as the two-component Gaussian mixture [14, 26, 27] and the Gaussian–Laplace mixture [24]. Since both the Gaussian and Laplace distributions have support over the entire real line, they are normally used to approximate impulsive noise rather than positively biased NLOS errors.
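For concreteness, the following minimal Python sketch (not part of the original algorithm development) generates range measurements under the above mixture model; the function name and interface are ours, and the parameters mirror the simulation setting of Sect. 4.

```python
import numpy as np

def simulate_toa_ranges(x, anchors, sigma_g, b, n_nlos, rng=None):
    """Generate r_i = ||x - x_i||_2 + e_i, where e_i mixes zero-mean Gaussian
    noise (std sigma_g) with a positive uniform NLOS bias on [0, b] for
    n_nlos randomly chosen source-sensor paths."""
    rng = np.random.default_rng() if rng is None else rng
    L = anchors.shape[0]
    d_true = np.linalg.norm(anchors - x, axis=1)      # true source-sensor distances
    e = rng.normal(0.0, sigma_g, size=L)              # low-level measurement noise
    nlos = rng.choice(L, size=n_nlos, replace=False)  # indices of NLOS paths
    e[nlos] += rng.uniform(0.0, b, size=n_nlos)       # bias-like positive NLOS errors
    return d_true + e, nlos
```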

Fig. 1 Comparison of different loss functions: \(1-\kappa _{\sigma }(z)\), \(|z|\), and \(z^2/2\)

A local, nonlinear, and generalized similarity measure between two random variables X and Y, known as the correntropy [11], is defined as \(V_{\sigma }(X,Y) = \mathbf {E} \left[ \kappa _{\sigma } (X-Y)\right] \), where \(\mathbf {E}\left[ \cdot \right] \) denotes the expectation operator and \(\kappa _{\sigma }(\cdot )\) is a kernel function with size \(\sigma \) satisfying Mercer's theorem [22]. In this paper, we fix \(\kappa _{\sigma }(x)\) as the Gaussian kernel, i.e., \(\kappa _{\sigma }(x) = \exp \left( -x^2/(2\sigma ^2)\right) \). In practical scenarios where only a finite amount of data \(\left\{ X_i, Y_i \right\} _{i=1}^N\) is available, the sample estimator of correntropy, \(\hat{V}_{N,\sigma } (X,Y) = \frac{1}{N} \sum _{i=1}^{N} \kappa _{\sigma } (X_i - Y_i)\), is used instead. The MCC, which aims at maximizing the sample correntropy function or, equivalently, minimizing a decreasing function of it that is closely associated with the Welsch M-estimator, has found many applications in non-Gaussian signal processing [10, 29]. Equipped with a redescending influence function, the Welsch M-estimator is widely accepted to outperform not only the \(\ell _2\)- and \(\ell _1\)-minimization criteria but also the Huber and Cauchy M-estimators in terms of outlier robustness [29], while having the advantage of being smoother than Tukey's biweight M-estimator [30]. For comparative purposes, Fig. 1 plots \(|z|\), \(z^2/2\), and \(1-\kappa _{\sigma }(z)\) for different values of \(\sigma \). We observe that \(1-\kappa _{\sigma }(z)\), essentially the Welsch loss, can closely approximate the \(\ell _2\) loss for small |z| and is hence statistically quite efficient with respect to (w.r.t.) low-level Gaussian disturbances. Conversely, as the magnitude of z increases, it eventually saturates, behaves like the cardinality (\(\ell _0\)-like) function, and exhibits insensitivity to outliers. Moreover, all of these properties are controlled by the kernel size \(\sigma \). These characteristics justify our use of the correntropy measure for handling the bias-like NLOS errors.
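To make these definitions concrete, a short Python sketch of the Gaussian kernel, the sample correntropy estimator, and the saturating behavior of the Welsch loss follows; all names are illustrative.

```python
import numpy as np

def gaussian_kernel(z, sigma):
    """Gaussian kernel kappa_sigma(z) = exp(-z**2 / (2 * sigma**2))."""
    return np.exp(-z**2 / (2.0 * sigma**2))

def sample_correntropy(x, y, sigma):
    """Sample estimator V_hat = (1/N) * sum_i kappa_sigma(x_i - y_i)."""
    return np.mean(gaussian_kernel(np.asarray(x) - np.asarray(y), sigma))

# The Welsch loss 1 - kappa_sigma(z) behaves like z**2/(2 sigma**2) for small
# |z| (l2-like efficiency) and saturates at 1 for large |z| (outlier insensitivity):
z = np.array([0.1, 1.0, 10.0, 100.0])
print(1.0 - gaussian_kernel(z, sigma=1.0))  # -> approx [0.005, 0.39, 1.0, 1.0]
```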

Based on the MCC, a maximization problem is formulated as

$$\begin{aligned} {} \max _{\varvec{x}} \sum _{i=1}^{L} \kappa _{\sigma }\left( r_i^2 - {\Vert \varvec{x} - \varvec{x}_i\Vert }_2^2\right) . \end{aligned}$$
(1)

It should be noted that the fitting errors in (1) are expressed using the SR model [1] instead of the range-based one, i.e., \(X_i - Y_i = r_i^2 - {\Vert \varvec{x} - \varvec{x}_i\Vert }_2^2\). As illustrated in what follows, this treatment is crucial for obtaining a computationally simple \(\varvec{x}\)-update step in solving (1).

3 Algorithm Development

The MCC-based optimization problem (1) is in general difficult to solve because of its severe non-convexity. In this section, we tackle it via an HQ reformulation and a bisection-based GTRS solution.

According to the HQ theory [13], there exists a convex conjugate function \(\zeta : \mathbb {R} \rightarrow \mathbb {R}\) of \(\kappa _{\sigma }(x)\) so that \(\kappa _{\sigma }(x) = \max _{p} \left( p \frac{x^2}{\sigma ^2} - \zeta (p) \right) \), and for any fixed x, the maximum is attained at \(p = -\kappa _{\sigma }(x)\).

By employing the HQ technique, we reformulate (1) as

$$\begin{aligned} {} \max _{\check{\varvec{x}}} \mathcal {A}_{\sigma }(\varvec{x},\varvec{p}) := \sum _{i=1}^{L} \left[ p_i \frac{{\left( r_i^2 - {\Vert \varvec{x} - \varvec{x}_i\Vert }_2^2\right) }^2}{\sigma ^2} - \zeta (p_i) \right] , \end{aligned}$$
(2)

where \(\check{\varvec{x}} = \left[ \varvec{x}^T, \varvec{p}^T \right] ^T \in \mathbb {R}^{d+L}\) and \(\varvec{p} = \left[ p_1,p_2,\ldots ,p_L \right] ^T \in \mathbb {R}^L\) is a vector containing the auxiliary variables. This can also be interpreted as introducing an augmented cost function \(\mathcal {A}_{\sigma }\) in the enlarged parameter space \(\left\{ \varvec{x}, \varvec{p} \right\} \). A local maximizer of (2) is then calculated using the following alternating maximization (AM) procedure:

$$\begin{aligned} \varvec{p}_{\left( k+1\right) }&= \arg \max _{\varvec{p}} \mathcal {A}_{\sigma }\left( \varvec{x}_{\left( k\right) },\varvec{p}\right) \end{aligned}$$
(3a)
$$\begin{aligned} \varvec{x}_{\left( k+1\right) }&= \arg \max _{\varvec{x}} \mathcal {A}_{\sigma }\left( \varvec{x},\varvec{p}_{\left( k+1\right) }\right) \end{aligned}$$
(3b)

where the subscript \((\cdot )_{\left( k \right) }\) denotes the iteration index.

From the properties of the convex conjugate function and simple observations, we can derive that the solution of subproblem (3a) is

$$\begin{aligned} {} \left[ \varvec{p}_{\left( k+1\right) }\right] _{i} = -\exp \left( -\frac{{\left( r_i^2 - {\Vert \varvec{x}_{\left( k\right) } - \varvec{x}_i\Vert }_2^2\right) }^2}{2 \sigma ^2}\right) , \end{aligned}$$
(4)

where \(\left[ \cdot \right] _i \in \mathbb {R}\) represents the ith element of a vector. After ignoring the constant terms independent of the optimization variable \(\varvec{x}\) and rewriting the problem in minimization form, subproblem (3b) amounts to the weighted SR-LS estimation problem [1]

$$\begin{aligned} \min _{\varvec{x}} \sum _{i=1}^{L} \left\{ -\left[ \varvec{p}_{\left( k+1\right) }\right] _{i} {\left( {\Vert \varvec{x} - \varvec{x}_i\Vert }_2^2 - r_i^2 \right) }^2\right\} \end{aligned}$$

which can actually be transformed into a GTRS w.r.t. \(\varvec{y} = \left[ \varvec{x}^T, \alpha \right] ^T \in \mathbb {R}^{d+1}\), where the auxiliary variable \(\alpha = {\Vert \varvec{x}\Vert }_2^2\) is enforced by the quadratic constraint below, viz.

$$\begin{aligned} {} \min _{\varvec{y}}~ {\left\| \varvec{W} \left( \varvec{A} \varvec{y} - \varvec{b} \right) \right\| }_2^2,~~s.t. ~~ \varvec{y}^T \varvec{D} \varvec{y} + 2 \varvec{f}^T \varvec{y} = 0, \end{aligned}$$
(5)

where \(\varvec{W} = diag \left( \varvec{w} \right) \) is a diagonal matrix with the elements of vector \(\varvec{w}\) on its main diagonal, \(\varvec{w} = \left[ \sqrt{-\left[ \varvec{p}_{\left( k+1\right) }\right] _1}, \sqrt{-\left[ \varvec{p}_{\left( k+1\right) }\right] _2},\ldots , \sqrt{-\left[ \varvec{p}_{\left( k+1\right) }\right] _L} \right] ^T \in \mathbb {R}^L\),

$$\begin{aligned} \varvec{A}&= \begin{bmatrix} -2 \varvec{x}_1^T & 1 \\ \vdots & \vdots \\ -2 \varvec{x}_L^T & 1 \end{bmatrix},~\varvec{b} = \begin{bmatrix} r_1^2 - \left\| \varvec{x}_1 \right\| _2^2 \\ \vdots \\ r_L^2 - \left\| \varvec{x}_L \right\| _2^2 \end{bmatrix},\\ \varvec{D}&= \begin{bmatrix} \varvec{I}_d & \varvec{0}_{d} \\ \varvec{0}_{d}^T & 0 \end{bmatrix},~\varvec{f} = \begin{bmatrix} \varvec{0}_{d} \\ -1/2 \end{bmatrix}, \end{aligned}$$

\(\varvec{0}_d \in \mathbb {R}^d\) denotes an all-zero vector of length d, and \(\varvec{I}_d \in \mathbb {R}^{d \times d}\) is the \(d \times d\) identity matrix. Interestingly, the GTRS, which minimizes a quadratic function subject to a single quadratic constraint, albeit usually non-convex, possesses necessary and sufficient optimality conditions from which effective algorithms can be derived [1]. To be specific, the exact solution of (5) is given by \(\hat{\varvec{y}}(\chi ) = \left( \varvec{A}^T \varvec{W}^T \varvec{W} \varvec{A} + \chi \varvec{D} \right) ^{-1} \left( \varvec{A}^T \varvec{W}^T \varvec{W} \varvec{b} - \chi \varvec{f} \right) \), where \(\chi \) is the unique solution of \(\psi (\chi ) = \hat{\varvec{y}}(\chi )^T \varvec{D} \hat{\varvec{y}}(\chi ) + 2 \varvec{f}^T \hat{\varvec{y}}(\chi ) = 0\) for \(\chi \in I = \left( -\frac{1}{\chi _1 \left( \varvec{D}, \varvec{A}^T \varvec{W}^T \varvec{W} \varvec{A} \right) }, \infty \right) \), and \(\chi _1 \left( \varvec{U},\varvec{V} \right) \) denotes the largest eigenvalue of \(\varvec{V}^{-1/2} \varvec{U} \varvec{V}^{-1/2}\) for a positive definite matrix \(\varvec{V}\) and a symmetric matrix \(\varvec{U}\). Since \(\psi \left( \chi \right) \) is strictly decreasing on I (Theorem 5.2 in [12]), the optimal \(\chi \) can be found using a simple bisection method.
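The following numpy sketch illustrates the bisection-based GTRS solution just described. It assumes \(\varvec{A}^T \varvec{W}^T \varvec{W} \varvec{A}\) is positive definite; the function name, the bracket-growing step, and the numerical offsets are our own choices rather than part of [1, 12].

```python
import numpy as np

def solve_gtrs(A, b, w, d, n_bisect=30):
    """Bisection-based solver for GTRS (5):
        min_y ||W (A y - b)||_2^2  s.t.  y' D y + 2 f' y = 0,
    with W = diag(w), D = blkdiag(I_d, 0), and f = [0_d; -1/2]."""
    D = np.zeros((d + 1, d + 1))
    D[:d, :d] = np.eye(d)
    f = np.zeros(d + 1)
    f[-1] = -0.5
    P = A.T @ (w[:, None]**2 * A)   # A' W' W A (assumed positive definite)
    q = A.T @ (w**2 * b)            # A' W' W b

    def y_hat(chi):                 # candidate solution for a given multiplier
        return np.linalg.solve(P + chi * D, q - chi * f)

    def psi(chi):                   # constraint function, strictly decreasing on I
        y = y_hat(chi)
        return y @ D @ y + 2.0 * f @ y

    # Left end of I = (-1 / chi_1(D, P), inf): chi_1 is the largest eigenvalue
    # of P^{-1/2} D P^{-1/2}, equivalently of L^{-1} D L^{-T} with P = L L'.
    Linv = np.linalg.inv(np.linalg.cholesky(P))
    lam1 = np.max(np.linalg.eigvalsh(Linv @ D @ Linv.T))
    chi_lo = -1.0 / lam1 + 1e-8
    chi_hi = 1.0
    while psi(chi_hi) > 0.0:        # grow the bracket until psi changes sign
        chi_hi *= 2.0
    for _ in range(n_bisect):       # bisection on the monotone psi
        chi_mid = 0.5 * (chi_lo + chi_hi)
        if psi(chi_mid) > 0.0:
            chi_lo = chi_mid
        else:
            chi_hi = chi_mid
    return y_hat(0.5 * (chi_lo + chi_hi))
```

The first d entries of the returned vector are the position estimate \(\varvec{x}\); the last entry is the auxiliary variable \(\alpha \).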

So far, both subproblems in the AM procedure have been addressed. We provide here a short remark on the convergence of our algorithm, termed SR-MCC following the naming conventions in [1, 19, 20]. Analogous to Proposition 2 in [28], it can easily be deduced from (3a), (3b), and the definition of the convex conjugate function that \(\mathcal {A}_{\sigma }(\varvec{x},\varvec{p})\) does not decrease at either AM step. Therefore, the sequence \(\left\{ \mathcal {A}_{\sigma }\left( \varvec{x}_{\left( k\right) },\varvec{p}_{\left( k\right) }\right) \right\} _{k=1,2,\ldots }\) generated by SR-MCC is non-decreasing. Based on the properties presented in [11], one can further verify that \(\mathcal {A}_{\sigma }\left( \varvec{x}_{\left( k\right) },\varvec{p}_{\left( k\right) }\right) \) is bounded above. Convergence of the sequence to a limit point is thus assured.

The robustness of the MCC hinges, to a great extent, on the kernel size \(\sigma \). Specifically, a relatively small \(\sigma \) assigns much smaller weights (i.e., through the auxiliary variables \(p_i\)) to the outliers during the HQ iterations and hence achieves robustness against them. To ensure that the kernel size always lies in the neighborhood of the best values [11], we follow [10, 11] and adaptively select \(\sigma \) at each HQ iteration based on Silverman's heuristic [11, 17], namely

$$\begin{aligned} \sigma _{\left( k+1\right) } = 1.06 \times \min \left\{ {\sigma _E}_{{\left( k+1\right) }}, R_{{\left( k+1\right) }}/1.34 \right\} \times L^{-1/5}, \end{aligned}$$
(6)

where \({\sigma _E}_{{\left( k+1\right) }}\) is the standard deviation of the SR fitting errors \(r_i^2 - {\Vert \varvec{x}_{\left( k+1\right) } - \varvec{x}_i\Vert }_2^2\) (over \(i = 1,\ldots ,L\)) and \(R_{{\left( k+1\right) }}\) is their interquartile range [11].
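A one-function numpy sketch of rule (6), computed from the current SR fitting errors, is given below; the guard against a vanishing kernel size is our practical caveat rather than part of [11, 17].

```python
import numpy as np

def silverman_kernel_size(residuals):
    """Adaptive kernel size (6): sigma = 1.06 * min(std, IQR/1.34) * L**(-1/5),
    computed from the SR fitting errors r_i**2 - ||x_k - x_i||_2**2."""
    e = np.asarray(residuals)
    iqr = np.subtract(*np.percentile(e, [75, 25]))  # interquartile range R
    # In practice, guard against sigma -> 0 when the residuals are nearly identical.
    return 1.06 * min(e.std(), iqr / 1.34) * e.size ** (-0.2)
```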

Algorithm 1 The proposed SR-MCC localization method

The termination criteria for the iterative algorithm SR-MCC are set as follows. The optimization variables \(\varvec{p}\) and \(\varvec{x}\) are iteratively updated until \(k = N_{\max }\) or \(\left\| \varvec{x}_{\left( k+1 \right) } - \varvec{x}_{\left( k \right) }\right\| _2 < \gamma \) is reached, where \(N_{\max } \ge 1\) and \(\gamma > 0\) are the predefined maximum number of iterations for the loop and tolerance parameter, respectively. For a clearer view, we summarize the whole procedure of SR-MCC in Algorithm 1.
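Putting the pieces together, the sketch below mirrors the structure of Algorithm 1 as described in the text, reusing solve_gtrs and silverman_kernel_size from the earlier sketches; since the excerpt does not specify an initial point, the anchor centroid used here is our placeholder. The defaults match the settings of Sect. 4 (\(N_{max } = 10\), \(\gamma = 10^{-5}\), \(K = 30\)).

```python
import numpy as np

def sr_mcc(anchors, r, n_max=10, gamma=1e-5, n_bisect=30):
    """Sketch of SR-MCC: alternate the p-update (4) and the GTRS-based
    x-update (5), adapting the kernel size via (6) until convergence."""
    L, d = anchors.shape
    A = np.hstack([-2.0 * anchors, np.ones((L, 1))])  # matrix A in (5)
    b = r**2 - np.sum(anchors**2, axis=1)             # vector b in (5)
    x = anchors.mean(axis=0)                          # placeholder initial guess
    res = r**2 - np.sum((x - anchors)**2, axis=1)     # SR fitting errors
    sigma = silverman_kernel_size(res)
    for _ in range(n_max):
        p = -np.exp(-res**2 / (2.0 * sigma**2))       # p-update (4)
        w = np.sqrt(-p)                               # weights in W = diag(w)
        y = solve_gtrs(A, b, w, d, n_bisect)          # x-update via GTRS (5)
        x_new = y[:d]
        if np.linalg.norm(x_new - x) < gamma:         # stopping rule
            x = x_new
            break
        x = x_new
        res = r**2 - np.sum((x - anchors)**2, axis=1)
        sigma = silverman_kernel_size(res)            # kernel-size update (6)
    return x
```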

Table 1 Complexity of considered NLOS mitigation algorithms

It is not hard to see that the computational cost of the operations in (3a) is negligible compared with that in (3b), in which the GTRS with a complexity of \(\mathcal {O}(KL)\) [20] is involved. Here, K is the number of steps taken by the bisection search. The dominant complexity of our SR-MCC algorithm is thus \(\mathcal {O}(N_{HQ }KL)\), where \(N_{HQ }\) denotes the number of HQ iterations. In Table 1, the computational complexity of SR-MCC is compared with that of several state-of-the-art approaches for TOA-based localization with NLOS mitigation, where \(N_{ADMM }\) is the iteration number of the alternating direction method of multipliers in [25]. As our empirical results show, the proposed SR-MCC algorithm already exhibits decent performance with small values of \(N_{HQ }\) and K and, hence, is fairly computationally simple. Comparison results in terms of average run-time are also provided in the next section for further confirmation.

Fig. 2 RMSE versus \(\sigma _{G }^2\) and b in the LOS and different NLOS scenarios, respectively. a \(L_{NLOS } = 0\). b \(\sigma _{G }^2 = 0.1, L_{NLOS } = 2\). c \(\sigma _{G }^2 = 0.1, L_{NLOS } = 5\). d \(\sigma _{G }^2 = 0.1, L_{NLOS } = 8\)

4 Numerical Results

This section presents numerical investigations using both synthetic and real experimental data. In addition to SR-MCC, the state-of-the-art algorithms listed in Table 1 are included for comparison. We summarize the associated methods in Table 2, detailing the a priori information required in their implementations. All convex programs are realized using the CVX package [8]. Their infeasible runs are simply discarded and do not count toward the totals of Monte Carlo (MC) trials [19]. We set the stopping criteria of SR-MCC as \(\gamma = 10^{-5}\), \(N_{max } = 10\), and \(K=30\). The algorithmic parameters of the existing methods are kept unchanged from their respective works. All computer simulations are conducted on a Lenovo laptop with 16 GB of memory and an Intel i7-10710U processor.

Table 2 Summary of methods incorporated in numerical investigations

4.1 Results of Synthetic Data

We consider a single-source localization setup with \(L=10\) sensors and \(d=2\). The source and sensors are randomly deployed inside a 20 m \(\times \) 20 m square region in each MC run. In our setting, the Gaussian disturbance is assumed to have identical variance \(\sigma _{G }^2\) for all i, and the NLOS bias is drawn from a uniform distribution on the interval [0, b]. Based on 3000 MC samples, the root-mean-square error (RMSE), defined as

$$\begin{aligned} {} RMSE = \sqrt{\frac{1}{3000}\sum _{j=1}^{3000}{{\left\| \tilde{\varvec{x}}^{\{j\}} - \varvec{x}^{\{j\}}\right\| }_2^2}} \end{aligned}$$
(7)

is taken as the metric of positioning accuracy, where \(\tilde{\varvec{x}}^{\{j\}}\) denotes the estimate of the source location \(\varvec{x}^{\{j\}}\) in the jth run.
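In code, metric (7) reduces to a few lines (a minimal sketch; the function name is ours):

```python
import numpy as np

def rmse(estimates, truths):
    """Positioning RMSE over MC runs, cf. (7); inputs are (N_mc, d) arrays."""
    err2 = np.sum((np.asarray(estimates) - np.asarray(truths))**2, axis=1)
    return float(np.sqrt(err2.mean()))
```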

We start with the ideal case, where all sensors are under LOS propagation (namely, \(L_{NLOS } = 0\), with \(L_{NLOS }\) being the number of NLOS paths) and our mixture model of Gaussian and non-Gaussian distributions reduces to additive white Gaussian noise of variance \(\sigma _{G }^2\). Figure 2a plots the RMSE versus \(\sigma _{G }^2\) for all the considered algorithms in this scenario, with the Cramér–Rao lower bound (CRLB) [18] included for benchmarking purposes. It is observed that SR-MCC, RMDSA, and RSR-WLS have much lower RMSEs than the others, though SR-MCC is slightly inferior to RMDSA and RSR-WLS. Among all the methods, only RSR-WLS attains the CRLB at low Gaussian noise levels. Fixing the noise variance at \(\sigma _{G }^2 = 0.1\), Fig. 2b–d subsequently compare the performance of the various approaches under three different, typical NLOS conditions. We clearly see from Fig. 2b that SR-MCC outperforms the other methods for all values of b in a mild NLOS environment with \(L_{NLOS } = 2\). As depicted in Fig. 2c, when the number of NLOS connections is moderate, i.e., \(L_{NLOS } = 5\), our proposed scheme is superior to RMDSA, SR-WLS, SDP, and SOCP, while yielding slightly higher RMSE values than RSR-WLS and RSOCP. Figure 2d illustrates the RMSE versus b in an extremely dense NLOS environment with \(L_{NLOS } = 8\). Although SR-MCC degrades in the sense that it cannot overwhelmingly outperform SOCP and SDP in this case, it still produces the minimum RMSE for all values of b among SR-MCC, RMDSA, and SR-WLS, which are the only schemes requiring no more than the sensor locations and TOA-based distance measurements. In contrast, the other solutions rely, to varying degrees, on additional a priori knowledge of the noise variance and/or error bound. Finally, the performance of all the considered algorithms deteriorates as \(\sigma _{G }^2\) or b grows.

To summarize, our SR-MCC method is preferred when the number of NLOS connections is not too large. This coincides with the properties of the correntropy measure relied upon in building our objective function (see Sect. 2) and is further verified in Fig. 3, which shows the RMSE versus \(L_{NLOS } \in \left[ 1, 8 \right] \) at \(\sigma _{G }^2 = 0.1\) and \(b = 5\). Apart from the statistical robustness of the Welsch loss to large errors shown in Fig. 1, a further explanation for the outstanding performance of the MCC-based robustification strategy in several mixed LOS/NLOS environments is given here from the perspective of the HQ iterations. As the iteration summarized in Algorithm 1 proceeds, the auxiliary variables in \(\varvec{p}\), updated according to (4), play the role of Gaussian-like weighting functions [11] and are thus capable of mitigating, to a great extent, the adverse effects of large SR fitting errors in GTRS (5) [10].

Fig. 3 RMSE versus \(L_{NLOS }\) at \(\sigma _{G }^2 = 0.1\) and \(b = 5\)

4.2 Results of Real Experimental Data

This subsection substantiates the efficacy of SR-MCC using real experimental data. The localization experiments have been conducted within a 50 m \(\times \) 50 m open area (see Fig. 4) at the Technische Fakultät campus of the University of Freiburg, Freiburg im Breisgau, Germany, and the data have been acquired using ranging systems built on Decawave DWM1000 modules [6, 16]. Each DWM1000 module is an IEEE 802.15.4-2011 UWB implementation based on Decawave's DW1000 UWB transceiver integrated circuit [6], and five modules have been installed in our real-world experiments. Four modules attached to wooden rods with known positions (see Fig. 4a) are specified as the sensors, whereas the remaining one serves as the source to be located. Power is supplied by power banks. For testing purposes, two reference points are considered; the source stops and stays long enough at each reference point for 100 sets of steady two-way ranging measurements between the source and the sensors to be collected. By deploying a Topcon GPT-8203A total station at the origin, we set up the coordinate system (shown in Fig. 4b) and measure the true positions of the sensors and reference points. Here, \(d = 2\) because the source and all the sensors are intentionally kept at the same height of 1.2 m. The positions of the sensors and reference points are tabulated in Table 3. In particular, several obstructions have been deliberately placed in the path between the source and the first sensor to construct the NLOS environments.

Fig. 4 Experimental environment for data collection. a Real-world deployment. b 2-D illustration of the localization geometry

Table 3 Sensor and reference point positions
Fig. 5 Empirical CDF of the Euclidean distance between the true range and the observed value, based on 50 datasets acquired at 2 reference points

To determine the upper bound \(\bar{b}\) on the NLOS errors needed by RSOCP and RSR-WLS, Fig. 5 plots the empirical cumulative distribution function (CDF) of the Euclidean distance between the range measurement and its true value. Following a strategy similar to [4], we set \(\bar{b} = 4\), corresponding to the \(90\%\) probability level in Fig. 5 (a compact sketch of this selection is given after this paragraph). Furthermore, the noise variance required by SDP, SOCP, and RSOCP is set as \(\sigma _{G }^2 = 0.02\). Table 4 shows the average run-time, recorded using the MATLAB commands tic and toc, and the RMSE values for the different algorithms. The measured elapsed times roughly accord with the complexity analysis in Table 1. We see that the average run-times of the SOCP/SDP-based approaches all exceed 1 s, reinforcing the general consensus that convex optimization usually incurs non-negligible computational overheads. In contrast, SR-MCC, RMDSA, SR-WLS, and RSR-WLS are computationally much simpler. We point out that the complexity of SR-MCC is a bit higher than that of RMDSA, SR-WLS, and RSR-WLS, as it involves solving a series of GTRSs. Nonetheless, our SR-MCC method achieves the best localization accuracy in terms of the RMSE.
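Returning to the bound selection above, a hypothetical one-function sketch (names and interface ours) is to take the \(90\%\) quantile of the pooled range-error magnitudes:

```python
import numpy as np

def nlos_error_bound(r_measured, d_true, prob=0.90):
    """Pick the NLOS error bound b_bar as the prob-quantile of the empirical
    CDF of |measured range - true range| (cf. Fig. 5); here prob = 0.90."""
    return np.quantile(np.abs(np.ravel(r_measured) - np.ravel(d_true)), prob)
```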

Table 4 Performance comparison using real experimental data

5 Conclusion

In this paper, we have devised a novel NLOS mitigation technique for TOA-based source localization. Our key idea is to utilize the correntropy-based error measure to achieve robustness against bias-like NLOS errors. An HQ framework has been adopted to handle the nonlinear and non-convex correntropy-induced optimization problem in a computationally inexpensive AM fashion. A notable merit of the proposed algorithm is its low prior-knowledge requirement. Extensive numerical results have confirmed that our method outperforms several existing schemes in terms of localization accuracy, especially in mixed LOS/NLOS environments where the number of NLOS connections \(L_{NLOS }\) is not too large. Nevertheless, the presented approach has the limitation that it may suffer a loss of localization accuracy as \(L_{NLOS }\) increases. An important direction for future work is to further robustify the estimator w.r.t. \(L_{NLOS }\); a possible solution is to combine the statistical robustification scheme with the worst-case criterion.