Multisensor Data Fusion in IoT Environments in Dempster–Shafer Theory Setting: An Improved Evidence Distance-Based Approach
Sensors 2023, 23, 5141

In IoT environments, voluminous amounts of data are produced every single second. Due to multiple factors, these data are prone to various imperfections: they can be uncertain, conflicting, or even incorrect, leading to wrong decisions. Multisensor data fusion has proved to be powerful for managing data coming from heterogeneous sources and moving towards effective decision-making. Dempster–Shafer (D–S) theory is a robust and flexible mathematical tool for modeling and merging uncertain, imprecise, and incomplete data, and it is widely used in multisensor data fusion applications such as decision-making, fault diagnosis, and pattern recognition. However, the combination of contradictory data has always been challenging in D–S theory; unreasonable results may arise when dealing with highly conflicting sources. In this paper, an improved evidence combination approach is proposed to represent and manage both conflict and uncertainty in IoT environments in order to improve decision-making accuracy. It mainly relies on an improved evidence distance based on the Hellinger distance and Deng entropy. To demonstrate the effectiveness of the proposed method, a benchmark example for target recognition and two real application cases in fault diagnosis and IoT decision-making are provided. The fusion results were compared with those of several similar methods, and simulation analyses have shown the superiority of the proposed method in terms of conflict management, convergence speed, fusion result reliability, and decision accuracy. In fact, our approach achieved remarkable accuracy rates of 99.32% in the target recognition example, 96.14% in the fault diagnosis problem, and 99.54% in the IoT decision-making application.


Introduction
The IoT environment consists of tiny sensor-enabled connected devices, or things, capable of sensing, communicating, computing, and reasoning. Data generated in IoT environments are voluminous, with diverse representations, different qualities, and different reliability levels. Additionally, due to multiple factors such as environmental noise, sensor defects, or calibration errors, these data are prone to various imperfections. As a result, they may be noisy, uncertain, conflicting, or even erroneous, which can lead to wrong decisions if these features are not properly taken into account.
Multisensor data fusion has been proven to be a very powerful technique to manage data coming from heterogeneous sources and to move towards effective decision-making. It aims to combine data gathered from various sensors in the best possible manner to get more precise, accurate, and consistent information. Multisensor data fusion can be performed at various levels, depending on the representation of the data to be merged and according to the stage at which the combining operation takes place. One can distinguish low-, intermediate-, and high-level fusion [1]. Several mathematical methods are used for the data fusion process; they are mainly classified into three categories: probability-based, artificial intelligence-based, and evidence-based techniques [2]. Evidence theory, also known as the theory of belief functions or Dempster-Shafer theory (or D-S theory in short), is a robust and flexible mathematical tool for modeling and merging uncertain, imprecise, and incomplete data. The theory was first introduced by Dempster in 1967 [3] as a generalization of Bayesian inference, and then further extended by his student Shafer in 1976 [4] into a general framework of uncertain reasoning. D-S theory has been extensively applied in various multisensor data fusion applications such as decision-making [5][6][7], fault diagnosis [8][9][10][11], target recognition [12][13][14], etc., owing to its flexibility and effectiveness in handling uncertainty problems and its ability to merge heterogeneous data obtained from multiple sensors without prior knowledge using Dempster's combination rule. However, the application of D-S evidence theory has its own limitations when dealing with highly conflicting multisensor data; Dempster's combination rule can lead to counter-intuitive results, as illustrated by the conflictive example known as the Zadeh paradox [15].
To overcome this flaw and obtain reasonable combined results, several alternatives have been proposed in the literature; they are mainly divided into two major categories: (i) the modification of the classical Dempster's combination rule and (ii) the revision of the original evidence model before combination.
For the first category, scholars believe that the unreasonable results come from the direct normalization of the conflicting evidence, so they have proposed new combination rules [16][17][18][19][20]. These rules manage to solve the problem to some extent, but they lose both the commutativity and associativity properties satisfied by the classical Dempster's combination rule. According to the researchers of the second category, the solution lies in reducing the impact of conflicting evidence on the final fusion result, so they proposed pre-processing the bodies of evidence before combining them. To this end, different distances between bodies of evidence and various entropies (to estimate the uncertainty of the bodies of evidence) are used. For instance, Murphy [21] suggested simply averaging the mass functions and then combining them using the classical Dempster's combination rule, but it seems unreasonable to assign equal weights to all the bodies of evidence without taking the correlation between them into consideration. Yong et al. [22] proposed a weighted average combination rule based on evidence distance (i.e., the Jousselme distance) to measure the conflict degree between the bodies of evidence. Zhang [25] proposed an improved combining method based on the degree of support between the bodies of evidence using the cosine theorem. In our recent work [86], we combined the Jousselme distance and the cosine value to determine the conflict degree between the bodies of evidence, and we adopted Deng entropy to measure the uncertainty degree of each body of evidence; we used both conflict and uncertainty measurements to construct weighting factors to modify the original mass functions. Tang et al. [88] proposed a new approach to pre-process evidence by introducing a reliability coefficient based on the betting commitment evidence distance and the single-factor belief function.
The betting commitment is constructed on the basis of the pignistic probability function to measure the dissimilarity between two BPAs, while the single-factor belief function is a single subset used to evenly distribute the probability to each subset of the power set. In [89], the evidence model was revised using a novel correction coefficient based on the stochastic approach for link-structure analysis (SALSA) algorithm combined with the Lance-Williams distance to better measure the degree of support for each piece of evidence. Ma et al. [90] adopted the trusted discount method to alleviate the shortcoming in D-S theory, incorporating the Jousselme distance for conflict measurement and the Wasserstein distance for uncertainty measurement, and they proposed a new adaptive weight allocation method. Huwa and Jing [91] introduced an improved belief Hellinger divergence that considers both the belief and plausibility functions to measure the discrepancy between the pieces of evidence and used belief entropy to measure the uncertainty degree. While several methods and alternatives have been proposed in the literature, effectively addressing the conflict between pieces of evidence remains an open issue that requires further improvement. In this paper, an improved evidence combination method for multisensor data fusion in IoT environments is proposed to overcome the Dempster-Shafer theory flaw and fuse highly conflicting evidence without generating counter-intuitive results. It is based on a newly defined evidence distance and belief entropy. The method aims to evaluate the importance of each source in the fusion process by assigning evidence weights. The main idea is to weight the reliable sources most heavily and assign lower weights to the less reliable ones. This strategy intends to reduce the conflicting impact of less reliable sources on the final combination result, thereby enhancing decision-making accuracy.
The main contributions of this study are summarized as follows:
- We first propose an improved evidence distance based on the Hellinger distance, which can effectively quantify the conflict degree between the pieces of evidence.
- The improved evidence distance considers the dependencies between the bodies of evidence through a Jaccard matrix; it satisfies the true metric properties (non-negativity, symmetry, positive definiteness, and the triangle inequality), and it allows a better conflict measurement.
- We develop a novel multisensor data fusion strategy based on the improved evidence distance and Deng entropy for conflict and uncertainty measurements, respectively. Reward and penalty functions are designed to assign the weights accordingly: we reward the reliable pieces of evidence by weighting them more heavily, making their role more important in the final fusion result, and, conversely, penalize the less reliable ones.
- Finally, we apply the proposed approach to a benchmark example for target recognition, a fault diagnosis problem, and an IoT decision-making application to demonstrate its effectiveness compared to other similar existing methods. The results show the superiority and efficiency of the proposed method in terms of conflict management, convergence speed, fusion result reliability, and decision accuracy.
The remainder of this work is organized as follows: Section 2 introduces some relevant basic theoretical foundations, including Dempster-Shafer theory, Zadeh paradox, Hellinger distance, and belief entropy, and a review of the main related works is outlined in Section 3. The proposed improved evidence distance is presented in Section 4, followed by the proposed evidence combination method in Section 5. In Section 6, the simulation of fusion results and comparisons with similar existing methods are presented and discussed. Conclusions and future research directions are provided in Section 7.

Theoretical Foundations
In this section, some relevant preliminaries are introduced. First, we provide in Table 1 the symbols with their meanings used in the rest of the paper.

Dempster-Shafer Theory
Dempster-Shafer theory (D-S), also known as evidence theory or the theory of belief functions, is an effective mathematical method for reasoning under uncertainty. It was first initiated by A. P. Dempster in 1967 [3] as a generalization of Bayesian inference, and then developed by his student Shafer in 1976 [4] into a general framework of uncertain reasoning by introducing the concept of the "trust function". Unlike probability theory, D-S theory allows the allocation of a probability mass not only to mutually exclusive singletons but also to sets or intervals, and it does not require prior knowledge to combine the pieces of evidence.
Basic Definitions

Definition 1. Frame of discernment: Denoted by Ω, it is a finite, nonempty set of mutually exclusive and exhaustive hypotheses A_i. It is expressed as follows:

Ω = {A_1, A_2, . . . , A_n}

The power set of Ω, denoted by 2^Ω, is the set of all possible combinations of the elements in Ω. For all A ⊆ Ω, it is defined as follows:

2^Ω = {∅, {A_1}, {A_2}, . . . , {A_n}, {A_1, A_2}, . . . , Ω}

Definition 2. Basic Probability Assignment, BPA (or mass function): A BPA represents how strongly the evidence supports a hypothesis by assigning a probability to the different subsets. On a frame of discernment, the mass function, symbolized by m, is a mapping m : 2^Ω → [0, 1] satisfying the following conditions:

m(∅) = 0 and Σ_{A⊆Ω} m(A) = 1

For all A ⊆ Ω, if m(A) > 0, A is called a focal element of the evidence.

Definition 3. Uncertainty representation
Based on the definition of BPAs, the belief function (Bel) and the plausibility function (Pl), which represent, respectively, the lower and upper bounds of the uncertainty interval, are defined as follows:

Bel(A) = Σ_{B⊆A} m(B),  Pl(A) = Σ_{B∩A≠∅} m(B)

The belief function represents the total belief that hypothesis A is true, while the plausibility function refers to the belief that could possibly be placed in hypothesis A.
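As a concrete illustration of Definition 3, the belief and plausibility functions can be computed directly from a BPA. The sketch below uses a hypothetical mass function over a three-element frame; encoding subsets as Python frozensets is an illustrative choice, not part of the paper.

```python
# Hypothetical BPA over the frame {a, b, c}; subsets are encoded as frozensets.
m = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.3,
    frozenset({"a", "b", "c"}): 0.2,
}

def bel(m, A):
    """Belief: total mass committed to subsets of A (lower bound)."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Plausibility: total mass of focal elements intersecting A (upper bound)."""
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"a"})
print(bel(m, A))  # 0.5
print(pl(m, A))   # 1.0 -- every focal element intersects {a}
```

Note that Bel(A) ≤ Pl(A) always holds, and the interval [Bel(A), Pl(A)] quantifies the uncertainty about A.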

Definition 4. Evidence combination rule
In evidence theory, two BPAs m_1 and m_2 defined under the same frame of discernment and separately obtained from two independent sources can be combined using Dempster's combination rule, which provides a method to compute the orthogonal sum, denoted by m_1 ⊕ m_2, as follows:

m(A) = (1 / (1 − K)) Σ_{B∩C=A} m_1(B) m_2(C), for A ≠ ∅, and m(∅) = 0

where K is the conflict coefficient used to measure the conflict degree between the two bodies of evidence:

K = Σ_{B∩C=∅} m_1(B) m_2(C)

The case K = 0 means that the sources are consistent and in perfect agreement, while K = 1 implies that the sources are in total conflict. Dempster's combination rule satisfies both the commutativity and associativity properties:

m_1 ⊕ m_2 = m_2 ⊕ m_1,  (m_1 ⊕ m_2) ⊕ m_3 = m_1 ⊕ (m_2 ⊕ m_3)

thus, it can be extended to the combination of N bodies of evidence.
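Dempster's rule above can be sketched as a short function over BPAs represented as dictionaries keyed by frozensets (an illustrative encoding; the example BPAs are hypothetical, not from the paper):

```python
def dempster_combine(m1, m2):
    """Orthogonal sum m1 (+) m2 of two BPAs over the same frame.

    BPAs are dictionaries mapping frozenset focal elements to masses.
    """
    combined = {}
    K = 0.0  # conflict coefficient: total mass of empty intersections
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                K += v1 * v2
    if K >= 1.0:
        raise ValueError("K = 1: total conflict, combination undefined")
    # Normalize by 1 - K (the direct normalization criticized for high conflict)
    return {A: v / (1.0 - K) for A, v in combined.items()}

# Illustrative BPAs (not from the paper)
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"a"}): 0.7, frozenset({"b"}): 0.3}
m12 = dempster_combine(m1, m2)  # K = 0.18; mass concentrates on {a}
```

Because the rule is commutative and associative, combining N sources is just N − 1 successive calls in any order.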
It should be noted that Dempster's combination rule is efficient only when the pieces of evidence are not in high conflict; unreasonable results will be generated when the sources are contradictory, as highlighted in Zadeh's counter-example [15].

Zadeh Paradox
Suppose two doctors diagnose a patient with three possible diseases, i.e., meningitis (M), brain tumor (BT), or concussion (C). The frame of discernment is then defined as Ω = {M, BT, C}. The two doctors express their opinions, and the results are interpreted as the following mass functions:

m_1(M) = 0.99, m_1(BT) = 0.01, m_1(C) = 0
m_2(M) = 0, m_2(BT) = 0.01, m_2(C) = 0.99

By combining these two pieces of evidence with Dempster's combination rule, one obtains K = 0.9999 and the following:

m(M) = 0, m(BT) = 1, m(C) = 0

The combined result is clearly counter-intuitive: it indicates that the patient suffers from a brain tumor, to which both doctors assign a very low degree of belief. In contrast, the possibilities of meningitis and concussion, each strongly supported by one of the bodies of evidence, are completely denied after using Dempster's combination rule. This proves the ineffectiveness of this combination rule in such situations.

Hellinger Distance between Bodies of Evidence
Hellinger distance is a complete distance metric defined in the probability distribution space and is considered the probabilistic analog of the Euclidean distance. It was expressed in terms of the Hellinger integral introduced by Hellinger in 1909. The Hellinger distance is very stable and reliable; it is widely used to measure the dissimilarity of two probability distributions, and it can be applied in evidence theory to measure the dissimilarity between two pieces of evidence.
In a finite complete frame of discernment, the Hellinger distance between two bodies of evidence is defined as follows [61]:

d_H(m_1, m_2) = (1/√2) √( Σ_i ( √m_1(A_i) − √m_2(A_i) )² )

The Hellinger distance satisfies the following properties:
- Non-negativity: 0 ≤ d_H(m_1, m_2) ≤ 1
- Symmetry: d_H(m_1, m_2) = d_H(m_2, m_1)
- Positive definiteness: d_H(m_1, m_2) = 0 if and only if m_1 = m_2

The Hellinger distance is an effective tool to quantify the conflict degree between pieces of evidence: the smaller the distance, the more similar and less conflicting the bodies of evidence are.
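The definition above translates directly into code. The sketch below is one common reading of the formula in [61], comparing the two BPAs focal element by focal element through the square roots of their masses; the example inputs are hypothetical.

```python
import math

def hellinger(m1, m2):
    """Hellinger distance between two BPAs over the same frame.

    BPAs are dictionaries mapping frozenset focal elements to masses.
    """
    focal = set(m1) | set(m2)
    s = sum((math.sqrt(m1.get(A, 0.0)) - math.sqrt(m2.get(A, 0.0))) ** 2
            for A in focal)
    return math.sqrt(s / 2.0)

# Two totally conflicting BPAs sit at the maximum distance 1
m1 = {frozenset({"a"}): 1.0}
m2 = {frozenset({"b"}): 1.0}
print(hellinger(m1, m2))  # 1.0
print(hellinger(m1, m1))  # 0.0
```

The 1/√2 factor bounds the distance in [0, 1], which is convenient when the distance is later converted into a similarity degree.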

Deng Entropy
The concept of entropy was first introduced in physics to characterize the disorder and chaos degree of a molecular state in thermodynamics [92]. In information theory, Shannon entropy [93] was proposed to measure the uncertainty of information under the framework of probability theory. Deng entropy is a belief entropy, proposed by Deng [27] as a generalization of Shannon entropy defined under the Dempster-Shafer framework; it is a very efficient tool to quantify the uncertainty degree of the evidence. It takes into consideration both the BPA of a hypothesis and the cardinality of its focal elements; it is given by the following:

E_d(m) = − Σ_i m(A_i) log2 ( m(A_i) / (2^|A_i| − 1) )    (11)

where A_i represents a focal element of the mass function m and |A_i| is the cardinality of the set A_i. Deng entropy degenerates to Shannon entropy when the mass function is allocated only to singletons (single elements), as follows:

E_d(m) = − Σ_i m(A_i) log2 m(A_i)
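Deng entropy, Equation (11), can be computed in a few lines; the snippet below also checks the degeneration to Shannon entropy for singleton-only mass functions (example BPAs are illustrative):

```python
import math

def deng_entropy(m):
    """Deng entropy: E_d(m) = -sum_i m(A_i) * log2(m(A_i) / (2^|A_i| - 1))."""
    return -sum(v * math.log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

# Singletons only: Deng entropy equals Shannon entropy (here 1 bit)
print(deng_entropy({frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}))  # 1.0
# A two-element focal set carries extra uncertainty: -log2(1 / (2^2 - 1))
print(deng_entropy({frozenset({"a", "b"}): 1.0}))  # log2(3) ~= 1.585
```

The 2^|A_i| − 1 term counts the nonempty subsets of a focal element, so larger (more ambiguous) focal elements contribute more uncertainty.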

Related Works
As mentioned in the Introduction, one of the major problems with Dempster's combination rule is the fact that it leads to counter-intuitive results when fusing highly conflicting evidence. To solve such a problem, two major methodologies are popular. One is to modify the combined rule, and the other is to preprocess the bodies of evidence.
For the first methodology, scholars believe that the unreasonable results come from the direct normalization of the conflicting evidence, so they have proposed new combination rules. According to Yager [16], conflicting data do not provide useful information, so he proposed transferring the conflict to the total ignorance assigned to the universal set of the frame of discernment, denoted by m(Ω). Dubois and Prade [17] came up with a disjunctive combination rule that considers the union of the evidence rather than their intersection, assuming that, of two sources, at least one is reliable. Smets [18] proposed a conjunctive combination rule, also known as the un-normalized combination rule, where all the sources are considered reliable and the conflict is treated as a kind of information allocated to the empty set, m(∅). Lefevre et al. [19,20] used the conflicting part of the evidence and distributed the conflict proportionally into the focal element sets of all the evidence. These rules manage to solve the problem to some extent, but they lose both the commutativity and associativity properties that are satisfied by the classical Dempster's combination rule.
Researchers in the second category believe that the problem of counter-intuitive results in conflict situations is caused by unreliable evidence rather than by Dempster's combination rule itself. A methodology aiming at pre-processing mass functions without changing the combination rule is then proposed. The main idea is to revise and reconstruct the evidence model in order to reduce the impact of conflicting evidence on the final fusion result. To this end, diverse methods such as weighted averaging of mass functions are used. Weights are assigned to the original BPAs to determine their roles in the fusion process. On this basis, several weighted combination approaches have been proposed in the literature with various weight measurements. The most well-known of them are summarized in Table 2 with their weight measurements. Table 2. Improved weighted average approaches.

Improved Evidence Distance
In this paper, we define an enhanced evidence distance for D-S evidence theory, based on the Hellinger distance and building on the concept of the belief function transformation presented in [26]. The proposed evidence distance incorporates the correlation between bodies of evidence using a Jaccard matrix, thereby providing a more effective measure of the conflict degree between them. Furthermore, the improved evidence distance satisfies the requirements of a true metric, meeting the non-negativity, positive definiteness, symmetry, and triangle inequality properties.
It is worth noting that when all the elements are singletons, the mass function conforms to the classical probability distribution. In this case, the Jaccard matrix degenerates to the identity matrix, and, consequently, the improved evidence distance simplifies to the traditional Hellinger distance.
We define the improved evidence distance as follows:

d_IH(m_1, m_2) = √( (1/2) (m̃_1 − m̃_2)ᵀ D (m̃_1 − m̃_2) )    (13)

in which m̃ is the vector of square-rooted masses, expressed as follows:

m̃ = ( √m(A_1), √m(A_2), . . . , √m(A_{2^n}) )

where D is a Jaccard matrix of the size 2^n × 2^n, whose elements are obtained by the following:

D(A_i, A_j) = |A_i ∩ A_j| / |A_i ∪ A_j|,  A_i, A_j ∈ 2^Ω

The improved Hellinger distance satisfies the properties of non-negativity, symmetry, triangle inequality, and positive definiteness.
Proof. In the following, the properties of non-negativity, symmetry, triangle inequality, and positive definiteness of the improved distance are verified.
The improved evidence distance in Formula (13) can be written as follows: Proof. Non-negativity, 0 ≤ d_IH(m_1, m_2) ≤ 1: which implies the following: The non-negativity property of the proposed evidence distance is thus proven.
Proof. Symmetry, d_IH(m_1, m_2) = d_IH(m_2, m_1): We have the following: It is obvious that: Therefore, the symmetry property of the proposed evidence distance is proven.
Proof. Triangle inequality: We use the Minkowski inequality [61]: We obtain the following: Since: we can deduce the following: Thus, the triangle inequality property of the proposed evidence distance is proven.
Finally, the positive definiteness property of the proposed evidence distance is proven.
It can be seen from the above proofs that the improved evidence distance satisfies all the requirements, so it can be applied as a true metric under the Dempster-Shafer framework.
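A minimal numeric sketch of a distance of this form is given below: the square-rooted mass vectors are compared through a quadratic form weighted by the Jaccard matrix. This is an illustrative reading under stated assumptions, not necessarily the paper's exact formula; note that for singleton-only BPAs the Jaccard cross-terms vanish and the plain Hellinger distance is recovered, consistent with the degeneration property above.

```python
import math
from itertools import combinations

def powerset(frame):
    """All nonempty subsets of the frame, in a fixed order."""
    items = sorted(frame)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def jaccard(A, B):
    """Jaccard index |A n B| / |A u B| between two focal elements."""
    return len(A & B) / len(A | B)

def improved_distance(m1, m2, frame):
    """Quadratic form of the square-rooted mass vectors weighted by the
    Jaccard matrix D. Illustrative sketch; the paper's exact formula may
    differ in detail."""
    subsets = powerset(frame)
    v = [math.sqrt(m1.get(A, 0.0)) - math.sqrt(m2.get(A, 0.0)) for A in subsets]
    q = sum(v[i] * jaccard(subsets[i], subsets[j]) * v[j]
            for i in range(len(subsets)) for j in range(len(subsets)))
    return math.sqrt(max(q, 0.0) / 2.0)

frame = {"a", "b"}
# Disjoint singletons: Jaccard cross-terms vanish, recovering Hellinger = 1
print(improved_distance({frozenset({"a"}): 1.0},
                        {frozenset({"b"}): 1.0}, frame))  # 1.0
```

The Jaccard weighting makes overlapping focal elements (e.g., {a} and {a, b}) count as partially agreeing rather than fully conflicting.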

The Proposed Evidence Combination Method
To address the issue of the classical Dempster's combination rule, we propose a new evidence combination method based on the revision of the evidence source model.
The proposed method utilizes (i) the newly defined improved evidence distance based on Hellinger distance to measure the conflict degree between the bodies of evidence and (ii) Deng entropy to quantify the uncertainty degree of each piece of evidence. These measurements are used to generate weighting factors that are applied to adjust the original pieces of evidence prior to the application of Dempster's combination rule to get the final fusion results.
We evaluate the trustworthiness of the evidence based on a reliability condition set as the average credibility, and we design reward and penalty functions accordingly. We reward the reliable pieces of evidence by weighting them more heavily, making their role more important in the final fusion result. On the other hand, we penalize the unreliable pieces of evidence by using Deng entropy to diminish their impact on the final fusion result. The steps of our improved combination method are described as follows:
Step 1: According to Equation (13), the improved evidence distance between the bodies of evidence m_i (i = 1, 2, . . . , N) and m_j (j = 1, 2, . . . , N) is calculated to measure the conflict degree.
The N × N distance matrix D is expressed below:

D = [ d_IH(m_i, m_j) ],  i, j = 1, 2, . . . , N, with d_IH(m_i, m_i) = 0

Step 2: The similarity degree between every two pieces of evidence is obtained using the formula proposed in [61] as follows:

Sim(m_i, m_j) = 1 − d_IH(m_i, m_j)

The N × N similarity matrix can now be written as follows:

SM = [ Sim(m_i, m_j) ],  i, j = 1, 2, . . . , N

Step 3: The support degree of each piece of evidence can be evaluated using the previously calculated similarity degrees as follows:

Sup(m_i) = Σ_{j=1, j≠i}^{N} Sim(m_i, m_j)

Step 4: The degree of credibility reflects the level of trustworthiness associated with the evidence; the greater the credibility, the more reliable the evidence. It is given by the following:

CRD(m_i) = Sup(m_i) / Σ_{j=1}^{N} Sup(m_j)

Step 5: According to Equation (11), the Deng entropy of each piece of evidence is calculated to quantify its uncertainty degree.
Step 6: In this step, we establish a condition for determining reliability by setting a threshold that classifies pieces of evidence as either reliable or unreliable. The threshold rate is set as the average credibility, defined as follows:

α = (1/N) Σ_{i=1}^{N} CRD(m_i)

When the credibility of a piece of evidence exceeds the threshold (i.e., CRD(m_i) ≥ α), the source is considered reliable. Conversely, if the credibility falls below the threshold (i.e., CRD(m_i) < α), the source is deemed unreliable.
Step 7: Subsequently, we establish the initial weights by evaluating the fulfillment of the reliability condition. The objective is to increase the weights that surpass the threshold while decreasing the weights that fall below it: if CRD(m_i) ≥ α, a reward function is applied; if CRD(m_i) < α, a penalty function is applied.
Step 8: The final weights are determined based on both the credibility degree and the initial weight; the weighted BPAs are then obtained by applying these weights to the original mass functions.
Step 9: Finally, for N bodies of evidence m_1, m_2, . . . , m_N, the classical Dempster's combination rule is applied N − 1 times to obtain the final fusion result of the weighted BPAs.
The flowchart of the proposed approach is given in Figure 1.
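The credibility pipeline of Steps 1-6 can be sketched as below. The sketch substitutes the plain Hellinger distance for the improved evidence distance, and it uses the credibility degrees directly as weights; the reward/penalty functions of Step 7 and the exact final-weight formula of Step 8 are given by the paper's equations and are omitted here. The three example BPAs are hypothetical.

```python
import math

def hellinger(m1, m2):
    # Stand-in for the improved evidence distance of Step 1; the plain
    # Hellinger distance is used here purely for illustration.
    focal = set(m1) | set(m2)
    s = sum((math.sqrt(m1.get(A, 0.0)) - math.sqrt(m2.get(A, 0.0))) ** 2
            for A in focal)
    return math.sqrt(s / 2.0)

def credibility_weights(bpas):
    """Steps 2-6 sketch: similarity -> support -> credibility -> reliability
    flags against the average-credibility threshold alpha (= 1/N)."""
    N = len(bpas)
    sim = [[1.0 - hellinger(bpas[i], bpas[j]) for j in range(N)]
           for i in range(N)]
    sup = [sum(sim[i][j] for j in range(N) if j != i) for i in range(N)]
    total = sum(sup)
    crd = [s / total for s in sup]        # credibility degrees, sum to 1
    alpha = sum(crd) / N                  # average credibility
    reliable = [c >= alpha for c in crd]  # Step 6 reliability condition
    return crd, reliable

def weighted_average_bpa(bpas, weights):
    """Weighted average of the original BPAs; here the credibility degrees
    stand in for the final weights of Step 8."""
    out = {}
    for m, w in zip(bpas, weights):
        for A, v in m.items():
            out[A] = out.get(A, 0.0) + w * v
    return out

# Three hypothetical sensors over {a, b}; the third conflicts with the others
bpas = [
    {frozenset({"a"}): 0.9, frozenset({"b"}): 0.1},
    {frozenset({"a"}): 0.8, frozenset({"b"}): 0.2},
    {frozenset({"a"}): 0.1, frozenset({"b"}): 0.9},
]
crd, reliable = credibility_weights(bpas)
avg = weighted_average_bpa(bpas, crd)
# Step 9 would now combine `avg` with Dempster's rule N - 1 = 2 times.
```

As expected, the conflicting third source receives the lowest credibility and is flagged as unreliable, so its influence on the weighted average BPA is reduced.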

Experiments and Analyses
In this section, we present a benchmark example for target recognition and two application cases for fault diagnosis and IoT decision-making to demonstrate the feasibility and effectiveness of our proposed method. Additionally, we have selected several similar methods from the existing literature to conduct a comparative analysis with the newly proposed approach.

Benchmark Example: Target Recognition
Similar to our prior research [86], we apply our newly proposed method to a benchmark example for target recognition. The fusion results are subsequently compared with those obtained from the Dempster-Shafer method (D-S) as well as several other similar methods selected from the literature, namely, methods from Murphy [21], Yong [22], Wang et al. [32], Yuan [34], and Yan et al. [68].
In a multisensor-based automatic target recognition system with five different kinds of sensors, namely a CCD sensor (S 1), a sound sensor (S 2), an infrared sensor (S 3), a radar (S 4), and an ESM sensor (S 5), three objects, A, B, and C, are detected. Let us suppose that the frame of discernment Ω = {A, B, C} is complete and A is the current target.
Sensor data modeled as BPAs are given in Table 3. Based on the data presented in Table 3, it can be observed that S 2 shows a strong conflict with the other pieces of evidence: it assigns most of its belief to the wrong target B, while the remaining pieces of evidence mainly support the right target A. This situation may give rise to illogical results after combination using the classical Dempster's rule, ultimately leading to the misidentification of the target. Table 4 and Figure 2 depict the fusion results obtained from our proposed method and the other considered methods, using different numbers of evidence sources.
- As observed from Table 4 and Figure 2, the classical Dempster's combination rule fails to identify the right target after combining data from the five sensors; the results show that it wrongly assigns most of its belief to target C, while the belief assigned to A (the right target) remains at 0 regardless of the number of pieces of evidence considered. Obviously, the fusion process was disturbed by the abnormal source S 2, highlighting the inadequacy of the classical Dempster's combination rule in dealing with highly conflicting evidence.

Fusion Results
- Regarding the other fusion methods, including our proposed method, they also initially identify the wrong target B when only considering the two sensors S 1 and S 2, due to the conflicting evidence m 2 that misguides the fusion process. However, as more sensors are included and more reliable pieces of evidence are considered (i.e., m 3, m 4, m 5), all the methods achieve reasonable results and correctly identify target A.
- Figure 3 illustrates the evolution of the belief degree assigned to the right target A by the various methods after each combination of the five sensors. Although all methods eventually converge to A as the number of sensors increases, they do so at different rates (i.e., convergence speeds) and with varying degrees of belief.
- In summary, the proposed method exhibits superior performance compared to the other methods, with the belief degree of the right target A approaching 1 as the number of combined sensors increases. When five pieces of evidence are combined, it achieves the highest accuracy of 99.32% in identifying the right target, surpassing all other methods. It is worth noting that even a slight increase in accuracy is meaningful and represents a significant improvement in performance.
These results prove the effectiveness and superiority of our proposed method: it handles the conflict between the pieces of evidence effectively, with the best convergence speed and decision accuracy. It is designed to evaluate the credibility of each piece of evidence, determine the importance of each sensor in the final fusion result, and assign the weights accordingly, which makes it possible to overcome the influence of conflicting pieces of evidence and therefore achieve better decision accuracy and fusion result reliability.


Application 1: Fault Diagnosis
To validate and demonstrate the effectiveness of the proposed method, a fault diagnosis application previously presented by Lin et al. [50] is used.
The fusion results of the proposed method are compared to those of the Dempster-Shafer method, our previous work [86], and other similar existing methods, including the Lin [50], Wang [54], and IDCR [73] methods.
In rotating machinery, the faults can be categorized into four types: F 1 = "Imbalance", F 2 = "Shaft crack", F 3 = "Misalignment", and F 4 = "Bearing loose". Therefore, the frame of discernment is Ω = {F 1 , F 2 , F 3 , F 4 }. To monitor the system status and determine the fault type, five different sensors were used.
Fault features were extracted from the data provided by the various sensors to calculate the BPAs of the five sensors. Table 5 illustrates the obtained results, which indicate that the fault diagnosis should be F3. Fusion results obtained by varying the number of pieces of evidence for the different methods are shown in Table 6 and Figure 4.
- Table 6 and Figure 4 show that all the methods, including our own, successfully diagnose the fault type as F3 after combining the five pieces of evidence.
- Figure 5 depicts the evolution of the belief degree assigned to the right fault type F3 for the compared methods using different numbers of sensors. As the figure demonstrates, our proposed method exhibits superior performance compared to all other methods.
- Combining three sensors results in a slight decrease in the belief degree of F3 for the D-S, Lin [50], Wang [54], and IDCR [73] methods, which can be attributed to the conflicting data provided by S3. The inclusion of S4 in the fusion process causes the belief degree of F3 to increase again, reaching 0.7755 for D-S, 0.7906 for Lin, 0.8026 for Wang, and 0.8029 for IDCR, only to decrease once more for all of these methods when S5 is incorporated.
- In contrast, both the method proposed in our previous work [86] and the newly proposed approach maintain accurate fusion performance, as the belief degree of the right fault type F3 continues to increase despite the inclusion of the conflicting sources S3 and S5.
- Upon combining five sensors, our proposed method achieves the highest belief degree for F3, with a value of 0.9614, greater than the maximum achieved by any other method, including our previous work, which did not exceed 0.90.
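As a reference point for the fusion step itself, the classical Dempster's combination rule over the fault frame Ω = {F1, F2, F3, F4} can be sketched as below. The BPAs in the usage example are illustrative values chosen for the sketch, not the measured masses from Table 5.

```python
def dempster_combine(m1, m2):
    """Classical Dempster's rule: intersect focal elements, accumulate
    the conflict mass K, and renormalize by (1 - K)."""
    combined = {}
    K = 0.0  # total conflicting mass
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                K += a * b  # empty intersection -> conflict
    if K >= 1.0:
        raise ValueError("completely conflicting evidence, rule undefined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

# Illustrative BPAs over the fault frame (hypothetical values)
OMEGA = frozenset({'F1', 'F2', 'F3', 'F4'})
m1 = {frozenset({'F3'}): 0.6, frozenset({'F1'}): 0.3, OMEGA: 0.1}
m2 = {frozenset({'F3'}): 0.7, frozenset({'F2'}): 0.2, OMEGA: 0.1}
fused = dempster_combine(m1, m2)  # belief in F3 rises above either input
```

Combining five sensors amounts to applying this rule four times in sequence; in the weighted approaches compared above, the inputs are first replaced by credibility-weighted average evidence.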

Application 2: IoT Decision-Making
An IoT decision-making application from the work of Boulkaboul et al. [64] is considered. The model was evaluated through a set of experiments applied in IoT and smart building projects, realized in the CERIST-ALGERIA research center laboratory.
In the considered scenario, IoT-enabled wireless sensors are used to monitor office occupancy and ambient light in order to control the electrical lighting and optimize energy consumption, and a data fusion method is applied to decide whether the light should be switched ON or OFF. Three PIR (Passive Infrared) sensors S1, S2, S3 and a light sensor S4 have been placed in optimal positions on the ceiling of the office, and four hypotheses (H1, H2, H3, H4) are defined as follows: H1: the office is occupied, and the lighting value exceeds 580 lx; H2: the office is empty, and the lighting value exceeds 580 lx; H3: the office is occupied, and the lighting value does not exceed 580 lx; and H4: the office is empty, and the lighting value does not exceed 580 lx. A basic scenario is considered in which hypothesis H1 holds and the system generates evidence accordingly.
In [64], the impact of the environment on the generation of evidence was not considered. Therefore, in [94], 10% of the belief degree was assigned to Ω to represent a completely unknown situation. The BPAs obtained from the data generated by the four sensors are depicted in Table 7, where the frame of discernment is defined as Ω = {H1, H2, H3, H4}. We use the proposed method to combine the pieces of evidence and compare the results with those of the Dempster-Shafer (D-S) method and several similar existing methods, namely the Wang [94], Xiao [95], Jiang et al. [96], and Wang and Xiao [97] methods, as well as our previous work [86]. Fusion results are depicted in Table 8 and Figure 6.
- As evident from Figure 6 and Table 8, all the techniques, including the Dempster-Shafer method, are capable of identifying the correct hypothesis H1, owing to the absence of substantial conflicts among the pieces of evidence.
- Prior to the combination of the pieces of evidence, none of the sensors in Table 7 reports a belief degree of more than 0.75 for the correct hypothesis H1. However, upon employing the various combination methods, including the proposed approach, the results converge to yield high belief degrees.
- It is worth noting that our proposed combination method delivers significantly better results, providing stronger support for the right hypothesis H1 than the other methods, as evidenced by the maximum belief degree of 0.9958 illustrated in Figure 7.
The primary reason for this enhancement is that our proposed method accounts for the relevance and disparities among the pieces of evidence, which is enabled by the improved evidence distance. By identifying even minor conflicts among the pieces of evidence and assigning appropriate weights accordingly, the influence of untrustworthy pieces of evidence is reduced and that of reliable evidence is increased, leading to a significant improvement in convergence and decision accuracy.

Conclusions
In this paper, an improved evidence combination method for multisensor data fusion in IoT environments has been proposed that aims to address the conflict issues encountered in the Dempster-Shafer theory. The approach relies on the revision of the evidence source model; it is devised to greatly reduce the conflicting impact of unreliable pieces of evidence while strengthening the influence of reliable ones on the final fusion results. To this end, an improved evidence distance based on the Hellinger distance was introduced to measure the degree of conflict among the pieces of evidence, while Deng entropy was employed to quantify the degree of uncertainty of each piece of evidence. The original mass functions are then adjusted using comprehensive weights derived from these metrics, and the classical Dempster's combination rule is applied to obtain the final fusion results.
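The two building blocks of this pipeline can be sketched with their standard textbook definitions; the paper's specific improvement to the evidence distance is not reproduced here, so the Hellinger distance below is a simplified stand-in computed directly over the focal elements of two BPAs.

```python
import math

def deng_entropy(m):
    """Deng entropy of a BPA:
    E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ).
    Mass on larger focal elements (more non-specific evidence)
    yields higher entropy than classical Shannon entropy would."""
    return -sum(v * math.log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

def hellinger_distance(m1, m2):
    """Hellinger distance between two BPAs over the union of their
    focal elements; ranges from 0 (identical) to 1 (fully disjoint
    certain evidence). Simplified stand-in for the paper's improved
    evidence distance."""
    keys = set(m1) | set(m2)
    s = sum((math.sqrt(m1.get(A, 0.0)) - math.sqrt(m2.get(A, 0.0))) ** 2
            for A in keys)
    return math.sqrt(s) / math.sqrt(2)
```

In the overall scheme, pairwise distances yield credibility weights (low average distance means high credibility), entropies yield information-volume weights, and the comprehensive weights obtained from both are used to build a weighted-average mass function that is combined n-1 times with Dempster's rule.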
To demonstrate the effectiveness of the proposed method, a benchmark example for target recognition and two real application cases in fault diagnosis and IoT decision-making have been provided. Several comparable methods have been selected from the literature to conduct a comparative analysis with the newly proposed approach, and simulation analyses have shown the efficiency and superiority of the proposed method in terms of conflict management, convergence speed, fusion results reliability, and decision accuracy. In fact, the proposed approach achieved the highest accuracy rate, 99.32%, in correctly identifying the target compared to the other considered methods in the benchmark example. It also demonstrated better performance in converging to the right fault type, with a remarkable accuracy rate of 96.14% in the fault diagnosis problem. Furthermore, in the IoT decision-making application, it correctly recognized the right hypothesis and outperformed the other related methods, achieving the highest accuracy rate of 99.58%.
For future research directions, we intend to apply the proposed method to larger datasets to demonstrate its broader applicability; we also plan to further improve the method by considering additional relevant factors, such as data timeliness, in the construction of the weighting factors. Moreover, the proposed method is only applicable in the closed-world situation, so adapting the approach to obtain reasonable and accurate results under an open-world assumption is also a promising direction.