Symbiotic Organism Search Algorithm with Multi-Group Quantum-Behavior Communication Scheme Applied in Wireless Sensor Networks

Abstract: The symbiotic organism search (SOS) algorithm is a promising meta-heuristic evolutionary algorithm whose excellent global optimization quality has attracted the interest of many researchers. In this work, we applied a multi-group communication strategy and quantum behavior to the SOS algorithm, forming a novel global optimization algorithm called MQSOS. It converges quickly and reliably, and performs well on practical problems with many decision variables. We compared MQSOS with other intelligent algorithms under the CEC2013 large-scale optimization test suite, including particle swarm optimization (PSO), parallel PSO (PPSO), adaptive PSO (APSO), QUasi-Affine TRansformation Evolutionary (QUATRE), and oppositional SOS (OSOS). The experimental results show that the MQSOS algorithm performed better than these intelligent algorithms. In addition, we combined MQSOS with the DV-hop algorithm for node localization in wireless sensor networks, yielding an improved DV-hop localization algorithm that achieves higher localization accuracy than some existing algorithms.


Introduction
In the past several decades, intelligent computing has developed rapidly, and researchers have devised a variety of intelligent computing algorithms inspired by natural phenomena [1,2]. In research we often encounter complex problems that require finding the extreme value of a multivariate function. The usual practice is to calculate the gradient of the function, but when a function is high-dimensional or complex, computing its gradient becomes very time consuming and can make the task infeasible. Moreover, many real-life problem functions are non-differentiable, which further limits the feasibility of gradient-based methods. The advent of computational intelligence provides a new way of approaching these optimization problems, breaking the limitation of traditional methods, which require low-dimensional, differentiable functions. In this paper, we take the 28 unconstrained objective functions in CEC2013 [3][4][5] as examples to analyze and compare the newly proposed algorithm. Although objective functions in the real world often carry multiple constraints, an algorithm's performance on unconstrained objective functions plays a fundamental role in various optimization applications and strongly influences its performance on constrained problems. Computational intelligence (CI) [6][7][8][9][10] not only deals with complex problems in real life (most notably objective functions of uncertain or noisy problems [11]), it also provides calculation methods and solutions for these optimization problems. Evolutionary computation (EC) [12][13][14][15] is a branch of CI that provides optimization methods built on evolutionary ideas.
There are many branches in the CI field, such as computational learning theory (CLT), evolutionary computing, fuzzy logic (FL), artificial neural networks (ANNs), and quantum computing (QC). Swarm intelligence (SI) is one branch of evolutionary computing [16,17], which also includes evolutionary algorithms (EAs), memetic computing (MC), and so forth. EC encompasses a variety of optimization algorithms developed by researchers and various scholars, based on the inspiration and simulation of natural phenomena. The PSO algorithm is a classic intelligent algorithm inspired by the foraging behavior of birds. Many scholars have developed and improved on this basis to enhance the global optimization ability of PSO algorithms [18,19], with developments such as parallel PSO (PPSO) [20], adaptive PSO (APSO) [21], and so on. Differential evolution (DE) [5] is a stochastic model that simulates the evolution of organisms: an evolutionary algorithm in which individuals that adapt to the environment are preserved through repeated iterations. QUasi-Affine TRansformation Evolutionary (QUATRE) [22,23] is an algorithm based on the quasi-affine transformation in geometry. It overcomes a shortcoming of the DE algorithm, in which population diversity decreases as the number of evolutionary iterations increases, so that the algorithm converges to a local optimum prematurely or stagnates. The artificial bee colony (ABC) [24][25][26], ant colony optimization (ACO) [27], and cat swarm optimization (CSO) [28,29] algorithms also have similar functionalities.
Symbiotic organism search (SOS) [30,31] is a very promising meta-heuristic optimization algorithm with state-of-the-art global optimization ability. It models the symbiotic relationships through which organisms in an ecosystem survive, and iterates these interactions to find the optimal value of the objective function. Cheng and Prayogo pioneered the algorithm in 2014, and many researchers and scholars have since improved it [32,33]. For example, in 2019, Falguni Chakraborty, Debashis Nandi, and Provas Kumar Roy proposed oppositional SOS (OSOS) [30,34]. In order to further improve the global optimization ability of the SOS algorithm (both solution quality and convergence speed), this paper integrates quantum behavior [35,36] and the idea of multi-group optimization into the SOS algorithm, while drawing comparisons with the other algorithms mentioned in this paper. Experimental results indicate that the performance of the proposed MQSOS algorithm is superior to the other algorithms.
Wireless network node positioning plays a very important role in wireless sensor networks (WSNs) [37][38][39][40], navigation, monitoring, and other applications [41,42]. According to whether the distance between nodes needs to be measured, localization methods can be divided into range-based and range-free positioning. On the basis of the deployment occasion, they can be divided into outdoor positioning and indoor positioning [43]. Common range-based methods include triangulation, trilateration, and maximum likelihood estimation [23,44,45]. Common ranging methods include DV-hop [46], RSSI [47,48], etc. The DV-hop-based positioning algorithm is simple and has high positioning accuracy, and research results in this area have been widely applied in recent years. Many scholars have applied intelligent algorithms to range-based wireless sensor network localization, and positioning accuracy is being continuously improved. The combination of the MQSOS algorithm and the DV-hop algorithm to improve the performance of the original DV-hop positioning is also proposed in this paper. Section 2 of this paper briefly introduces the DV-hop algorithm and several EC algorithms such as native PSO, SOS, QUATRE, OSOS, and so on. Section 3 is dedicated to the newly proposed MQSOS algorithm and the MQSOS_DV-hop algorithm generated by combining it with the DV-hop algorithm. Section 4 shows the experimental results under the CEC2013 test suite and compares the proposed method with other EC algorithms. At the same time, the improvement of the DV-hop algorithm by MQSOS is compared with that by other CI algorithms. Finally, in Section 5, the corresponding conclusions are drawn based on the experimental results.

Native PSO Algorithm
The PSO algorithm is a classic global optimization algorithm inspired by the foraging behavior of birds in nature: the area where the birds forage is modeled as the search range, the position of a bird as the position of a particle, and the position of the food as the global optimal solution. The specific process of the PSO algorithm is as follows. In the initialization stage, the scope of the group search is set, along with the limit on particle speed during the search. The randomly initialized positions of the group are X = {x_1, x_2, ..., x_ps} and the velocities are V = {v_1, v_2, ..., v_ps}, where x_i and v_i represent the position and velocity of the ith particle in the population, D represents the dimension of the search space, and ps is the population size.
In the evolutionary stage, the iterative evolution of the population is carried out by Equations (3) and (4):

v_i^(t+1) = ω·v_i^t + c_1·r_1·(P_i^t − x_i^t) + c_2·r_2·(G^t − x_i^t), (3)

x_i^(t+1) = x_i^t + v_i^(t+1), (4)

where v_i^t represents the velocity of the ith particle at the tth iteration, and x_i^t represents the position of the ith particle at the tth iteration. ω denotes the inertia coefficient with which the particle maintains its current velocity during optimization. P_i^t represents the individual best position of the ith particle after t iterations, and G^t represents the global best position after t iterations. c_1 and c_2 denote the coefficients for learning from the individual optimum and the global optimum, respectively. r_1 and r_2 denote random numbers between 0 and 1.
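As a concrete reference, the update loop of Equations (3) and (4) can be sketched in Python. The parameter values (ω = 0.7, c1 = c2 = 1.5), the clipping of positions to the search bounds, and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, lb, ub, ps=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch following Equations (3) and (4)."""
    D = len(lb)
    x = rng.uniform(lb, ub, (ps, D))           # particle positions
    v = np.zeros((ps, D))                      # particle velocities
    pbest = x.copy()                           # individual best positions P_i
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()            # global best position G
    for _ in range(iters):
        r1, r2 = rng.random((ps, D)), rng.random((ps, D))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # Eq. (3)
        x = np.clip(x + v, lb, ub)                               # Eq. (4)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

On a smooth unimodal function such as the sphere function, this sketch converges close to the optimum within a few hundred iterations.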

QUATRE Algorithm
The QUATRE algorithm simulates the process of particles moving from one affine space to another in geometry [22,49]. The evolutionary formula of QUATRE is shown in Equation (5):

X ← M ⊗ X + M̄ ⊗ B, (5)

where the operator ⊗ denotes element-wise multiplication of the corresponding entries of two matrices. X is the position matrix of all particles of the population. M is a contribution matrix consisting of 0s and 1s; when ps = 2·D + 2, the matrix M is formed as shown in Equation (6), where ps is the population size and D is the dimension of the space in which the population is located. M̄ represents the element-wise binary inverse of M, and B is a mutation matrix whose generation strategies are shown in Table 1. X_r1, X_r2, X_r3, X_r4, and X_r5 each represent a matrix obtained by randomly permuting the row vectors of matrix X. X_gbest represents the matrix of the global best position, and F denotes a control factor.
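A minimal sketch of one QUATRE generation follows, assuming the QUATRE/best/1 mutation B = X_gbest + F·(X_r1 − X_r2) from Table 1 and a contribution matrix M built by permuting a tiled lower-triangular pattern; both the M construction and the parameter F = 0.7 are our reading of Equations (5)-(6), not a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def quatre_step(X, f_vals, F=0.7):
    """One QUATRE generation (Equation (5)): X <- M*X + (1-M)*B."""
    ps, D = X.shape
    tri = np.tril(np.ones((D, D)))              # lower-triangular seed block
    M = np.tile(tri, (ps // D + 1, 1))[:ps]     # tile and trim to ps rows
    for row in M:
        rng.shuffle(row)                        # permute entries within each row
    rng.shuffle(M)                              # permute the rows themselves
    gbest = X[np.argmin(f_vals)]                # current global best particle
    r1, r2 = rng.permutation(ps), rng.permutation(ps)
    B = gbest + F * (X[r1] - X[r2])             # mutation matrix (QUATRE/best/1)
    return M * X + (1 - M) * B                  # element-wise Eq. (5)
```

Each row of the new population thus keeps some coordinates from X and takes the rest from the donor matrix B, which is what preserves diversity compared with plain DE.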

SOS Algorithm
There are three phases in the SOS algorithm: the mutualism phase, the commensalism phase, and the parasitism phase. In the mutualism phase, each individual X_i interacts with another individual X_j (i ≠ j) in the population so that both individuals benefit. The interaction between two organisms in the ecosystem is shown in Equations (7)-(10):

X_inew = X_i + r·(X_best − MV·BF1), (7)

X_jnew = X_j + r·(X_best − MV·BF2), (8)

MV = (X_i + X_j)/2, (9)

BF1, BF2 ∈ {1, 2}, (10)

where r is a random vector uniformly distributed in [0, 1], MV is the mutual vector characterizing the relationship between X_i and X_j, X_best is the current best organism, and the benefit factors BF1 and BF2 are randomly set to 1 or 2, reflecting the degree to which each organism benefits from the interaction.
In the commensalism phase, two organisms X_i and X_j are randomly selected from the ecosystem to interact in a commensal relationship. X_i attempts to profit from the relationship and find a better position, while X_j is not affected. This relationship is shown in Equation (11):

X_inew = X_i + r_3·(X_best − X_j), (11)

where r_3 is a random vector uniformly distributed between −1 and 1, and f(x) is the fitness function. X_i is updated to the state of X_inew when the fitness value of X_inew is better than that of X_i. By Equation (11), X_i can benefit from this relationship with X_j, while the state of X_j does not change.
To illustrate the parasitic phase, we can briefly describe the parasitic relationship between the malaria parasite and the human host. Plasmodium sp. infects human hosts via the bite of a mosquito carrying the parasite. After successfully infecting the humans, the malaria parasite will grow in the host and invade red blood cells to cause malaria. If the host's immunity is strong enough, the antibodies will destroy the parasite. Otherwise, the host will die of serious illness under the invasion of the parasites.
The process is implemented by first selecting a parasite carrier X_i (e.g., a mosquito) and a host X_j from the ecosystem, and then generating the parasite X_parasite from X_i by Equation (12). If X_parasite has a better fitness value than the host X_j, then X_parasite replaces X_j; otherwise, the host X_j exerts immunity and X_parasite is discarded. The parasitism phase is calculated as shown in Equations (12) and (13).
In Equations (12) and (13), UB and LB are the maximum and minimum boundary values of the creature's search range within the D-dimensional space, respectively.
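The three phases described above can be summarized in a compact Python sketch. The greedy acceptance rule and the 50% dimension-mutation rate in the parasitism phase are illustrative assumptions; the phase formulas follow Equations (7)-(13) as described in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def sos_minimize(f, lb, ub, ps=30, iters=100):
    """Minimal SOS sketch: mutualism, commensalism, parasitism phases."""
    D = len(lb)
    X = rng.uniform(lb, ub, (ps, D))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        for i in range(ps):
            best = X[fit.argmin()]
            # --- mutualism phase (Eqs. (7)-(10)): i and j both try to benefit ---
            j = rng.choice([k for k in range(ps) if k != i])
            mv = (X[i] + X[j]) / 2                     # mutual vector, Eq. (9)
            bf1, bf2 = rng.integers(1, 3, 2)           # benefit factors in {1, 2}
            xi_new = np.clip(X[i] + rng.random(D) * (best - mv * bf1), lb, ub)
            xj_new = np.clip(X[j] + rng.random(D) * (best - mv * bf2), lb, ub)
            if (v := f(xi_new)) < fit[i]: X[i], fit[i] = xi_new, v
            if (v := f(xj_new)) < fit[j]: X[j], fit[j] = xj_new, v
            # --- commensalism phase (Eq. (11)): only i benefits ---
            j = rng.choice([k for k in range(ps) if k != i])
            xi_new = np.clip(X[i] + rng.uniform(-1, 1, D) * (best - X[j]), lb, ub)
            if (v := f(xi_new)) < fit[i]: X[i], fit[i] = xi_new, v
            # --- parasitism phase (Eqs. (12)-(13)): a mutated copy of i attacks j ---
            j = rng.choice([k for k in range(ps) if k != i])
            parasite = X[i].copy()
            mask = rng.random(D) < 0.5                 # assumed mutation rate
            parasite[mask] = rng.uniform(lb, ub, D)[mask]
            if (v := f(parasite)) < fit[j]: X[j], fit[j] = parasite, v
    k = fit.argmin()
    return X[k], fit[k]
```

Unlike PSO, SOS needs no algorithm-specific control parameters beyond the population size and iteration count, which is one of its often-cited advantages.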

OSOS Algorithm
Oppositional symbiotic organism search optimization (OSOS) is a new SOS algorithm with a solution based on opposite-based learning (OBL) [34] which can improve the performance of SOS through the concept of opposite learning. The specific flow of the OSOS algorithm is as follows.
First, the mutualism, commensalism, and parasitism phases of the SOS algorithm are iterated; then, based on the proportion of individuals whose state changed during the iteration relative to the whole population, it is decided whether to apply the opposite-based learning strategy. This strategy is shown in Equation (14), where P represents the rate of change of all biological states after one iteration and p is a proportionality constant. If the value of p is too small, the opposite-based learning strategy will not achieve the desired effect and the performance of the SOS algorithm is not improved; if it is too large, the population easily falls into a local optimum and converges prematurely during optimization. The value of p is therefore usually set to 0.35. After opposite-based learning, each creature keeps the better of its original and opposite positions.
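The opposite-based learning step itself is simple. A sketch follows, assuming the standard OBL opposite point LB + UB − X from the OBL literature [34]; the function name and array-based interface are our own.

```python
import numpy as np

def opposition_step(X, fit, f, lb, ub):
    """OBL step (sketch): compare each organism with its opposite point
    LB + UB - X and keep whichever of the two has the better fitness."""
    X_opp = lb + ub - X                              # opposite positions
    fit_opp = np.apply_along_axis(f, 1, X_opp)
    better = fit_opp < fit                           # where the opposite wins
    X[better] = X_opp[better]
    fit[better] = fit_opp[better]
    return X, fit
```

Because the step is greedy, it can only improve (never worsen) the population, which is why OSOS applies it probabilistically rather than every iteration.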

Original DV-Hop Algorithm
The DV-hop positioning algorithm is a distributed positioning method using the idea of distance vector routing and GPS positioning. The algorithm is not only simple but also has high positioning accuracy. The DV-hop algorithm has the following three stages.
In the first stage, each anchor node broadcasts a packet containing its location information to its neighboring nodes using a typical distance vector exchange protocol, so that every node in the network obtains the minimum hop count to each anchor node.
In the second phase, each anchor node computes its average distance per hop by Equation (15) and broadcasts it to the other nodes in the network:

Hs_i = Σ_{j≠i} d_i,j / Σ_{j≠i} h_i,j, (15)

where Hs_i represents the average distance per hop of the ith anchor node computed over all other anchor nodes, h_i,j is the number of hops from anchor node i to anchor node j, and d_i,j is the distance between anchor nodes i and j.
An unknown node u then combines the received per-hop distances by Equation (16), where λ_i,u represents the weight of the average hop distance of anchor node i, and h_i is the number of hops from anchor node i to the unknown node u. Us_u represents the weighted average distance per hop used by the unknown node u. Finally, the approximate distance d̂_i,u from anchor node i to the unknown node u can be found by Equation (17).

In the third stage, after calculating the distances between an unknown node and three or more anchor nodes by Equation (17), the position of the unknown node can be computed by trilateration or maximum likelihood estimation. Since maximum likelihood estimation is more accurate than trilateration in the positioning process, the simulation experiments in this paper use maximum likelihood estimation to locate the nodes, as shown in Figure 1. Next, we introduce the maximum likelihood estimation method for node location. Assume that the unknown node coordinates are (x, y), the coordinates of anchor node AN_i are (m_i, n_i), and the number of anchor nodes is k. The system of equations for the unknown node's position is

(x − m_i)^2 + (y − n_i)^2 = d_i^2, i = 1, 2, ..., k. (18)

We expand each equation of Equation (18) and then subtract the last equation from each of the others to obtain Equation (19). Finally, Equations (20)-(23) convert the result into matrix form and solve it.
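The three stages can be sketched as follows for the classic, unweighted DV-hop variant (i.e., ignoring the λ weighting of Equation (16) and using each anchor's own hop size directly); function and variable names are our own.

```python
import numpy as np

def dvhop_estimate(anchors, hops_aa, hops_au):
    """DV-hop sketch: Eq. (15) hop sizes, Eq. (17) distances, and the
    linearized least-squares solve of Eqs. (18)-(23).
    anchors: (k, 2) anchor coordinates; hops_aa: (k, k) anchor-to-anchor
    hop counts; hops_au: (k,) hop counts from each anchor to the node."""
    k = len(anchors)
    # Eq. (15): average distance per hop of each anchor
    hs = np.empty(k)
    for i in range(k):
        d = np.linalg.norm(anchors - anchors[i], axis=1)
        mask = np.arange(k) != i
        hs[i] = d[mask].sum() / hops_aa[i][mask].sum()
    # Eq. (17): estimated anchor-to-unknown distances
    d_est = hs * hops_au
    # Eqs. (18)-(23): subtract the last equation, then solve A p = b
    m, n = anchors[:, 0], anchors[:, 1]
    A = 2 * np.column_stack([m[:-1] - m[-1], n[:-1] - n[-1]])
    b = (m[:-1]**2 - m[-1]**2 + n[:-1]**2 - n[-1]**2
         + d_est[-1]**2 - d_est[:-1]**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact hop-derived distances the least-squares solve recovers the node position exactly; in practice the hop-size approximation is the dominant error source, which is what the MQSOS refinement targets.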

MQSOS Algorithm
The SOS and OSOS algorithms have the following disadvantages: low population diversity, a tendency to fall into local optima, and premature convergence. We introduce a multi-group scheme and an inter-group communication strategy based on quantum behavior to improve the SOS algorithm's global optimization ability. The algorithm proceeds in the following steps.
Step 1: initialize the biological population size (ps), the range of the biological search space, the number of subgroups (gs), and the inter-group communication step size (cs). Each subgroup independently iterates through the three phases of mutualism, commensalism, and parasitism via Equations (7)-(13).
Step 2: all subgroups iterate independently for cs iterations, and then the best individuals of the subgroups are compared, with the best of them chosen as the current global optimum. The quantum-behavior inter-group communication strategy is then used to update the position states of some individuals in each subgroup. For example, if the population size is 100 and it is divided into 4 subgroups, there are 25 individuals in each subgroup. Five better-performing individuals are selected in each subgroup and updated by Equation (24), and five individuals with poorer states are selected and updated by Equation (25).
The difference between Equations (24) and (25) is that when a better-performing individual is updated, a greedy condition checks whether the new position state is superior, so the better individuals always retain their best values. When a poorly-performing individual is updated, there is no such condition, so the poor individuals are freed from converging toward a single local optimum and can explore other promising positions. Each subgroup can therefore explore a wider range, and population diversity is enhanced, preventing premature convergence to a local optimum, which would otherwise hurt global optimization performance, especially on multimodal functions. In short, updating the individuals with better position states improves convergence speed, while updating the poorly positioned individuals improves the diversity of the global search.
where X_i is the position vector of the current individual and X_inew is the updated position vector. C_i is the mean vector of the ith individual's historical best positions, calculated by Equation (26). In quantum-mechanical terms, P_i is a local attractor point: during free movement the quantum particle is drawn toward the local attractor until its potential energy is 0, as shown in Equation (27). α is the convergence coefficient that affects the convergence of the MQSOS algorithm.
α can be found according to Equation (28). u is a random variable that is uniformly distributed from 0 to 1.
where M is the current number of iterations, and Pbest_i,t is the best position of the ith individual after the tth iteration in the population. Through this formula, C_i averages the individual's best positions over the M iterations performed so far.
where G_i represents the best individual in the ith subgroup, gs is the number of subgroups, ϕ is the inertia weight coefficient, and c_1 and c_2 are the learning factors for the group optimum and the other subgroups, respectively. The constants range from 0 to 4, with the constraint c_1 + c_2 ≤ 4. r_1 and r_2 are uniformly distributed random variables between 0 and 1.
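A sketch of the communication update follows the standard quantum-behaved (QPSO-style) formulation, which is our reading of Equations (24)-(28); the attractor form, the linearly decreasing α schedule from 1.0 to 0.5, and the constants c1 = c2 = 2.0 are assumptions rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def quantum_comm(X, fit, f, c_mean, gbest, t, t_max, c1=2.0, c2=2.0, greedy=True):
    """Quantum-behavior communication update, sketching Eqs. (24)-(28).
    c_mean is the mean of historical best positions (Eq. (26)); it may be
    shared (shape (D,)) or per-individual (shape (n, D))."""
    n, D = X.shape
    alpha = 0.5 + 0.5 * (t_max - t) / t_max      # assumed Eq. (28): 1.0 -> 0.5
    r1, r2 = rng.random((n, D)), rng.random((n, D))
    # local attractor between each individual and the global best (Eq. (27))
    P = (c1 * r1 * X + c2 * r2 * gbest) / (c1 * r1 + c2 * r2)
    u = 1.0 - rng.random((n, D))                 # uniform in (0, 1]
    sign = np.where(rng.random((n, D)) < 0.5, 1.0, -1.0)
    X_new = P + sign * alpha * np.abs(c_mean - X) * np.log(1.0 / u)
    if greedy:                                   # Eq. (24): accept only improvements
        for i in range(n):
            v = f(X_new[i])
            if v < fit[i]:
                X[i], fit[i] = X_new[i], v
        return X, fit
    # Eq. (25): move unconditionally (used for the poorly-performing organisms)
    return X_new, np.array([f(x) for x in X_new])
```

Calling the function with `greedy=True` models the update of the better individuals; `greedy=False` models the unconditional move that re-scatters the poorer individuals.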
where T_max represents the maximum number of iterations set by the algorithm, and T is the number of iterations reached by the current algorithm. The pseudo-code of the execution process of the entire MQSOS algorithm is shown in Algorithm 1.

Algorithm 1 MQSOS algorithm
Input: population size ps, number of subgroups gs, communication step size cs, maximum iterations T_max.
1: Randomly initialize the positions of all organisms and divide them into gs subgroups.
2: while T <= T_max do
3:   if T%cs == 0 then //Every cs iterations, an inter-group communication strategy based
4:     //on quantum behavior is used to communicate between subgroups.
5:     for g = 1; g <= gs; g++ do
6:       Select several organisms with better positional status in the gth subgroup and update them by Equation (24).
7:       Select several organisms with poor positional status in the gth subgroup to communicate between groups by Equation (25).
8:     end for
9:   end if
10:  for g = 1; g <= gs; g++ do
11:    for i = 1; i <= ps/gs; i++ do
12:      /* Mutualism phase */
13:      Randomly select a creature X_j, where i ≠ j.
14:      Organism i and organism j interact in the mutualism phase through Equations (7)-(10).
15:      /* Commensalism phase */
16:      Randomly select a creature X_j, where i ≠ j.
17:      Organism i interacts with organism j in the commensalism phase by Equation (11).
18:      /* Parasitism phase */
19:      Randomly select a creature X_j, where i ≠ j.
20:      Organism i and organism j interact in the parasitism phase by Equations (12) and (13).
21:    end for
22:  end for
23: end while
Output: The global optimum X_gbest, global best fitness value f(X_gbest).

Our Proposed Algorithm's Application in WSN Localization Based on DV-hop
This section describes the use of MQSOS in DV-hop-based wireless sensor network node localization. As described above, the hop counts between anchor nodes are obtained from the anchor nodes' broadcast messages, and the distance between an anchor node and an unknown node is estimated by multiplying the anchor node's average distance per hop by the number of hops between the two nodes. The location of the unknown node is then estimated by the least squares method or maximum likelihood estimation. However, since this method has an error in estimating the average distance per hop, positioning accuracy decreases. The main purpose of the positioning problem is to minimize the estimation error and improve positioning accuracy. An improved DV-hop algorithm based on swarm intelligence is therefore proposed for locating unknown nodes in WSNs.
It is known that in network node positioning, the fewer the hops between an unknown node and an anchor node, the more reliable the estimated distance, so distance estimates over fewer hops should carry more weight. Therefore, the fitness function in the WSN node positioning algorithm is shown in Equation (30), where hop_ui is the number of hops from anchor node u to an unknown node i, d̂_ui is the distance from anchor node u to the unknown node i estimated by the maximum likelihood estimation method, and d_ui is the actual distance between node u at (x_u, y_u) and node i at (x_i, y_i).
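Under the assumption that Equation (30) weights each anchor's squared distance error by the inverse hop count (a common formulation in DV-hop refinement work, not confirmed by the text), the fitness can be sketched as:

```python
import numpy as np

def localization_fitness(pos, anchors, d_est, hops):
    """Sketch of a WSN localization fitness in the spirit of Eq. (30):
    hop-count-weighted squared error between the candidate position's
    distances to the anchors and the DV-hop estimated distances.
    The 1/hops weighting is an assumption (fewer hops -> more trust)."""
    d = np.linalg.norm(anchors - pos, axis=1)    # candidate-to-anchor distances
    return float(np.sum((d - d_est) ** 2 / hops))
```

MQSOS then minimizes this function over candidate positions `pos`, so the organism with the lowest fitness is taken as the node location estimate.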
The MQSOS_DV-hop algorithm first obtains the minimum hop counts and the distances between anchor nodes through communication among the anchor nodes, and then calculates the average hop distance of each anchor node. The hop distances received by an unknown node are then weighted to estimate its distance to each anchor. Finally, the location of the unknown node is estimated by the proposed MQSOS algorithm.
For each unknown node we use the complete MQSOS algorithm for positioning. The steps are as follows.
Step 1: calculate the average distance per hop of all anchor nodes, the number of hops between nodes, and so on.
Step 2: calculate the distance between the unknown node and all the anchor nodes.
Step 3: initialize the parameters used by the MQSOS algorithm (population size, population search space size, population dimension, etc.).
Step 4: randomly generate the position of the individual, and divide it into gs subgroups for iteration, with the inter-group communication based on the quantum behavior being performed every cs times.
Step 5: repeat step 4 until the maximum number of iterations is reached and then output the position of the optimal node.

Experimental Analysis
This section presents the results of node positioning using the CEC2013 benchmark suite test function and our newly proposed algorithm for DV-hop in wireless sensor networks.

Simulation Results on CEC2013 Standard Bounded Constraint Benchmark
In the following experimental results, we used the CEC2013 benchmark function set to verify the performance of our newly proposed MQSOS algorithm. The suite contains 5 unimodal functions, 15 basic multimodal functions, and 8 composition functions, and the comparison results are summarized in Table 2.

Simulation Results of Applied MQSOS to Node Localization in WSN Based on DV-Hop
In this section, the newly proposed MQSOS algorithm is applied to practical wireless node positioning, and its results are compared with those of the PSO, QUATRE/best/1, OSOS, and SOS algorithms in the same application to verify its performance in practice. In the simulated environment, there were 20 anchor nodes and 380 unknown nodes in a two-dimensional space of 100 m × 100 m, and the communication radius of the nodes was 20 m. The results of these algorithms in the simulation experiments are shown in Table 3.

Conclusions
This paper proposes a novel SOS variant called the MQSOS algorithm, based on a quantum-behavior inter-group communication strategy. In the implementation, the population is first divided into several subgroups for independent iterative evolution, and inter-group communication is performed each time the subgroups complete the communication step size of iterations. The quantum-behavior communication strategy updates several organisms with better states in each subgroup and replaces organisms with poor states, which enhances the convergence speed and, through increased diversity, improves the overall performance of the algorithm. To verify the performance of the newly proposed MQSOS algorithm, the CEC2013 test suite was used to compare it with other swarm intelligence algorithms. The experimental results indicate that the MQSOS algorithm performed better than the other intelligent algorithms (PSO, PPSO, original SOS, QUATRE, APSO, and OSOS). We also applied the algorithm to wireless sensor node localization to form a new DV-hop variant called MQSOS_DV-hop, aiming to improve the node positioning accuracy of the DV-hop algorithm, and simulated the MQSOS_DV-hop localization algorithm in MATLAB. The experimental results show that the MQSOS algorithm achieved higher accuracy in wireless sensor network node localization.
In the future, we will further study more accurate and efficient evolutionary algorithms, evolutionary schemes, and communication strategies to improve the performance and efficiency of swarm intelligence algorithms. We also plan to apply the improved algorithms to other types of application scenarios, such as hierarchical routing, node deployment, clustering methods, and coverage issues in WSNs, in addition to applications in transportation, energy supply, and other fields.