A Modified Dragonfly Optimization Algorithm for Single- and Multiobjective Problems Using Brownian Motion

The dragonfly algorithm (DA) is one of the optimization techniques developed in recent years. The random flying behavior of dragonflies in nature is modeled in the DA using the Levy flight mechanism (LFM). However, LFM has disadvantages such as overshooting the search area and interrupting the random flights because of its large search steps. In this study, a random motion model known as the Brownian motion is used to improve the randomization stage of the DA. The modified DA was applied to 15 single-objective and 6 multiobjective problems and then compared with the original algorithm. The modified DA provided up to 90% improvement over the original algorithm in reaching the minimum point. The modified algorithm was also applied to the welded beam design, a well-known benchmark problem, and computed an optimum cost 20% lower than that of the original algorithm.


Introduction
The fact that real-life problems have become more numerous over time has led scientists to produce more effective solutions by using optimization algorithms. The search for these effective solutions brought about the study of the behavior of swarms in nature. Scientists have developed various algorithms by observing the behaviors, experiences, and reactions of swarms in nature. These algorithms are known as swarm-inspired optimization algorithms.
So far, swarm-inspired optimization algorithms have successfully solved a lot of real-world problems: In 2013, the artificial bee colony (ABC) algorithm was used for data collection in wireless sensor networks [1], and the ant colony optimization (ACO) algorithm was used for multicompartment vehicle routing problems [2]. In 2015, the ABC algorithm was used for brain tumor segmentation in MRI images [3], and the ACO algorithm was used in job scheduling [4], in economic dispatch problems [5], and for task-scheduling problems in cloud computing [6]. In 2017, the multilevel image thresholding problem was solved with the elephant herding optimization (EHO) algorithm [7], and the ACO algorithm was used in estimating transportation energy demand in Turkey [8]. The dragonfly algorithm (DA) was used in the synthesis of concentric circular antenna arrays by Babayigit [9], and the EHO algorithm was used in support vector machine parameter tuning [10]. Lastly, Debnath et al. [11] made an important study on access point planning for disaster scenarios by using the DA in 2018.
As a result of the modification of nature-inspired optimization algorithms and combining them with other optimization algorithms or methods, hybrid optimization algorithms, which provide better results than the original ones, were developed. Hybrid optimization algorithms also solved many problems in previous studies: In 2010, the ACO algorithm for solving a complex combinatorial optimization problem was modified by Yang and Zhuang [12], and the particle swarm optimization (PSO) for nonconvex economic dispatch problems was improved by Roh et al. [13]. In 2011, Yu et al. [14] improved the ACO algorithm for the multidepot vehicle routing problem, and the ACO algorithm for constrained optimization problems was modified [15]. In 2012, the ACO algorithm on real-parameter optimization was improved [16], and Ishaque et al. [17] hybridized the PSO with the maximum power point tracking method for the photovoltaic system. In 2015, Forsati et al. [18] modified the ACO algorithm for document clustering. In 2016, Salam et al. [19] proposed a hybrid DA with an extreme learning machine for prediction, and lastly, a memory-based DA for numerical optimization problems was proposed in 2017 [20].
Random motion (randomization) is one of the most fundamental features that enable optimization algorithms to solve problems effectively. Random mobility ensures that there is no single path to the solution of a problem. Even when the solution found during optimization is the closest to the optimum, or even optimal, random behavior may still lead to a better solution; this property often prevents the search from getting stuck in the local optima of problems. Another important benefit of random motion is that it leaves no region of the search space unexplored. Without random action, the optimization may settle in only one region of the designated search space and may never examine the results in other regions. Random motion increases the capacity of the algorithm to reach every part of the search space. Classical random motion, i.e., randomization based on a random number generated by the processor, is commonly used in optimization algorithms. However, the occasional inadequacy of this classical randomness has led researchers to seek new solutions. One such solution, which hybridizes optimization algorithms with a random flight method, is the Levy flight mechanism (LFM). The LFM also uses a random number generated by the processor, but the mechanism is based on a statistical mathematical formula.
There are many examples in the literature using LFM for randomization: In 2007, Pavlyukevich [21] used LFM in his research to theoretically validate and justify a new stochastic algorithm for global optimization. In 2008, Barthelemy et al. [22] used the LFM to optimize the transmission of light. In 2009, Yang and Deb [23] implemented the LFM in the cuckoo search algorithm, which combines the obligate brood-parasitic behavior of some cuckoo species with LFM behavior. Yang [24] adapted LFM to the firefly algorithm, and the numerical results of his study showed that the proposed algorithm is superior to existing metaheuristic algorithms. In 2010, Lin et al. [25] proposed a bat algorithm with LFM for parameter estimation in nonlinear dynamic biological systems. Hakli and Uǧuz [26] combined the PSO algorithm with LFM in 2014. In that study, an improvement was achieved with LFM, and successful results were obtained for the problems of early convergence and localization of the agents during optimization. In 2017, Heidari and Pahlavani [27] adapted LFM to gray wolf optimization. Similarly to the problem in PSO, they observed that the poor localization of the wolves caused local minimization, and they solved this problem with the LFM. In the DA developed by Mirjalili [28], the LFM is used to model the search process of the dragonflies for the optimal solution when there is no neighboring solution. Nevertheless, the random motion of the dragonflies is intermittently interrupted by LFM and the step control mechanism within the algorithm. The very large search steps of LFM caused interruptions, and the dragonflies could step beyond the search space. In order to prevent this overflow, a step control mechanism was applied in the original algorithm. However, the step control mechanism is contrary to the original movement of the dragonflies and disrupts the nature of the swarm behavior.
The main objective of this study is the adaptation of the Brownian motion to DA in place of LFM and its application to benchmark functions available in the literature [29]. The goal is to improve the performance of the DA and overcome the interruption problem caused by LFM. The reason for choosing the Brownian motion method is that its isotropic approach (completely independent of direction) increases the exploration capability. Moreover, its step sizes, which are both controllable in size and randomly distributed over time, prevent the agents from leaving the search space and provide continuity of motion. In addition, there is only one study so far in which the Brownian motion has been used in the area of optimization: an optimization method was developed by using the Brownian motion of gas molecules in nature, and very successful results were obtained [29]. These results were compared with those of well-known heuristic algorithms such as PSO and the genetic algorithm (GA). Within the scope of Abdechiri et al.'s [29] study, there are two aims: (i) to increase the effect of random motion in metaheuristic algorithms and (ii) to present the contribution of the Brownian motion to swarm intelligence algorithms. Given the ease of implementation and the results of that previous study, the Brownian motion has a high potential to improve the performance of swarm-inspired optimization algorithms.
In this study, the randomization stage of DA is improved by means of the Brownian motion. The modified DA was compared with the original DA and tested on single-objective and multiobjective benchmark functions. The results obtained on the single-objective functions were compared in terms of the minimum point found and the average values calculated from 200 separate solutions of the benchmark functions. As a result of these comparisons, the modified DA found better minimum points than the original DA on 11 of the 15 benchmark functions. In multiobjective optimization, it achieved better results than the original DA on 5 of the 6 benchmark functions in the graphical results obtained from 100 iterations. The modified DA was finally applied to the welded beam design problem, a well-known real-life problem in the optimization field; according to the results, it found a 20% better optimal cost than the original DA. The rest of the paper is organized as follows: Section 2 presents detailed information about the materials and methods used in the study, Section 3 outlines the test methods and results, and Section 4 concludes the paper.

Dragonfly Algorithm (DA).
The DA was developed by Mirjalili at Griffith University in 2016 [28]. This technique, a metaheuristic algorithm based on swarm intelligence, is inspired by the static and dynamic behaviors of dragonflies in nature. There are two main stages of optimization: exploration and exploitation. These two phases are modeled by dragonflies dynamically or statically searching for food or avoiding the enemy.
There are two cases in which swarm intelligence emerges in dragonflies: feeding and migration. Feeding is modeled as a static swarm in optimization; migration is modeled as a dynamic swarm. According to Craig and Hart [30], swarms exhibit three specific behaviors: separation, alignment, and cohesion. Here, separation means that an individual in the swarm avoids static collision with its neighbors (equation (1)). Alignment refers to the velocity matching of an individual with its neighboring individuals (equation (2)). Finally, cohesion expresses the tendency of individuals towards the centre of the swarm (equation (3)).
Two additional behaviors are added to these three basic behaviors in DA: moving towards food and avoiding the enemy. The reason for adding these behaviors to the algorithm is that the main purpose of each swarm is to survive. Therefore, while all individuals are moving towards food sources (equation (4)), they must avoid the enemy in the same time period (equation (5)). Each of these behaviors is mathematically modeled as follows:

S_i = −Σ_{j=1..N} (X − X_j), (1)

A_i = (Σ_{j=1..N} V_j) / N, (2)

C_i = (Σ_{j=1..N} X_j) / N − X, (3)

F_i = X+ − X, (4)

E_i = X− + X. (5)

In the above equations, X represents the instantaneous position of the individual, while X_j represents the instantaneous position of the j-th individual. N represents the number of neighboring individuals, while V_j represents the velocity of the j-th neighboring individual. X+ and X− represent the locations of the food source and the enemy, respectively.
In order to update the positions of the artificial dragonflies in the search space and to simulate their motions, two vectors are considered: step (ΔX) and position (X). The step vector, which can also be considered as a velocity, indicates the direction of dragonfly motion (equation (6)). After calculating the step vector, the position vector is updated (equation (7)):

ΔX_{t+1} = (s·S_i + a·A_i + c·C_i + f·F_i + e·E_i) + w·ΔX_t, (6)

X_{t+1} = X_t + ΔX_{t+1}, (7)

where s, a, and c in equation (6) represent the separation, alignment, and cohesion coefficients, respectively, and f, e, w, and t represent the food factor, the enemy factor, the inertia coefficient, and the iteration number, respectively. These coefficients and factors enable exploratory and exploitative behavior during optimization. In the dynamic swarm, dragonflies tend to align their flight. In static motion, alignment is very low while cohesion, used to attack prey, is very high. Therefore, the alignment coefficient is high and the cohesion coefficient is low in the exploration process; in the exploitation process, the alignment coefficient is low and the cohesion coefficient is high.
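As a compact illustration of equations (1)-(7), the following Python sketch computes the five behavior vectors and the step/position update for one dragonfly; the coefficient values, function name, and array layout are our own illustrative choices, not the authors' settings.

```python
import numpy as np

def da_step(X, dX, food, enemy, s=0.1, a=0.1, c=0.7, f=1.0, e=1.0, w=0.9):
    """One DA update for a single dragonfly given its neighbours.

    X:  (1 + n_neighbours, dim) positions; row 0 is the dragonfly updated.
    dX: matching step (velocity) vectors.
    """
    x, v = X[0], dX[0]
    neighbours, vels = X[1:], dX[1:]
    S = -np.sum(neighbours - x, axis=0)   # separation, eq. (1)
    A = vels.mean(axis=0)                 # alignment,  eq. (2)
    C = neighbours.mean(axis=0) - x       # cohesion,   eq. (3)
    F = food - x                          # food attraction,   eq. (4)
    E = enemy + x                         # enemy distraction, eq. (5)
    new_dX = s * S + a * A + c * C + f * F + e * E + w * v  # eq. (6)
    new_X = x + new_dX                                      # eq. (7)
    return new_X, new_dX
```

For example, a dragonfly at the origin with two neighbours at (1, 0) and (0, 1), zero velocities, food at (1, 1), and enemy at (−1, −1) moves to (0.25, 0.25) with the placeholder coefficients above.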

Levy Flight Mechanism (LFM) and Dragonfly Algorithm.
LFM derives its name from the French mathematician Paul Lévy. Technically, this mechanism has infinite variance, so steps of arbitrary length are possible. Figure 1 shows a simulation of LFM over the first 1000 steps. In order to improve the randomness, the probabilistic behavior, and the exploration capability of the artificial dragonflies, a random walk (LFM) is used when there is no neighboring solution. Accordingly, the position of the artificial dragonflies is updated as follows:

X_{t+1} = X_t + Lévy(d) × X_t, (8)

Lévy(x) = 0.01 × (r1 × σ) / |r2|^{1/β}, (9)

where d in equation (8) indicates the size of the position vector, r1 and r2 in equation (9) are random numbers in the range [0, 1], β is a constant value, and σ = (Γ(1 + β) × sin(πβ/2) / (Γ((1 + β)/2) × β × 2^{(β−1)/2}))^{1/β}. In the LFM used in DA, a multiplier not included in the original mathematical formula of the flight method was introduced: 1% of the LFM step size is taken, as seen in equation (9). The aim is to control the step size; this multiplier determines how far the best individual deviates after an LFM step (the position of the best individual). The 1% deviation value can be set according to the range of the variables in the application. For example, if the range of variables in the application is [−10e6, 10e6], the multiplication value of 1% can be set to 1.
Although LFM raises the performance of DA to a certain extent, it has the disadvantage that very long steps may occur, depending on the characteristics of the mechanism (Figure 1). These major steps are controlled in two ways in the algorithm. The first is that if a long step takes an agent outside the search space, a new step vector is produced. However, it is not certain that this solution will always give correct results: the newly produced step may set the overall search back. The second is to take 1% of the step size, as seen in equation (9), or a different percentage according to the variable range of the application. This second method is better than the first. However, controlling the step size is contrary to the nature of LFM.
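For illustration, a sketch of the 0.01-scaled Lévy step of equation (9) in Python; β = 1.5 is a typical choice, and the function name is ours. Since r2 can be close to zero, occasional very large steps appear, which is exactly the long-step behavior discussed above.

```python
import math
import random

def levy_step(dim, beta=1.5):
    """Draw one 0.01-scaled Levy-flight step per dimension (eq. (9))."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    step = []
    for _ in range(dim):
        r1 = random.random()  # uniform draw in [0, 1), as in the paper
        r2 = random.random()  # small r2 values produce very long steps
        step.append(0.01 * r1 * sigma / max(r2, 1e-12) ** (1 / beta))
    return step
```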

Brownian Motion.
Another random motion mechanism is the Brownian motion [31]. This method is inspired by the motion of free liquid/gas molecules. The phenomenon was introduced to the literature by Ingenhousz, who in 1779 observed the random motion of coal dust particles floating on alcohol; the motion was later described by the botanist Robert Brown in 1828. The Brownian motion is defined as one of a variety of physical phenomena in which a quantity continuously undergoes small, random fluctuations. It is the random motion of particles suspended in a liquid (or a gas) resulting from collisions with the fast-moving molecules of the fluid. The Brownian motion is considered to be a Markov process with Gaussian increments and a continuous path over time. Figure 2 shows an example of the Brownian motion over 1000 steps.
There are some basic differences between the LFM and the Brownian motion. Mathematically, a random walk can be defined as

X_{N+1} = X_N + W_N,

where X_N is the solution at step N and W_N is a random vector drawn from a known probability distribution. If W_N is drawn from the Gaussian distribution, the random walk is isotropic. In this case, the motion takes the form of normal diffusion and is called the Brownian motion. The expected travelled distance can be modeled with square-root scaling:

E[d(N)] ∝ √N.

If the steps W_N are drawn from a heavy-tailed probability distribution, such as the Lévy distribution or the Cauchy distribution, the diffusion becomes anomalous. In this case, the expected distance becomes

E[d(N)] ∝ N^q,

and if q > 1/2, the diffusion is called superdiffusion. Both the Lévy distribution and the Cauchy distribution may produce some very large steps leading to superdiffusion. This means that the average distance increases faster than in normal diffusion.
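The square-root scaling of normal diffusion can be checked with a small simulation; this sketch (names ours) averages the displacement |X_N| of many independent one-dimensional Gaussian walks, for which E|X_N| = √(2N/π).

```python
import random

def mean_displacement(n_steps, n_walkers=2000, seed=0):
    """Average |X_N| over independent 1-D Gaussian random walks
    (unit-variance steps). For normal diffusion this grows like sqrt(N):
    E|X_N| = sqrt(2 * N / pi)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0, 1)
        total += abs(x)
    return total / n_walkers
```

Quadrupling the number of steps should roughly double the mean displacement (√(2·100/π) ≈ 7.98 versus √(2·400/π) ≈ 15.96), in contrast to the faster N^q growth of a superdiffusive Lévy walk.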

Improvement of Dragonfly Algorithm with Brownian Motion.
Modification of DA by the Brownian motion is expressed in equations (16)-(18). In equation (16), the term T represents the motion time period, in seconds, of an agent (dragonfly); in this study, the T value is taken as 0.01. The term N in equation (17) denotes the normal (Gaussian) distribution, used instead of the heavy-tailed distribution. The periodic motion of the dragonflies is thereby spread over time with a normal distribution, and sudden jumps are replaced by vibration-like random motions. Finally, the equation is extended by the dimension and the number of agents in the algorithm, which finalizes the Brownian motion step (equation (18)).
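Since equations (16)-(18) are not reproduced above, the following sketch only shows the assumed shape of the Brownian update: a Gaussian increment scaled by √T, with T = 0.01 as in the study, drawn per dimension and per agent. All names and array shapes here are our assumptions, not the authors' formulation.

```python
import numpy as np

def brownian_update(X, T=0.01, rng=None):
    """Move every agent by a Brownian increment sqrt(T) * N(0, 1).

    X: (n_agents, dim) array of positions; returns the updated positions.
    The sqrt(T) scaling makes the step variance proportional to the
    motion time period T, as in a standard Wiener-process increment.
    """
    rng = np.random.default_rng(rng)
    return X + np.sqrt(T) * rng.standard_normal(X.shape)
```

With T = 0.01 the per-coordinate step standard deviation is 0.1, so the agents vibrate in small steps instead of taking the occasional very long Lévy jumps.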
The modified DA has been adapted for both single- and multiobjective problems. Pseudocodes of the modified DA for single- and multiobjective problems are given in Figures 3 and 4. Figure 5 shows the flowchart of the modified DA for single-objective problems. The optimization starts by randomly placing the dragonflies in the search space and initializing the step vectors. Then, the current positions are sent as parameters to the comparison function. After that, the best and the worst solutions are determined. Next, the number of neighbors of each dragonfly is examined. If a dragonfly has at least one neighbor, its velocity vector is calculated with the coefficients determined at the beginning of the algorithm and its position vector is updated. If a dragonfly has no neighbors, the Brownian motion solution is used and the position vector is updated accordingly. Then, it is checked whether the dragonflies are inside the search space. If they are, the termination criterion is checked; if not, the neighborhood solution is applied again. It is then checked again whether the termination criterion is met, and the optimization is terminated accordingly. The flowchart of the modified DA for multiobjective optimization is shown in Figure 6. Multiobjective optimization starts by placing the dragonflies randomly in the search space and initializing the step vectors. Then, the maximum archive size and the number of segments (hyperspheres) are defined. The instantaneous locations of the dragonflies are sent as parameters to the comparison function, and the mutually nondominated solutions are produced. If the archive is full, some solutions are eliminated with the roulette-wheel mechanism and the new solutions are added to the archive. If any of the added solutions falls outside the hypersphere, the hypersphere is updated to cover all solutions.
Following these processes, the best and the worst solutions in the archive are assigned as the food source and the enemy, respectively. The neighborhood control is performed, and from there on, the algorithm works in the same way as in the single-objective optimization.
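The single-objective flow described above can be sketched as follows; this is an illustrative reimplementation, not the authors' MATLAB code, and the neighborhood radius, coefficient values, and sphere objective are placeholder choices.

```python
import numpy as np

def modified_da(obj, dim=5, n_agents=20, iters=100, lb=-5.0, ub=5.0,
                radius=1.0, T=0.01, seed=0):
    """Minimal sketch of the modified DA: swarm update when neighbours
    exist, Brownian walk (instead of the Levy flight) when they do not."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))
    dX = np.zeros((n_agents, dim))
    s, a, c, f, e, w = 0.1, 0.1, 0.7, 1.0, 1.0, 0.9  # placeholder weights
    best = X[np.argmin([obj(x) for x in X])].copy()
    for _ in range(iters):
        fit = np.array([obj(x) for x in X])
        food = X[fit.argmin()].copy()    # best solution (food source)
        enemy = X[fit.argmax()].copy()   # worst solution (enemy)
        if obj(food) < obj(best):
            best = food.copy()
        for i in range(n_agents):
            dists = np.linalg.norm(X - X[i], axis=1)
            mask = (dists > 0) & (dists < radius)  # neighbourhood check
            if mask.any():
                nb, nv = X[mask], dX[mask]
                S = -np.sum(nb - X[i], axis=0)
                A = nv.mean(axis=0)
                C = nb.mean(axis=0) - X[i]
                dX[i] = (s * S + a * A + c * C
                         + f * (food - X[i]) + e * (enemy + X[i]) + w * dX[i])
                X[i] = X[i] + dX[i]
            else:
                # Brownian walk replaces the Levy flight of the original DA
                X[i] = X[i] + np.sqrt(T) * rng.standard_normal(dim)
            X[i] = np.clip(X[i], lb, ub)  # keep agents inside the search space
    return best, obj(best)
```

Run on the sphere function, the sketch returns a best position inside the bounds with a non-negative fitness; it is meant only to show where the Brownian step replaces the LFM branch.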

Test Results and Discussions
In this section, the experimental evaluation of the modified DA is presented. The MATLAB [32] software was used for the single-objective and multiobjective problem optimizations and for the application to the welded beam design problem. In all cases, the computer used in the simulations was configured with a 2.2 GHz Intel Core i7 CPU and 6 GB of DDR3 RAM. 15 benchmark functions were used for the optimization of single-objective problems and 6 for multiobjective problems. The results are compared with those of the original DA and discussed in terms of the minimum point achieved and the average performance.

Benchmark Functions.
Test functions are useful for evaluating the convergence rate, precision, robustness, and overall performance of optimization algorithms. Here, some test functions are presented to give an idea of the different situations that can be encountered when an optimization algorithm deals with such problems. Only the general form of each function, the limits of its variables, and the coordinates of the global minimum are given. Tables 1 and 2 show the benchmark functions used for single-objective and multiobjective problems.

Single-Objective Problem Optimization Results.
The modified DA was tested by using 15 benchmark functions for single-objective problem optimization, and the minimum value for each test was compared with the LFM results in the DA (the original DA).
The Brownian motion in the modified algorithm was calculated in two forms: the original form and the step-size-controlled form obtained by taking 1% of the calculated step size, as expressed in Section 2.3. The average value taken from 200 separate runs of the original DA was compared with that of the two versions of the modified DA: the Brownian motion and the step size-controlled Brownian motion (SSCBM).
The numerical results are shown in Table 3, and the graphical representation of the results is presented in Figures 7 and 8. When the results are evaluated in terms of minimum and average values, the following inferences can be made:

(v) The biggest success was obtained from F7, with an 88.8% improvement in terms of average values using SSCBM. It can be said that the number of local minima is quite high and SSCBM is more successful than LFM in this type of function. When the worst three functions (F1, F3, and F4) are examined in this manner, it can be seen that although the minimum point found for these global functions is more optimal than with LFM, this optimality is caused by random motion, because these three functions are similar and there was a decline in the average over the 200 separate runs.

(vi) Finally, the Brownian motion randomization (with or without step size control), in accordance with its nature, proved its success by giving better results than LFM on most of the benchmark functions.

Multiobjective Problem Optimization Results.
The modified DA was applied to the multiobjective problems given in Table 2, and 12 minimum values were obtained from the 6 well-known benchmark functions used in the comparison. In the modified DA, the results were obtained from 100 iterations over 100 mutually nondominated solutions after applying the Brownian motion step. The numerical f1 and f2 results are shown in Table 4, and the graphical results are shown in Figures 9-14.
When the results are analyzed numerically, the f1 values are mostly better in the original DA. This is because f1 minimization corresponds to the x1 position of the artificial dragonflies in 5 of the 6 problems. In the f2 minimization, the expected difference is observed: this is the phase where the real random motion of the optimization is calculated to reach the minimum f2 value. Here, the Brownian motion achieved an average of 50% improvement in 5 out of 6 functions compared to LFM. If the statistical regression in one function (ZDT3) is examined, it is understood that the random motion should be applied stepwise in trigonometric-rooted approaches. When the results are analyzed graphically, the success of the Brownian motion in terms of convergence and coverage compared to LFM increased in proportion to the increase in dimension. While the DA with LFM was tested in 5 dimensions in the previous study [28], the search space was increased to 10 dimensions here in order to increase the difficulty, and the Brownian motion clearly revealed its superiority over LFM.
The results of the modified DA were compared not only with those of the original algorithm but also with those of some well-known optimization algorithms such as GA, PSO, and ACO by means of basic statistics (i.e., mean and standard deviation). Table 5 shows the results of the 4 algorithms on 15 different benchmark functions. These results were obtained with a total of 500 iterations and 40 agents for each optimization method.
When the results are analyzed, the proposed method produced results similar to, and as successful as, those of the ACO algorithm on the benchmark functions with an early-convergence problem, compared with the other algorithms. This means that the pheromone solution used by the ants is similar to the short-step solution of the Brownian motion. The pheromone, which keeps ants close to each other and increases their nutrient concentration, has an effect similar to that of the neighborhood radius on the Brownian motion. PSO was more successful than the other algorithms in terms of standard deviations. The reason for this is that, as in the original dragonfly algorithm, the long steps extend the search and rarely reach the result in a shorter time. On the contrary, the proposed method was more successful than PSO on benchmark functions with local minima. The reason for this is that the short steps in different directions in the Brownian motion method allow the particles to explore different regions and produce better solutions in the relevant time interval.

A Solution of the Welded Beam Design Problem with the Modified Dragonfly Algorithm.
The welded beam design is a practical design problem that is often used as a benchmark in testing different optimization techniques. The problem is an example of structural optimization problems, consisting of a nonlinear objective function and five nonlinear constraints [33]. The welded beam design problem has been solved by many algorithms such as GA [34], simulated annealing [35], evolutionary strategy [36], and the gravitational search algorithm [37].
In this study, the welded beam design problem was implemented in order to show the effectiveness of the modified DA. The reason for choosing this problem is that it has been used many times in the past as an application of hybrid swarm-inspired optimization techniques. One of these is Kaveh and Talatahari's study [38], which hybridizes PSO and ACO. Another is the application of an upgraded ACO [39], and Brajevic and Tuba [40] proposed a solution for constrained engineering problems. Liao et al. [41] used it in the application of mixed-variable optimization problems, and, finally, it was used by Ranjini and Murugan [20] for the memory-based modification of DA. The welded beam design problem aims to minimize the manufacturing cost of the welded beam by finding a suitable set of four structural parameters of the beam. These four structural parameters are the thickness of the weld (x1), the length of the clamped bar (x2), the height of the bar (x3), and the thickness of the bar (x4). The relevant restrictions include the shear stress (τ), the bending stress (θ) in the beam, the buckling load (P), and the end deflection of the beam (δ). Figure 15 shows the schematic design of the problem. The total cost is the sum of the labor cost (a function of the weld dimensions) and the cost of the welding and beam material. The beam is optimized for minimum cost by changing the weld and member dimensions (x1, x2, x3, and x4). The variables x1 and x2 are usually integer multiples of 0.0625 inches but are considered continuous for this application.
The parameters and values of the problem are given in the following equations: G = 12 × 10^6 psi, L = 14 inches, and P = 6000 lb. Young's modulus (psi) is given in equation (19), the shear modulus (psi) of the beam material in equation (20), the protrusion length (inches) of the member in equation (21), the welding design stress (psi) in equation (22), the normal design stress (psi) of the beam material in equation (23), the maximum deflection in equation (24), and the load (lb) in equation (25). The cost function of the problem is shown in equation (26).
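Ignoring C0, the cost function of equation (26) has the standard form used for this benchmark, f(x) = 1.10471·x1²·x2 + 0.04811·x3·x4·(L + x2); a sketch, with the function name ours:

```python
def welded_beam_cost(x1, x2, x3, x4, L=14.0):
    """Manufacturing cost of the welded beam (eq. (26), C0 ignored).

    x1: weld thickness, x2: weld length, x3: bar height, x4: bar thickness.
    1.10471 weights the weld volume, 0.04811 the bar material.
    """
    weld_cost = 1.10471 * x1 ** 2 * x2
    material_cost = 0.04811 * x3 * x4 * (L + x2)
    return weld_cost + material_cost
```

For the frequently cited near-optimal design x ≈ (0.2057, 3.4705, 9.0366, 0.2057), this gives a cost of about 1.725.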
In equation (26), C0 represents the initial cost; however, it is assumed that the fixtures required for holding the bar in place during welding are available, so the cost C0 can be ignored in the total cost model. C1 represents the cost of the weld, and C2 represents the cost of the material. The welded beam design problem was carried out with 40 dragonflies and 200 iterations. The average values were obtained after the optimization was performed 200 times. The results are shown in Table 6.
According to the results, although the Brownian motion without step size control was slightly less successful in terms of the minimum cost, it produced almost the same result as LFM. The most significant success of the Brownian motion can be seen in the average values: when no step size control is applied, the Brownian motion is more successful than LFM. This means that the long premature jumps of LFM do not always have a positive effect. Step size control over the Brownian motion increases the likelihood of reaching optimal results: when 1% step size control was applied, the Brownian motion yielded a 20% lower minimum cost than LFM. According to all these results, the success of the modified DA can serve as a guide for the solution of other real-world problems.

Analysis of the Modified DA.
In this study, the long sudden jumps, which are one of the main differences between LFM and the Brownian motion, were examined. As previously mentioned, long sudden steps are a solution that LFM produces to avoid early convergence. However, as a result of this solution, a step that goes out of the search space must from time to time be regenerated, which causes a loss of time. At this point, the Brownian motion solution that we have implemented rescues the algorithm from these long steps. The data in Figure 16 show the long steps taken by 40 dragonflies over 1 iteration. Here, a step is counted as long when it exceeds a threshold obtained from the average length of the steps taken for each method. As can be seen in the results, in the original DA with LFM, the number of long steps produced for each function is at least 5, with an average of 9.26. In the modified DA with the Brownian motion, this number is at least 0, with an average of 1.53. This is the main difference between the methods in this study, and our suggestion is that this difference often leads to success.
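The long-step counting described above can be sketched as follows; the paper's exact threshold rule may differ, so this is only a minimal interpretation (steps longer than the run's mean step length are counted as long):

```python
def count_long_steps(step_lengths):
    """Count steps longer than the mean step length of the run."""
    mean_len = sum(step_lengths) / len(step_lengths)
    return sum(1 for s in step_lengths if s > mean_len)
```

A heavy-tailed mechanism such as LFM produces many steps far above the mean, while the near-uniform Brownian step lengths rarely exceed it by much, which is what Figure 16 summarizes.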
Even though the irregular jumps of LFM, which are the motivation of this study, are corrected by the Brownian motion, dragonflies may need sudden jumps to escape from local minima. Although the proposed method greatly reduces the irregular jumps caused by LFM, there is a rare possibility that it may get stuck in local minima. In addition, the number of neighbors with which the dragonflies move together decreases slightly with the Brownian motion. This can, in rare cases, reduce the communication groups and restrict exploration.
The modified DA has a time complexity just like all other optimization algorithms. It depends on the population size and the number of iterations; the overall complexity of the modified DA can be expressed as O(number of iterations × population size). Using the Brownian motion instead of LFM did not affect the time complexity of the original algorithm. The goal here is to achieve the optimum result with the optimum number of dragonflies. Apart from the number of dragonflies, the dimension of the benchmark functions and the number of iterations are among the factors affecting the execution time. However, in contrast to the number of dragonflies, the dimension has an inversely proportional effect on speed. Table 7 compares the execution times. When the results were examined, 7 of the 15 functions improved by 5% on average; on the contrary, an average regression of 3% was observed for the remaining 8 functions. Two of the improved functions are notable. When F5 is examined, the improvement in processing time is due to the Brownian motion's short steps in different directions; when LFM was used, steps had to be discarded because of the long jumps, which prolonged the execution time. Additionally, when F8 is examined, it can be seen how the early-convergence problem, the main optimization problem that LFM was used to solve in the original algorithm, is handled: the Brownian motion can be more successful in preventing getting stuck in local minima, as it aims to go in a different direction at every step. When F2 is examined, it is seen that the long steps of LFM worked this time and the short steps of the Brownian motion failed. The neutral conclusion to be drawn here is that the success of the method may change according to the characteristics of the function.

Conclusions
Randomization is one of the essential elements of optimization techniques based on swarm intelligence. It plays a very important role in both the exploration and exploitation stages. In this study, the randomization stage of DA, one of the swarm-based algorithms used effectively in recent years, was modified with the Brownian motion. The results of the single-objective problem optimization clearly show that when the Brownian motion is used instead of LFM, definite success is achieved in terms of the minimum values of the benchmark functions. The modified DA was also tested on 6 multiobjective problems; when the numerical and graphical results are examined, it can be seen that the Brownian motion achieves significant success in the 10-dimensional space compared to LFM. As a general evaluation of the results, the long sudden steps of LFM sometimes caused the agents to leave the search space, so the random motion had to be regenerated from the beginning. This is a costly, time-consuming situation, and it changes the original movement. With the Brownian motion, however, there is a significant reduction in the number of long steps in each iteration (Figure 16), and the original randomness is retained without having to repeat the movement.
This has increased the importance of the neighborhood radius used in the original algorithm, both saving time and facilitating the collective movement of the dragonflies.
In addition, the problem of getting stuck in local minima (premature convergence) and the problem of endless wandering in the search space, present in most optimization algorithms including the original DA, have been significantly mitigated by the Brownian motion's principle of generating random motion in a different direction at every step. Alongside the success of LFM relative to ordinary random motion, this success of the Brownian motion also highlights the importance of random flight mechanisms: random motion mechanisms have a strong effect on the performance of optimization techniques. Finally, this success of the Brownian motion points to another way of improving the results of other optimization techniques in future work.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.