Antenna S-parameter optimization based on golden sine mechanism based honey badger algorithm with tent chaos

This work proposes a new method to optimize antenna S-parameters using a Golden Sine mechanism-based Honey Badger Algorithm that employs Tent chaos (GST-HBA). The Honey Badger Algorithm (HBA) is a promising optimization method that, like other metaheuristic algorithms, is prone to premature convergence and a lack of population diversity. The HBA is inspired by the behavior of honey badgers, which use their sense of smell and honeyguide birds to move toward the honeycomb. Our proposed approach aims to improve the performance of HBA and enhance the accuracy of the optimization process for antenna S-parameter optimization. It leverages the strengths of both tent chaos and the golden sine mechanism to achieve fast convergence, population diversity, and a good tradeoff between exploitation and exploration. We begin by testing our approach on 20 standard benchmark functions, and then apply it to a test suite of 8 S-parameter functions. Comparative tests against other optimization algorithms show that the suggested algorithm is superior.

The optimization of antenna design parameters has recently attracted a lot of attention. The topological complexity of modern antenna constructions has progressively risen over time to satisfy ever-stricter requirements. In the sphere of communication, fine-tuning the geometric and material features of antenna design structures to meet these requirements has become standard practice [33]. The most popular methods for fine-tuning antenna parameters involve trial-and-error procedures based on scanning a number of design parameters using experience-based methods. These methods require a significant amount of time, and their success is not guaranteed; hence the need for optimization-based design automation. The main methods used to improve antenna performance through optimization are local and global numerical approaches. Although numerical optimization is preferable to parameter scanning using experience-based methods, some challenges remain [34]. To obtain effective and relevant results, local optimization approaches generally require an adequate starting point when designing modern antennas, which is rarely attainable. In contrast, global optimization methods attract more interest because they are robust and require no significant modifications during their design or implementation. However, they usually necessitate a significant number of electromagnetic simulations, which can be costly, to obtain optimal design parameters [35]. A particularly efficient alternative to conventional methods for resolving these issues is metaheuristic optimization. Unlike other optimization techniques, metaheuristic algorithms are generally able to avoid local optima and are practical for a wide range of applications and domains without requiring significant changes to their implementation and design; this makes them superior to other optimization techniques. However, metaheuristic algorithms also have limitations.

Table 1
Meta-heuristic optimization algorithms for antenna problems.

Subhashini and Satapathy [41] — Enhanced Ant Lion Optimization (e-ALO)
Challenge: Find the best arrangement (spacing) of antenna array elements and their excitations for various antenna shapes, with the goal of minimizing sidelobe levels while remaining within boundary limitations for other restrictions.
Antenna problem: Antenna array.
Method: Rather than using a uniform probability distribution function, the Pareto distribution was chosen from several candidate distributions and used to obtain a random number in the interval [0, 1]. Additional weighting variables were also incorporated during the ant location-updating stage, bringing the movements near both the Roulette wheel and the elite individual.

Guttula et al. [32] — Elephant Herding Optimization with New Scaling Factor (EHO-NSF)
Challenge: Improve narrow bandwidth, which is not suitable for broadband equipment.
Antenna problem: Microstrip Patch Antennas (MPA).
Method: A new scaling factor is introduced for better search quality. The modification improved the antenna gain by choosing optimal values for the width and length of the patch, the thickness, and the dielectric value of the substrate of the MPA.

Janairo et al. [42] — Genetic Programming with Lichtenberg Algorithm (GP-LA), Genetic Programming with Henry Gas Solubility Optimization (GP-HGSO), and Genetic Programming with Archimedes Optimization Algorithm (GP-AOA)
Challenge: Capacitance improvement.
Antenna problem: Plate-wire antenna.
Method: To establish quasi-static conditions, the antenna capacitance fitness function was constructed using GP and then minimized using GP-LA, GP-HGSO, and GP-AOA. Finally, the three antennas were 3D-designed in Altair Feko, and their electrical characteristics were compared to those of the standard antenna. The hybrid GP-LA antenna model produced practical outputs, while the hybrid GP-AOA and GP-HGSO produced coupled transceiver systems with inappropriate antenna capacitance.

Li et al. [43] — Neighborhood-Redispatch Particle Swarm Optimization (NR-PSO)
Challenge: Design and optimize a compact Ultrawideband (UWB) antenna that does not require an additional resonance structure for Bluetooth applications with a Wireless Local Area Network (WLAN) stopband.
Antenna problem: UWB antenna.
Method: The proposed algorithm includes three new factors: a convergence factor, a neighborhood factor, and a dispatch factor. The first determines whether a particle has entered the convergence area centered on the present global optimum. The second specifies the neighborhood search space into which a particle from the convergence decision area is redirected. The third specifies the number of particles that will be redirected to the nearby search area. Together, the three components form a successful PSO scheme with strong optimization performance.

Singh and Kaur [44] — Levy flight Archimedes Optimizer (LAO)
Challenge: Overcome the limitations of the original Archimedes optimizer, namely its tendency to converge too slowly and prematurely, which causes it to get stuck in local optima.
Antenna problem: Microstrip Patch antenna.
Method: A fixed limit is set for every decision variable; if a variable fails to reach its optimal solution within the search area by the end of the current generation, the limit for that variable is adjusted. If a decision variable exceeds the boundary, the Levy flight helps to restrict its speed, improving the exploration phase. The Levy flight is also used to compute the step size for random walks and to escape the Archimedes Optimizer's local optima while searching.

Pal et al. [45] — Modified Invasive Weed Optimization (M-IWO)

The remainder of this paper is organized as follows: the experimental simulation and results are shown in Section 6; in Section 7, GST-HBA is applied to the antenna S-parameter optimization problem; Section 8 discusses the uncertainties and limitations associated with antenna S-parameter optimization; Section 9 presents the conclusion and future work.

Literature review
This section reviews several algorithms that researchers have improved to address antenna optimization problems. These algorithms aim to enhance the performance of different types of antennas by optimizing various features and overcoming specific challenges, as expressed in Table 1 below.
Table 1 shows each algorithm's main challenge addressed, the type of antenna problem targeted, and the method applied. In summary, these algorithms address different challenges in antenna design, such as selecting significant features, minimizing sidelobe levels, improving narrow bandwidth, enhancing capacitance, achieving compact designs, overcoming convergence limitations, and optimizing various antenna parameters.

Honey badger Algorithm (HBA)
Recently, researchers introduced the HBA, a nature-inspired algorithm modeled on the intelligent foraging behavior of honey badgers [14]. The honey badger employs two main hunting techniques: using its olfactory abilities to search for food sources and following honeyguide birds. In the exploration phase of the algorithm, inspired by the first strategy, the honey badger locates its prey by relying on its sense of smell. Once it identifies the prey, the honey badger explores the surrounding area to find the optimal spot for capturing it. In the exploitation phase, inspired by the second hunting technique, the honey badger tracks the food source with the aid of honeyguide birds to reach the hive. Its pace is influenced by the intensity of the prey scent detected at its current location, facilitating efficient exploitation.
First step: Initialization. Each potential solution's position is represented as a vector in D dimensions. The relation below is used to initialize the positions of a population of n honey badgers:

x_i = lb_i + r_n1 × (ub_i − lb_i)  (1)

where ub_i is the search-space maximum limit, lb_i is the search-space minimum limit, x_i is a potential solution in a population of size N, and r_n1 is a number selected randomly within the range [0, 1].

Second step: Intensity definition. The following equations present the relation between the scent intensity I_i, the concentration strength S, and the distance d_i between the honey badger and the prey:

I_i = r_n2 × S / (4π d_i²),  with  S = (x_i − x_{i+1})²  and  d_i = x_prey − x_i  (2)

where x_i denotes the honey badger's current position, x_{i+1} represents the honey badger's immediate next position, x_prey denotes the prey position, and r_n2 is an arbitrary number selected from the range [0, 1].

Third step: Updating the density factor. The density factor α is a randomization regulator employed to balance the exploitation and exploration phases; it decreases with iterations to reduce randomness as the population converges:

α = C × exp(−it / it_max)  (3)

where C represents a constant chosen from the interval [1, +∞), it is the current iteration, and it_max is the maximum iteration number.

Fourth step: Escaping local optima and updating the positions of the agents. This step limits the risk of being trapped in a local optimum. The HBA generates a flag Fl that alters the direction of the search to improve the chances of a thorough scan of the search area. The position update consists of two subphases, the "digging phase" and the "honey phase", detailed as follows.

Digging phase: During this phase, the behavior exhibited by the honey badger is expressed as:

x_new = x_prey + Fl × β × I_i × x_prey + Fl × r_n3 × α × d_i × |cos(2π r_n4) × [1 − cos(2π r_n5)]|  (4)

where x_prey denotes the prey's position (the global best position), β represents the honey badger's ability to get food, given as a preset value ≥ 1 (initial value = 6), while r_n3, r_n4, and r_n5 are three distinct numbers drawn arbitrarily from the interval [0, 1]. Fl is a flag that changes the search direction and is calculated using Eq. (5):

Fl = 1 if r_n6 ≤ 0.5, and Fl = −1 otherwise  (5)

where r_n6 is a randomly selected number from [0, 1]. The intensity of the prey's scent, the badger's position, and the parameter α affect prey detection, and the disruption Fl may be encountered during the digging process.

Honey phase: This models the instant at which the honey badger follows the honeyguide bird to reach the beehive:

x_new = x_prey + Fl × r_n7 × α × d_i  (6)

where r_n7 is a random number within [0, 1] and x_new is the badger's new position.
The pseudocode of HBA is given in Fig. 1.
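As a concrete illustration, the steps above can be sketched as a single-badger position update in Python. This is a minimal sketch based on the equation forms of the original HBA; the function name and the use of a neighbour `x_next` for the concentration strength S are our own illustrative conventions.

```python
import numpy as np

def hba_step(x_i, x_next, x_prey, t, t_max, beta=6.0, c=2.0, rng=None):
    """One HBA position update for a single badger (Steps 2-4 of the text).

    x_i: current position; x_next: a neighbour used for the concentration
    strength S; x_prey: current global best; t/t_max: iteration counters.
    """
    rng = rng or np.random.default_rng()
    alpha = c * np.exp(-t / t_max)                      # Step 3: density factor
    s = np.sum((x_i - x_next) ** 2)                     # concentration strength S
    d_i = x_prey - x_i                                  # badger-to-prey vector
    # Step 2: smell intensity I = r2 * S / (4*pi*d^2), with a zero guard
    intensity = rng.random() * s / (4 * np.pi * np.sum(d_i ** 2) + 1e-30)
    fl = 1.0 if rng.random() <= 0.5 else -1.0           # Step 4: direction flag
    if rng.random() < 0.5:
        # Digging phase
        r3, r4, r5 = rng.random(3)
        return (x_prey + fl * beta * intensity * x_prey
                + fl * r3 * alpha * d_i
                * abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
    # Honey phase: follow the honeyguide toward the hive
    return x_prey + fl * rng.random() * alpha * d_i
```

Iterating this update over the whole population, and keeping each new position only when it improves the objective, reproduces the HBA search loop summarized in Fig. 1.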

Antenna S-parameter optimization problem
In the preliminary design stage of optimization algorithms, they are usually evaluated on a collection of analytical functions for which the global and local optima are known. This collection of functions, also known as a test suite, helps validate the performance of the algorithm with respect to its effectiveness and efficiency in numerous circumstances [46,47]. Several authors have proposed test suites for evaluating the effectiveness of optimization methods when applied to antenna problems. One such test suite is by Ref. [48], who presented a set of functions that may be used to gauge and assess how well various Evolutionary optimization Algorithms (EAs) perform when used to address difficult Electro-Magnetic (EM) challenges. By matching the characteristics of a test function with an antenna landscape, one can effectively evaluate the performance of an antenna optimization algorithm.

Several types of antenna structures
It is nearly impossible to examine every type of antenna developed over the years due to their enormous quantity. Based on their structural characteristics, antennas are categorized into four groups, as presented in the following subsections [50].

Single-antenna with single-band
Considered the simplest antenna problem, this category refers to single antennas that transmit and receive data on a single frequency band. The system can transmit or receive only one data stream at a time, and it can operate on only one frequency band at a time. Examples include monopole and dipole antennas. Typically, the resonance frequency of this type of antenna can be determined through theoretical calculations.

Single-antenna with multi-band
This category refers to systems that use a single antenna to transmit and receive data on multiple frequency bands, meaning the system can transmit and receive data on different frequency bands. This sort of antenna is built with parasitic components that have various resonance frequencies in order to increase bandwidth or add more bands.

Multi-antenna with one feeding
This category refers to multiple antennas that transmit and receive data but with only one feed line connecting the antennas to the transceiver. The antennas are physically combined into a single structure or array, which allows them to share a common feed point. This type of antenna can give rise to a multi-objective optimization problem: resonant frequency, S-parameters, and pattern-related parameters such as axial ratio, gain, and polarization can be employed concurrently as objective functions.

Multi-antenna with multi-feeding
This category refers to systems that use multiple antennas and multiple feed lines for data transmission and reception. Each antenna is attached to a separate feed line, which allows each antenna to operate independently and transmit or receive data on its own. The feed lines are connected to a signal processing unit, which can combine or separate the signals from each antenna as needed to optimize performance. Each component might have specific polarization and directivity properties. To increase isolation, several polarization modes are occasionally used. Following the classification above, a standard set of test functions was created for the antenna S-parameter, where each function illustrates an antenna-type landscape [50]. The test suite is described in Table 2.

Tent chaos mechanism
Chaos mapping is a sophisticated dynamic technique used in nonlinear systems that exhibits randomness, ergodicity, and regularity. It is widely applied in algorithm optimization to achieve a more thorough and extensive exploration of the search space.
Currently, Tent Chaotic mapping (TC) and Logistic Chaotic mapping (LC) are the two most popular chaotic maps. Demir et al. contrasted the impacts of LC with those of TC [51] and concluded that TC shows a more uniform population distribution and results in faster convergence. As displayed in Fig. 2a, the outputs produced by LC in [0.0, 0.2] and [0.8, 1.0] are larger than those of other sections, while Fig. 2b shows that TC values are more uniform over all feasible locations. Based on this, TC is widely adopted to substitute the random initialization of algorithms in order to ensure higher variety in the starting population, better convergence speed, and a reduced tendency for the algorithm to be trapped in a local optimum. TC can be expressed by the following equation:

x_{i+1} = 2x_i for 0 ≤ x_i ≤ 0.5, and x_{i+1} = 2(1 − x_i) for 0.5 < x_i ≤ 1  (7)

where x_i is the decision variable of a potential solution.
Eq. (8) is the Bernoulli-shift transformed version of the previous equation:

x_{i+1} = (2x_i) mod 1  (8)

To prevent the TC expression from entering short, unstable periodic points during the iteration process, and to retain its regularity, ergodicity, and unpredictability, a random term rand(0, 1) × (1/N) is incorporated, as presented in Eq. (9):

x_{i+1} = 2x_i + rand(0, 1) × (1/N) for 0 ≤ x_i ≤ 0.5, and x_{i+1} = 2(1 − x_i) + rand(0, 1) × (1/N) for 0.5 < x_i ≤ 1  (9)

Transformed by the Bernoulli shift, it is expressed as:

x_{i+1} = (2x_i) mod 1 + rand(0, 1) × (1/N)  (10)

where rand(0, 1) represents a number drawn arbitrarily from the interval [0, 1], and N denotes the total number of individuals in the sequence.
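The tent-chaos initialization described above can be sketched in Python. This is a minimal illustration assuming the Bernoulli-shift form with the rand(0, 1) × (1/N) perturbation; the helper names are our own.

```python
import numpy as np

def tent_chaos_sequence(n, x0=0.7, seed=0):
    """Tent-chaos sequence in the Bernoulli-shift form, with the small
    random perturbation that keeps the map out of short unstable
    periodic points (a sketch of the perturbed tent map above)."""
    rng = np.random.default_rng(seed)
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = (2.0 * x) % 1.0 + rng.random() / n   # Bernoulli shift + perturbation
        x %= 1.0                                  # keep the value inside [0, 1)
        seq[i] = x
    return seq

def tent_init_population(n, lb, ub, seed=0):
    """Map a tent-chaos sequence per dimension onto the search box [lb, ub]."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    chaos = np.array([
        tent_chaos_sequence(n, x0=0.1 + 0.8 * j / max(d - 1, 1), seed=seed + j)
        for j in range(d)
    ]).T                                          # shape (n, d), values in [0, 1)
    return lb + chaos * (ub - lb)
```

Compared with uniform random initialization, the chaotic sequence spreads the starting population more evenly across the search box, which is exactly the motivation given above for substituting TC for random initialization.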

Golden sine (GS) mechanism
The GS algorithm, proposed by Tanyildizi et al., is a metaheuristic that simulates the golden ratio and the mathematical sine function [52]. It performs an iterative search that approaches the optimal solution by combining the sine function with the golden ratio. As illustrated in Fig. 3, the sine curve has a unique connection with the unit circle and is defined within the interval [−1, 1] with a period of 2π. The sine function's dependent variable changes when the value of the related independent variable changes; in other terms, exploring all positions on the unit circle is equivalent to exploring all sine function values. Based on this principle, the search space is progressively narrowed and the search is carried out in areas with a greater likelihood of containing the optimal solution, increasing convergence efficiency [53]. The population update model of the GS algorithm is illustrated in Fig. 4.
After the population is updated, the best potential solution in the population is assessed, and its position is modified using GS as follows:

x_new = x × |sin(r_1)| − r_2 × sin(r_1) × |c_1 × x_best − c_2 × x|  (11)

where x_best is the globally best potential solution, x denotes an individual's current solution, c_1 and c_2 represent the coefficient factors, r_1 is a number taken arbitrarily from [0, 2π], and r_2 is a number selected arbitrarily from the interval [0, π].
The golden ratio is τ = (1 + √5)/2, while a and b are the golden-section initial values (they are problem dependent). The individual generated using GS is then compared to the optimal solution, and the coefficient factors c_1 and c_2 are adjusted according to the result of the comparison. In the case c_1 = c_2, the variables are recomputed, with a = rand(0, π). Using GS to drive the best candidate's values increasingly closer to the optimal solution yields a balance between local exploitation and global exploration.
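The GS position update can be sketched as follows. This is a hedged sketch: the sign convention and the golden-section coefficients c1 and c2 follow the commonly cited Gold-SA formulation, and the greedy acceptance at the end mirrors the comparison step described above.

```python
import numpy as np

def golden_sine_update(x, x_best, f, a=-np.pi, b=np.pi, seed=0):
    """One Golden-Sine position update for a minimization objective f.

    a, b: initial golden-section interval (problem dependent, as noted
    in the text); the commonly cited Gold-SA coefficient form is assumed.
    """
    rng = np.random.default_rng(seed)
    tau = (np.sqrt(5.0) - 1.0) / 2.0   # golden-section ratio ~0.618 (1/phi)
    c1 = a * tau + b * (1.0 - tau)     # golden-section coefficient factors
    c2 = a * (1.0 - tau) + b * tau
    r1 = rng.uniform(0.0, 2.0 * np.pi)
    r2 = rng.uniform(0.0, np.pi)
    x_new = x * np.abs(np.sin(r1)) - r2 * np.sin(r1) * np.abs(c1 * x_best - c2 * x)
    # Greedy acceptance: keep whichever of the two positions is better.
    return x_new if f(x_new) < f(x) else x
```

Because the update is accepted greedily, the returned position is never worse than the input position, which is how GS can only improve the best badger in GST-HBA.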

Work flow of the proposed GST-HBA
The pseudocode for GST-HBA is outlined in Algorithm 2, presented in Fig. 5. The initial step in the optimization process involves populating the search space. In GST-HBA, we utilize the TC approach for population initialization, expressed in Eq. (10), to enhance population diversity, as demonstrated in Algorithm 2. Once the population is initialized, we employ the core mechanism of HBA up to Eq. (6), which is responsible for updating the position of each honey badger. Subsequently, GS, expressed in Eq. (11), is introduced to generate a new position for the best honey badger in the population. If this new position proves superior to the current one, it is used to update the position of the best honey badger, thereby further enhancing its solution. Iterations proceed until the maximum number of iterations is reached, signaling the termination of the algorithm.
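The workflow above can be sketched end-to-end in Python. This is an illustrative sketch, not the authors' reference implementation: beta = 6 and C = 2 follow common HBA defaults, and the Golden-Sine coefficients assume the golden-section ratio with the initial interval [−π, π].

```python
import numpy as np

def gst_hba(f, lb, ub, n=30, t_max=200, beta=6.0, c=2.0, seed=0):
    """GST-HBA control-flow sketch: tent-chaos initialization, the HBA
    position update, then a Golden-Sine refinement of the best badger."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    # Tent-chaos initialization (Bernoulli-shift form with perturbation)
    chaos = np.empty((n, d))
    z = np.full(d, 0.7)
    for i in range(n):
        z = ((2.0 * z) % 1.0 + rng.random(d) / n) % 1.0
        chaos[i] = z
    x = lb + chaos * (ub - lb)
    fit = np.apply_along_axis(f, 1, x)
    best, f_best = x[fit.argmin()].copy(), fit.min()
    # Golden-section coefficients on the assumed initial interval [-pi, pi]
    tau = (np.sqrt(5.0) - 1.0) / 2.0
    c1 = -np.pi * tau + np.pi * (1.0 - tau)
    c2 = -np.pi * (1.0 - tau) + np.pi * tau
    for t in range(1, t_max + 1):
        alpha = c * np.exp(-t / t_max)                    # density factor
        for i in range(n):
            s = np.sum((x[i] - x[(i + 1) % n]) ** 2)      # concentration strength
            di = best - x[i]
            inten = rng.random() * s / (4 * np.pi * np.sum(di ** 2) + 1e-30)
            fl = 1.0 if rng.random() <= 0.5 else -1.0
            if rng.random() < 0.5:                        # digging phase
                r3, r4, r5 = rng.random(3)
                x_new = (best + fl * beta * inten * best + fl * r3 * alpha * di
                         * abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
            else:                                         # honey phase
                x_new = best + fl * rng.random() * alpha * di
            x_new = np.clip(x_new, lb, ub)
            f_new = f(x_new)
            if f_new < fit[i]:                            # greedy replacement
                x[i], fit[i] = x_new, f_new
                if f_new < f_best:
                    best, f_best = x_new.copy(), f_new
        # Golden-Sine refinement of the global best, accepted only if better
        r1, r2 = rng.uniform(0, 2 * np.pi), rng.uniform(0, np.pi)
        gs = best * abs(np.sin(r1)) - r2 * np.sin(r1) * abs(c1 * best - c2 * best)
        gs = np.clip(gs, lb, ub)
        f_gs = f(gs)
        if f_gs < f_best:
            best, f_best = gs, f_gs
    return best, f_best
```

On a simple sphere objective this sketch converges quickly, which matches the fast-convergence behavior the algorithm is designed for; the per-problem parameters would of course need tuning for real antenna landscapes.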

Simulation experiment and result analysis
The Honey Badger Algorithm (HBA) [14], Differential Evolution (DE) [54], the Jaya Optimization Algorithm (JAYA) [55], and the Sine Cosine Algorithm (SCA) [56] were used as a control group, and 20 Congress on Evolutionary Computation (CEC) benchmark functions [6], commonly used by researchers, were used for testing in order to assess the GST-HBA's performance. As recorded in Table 3, F1–F7 are mono-peaked (unimodal) functions, F8–F13 are more challenging multimodal functions with several peaks, and F14–F20 are fixed-dimension multimodal functions. To ensure a fair experiment and reduce error, every algorithm was run 30 times independently, with 500 iterations per run, a population size of 30, and dimension 30 for functions F1–F13. The Standard Deviation (STD) and the Average value (AVG) of the results of the 30 independent runs are selected as evaluation metrics: the mean may be used to estimate an algorithm's accuracy, whereas the STD can be used to evaluate its stability. The findings are presented in Table 4. The parameters of each algorithm are set as explained in its original literature.
In Table 4, GST-HBA found the best near-ideal values for functions F1 through F6, outperforming the other state-of-the-art algorithms; however, it did not find the best value for function F7, where DE performed better. This corroborates the NFL theorem stated in the introduction. Worthy of note is that, compared to the original HBA, there is a significant improvement on functions F1–F7, which attests to the efficacy of the enhanced technique put forth in this paper. Functions F8–F20 test an algorithm's ability to explore the search space for the optimal solution, since they contain several local optima. For the high-dimensional functions F8–F13, GST-HBA is able to find the optimum values for five functions. For functions F14–F20, GST-HBA obtains the theoretical optimal solution; its results are comparable to those of the Differential Evolution algorithm (DE), the Honey Badger Algorithm (HBA), and the Sine Cosine Algorithm (SCA), since these are fixed-dimension multi-peaked functions that test the stability and exploration ability of an algorithm, which means the improvement on HBA did not negatively impact the traditional techniques of the original HBA. It can be concluded from Table 4 that GST-HBA improves population variety and escapes local optima in the multimodal functions, while in the unimodal functions it is able to explore regions for an ideal solution by employing the combination of TC and GS. The results also show a good tradeoff between the exploration and exploitation abilities of GST-HBA. For the proposed GST-HBA and the compared algorithms, we outline in Table 4 the instances in which each algorithm achieved the best optimization value compared to the others, denoted by "+"; the instances in which an algorithm did not identify the optimal solution, denoted by "−"; and the instances in which an algorithm achieved results similar to those of the compared algorithms, denoted by "=". Furthermore, in Table 4, the best AVG and STD values obtained for each function are highlighted in boldface.

Statistical test
As stated by Garcia [57], optimization algorithms cannot be evaluated by mean and standard deviation values alone; in this research, two popular nonparametric statistical tests are used to further evaluate the improvement of GST-HBA. Firstly, the Wilcoxon test is employed: if the Wilcoxon probability value, also known as the P-value, is 0.05 or greater, the null hypothesis cannot be rejected, implying that the compared algorithms have no statistically significant differences; if the P-value is less than 0.05, the null hypothesis is rejected, meaning there are substantial differences between the compared methods. In Table 5, the symbols "+", "−", and "=" denote that GST-HBA is superior, inferior, and similar, respectively, in comparison to the compared algorithms. The P-values in Table 5 are all less than 0.05. Another statistical test that ranks the performance of methods is the Friedman test, which contrasts at least three matched or paired methods. The Friedman test ranks each algorithm's fitness value from low to high [58]. To elucidate the statistical improvement and distinction achieved by GST-HBA, we applied a Friedman test. Leveraging the data presented in Tables 4 and 6, the Friedman Value (FV) is obtained through Eq. (19); this value indicates the significance of an algorithm's improvement relative to its compared counterparts. Subsequently, the Friedman Rank (FR) for each optimizer is ascertained by arranging the average FV obtained over all functions in ascending order, with the most favorable outcome corresponding to the lowest value.
Eq. (19) is given by:

FV = [12n / (k(k + 1))] × [Σ_j R_j² − k(k + 1)²/4]  (19)

where the variables k, n, and R_j respectively denote the count of algorithms, the count of benchmark functions, and the average rank over the benchmark functions associated with the j-th algorithm. Algorithms are assigned an FR on a spectrum ranging from 1 (indicating the most favorable result) to k (indicating the least favorable outcome). As seen in Table 4, GST-HBA ranked number one, affirming that it is significantly different from the other algorithms.
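The Friedman statistic of Eq. (19) is straightforward to compute from a matrix of mean fitness values. The sketch below ranks ties arbitrarily for brevity, whereas a full implementation would assign mid-ranks; the function name is our own.

```python
import numpy as np

def friedman_value(results):
    """Friedman statistic from an (n benchmarks) x (k algorithms) matrix
    of mean fitness values, lower being better, following Eq. (19)."""
    results = np.asarray(results, float)
    n, k = results.shape
    # Rank the algorithms on each benchmark: 1 = best (lowest mean fitness).
    # Note: ties are broken arbitrarily by this double-argsort trick.
    ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1.0
    r_mean = ranks.mean(axis=0)               # average rank R_j per algorithm
    fv = 12.0 * n / (k * (k + 1)) * (np.sum(r_mean ** 2) - k * (k + 1) ** 2 / 4.0)
    return fv, r_mean
```

When one algorithm wins on every benchmark, FV reaches its maximum of n(k − 1), which is the situation the ranking in Table 4 approaches for GST-HBA.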

Comparison of convergence curve
The convergence trajectory shows the optimization pace and precision more clearly. It pertains to the path taken by the algorithm as it progressively moves toward the optimal solution during its iterative process, showcasing the algorithm's advancement as iterations elapse. The optimization pace denotes how swiftly the algorithm reaches the optimal solution: a rapid pace signifies that the algorithm converges quickly, necessitating fewer iterations or steps to arrive at the optimal solution, whereas a slow pace indicates that the algorithm requires more time and iterations to converge. The accuracy of the optimization algorithm signifies its ability to closely approach the true optimal solution: a highly precise algorithm will closely approximate the best possible outcome, while a less precise algorithm may yield a solution that deviates further from the optimal value. By analyzing the convergence trajectory, valuable insights can be gained regarding both the pace and precision of the optimization algorithm. The benchmark functions listed in Table 3 are used to execute the GST-HBA, HBA, DE, JAYA, and SCA tests. Each algorithm is executed 30 times individually, with dimension 30, population size 30, and a total of 500 iterations. The average convergence curves of the various algorithms are demonstrated in Fig. 6.
Fig. 6 demonstrates that the convergence curves of GST-HBA for functions F1 to F5 and F7 to F12 are lower than those of the other algorithms, demonstrating its superiority in both convergence speed and optimization precision. DE performs better on F6 and F12, but compared to the conventional HBA there are clear improvements. GST-HBA was able to converge to the ideal value even though the convergence accuracy of each technique is not noticeably different from the other algorithms for functions F14–F20. In comparison to the original algorithm, GST-HBA increases both the speed and accuracy of the conventional HBA.

Antenna S-parameter problem
The results of the antenna S-parameter problem are presented in Table 6. This study investigated five optimization algorithms, namely HBA, DE, JAYA, SCA, and our proposed GST-HBA, on seven scalable antenna test functions and one non-scalable antenna test function. The scalable function dimension was set to eight, as recommended by Zhang [50], and the non-scalable function's dimension was set to two. Each algorithm uses a maximum of 500 iterations with a population size of 30. Table 6 reports the outcomes of the function values over 30 executions, with the best AVG and STD values obtained for each function highlighted in boldface.
In Fig. 7, GST-HBA proves to be a proficient technique for tackling single-antenna optimization problems. It exhibits rapid convergence on both the A1 and A3 test functions and attains superior objective function values compared to the other heuristic methods, namely HBA, DE, JAYA, and SCA. Based on our observations, GST-HBA performs exceptionally well on A2, A3, A4, and A5, which are commonly encountered in multi-antenna systems. Notably, it excels in locating the global minimum even on A5, a challenge the compared methods find difficult. These findings imply that GST-HBA is an ideal candidate for solving the multi-antenna problem, given the multi-antenna characteristics of A2, A3, A4, and A5. When it comes to tackling A6, A7, and A8, which carry the isolation characteristic of multi-antenna systems, GST-HBA is remarkably effective at approximating complex landscapes; its curve exhibits a steady and continuous decrease, indicative of its robustness and reliability. Based on Friedman's statistical ranking in Table 6, GST-HBA ranked first, and the Wilcoxon test in Table 7 shows that GST-HBA achieves a significant improvement considering the P-value. For the proposed GST-HBA and the compared algorithms, we recorded in Table 5 the number of times each algorithm obtained the best optimization value compared to the others, denoted by "+"; the number of times it did not, denoted by "−"; and the number of times it obtained a result similar to the compared algorithms, denoted by "=".

Uncertainty and limitations associated with antenna S-parameter optimization
Having tested GST-HBA and other state-of-the-art algorithms on S-parameter benchmark functions that mathematically model the characteristics of different antenna problems, with GST-HBA showing promising results, there are still some uncertainties and limitations to consider. While benchmark functions offer valuable insights into the behavior of optimization algorithms in optimizing the S-parameters of antennas and finding the optimal S-parameter values, they might not fully capture the complexity of real-world antennas. Practical antenna design involves various constraints, such as size, weight, cost, and manufacturability, which are not explicitly considered in benchmark functions. Furthermore, the accuracy and validity of benchmark functions in representing actual antenna behavior can vary, requiring careful validation against empirical measurements or simulations. Optimization algorithms can be sensitive to initialization, making it crucial to select appropriate starting points and explore a wide parameter space. The computational complexity of the optimization process and potential overfitting to benchmark functions also need to be addressed. Additionally, uncertainties and limitations in the accuracy of the S-parameter model used for optimization can impact the reliability of the optimized antenna design.

Conclusion
The Golden Sine (GS) Mechanism-based Honey Badger Algorithm (HBA) with Tent Chaos (TC), abbreviated GST-HBA, is a new optimization algorithm introduced in this research. Its main goal is to balance exploration and exploitation more effectively during optimization, resulting in rapid convergence and population variety. We ran two different sets of tests, including antenna S-parameter optimization, to gauge the efficacy of the suggested method, and we evaluated its performance against that of other optimization algorithms to establish its superiority. In the future, this study may investigate the use of the suggested algorithm in other optimization problems, such as feature selection, as well as its hybridization with other approaches to enhance its performance. The limitations of GST-HBA fall into two aspects. Firstly, the complexity of the problem being tackled plays a significant role: although the proposed approach has shown promising results in terms of convergence accuracy and improved exploitation and exploration, certain problem instances or variations may present challenges that GST-HBA struggles to overcome, resulting in suboptimal outcomes. Secondly, the comparative performance of GST-HBA can vary when compared to different optimization algorithms or when applied to real-world scenarios, so its effectiveness relative to other algorithms and its applicability to diverse real-world scenarios must be considered. To address these challenges, future work can explore alternative techniques to further enhance the performance of GST-HBA; for instance, an adaptive parameter strategy can be devised to improve its optimization effectiveness. Additionally, the evaluation of GST-HBA can be extended to encompass difficult benchmarks such as multi-objective problems, constrained optimization problems, and image segmentation problems. To mitigate the uncertainties and limitations expressed in Section 8, further studies can complement the optimization process with rigorous testing using fabricated prototypes of the optimized antennas, validation against real-world scenarios, and consideration of practical constraints, thereby enabling more reliable and effective antenna designs. Finally, the GST-HBA approach aims to improve the realization of more reliable and effective antenna designs, aligning with the demands of contemporary multi-objective, nonlinear, and emerging technological contexts such as Fifth-Generation (5G) systems and the Internet of Things (IoT); we recommend GST-HBA as a tool for researchers and practitioners in this field for design, simulation, and fabrication.

Table 2
S-parameter test functions.

Table 4
Comparison of GST-HBA with other algorithms.

Table 6
Antenna S-parameter problem.

Table 7
Wilcoxon test for antenna test suite.