Advanced Artificial Intelligence Technique for Designing a Double T-Shaped Monopole Antenna

Machine learning (ML) has taken the world by storm with its widespread applications in automating routine tasks and extracting insights across all walks of scientific research and design. ML is a broad area within artificial intelligence (AI) that focuses on extracting valuable information from data, which explains why ML has often been associated with statistics and data science. An advanced meta-heuristic optimization algorithm is proposed in this work for the optimization problem of antenna architecture design. The algorithm is designed, based on a hybrid of the Sine Cosine Algorithm (SCA) and the Grey Wolf Optimizer (GWO), to train a neural-network-based Multilayer Perceptron (MLP). The proposed optimization algorithm is a practical, versatile, and trustworthy platform to identify the design parameters of a reference double T-shaped monopole antenna in an optimal way. The proposed algorithm is also assessed through a comparative and statistical analysis using various curves in addition to the ANOVA and t-test, which offer a superiority and stability evaluation of the predicted results to verify the procedure's accuracy.


Introduction
Over the past couple of decades, machine learning (ML) has taken the world by storm with its widespread applications in automating routine tasks and extracting insights across all walks of scientific research and design. Though perhaps still in its early stages, ML has all but revolutionized the technology sector. ML practitioners have managed to reshape the foundations of many industries and fields, including, recently, the design and optimization of antennas. In light of the Big Data age the world is experiencing, ML has attracted a great deal of interest in this area. ML shows excellent promise in the field of antenna design and antenna behavior prediction, whereby a substantial acceleration of this process can be accomplished while preserving high precision [1].
ML has been considered extensively as a complementary method to Computational Electromagnetics (CEM) in designing and optimizing different kinds of antennas, offering several benefits because of their inherent nonlinearities. ML is a vast area within artificial intelligence (AI) that focuses on extracting valuable information from data, which explains why ML has often been associated with statistics and data science [2]. Undoubtedly, the data-driven approach of ML has enabled us to create systems like never before, bringing the world closer to building autonomous systems that can match, compete with, and occasionally outperform human capacities as well as intuition. Nonetheless, the success of ML techniques depends heavily on the quality, quantity, and availability of data, which can be challenging to obtain in certain cases. From an antenna design viewpoint, this data needs to be acquired, if not already available, because no standard datasets for antennas, such as the ones readily available for computer vision, yet exist. Data acquisition can be accomplished by simulating the desired antenna over a wide range of parameter values using CEM simulation software [3].
The coming era of the Internet of Things has driven enormous growth in the demand for application-specific antennas in virtually all electronic devices. The need for a smart and efficient approach to antenna design has thus become pressing. New antenna designs rely heavily on the designer's practical experience and on electromagnetic (EM) simulations. Standard methods, such as 3-D printed antennas, are fundamentally inefficient and computationally intensive, rendering them infeasible as the number of antenna design parameters grows [4]. Machine learning (ML) techniques can help solve the problem of constructing complex 3-D structures. ML is widely used as a straightforward data analysis and decision-making resource in a large set of applications, from handwritten digit recognition [5] to personal genomics [6]. Researchers have studied antenna design optimization by applying heuristic optimization techniques, such as particle swarm optimization and genetic algorithms, to antenna designs [7,8]. However, these algorithms search for the right solution by evaluating the objective at individual data points and producing new, and possibly better, search directions before determining global maxima or minima.
On the other hand, ML encompasses both modeling and optimization algorithms that analyze data and search for the hidden mathematical relationships within it. One can then map inputs to outputs and use this mapping to generate future predictions or decisions [9]. The critical convenience of ML approaches is that, once the relational model is learned, the outcome can be predicted for any data point, instead of merely obtaining a few refined points near the global optimum. This generality is exceptionally beneficial when the same prepared data is to be reused for multiple purposes. As demonstrated in [10], support vector machine (SVM) performance was evaluated for designing a rectangular patch antenna. There is early work using ML methods for antenna analysis and design [11][12][13][14], including rectangular patch arrays using SVMs for linear and nonlinear beamforming and parameter estimation. The Artificial Neural Network, an ML technique, has also been used in this field.
A clustering approach is also applied in [14] to find the optimal microstrip configuration for shorting posts, searching over the patch layout, tilt, and polarization to achieve adequate bandwidth. A rigorous review of the use of ML techniques for antenna design optimization has been carried out, evaluating ML-based methods for automated antenna design optimization in terms of prediction accuracy, robustness, required EM simulations, and comparative performance. ML is an excellent alternative for automatic, viable, and computationally efficient antenna design strategies. This study's ultimate goal is to extend the established principles to different complicated antenna designs through scalable and functional algorithms [15], facing computational barriers by handling a range of design requirements [16].
An advanced meta-heuristic optimization algorithm based on a hybrid of the Sine Cosine Algorithm (SCA) and the Grey Wolf Optimizer (GWO) is proposed in this paper to train the Multilayer Perceptron (MLP) neural network. The proposed algorithm is designed to illustrate the feasibility of these modern antenna design methods through their application to a reference double T-shaped monopole antenna optimization problem.

Methodologies
A reference double T-shaped monopole antenna is employed in this paper. This antenna's performance typically depends on the five design parameters l21, l22, w1, w2, and w, as shown in Fig. 1. The design allows these five parameters to vary during the optimization process while keeping the other three parameters, L, h1, and h2, fixed at the values indicated in [17]. For each sample point, antenna performance is evaluated through a figure of merit (FOM) defined to obtain the maximum bandwidth in both desired antenna bands. The FOM is calculated as follows.
where S11(f) is the value of the reflection coefficient at frequency f.
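As a concrete illustration, the sketch below computes a hypothetical FOM of this kind: it assumes the FOM is the fraction of sampled frequency points inside the two target bands whose S11 value lies below a −10 dB matching threshold. The threshold, band edges, and sample frequencies are illustrative assumptions, not the paper's exact definition.

```python
# Hypothetical FOM sketch: fraction of in-band frequency samples whose
# reflection coefficient S11(f) (in dB) falls below a matching threshold.
# The -10 dB threshold and the band edges are illustrative assumptions.
def fom(s11_db, freqs, bands, threshold=-10.0):
    """s11_db maps frequency (GHz) -> simulated S11 value in dB."""
    in_band = [f for f in freqs if any(lo <= f <= hi for lo, hi in bands)]
    if not in_band:
        return 0.0
    matched = sum(1 for f in in_band if s11_db[f] < threshold)
    return matched / len(in_band)

# Example: two target bands, four sampled frequencies, three well matched.
s11 = {2.4: -15.0, 2.45: -8.0, 5.2: -12.0, 5.8: -20.0}
score = fom(s11, [2.4, 2.45, 5.2, 5.8], [(2.4, 2.5), (5.1, 5.9)])  # -> 0.75
```

A real evaluation would obtain S11(f) from CEM simulation of each candidate design rather than from a hand-written table.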
Optimization problems arise in many industrial situations: monitoring, scheduling, design, engineering, medical services, and logistics. Any search for a best solution (fastest, cheapest, most robust, most beneficial, etc.) is an optimization problem, and a metaheuristic is a way of dealing with such problems. A metaheuristic is an approach to solving hard problems, where a problem is hard if finding the best possible solution may not be achievable within a feasible time [18][19][20][21][22].

Grey Wolf Optimizer
The GWO algorithm mainly simulates the movements of wolves while hunting for prey. Wolves live in packs; usually, a pack contains four types of wolves, named alpha, beta, delta, and omega [23]. In one pack, the alpha wolves make the decisions, and the beta wolves support them in decision making. The GWO algorithm is introduced step by step in Algorithm 1. In equation form, the alpha (Xα) represents the best solution, while the beta (Xβ) and the delta (Xδ) indicate the second and third best solutions. The rest of the solutions are named omega (Xω). In the prey-catching process, the first, second, and third best solutions guide the rest of the wolves, as shown in the following equations:

D = |C · Xp(t) − X(t)|
X(t + 1) = Xp(t) − A · D

where t indicates the current iteration, Xp is the prey's position, and X indicates a wolf's position. The A and C vectors are calculated as

A = 2a · r1 − a
C = 2 · r2

where a decreases from 2 to 0 and the vectors r1, r2 have random values in [0, 1]. The parameter a controls the exploitation and exploration processes; its value is calculated as follows:

a = 2 − 2t/Mt

where Mt indicates the maximum number of iterations.
The best solutions Xα, Xβ, and Xδ guide the rest of the solutions (Xω) in updating their positions to be near the prey's position. The following equations indicate the updating process of the estimated positions:

Dα = |C1 · Xα − X|,  Dβ = |C2 · Xβ − X|,  Dδ = |C3 · Xδ − X|
X1 = Xα − A1 · Dα,  X2 = Xβ − A2 · Dβ,  X3 = Xδ − A3 · Dδ

The updated population positions X(t + 1), the average of the solutions X1, X2, and X3, can be calculated as follows:

X(t + 1) = (X1 + X2 + X3) / 3
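The update described above can be sketched in Python; this is a minimal, per-wolf illustration of the canonical GWO position update (the function name and toy values are ours, not from the paper).

```python
import random

def gwo_update(wolf, alpha, beta, delta, a):
    """Move one wolf toward the three best solutions:
    X1 = Xa - A1*Da, X2 = Xb - A2*Db, X3 = Xd - A3*Dd,
    then X(t+1) = (X1 + X2 + X3) / 3."""
    new_pos = []
    for d in range(len(wolf)):
        guided = []
        for leader in (alpha, beta, delta):
            A = 2 * a * random.random() - a   # A = 2a*r1 - a
            C = 2 * random.random()           # C = 2*r2
            D = abs(C * leader[d] - wolf[d])  # D = |C*Xleader - X|
            guided.append(leader[d] - A * D)
        new_pos.append(sum(guided) / 3)       # average of X1, X2, X3
    return new_pos

# When a = 0 the A coefficients vanish, so the wolf lands exactly on the
# leaders' average regardless of the random draws.
pos = gwo_update([5.0], [1.0], [2.0], [3.0], a=0.0)  # -> [2.0]
```

Since a decays from 2 to 0 over the iterations, early steps explore widely (|A| > 1 pushes wolves away from leaders) while late steps exploit (|A| < 1 pulls them in).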

Sine Cosine Algorithm
The basic Sine Cosine Algorithm (SCA) was first proposed in [24] for optimization problems. The algorithm is based on sine and cosine oscillation functions for updating the locations of candidate solutions. SCA uses a series of random variables to denote the direction of motion, how far the movement should be, to emphasize or de-emphasize the destination effect, and to switch between the sine and cosine components [25,26]. For updating the positions of the various solutions, SCA uses the following update rule:

X_i(t + 1) = X_i(t) + r1 · sin(r2) · |r3 · P_i(t) − X_i(t)|,  if r4 < 0.5
X_i(t + 1) = X_i(t) + r1 · cos(r2) · |r3 · P_i(t) − X_i(t)|,  if r4 ≥ 0.5

where X_i(t) is the position of the current solution in the ith dimension and P_i(t) represents the current position of the best solution in the ith dimension. The parameters r2, r3, and r4 are random values, with r2 in [0, 2π], r3 in [0, 2], and r4 in [0, 1]. Eq. (7) illustrates that the positions of the agents are changed around the optimal solution's position. The parameter r1 of the SCA algorithm provides a balance between the exploitation and exploration processes; r1 can be updated during the iterations as

r1 = a − t · (a / tmax)

where t represents the current iteration, tmax is the maximum number of iterations, and a is a constant.
The initial population positions with n agents in the SCA algorithm are randomly set up, as shown in Algorithm 1. The objective function is computed in Step (5) for all agents to find the best solution's position; P in Step (6) indicates the best solution. The parameter r1 is updated according to Eq. (2) in Step (7). The positions of the different agents are updated by Eq. (1) in Steps (8)-(13). Steps 4-16 are repeated according to the number of iterations, and the best solution P is updated until the end of the iterations.
Compared to a wide variety of other meta-heuristics, the original SCA algorithm demonstrates strong exploitation because a single best solution is used to direct the other candidate solutions. This makes the algorithm efficient in terms of memory use and speed of convergence. However, on problems with many locally optimal solutions, the algorithm may show slightly degraded performance. This drawback motivated the proposed Sine Cosine Grey Wolf Optimizer (SCGWO) algorithm.
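To make the sine/cosine update concrete, here is a minimal sketch of one SCA position update following the canonical formulation (the function name and default a = 2 are ours; the paper's exact settings may differ).

```python
import math
import random

def sca_update(x, best, t, t_max, a=2.0):
    """One SCA update: sine branch if r4 < 0.5, cosine branch otherwise.
    r1 shrinks linearly from a to 0 over the iterations."""
    r1 = a - t * (a / t_max)
    new_x = []
    for d in range(len(x)):
        r2 = random.uniform(0, 2 * math.pi)   # direction of motion
        r3 = random.uniform(0, 2)             # destination emphasis
        r4 = random.random()                  # sine/cosine switch
        trig = math.sin(r2) if r4 < 0.5 else math.cos(r2)
        new_x.append(x[d] + r1 * trig * abs(r3 * best[d] - x[d]))
    return new_x

# At the final iteration r1 = 0, so the position no longer moves.
final = sca_update([1.0, 2.0], [0.5, 0.5], t=10, t_max=10)  # -> [1.0, 2.0]
```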

Multilayer Perceptron
Artificial Neural Networks (ANN) follow the principles of the biological nervous system for information processing and communication among distributed nodes. The synapse (the connection between neurons) is used to transmit signals from one neuron to other neurons. Speech recognition, regression, and classification are among the most common areas of application of ANN [27,28]. The learning process and the optimization of parameters have a significant impact on the performance of an ANN. One of the most commonly applied ANNs is the MLP. The MLP structure is shown in Fig. 2.
The weighted input to neuron j in the hidden layer is

Sj = Σi wij · Ii + βj

where Ii represents input variable i, wij indicates the connection weight between Ii and neuron j in the hidden layer, and βj is the bias value for this layer. Applying the commonly recommended sigmoid activation function, the output of node j is defined as

fj(Sj) = 1 / (1 + e^(−Sj))

The network output can then be defined based on the values of fj(Sj) for all hidden-layer neurons as

yk = Σj wjk · fj(Sj) + βk

where wjk indicates the weight between neuron j in the hidden layer and output node k, and βk is the bias value for the output layer.
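The equations above correspond to a simple forward pass, sketched below as a minimal illustration with list-based weights (the actual layer sizes in the paper are set by the design problem: five inputs, one FOM output).

```python
import math

def sigmoid(s):
    """Sigmoid activation: f(S) = 1 / (1 + e^(-S))."""
    return 1.0 / (1.0 + math.exp(-s))

def mlp_forward(inputs, w_ih, b_h, w_ho, b_o):
    """Compute S_j = sum_i w_ij*I_i + beta_j, f_j = sigmoid(S_j),
    then y_k = sum_j w_jk*f_j + beta_k for each output node."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w_ih, b_h)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(w_ho, b_o)]
```

Training such a network with a metaheuristic means encoding all entries of w_ih, b_h, w_ho, and b_o as one flat position vector, which is exactly what SCGWO optimizes.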

Proposed Sine Cosine Grey Wolf Optimizer
The proposed Sine Cosine Grey Wolf Optimizer (SCGWO) algorithm is shown in Algorithm 3. The SCGWO algorithm takes advantage of both the Sine Cosine Algorithm and the Grey Wolf Optimizer in the exploitation and exploration processes. SCGWO starts by initializing the population Xi (i = 1, 2, . . ., n) of size n. Then the parameters a, A, and C are initialized as in Eqs. (3) and (4). The objective function Fn is then calculated for each agent Xi in the population. During the iterations, the parameter pd is randomly drawn between 0 and 1, and, based on this value, one of the two algorithms, GWO or SCA, is employed to update the agents' positions. In Algorithm 3, Steps 6-15 obtain the best position Xbest based on the GWO algorithm, while in Steps 17-26 the best position Xbest is updated based on the SCA algorithm.
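A minimal sketch of one SCGWO iteration under this description is given below: each agent moves either by a GWO-style update toward the three best solutions or by an SCA-style oscillation around the best, chosen by the random switch pd. The 0.5 switch point, the shared decay parameter, and the function name are our assumptions, not taken from Algorithm 3.

```python
import math
import random

def scgwo_step(population, fitness, t, t_max):
    """One hypothetical SCGWO iteration (minimization).
    pd < 0.5 -> GWO-style update; otherwise SCA-style update."""
    ranked = sorted(population, key=fitness)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    a = 2 - 2 * t / t_max                 # shared control parameter
    new_pop = []
    for x in population:
        p_d = random.random()             # per-agent branch switch
        new = []
        for d in range(len(x)):
            if p_d < 0.5:                 # GWO branch: follow 3 leaders
                guided = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a
                    C = 2 * random.random()
                    guided.append(leader[d] - A * abs(C * leader[d] - x[d]))
                new.append(sum(guided) / 3)
            else:                         # SCA branch: oscillate around best
                r2 = random.uniform(0, 2 * math.pi)
                trig = math.sin(r2) if random.random() < 0.5 else math.cos(r2)
                new.append(x[d] + a * trig
                           * abs(random.uniform(0, 2) * alpha[d] - x[d]))
        new_pop.append(new)
    return new_pop, alpha
```

Repeating this step for t = 1 … t_max and tracking the best alpha found gives the full optimizer loop; the fitness function would be the MLP training error in this paper's setting.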

Experimental Results
In this section, the SCGWO algorithm is evaluated on optimizing the parameters of the double T-shaped monopole antenna problem. In the experiments, the SCGWO algorithm is applied to optimize the weights of the MLP network. The input layer consists of five nodes, each representing one design parameter, and a single-node output layer represents the FOM, as shown in Tab. 1. The results of the proposed algorithm in Tab. 1 show that it is more precise than the KNN and MLP techniques, with a minimum time of 272.13 s in optimizing the design parameters l21, l22, w1, w2, and w.
Descriptive statistics, shown in Tab. 2, are short summary coefficients that describe the results. They are broken down into measures of central tendency and measures of dispersion. Central tendency is captured by the mean, median, and mode, while dispersion is measured by the standard deviation, variance, minimum, and maximum. Tab. 2 shows the superiority of the proposed SCGWO + MLP algorithm.

Algorithm 3: Proposed Sine Cosine Grey Wolf Optimizer (SCGWO)
The statistical methodologies of ANOVA and the t-test are applied to compare the populations and determine whether there is a significant difference between the proposed and compared techniques. Tab. 3 shows the two-way ANOVA test results. For this test, the statistical hypotheses can be formulated as follows.
• Null hypothesis (H0): the differences between the group means are not significant.
• Alternative hypothesis (H1): there is a significant difference between the means of the populations.
As also seen in Tab. 4, the statistical hypotheses for the one-sample t-test can be formulated as follows.
• Null hypothesis (H0): the difference between the two group means is not significant.
• Alternative hypothesis (H1): there is a significant difference between the two population means.
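As a small illustration of the one-sample t-test machinery (using invented toy samples, not the paper's data), the t statistic can be computed directly from a sample and the hypothesized mean μ0; an |t| beyond the critical value for n − 1 degrees of freedom rejects H0.

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """t = (x_bar - mu0) / (s / sqrt(n)), s = sample standard deviation."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# A sample centered exactly on mu0 yields t = 0 (no evidence against H0).
t0 = one_sample_t([1.0, 2.0, 3.0], 2.0)  # -> 0.0
```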
Tab. 3 for the two-way ANOVA test and Tab. 4 for the one-sample t-test indicate the superiority of the proposed (SCGWO + MLP) algorithm. The histogram of the compared techniques' performance vs. the proposed algorithm is investigated in Fig. 3. The (SCGWO + MLP) algorithm shows better behavior in both the smoothed and the normalized histogram curves. The QQ plot shown in Fig. 3 indicates that the proposed algorithm's actual and predicted values fit closely. ROC analysis is also performed as a standard way of evaluating ranked and continuous diagnostic test results. Derived accuracy indices, particularly the area under the curve (AUC), give a meaningful measure of classification quality, as shown in Fig. 4. Tab. 5 shows that the AUC value of the proposed algorithm is much better than those of the other techniques and is close to 1.
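The AUC has a rank interpretation: the probability that a randomly chosen positive example scores higher than a randomly chosen negative one, which is why a value close to 1 signals near-perfect discrimination. A minimal sketch (with invented toy scores, not the paper's results):

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: P(positive score > negative score), ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation gives AUC = 1; indistinguishable scores give 0.5.
perfect = auc([0.9, 0.8], [0.1, 0.2])  # -> 1.0
chance = auc([0.5], [0.5])             # -> 0.5
```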

Discussion
The proposed SCGWO algorithm is used to optimize the parameters of the double T-shaped monopole antenna problem. Results show that the algorithm is more precise than the KNN and MLP techniques, with a minimum time of 272.13 s to optimize the design parameters l21, l22, w1, w2, and w. Descriptive statistics show the superiority of the proposed SCGWO + MLP algorithm. The two-way ANOVA test and the one-sample t-test indicate the worth of the proposed (SCGWO + MLP) algorithm, which also shows better behavior in both the smoothed and the normalized histogram curves. The QQ plot indicates that the proposed algorithm's actual and predicted values fit closely. From the ROC curve analysis, the AUC value of the proposed algorithm is much better than those of the other techniques, and it is close to 1. It is also noted that the minimum, maximum, and average values obtained by the proposed algorithm are almost the same with respect to the objective function, which indicates the stability of the proposed (SCGWO + MLP) algorithm.

The Multilayer Perceptron (MLP) weights are optimized through the proposed advanced meta-heuristic optimization, based on the Sine Cosine Algorithm (SCA) and the Grey Wolf Optimizer (GWO). The experimental results have shown that machine learning techniques based on the proposed SCGWO algorithm can give a double T-shaped monopole antenna a scalable and, in principle, autonomous design architecture, which will be useful for many applications, including the Internet of Things. The proposed algorithm provides a comparative and statistical analysis through the ROC curve and the t-test, indicating the superiority and stability of the predicted results and verifying the procedure's accuracy.