A New Algorithm for Finding the Roots of Nonlinear Algebraic Equations

In this paper, the Stochastic Gradient Descent (SGD) algorithm, one of the most widely used optimization algorithms, is hybridized with genetic algorithms to find the roots of nonlinear equations, one of the most important mathematical problems owing to its applications across the sciences. Genetic algorithms are used here to find the optimal initial root for the SGD algorithm, which is then applied to minimize the studied objective function. Several well-known algorithms need a good initial point to reach the solution stably. The proposed algorithm is tested on several standard functions and the results are compared with well-known algorithms; the tables and figures show the efficiency of the proposed algorithm.


Introduction
In most problems related to engineering and the applied sciences, the problem reduces to the study of a nonlinear equation of the form

f(x) = 0    (1)

such as problems that require finding critical values and problems that search for eigenvalues by minimizing an objective function 1. Most numerical analysis methods build on Newton's iterative method, which starts from a given initial point to find the root of the studied function 2,3.
Algorithms that search for the roots of a nonlinear algebraic equation fall into two classes. The first class consists of algorithms that run for a fixed number of steps: they start from an initial value within the solution region and, after a number of iterations, reach the solution, but inaccurately and with a large error.
The second class consists of algorithms based on selection, which are the fastest at finding roots. This is the approach proposed in this research: genetic algorithms generate an initial population and choose its best element as the candidate root of the studied function. The selection process is then carried out, according to the proposed steps, together with the SGD algorithm 2.
In this paper, the genetic algorithm and the SGD algorithm are hybridized to solve Eq. 1. The appropriate starting point is determined by generating an initial population in the first step of the genetic algorithm, which addresses a difficulty most other iterative algorithms suffer from in accurately reaching the desired solution. The studied cost function for the SGD algorithm is then constructed and minimized, and its learning rate is improved by a proposed update relation applied in each iteration. Numerical examples are also presented that confirm the theoretical results and allow this method to be compared with other standard methods.

Genetic Algorithms:
The genetic algorithm is a general search algorithm based on the mechanism of natural selection and on natural genetics, used to solve complex problems. It was introduced by John Holland in 1975 at the University of Michigan 8, who published much research in this field; the main outcome was the development of many algorithms, programs and systems built on it. A genetic algorithm is an intelligent algorithm that draws carefully on ideas from genetics: it deliberately produces new individuals with desirable (good) characteristics through the deliberate exchange and modification of inherited material (adding or replacing certain genetic material), with the aim of forming individuals with good qualities. On this basis, the genetic algorithm selects the preferred solutions from a large number of candidates and performs crossovers and alterations among them in order to create better solutions. Genetic search, in turn, is the process of choosing a quality measure so that the genetic operations generate the required goals.
The genetic algorithm has saved designers of systems and programs considerable effort and time by providing a general, reliable algorithm for many kinds of problems instead of requiring a special algorithm for each one, subject only to the changes needed to match the size, type and nature of the data, the objective function, and the constraints of each problem.

The Algorithm:
The genetic algorithm proceeds as follows 8: • Selection: the process of selecting parents from the population in order to cross them and produce a new generation.
The SGD algorithm is considered one of the most important optimization algorithms and is also used in training artificial neural networks. It depends on the first derivative of the studied objective function and is considered the starting point for the development of other optimization algorithms.
However, SGD depends on the learning rate, which traditionally takes a fixed value throughout the iterations; this makes the algorithm slow and it may fail to reach the required solution. Many researchers have worked on developing it, for example the Adagrad and Adam algorithms 9.
2. Make an initial guess x0 for x.

Learning Rate:
The learning rate 10 is the most important parameter in algorithms that search for a desired solution. It expresses the step size of each search step, that is, the move from one solution to a more accurate one. If the step is relatively large, the exact solution is passed over along the graph of the studied function; if the step is rather small, the solution is reached accurately, but the algorithm takes more steps and more time. Many researchers therefore work on estimating the learning rate, either adaptively or by giving it an appropriate fixed value for every iteration.
In this research, a function is proposed that generates an appropriate value for the learning rate in each iteration, with values between zero and one, while keeping the learning rate from converging to zero, since that would stop the algorithm before it reaches the required solution.
The proposed relation for the learning rate takes advantage of the growth of the exponential function; in it, the iteration counter and the learning rate m appear as arguments.
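The paper's exact update formula is not reproduced here, but a schedule with the stated properties (increasing via the exponential function, values in (0, 1), bounded away from zero) might look like the following sketch. The function `lr_schedule` and its constants are illustrative assumptions, not the authors' exact relation.

```python
import math

def lr_schedule(k, m=0.1):
    """Hypothetical increasing learning-rate schedule.

    k -- iteration counter (k >= 0)
    m -- initial learning rate, also the lower bound

    Starts at m and grows slowly toward 1 using the exponential
    function, so the rate never collapses to zero.
    """
    return 1.0 - (1.0 - m) * math.exp(-0.05 * k)

# The rate grows by a small amount each iteration (cf. Table 1).
rates = [lr_schedule(k) for k in range(15)]
assert abs(rates[0] - 0.1) < 1e-12
assert all(0 < r < 1 for r in rates)
assert all(b > a for a, b in zip(rates, rates[1:]))
```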
Table 1 shows the values of the learning rate (lr) over several iterations (iter): the rate increases by a small amount in each iteration, which in turn accelerates the algorithm toward the solution; this is better than a value fixed from the start of the algorithm.
The learning rate can also be plotted during the run of the algorithm, as shown in Fig. 2.

Proposed Algorithm:
To find the roots of a function f(x), a cost function L(x) is assumed and minimized:

L(x) = (f(x) − y0)^2    (3)

where y0 is the target value of the function at the root, which in our case equals zero. The genetic algorithm is used to find all the roots of the studied function by generating an initial population and evaluating it through the fitness function, thereby obtaining optimal initial roots and avoiding the randomness of assigning initial values far from the solution. The SGD algorithm is then applied to reduce the cost function as far as possible and thus obtain the exact root.
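As a sketch of Eq. 3 (in Python rather than the MATLAB used in the paper), with f(x) = x² − 2 as an assumed example function rather than one of the paper's test functions:

```python
def f(x):
    """Assumed example function whose root (sqrt(2)) is sought."""
    return x**2 - 2

def cost(x, y0=0.0):
    """Eq. 3: squared deviation of f(x) from the target value y0 = 0."""
    return (f(x) - y0)**2

def cost_grad(x, y0=0.0):
    """Derivative of the cost: 2 (f(x) - y0) f'(x), with f'(x) = 2x."""
    return 2.0 * (f(x) - y0) * (2.0 * x)

# Minimizing the cost drives f(x) toward zero, i.e. toward a root.
x = 1.5                       # initial root supplied e.g. by the GA
for _ in range(200):
    x -= 0.05 * cost_grad(x)  # plain gradient step
assert abs(f(x)) < 1e-6
```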

Proposed Algorithm
where n is the number of elements in the population.
7- Determining an initial value for the counter i.
8- The selection process is carried out based on the fitness function; tournament selection is applied.
9- Crossover is performed; two-point crossover is applied.
10- Mutation is performed; bit-inversion mutation is applied.
11- Increasing the value of the counter i by one.
12- Checking the counter i: if it is less than Popsize, return to step 8.
13- Increasing the value of the counter g by one.

14- Checking the stopping criterion: g is compared with Genno (the number of cycles entered); if g is less than Genno, return to step 6.
15- End of the algorithm.
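The steps above can be sketched end to end as follows. This is a simplified Python rendering (the paper used MATLAB); the test function x² − 2, the population size, the rates, and the use of real-valued arithmetic crossover (standing in for the paper's binary two-point crossover) are all illustrative assumptions.

```python
import random

def f(x):
    return x**2 - 2              # assumed test function; root sqrt(2)

def cost(x):
    return f(x)**2               # Eq. 3 with target value zero

def cost_grad(x):
    return 2.0 * f(x) * 2.0 * x  # chain rule: 2 f(x) f'(x)

random.seed(1)
a, b = 0.0, 4.0                  # solution field [a, b]
popsize, genno = 20, 30

pop = [random.uniform(a, b) for _ in range(popsize)]   # steps 2-3

for g in range(genno):                                 # steps 4, 13-14
    new_pop = []
    for _ in range(popsize):                           # steps 7, 11-12
        p1 = min(random.sample(pop, 3), key=cost)      # step 8: tournament
        p2 = min(random.sample(pop, 3), key=cost)
        w = random.random()
        child = w * p1 + (1 - w) * p2                  # step 9 (arithmetic
                                                       # stand-in for two-point)
        if random.random() < 0.1:                      # step 10: mutation
            child += random.gauss(0, 0.1)
        new_pop.append(min(max(child, a), b))          # clamp to [a, b]
    pop = new_pop

x0 = min(pop, key=cost)          # best initial root found by the GA
x = x0
for k in range(500):             # SGD-style refinement of the cost
    x -= 0.01 * cost_grad(x)
assert abs(f(x)) < 1e-8
```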

Compare Results:
The programs were written in MATLAB 2016, and the proposed method was applied to the test functions 11 shown in Table 2, with the initial point and exact root of each standard function; most researchers adopt these functions to test new methods. The results were compared with the most popular standard algorithms: Newton's method (NM) 11, the Weerakoon-Fernando method (WFM) 11, the Glišović et al. method (GOM) 12, the Kou-Li-Wang method (KLWM) 13, Wang's method (WM) 6, and the Zalinescu method 14. The stopping criterion was |f(x_{n+1})| < ε, where ε = 10^{−15}.
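For reference, the Newton iteration used as a baseline can be sketched with the same stopping criterion |f(x_{n+1})| < ε; the test function x² − 2 is an assumed example, not one of the paper's functions in Table 2.

```python
def f(x):
    return x**2 - 2          # assumed example; exact root is sqrt(2)

def df(x):
    return 2 * x

def newton(x, eps=1e-15, max_iter=100):
    """Newton's method with the stopping criterion |f(x_{n+1})| < eps."""
    for n in range(max_iter):
        x = x - f(x) / df(x)
        if abs(f(x)) < eps:
            return x, n + 1  # root and iteration count
    return x, max_iter

root, iters = newton(1.0)
assert abs(f(root)) < 1e-15
assert iters <= 10           # quadratic convergence: few iterations
```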

Table 2. The test functions and their initial point and root
Table 3 shows the search for the initial root of the function f1(x) within the range [0, 4] using the genetic algorithm. The table lists a set of initial values for the initial population, whose values range over the solution region, together with the function value for each; the third column gives the probability value for each candidate, and the value with the greatest probability is chosen in the selection step of the genetic algorithm. The selection process, according to the roulette wheel method, yields a new value that improves on the required one. The genetic algorithm thus helps find an optimal, appropriate initial solution with which to start the proposed method, avoiding randomness in choosing the initial root.
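Roulette-wheel (fitness-proportionate) selection, as applied to f1(x) in Table 3, can be sketched as follows; the candidate values and fitness values here are illustrative, not the entries of Table 3.

```python
import random

def roulette_select(population, fitness):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness)
    r = random.uniform(0, total)     # spin the wheel
    running = 0.0
    for indiv, fit in zip(population, fitness):
        running += fit
        if r <= running:
            return indiv
    return population[-1]            # guard against rounding error

random.seed(0)
pop = [0.5, 1.2, 1.4, 2.1, 3.0]     # illustrative candidate roots
fit = [1.0, 2.0, 5.0, 1.5, 0.5]     # higher fitness => larger sector
picks = [roulette_select(pop, fit) for _ in range(1000)]
# The fittest candidate (1.4, half the total fitness) dominates.
assert picks.count(1.4) > picks.count(0.5)
```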
It is shown that the new method is more efficient than the existing methods: it requires the lowest number of iterations and converges faster than the other methods.

Conclusion
Genetic algorithms and the SGD algorithm are among the most important algorithms used in artificial-intelligence and optimization applications. They were therefore proposed here to improve some numerical analysis algorithms for finding the roots of functions, by modifying the iterative relation of these methods, as was done in this work.

Published Online First: August, 2024
https://doi.org/10.21123/bsj.2023.7481
P-ISSN: 2078-8665 - E-ISSN: 2411-7986
Baghdad Science Journal

• Crossover: this process swaps the corresponding values of the two chromosomes of the selected parents in order to form the new chromosomes.
• Mutation: the key idea is to insert random genes into offspring to maintain diversity in the population and avoid premature convergence.
• Solution: the best chromosomes.
The flowchart of the algorithm is shown in Fig. 1.
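The crossover and mutation operators named above (two-point crossover and bit-inversion mutation, per steps 9-10 of the algorithm) can be sketched on binary chromosomes; the chromosome length and mutation probability are illustrative choices.

```python
import random

def two_point_crossover(p1, p2):
    """Swap the segment between two cut points of the parent bit strings."""
    i, j = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def bit_inversion_mutation(chrom, pm=0.1):
    """Flip each bit independently with probability pm."""
    return [b ^ 1 if random.random() < pm else b for b in chrom]

random.seed(2)
a = [0, 0, 0, 0, 0, 0, 0, 0]
b = [1, 1, 1, 1, 1, 1, 1, 1]
c1, c2 = two_point_crossover(a, b)
assert sorted(c1 + c2) == sorted(a + b)   # bits are only exchanged
child = bit_inversion_mutation(c1)
assert len(child) == len(c1)
```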

Steps:
1- Entering the objective function, Popsize, the solution field [a, b], the learning rate, the mutation probability Pm (a value between zero and one), and the number of iterations Genno.
2- Populating the population with random values in the binary system.
3- Converting the elements of the population into decimal values within the solution field.
4- Setting an initial value for the cycle counter, g = 1.
5- Computing the elements of the new population according to:

x_new = x − η · L′(x)    (4)

where η represents the learning rate and L′ is the derivative of the studied objective function.
6- Computing the fitness function:
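The update of step 5 (Eq. 4), applied to each population element, can be sketched as follows; the cost L(x) = (x² − 2)², the learning rate, and the population values are assumptions for illustration.

```python
def L_prime(x):
    """Derivative of the assumed cost L(x) = (x**2 - 2)**2."""
    return 2.0 * (x**2 - 2.0) * 2.0 * x

eta = 0.01                                     # learning rate
pop = [0.5, 1.0, 2.0, 3.0]                     # illustrative population
for _ in range(1000):
    pop = [x - eta * L_prime(x) for x in pop]  # Eq. 4 for each element
# Every element is driven toward the positive root sqrt(2).
assert all(abs(x**2 - 2.0) < 1e-6 for x in pop)
```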

Figure 3. The selection process by the roulette wheel method for the function f1(x) with 10 primary chromosomes, where each sector represents the proportion corresponding to the selection process.
This is called stochastic gradient descent (SGD) 9; if the batch equals the full data set, standard GD is obtained instead. At the beginning of the algorithm the parameter values are defined, namely the learning rate and the batch size, and in each batch of training data the learning rate value is improved.
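The batch relationship described above (a batch equal to the full data set reduces SGD to standard GD) can be illustrated with a tiny example; the per-sample quadratic losses and target values are assumptions for illustration.

```python
import random

# Per-sample losses L_i(x) = (x - t_i)**2 with assumed targets t_i.
targets = [1.0, 2.0, 3.0, 4.0]

def grad(x, batch):
    """Average gradient of the squared losses over one batch."""
    return sum(2.0 * (x - t) for t in batch) / len(batch)

def step(x, eta, batch_size, rng):
    batch = rng.sample(targets, batch_size)    # draw a random batch
    return x - eta * grad(x, batch)

rng = random.Random(3)
# Batch size equal to the data set: the step is the deterministic GD step.
x_sgd = step(5.0, 0.1, len(targets), rng)
x_gd = 5.0 - 0.1 * grad(5.0, targets)
assert abs(x_sgd - x_gd) < 1e-12
```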

Table 1. Values of the learning rate in 15 iterations

Table 3 shows the comparison of the proposed algorithm with the standard algorithms in terms of the number of iterations, which amounted to 4, with a very small error relative to the standard algorithms; likewise for Tables 5, 6 and 7.

Table 7. The numerical results of the function
The proposed relation for the learning rate, which controls the behavior of the proposed method, reduces the number of iterations and the error because it does not take a fixed value during the iteration process; rather, it increases gradually in each iteration.