Solution of Nonlinear Systems of Equations via Metaheuristics

A framework for solving nonlinear systems of equations using the grey wolf optimization algorithm (GWO) and a multi-objective particle swarm optimization algorithm (MOPSO) is presented in this work. Owing to several numerical issues and very high computational complexity, finding the solutions of such complex nonlinear systems is hard. The problem of solving a system of nonlinear equations can, however, be simplified by viewing it as an optimization problem, whose solutions can then be obtained by applying a nature-inspired optimization technique. The results achieved are compared with classical as well as newer techniques established in the literature. The proposed framework also proves effective for systems of nonlinear equations arising in various fields of science; for this purpose, a neurophysiology application and a hydrocarbon combustion problem are considered as test cases. Empirical results show that the presented framework is well suited to dealing with high-dimensional equation systems.


Introduction
A fundamental task in reliability engineering (Ram, 2013) and the sciences is finding the solutions of systems of nonlinear equations. Such systems also arise in many domains of practical importance to human life, such as medicine, engineering, chemistry, and robotics. The task of finding the solutions is sometimes further complicated by a number of inequality and/or variable-bound constraints. Finding all solutions of a nonlinearly constrained system of equations has very high computational complexity due to several numerical concerns and is therefore non-deterministic polynomial-time hard (NP-hard); at the same time, there may be exponentially many such solutions (Horst et al., 1995). There exists a large body of literature on methods for solving systems of equations. According to Hentenryck et al. (1997), these methods fall into two broad classes: (i) interval methods, which are robust but slow; and (ii) continuation methods, which are effective for problems whose total degree is not too high. For many years, and even today, systems of nonlinear equations have been solved by such methods, though these approaches are inadequate for hard problems with unusual convergence patterns (Denis, 1967a, 1968b, 1971c; Ortega and Rheinboldt, 1970; Denis and Wolkowicz, 1993; Brezinski, 1997; Denis et al., 1999; Grosan et al., 2006). Some of the popular techniques are the secant method (Denis and Wolkowicz, 1993), Broyden's method (Broyden, 1965; Press et al., 2002), the Effati and Nazemi method (Effati and Nazemi, 2005), and Newton's method. These nonlinear algebraic problems can also be solved by converting them into optimization problems.
Thus, numerical optimization procedures such as nature-inspired algorithms can be used as tools for the robust solution of this kind of problem (Maranas and Floudas, 1995; Grosan and Abraham, 2008; Platt, 2014, 2015). The journey of modern man from the troglodyte owes much to the human urge to unfold the mysteries of nature in order to improve human life. A few years back we could not even imagine that a school of fish, genes, or the behavior of bats or ants could be used to design optimization algorithms, but nature has a solution for every problem. Researchers working on optimization theory are developing techniques inspired by nature that can be utilized as optimization tools for engineering problems. Metaheuristic optimization techniques are ubiquitous nowadays. They have been applied with great success to many real-life problems in different domains, including reliability optimization, bioinformatics, and automotive design engineering (Kumar and Singh, 2008; Park and Pham, 2012; Ram et al., 2013; Singh et al., 2013; Pant et al., 2015a; Yang et al., 2015; Wang et al., 2015; Kumar et al., 2016; Kumar et al., 2017a; Kumar et al., 2017c; Pant et al., 2017a; Pant et al., 2017b; Kumar et al., 2018a; Kumar et al., 2018b; Chaube et al., 2018; Kumar et al., 2019a; Kumar et al., 2019b). Some of them, such as particle swarm optimization (PSO) (Eberhart and Kennedy, 1995), the cuckoo search algorithm (CSA) (Yang and Deb, 2009), and the grey wolf optimizer (GWO) algorithm (Mirjalili et al., 2014), have become very popular not only among mathematicians but also among scientists from different fields.

Evolutionary Nonlinear Equations System
To find the solutions of a system of equations (f), we can transform it into a multi-objective or a single-objective optimization problem. For the multi-objective transformation, we treat each equation as an objective function. The goal of this optimization is to minimize the absolute difference between the left-hand side and the right-hand side of each equation; since the right-hand side is zero, each objective function is given by the absolute value of the corresponding left-hand side. In a single-objective optimization problem there is only one objective function to optimize, while in a multi-objective optimization problem there are several objective functions that have to be optimized simultaneously. In single-objective optimization, solutions can be compared easily: for a maximization problem, solution X is better than Y iff X > Y. Because of the multi-criterion nature of the comparison, solutions in a multi-objective space cannot be compared the way they are in the single-objective case. In multi-objective optimization, a solution dominates (is better than) another solution iff it has a better or equal value on all objectives and a strictly better value on at least one objective. The mathematical definitions of Pareto dominance and the Pareto optimal front for a maximization problem are as follows (Coello et al., 2004). Definition 1. Pareto dominance: a solution x dominates a solution y iff x is no worse than y in every objective and strictly better in at least one objective. Definition 2. Pareto optimal front: the Pareto front can be viewed as the image of the Pareto optimal points in the criterion space.
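The transformation described above can be sketched as follows. The small 2x2 system used here is a hypothetical illustration chosen for clarity, not one of the paper's benchmark problems:

```python
import numpy as np

# Hypothetical example system (not from the paper):
#   f1(x, y) = x^2 - y     = 0
#   f2(x, y) = x + y - 2   = 0
def residuals(v):
    x, y = v
    return np.array([x**2 - y, x + y - 2.0])

# Single-objective formulation: minimize the sum of absolute residuals.
def single_objective(v):
    return np.abs(residuals(v)).sum()

# Multi-objective formulation: each |f_i| is a separate objective
# to be minimized simultaneously.
def multi_objective(v):
    return np.abs(residuals(v))

# At an exact root, e.g. (x, y) = (1, 1), both formulations are zero.
print(single_objective(np.array([1.0, 1.0])))  # 0.0
```

Any solver driving these objectives to zero has, by construction, found a root of the original system.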

Grey Wolf Optimizer Algorithm (GWO)
The encircling behavior of a pack of grey wolves can be described by the following mathematical equations (Mirjalili et al., 2014):

D = |C · X_P(t) − X(t)|
X(t + 1) = X_P(t) − A · D

Here X_P and X denote the position vector of the prey and the position vector of a grey wolf, respectively, and 't' indicates the current iteration. A and C are coefficient vectors, calculated as follows:

A = 2a · r1 − a
C = 2 · r2

where r1 and r2 are random vectors in [0, 1]. The hunting behavior of grey wolves is modelled with the help of the alpha (best candidate solution), beta, and delta wolves, which are assumed to have better knowledge of the location of the prey. The other search agents, including the omegas, are therefore obliged to update their positions according to the positions of these best search agents, as per the following formulas (Mirjalili et al., 2014):

D_α = |C1 · X_α − X|,  D_β = |C2 · X_β − X|,  D_δ = |C3 · X_δ − X|
X1 = X_α − A1 · D_α,  X2 = X_β − A2 · D_β,  X3 = X_δ − A3 · D_δ
X(t + 1) = (X1 + X2 + X3) / 3

In order to balance exploration and exploitation, the parameter 'a' decreases from 2 to 0. The pseudo code of the GWO algorithm is provided in Figure 1 (Mirjalili et al., 2014).
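The update rules above can be sketched as a minimal GWO implementation. This is an illustrative sketch of the standard algorithm of Mirjalili et al. (2014), not the authors' exact code; parameter names and defaults are assumptions:

```python
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=30, n_iter=200, seed=0):
    """Minimal Grey Wolf Optimizer sketch (Mirjalili et al., 2014)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))

    def top3(X):
        # alpha, beta, delta: the three fittest wolves
        order = np.argsort([f(x) for x in X])
        return X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()

    alpha, beta, delta = top3(X)
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter          # 'a' decreases linearly from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a        # A = 2a*r1 - a
                C = 2.0 * r2                # C = 2*r2
                D = np.abs(C * leader - X[i])
                new += leader - A * D       # position estimate w.r.t. this leader
            X[i] = np.clip(new / 3.0, lo, hi)  # average of the three estimates
        alpha, beta, delta = top3(X)
    return alpha, f(alpha)
```

For the equation-system application, f would be, for instance, the summation of the absolute residuals of the system.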

Multi-Objective Particle Swarm Optimization
The PSO algorithm, proposed by Eberhart and Kennedy (1995), is used for solving optimization problems (Pant and Singh, 2011; Pant et al., 2015b; Adewumi and Arasomwan, 2016). It is inspired by the food-searching behavior of flocking birds or schooling fish. The initialization process in PSO starts with an initial swarm, a randomly generated population of particles. After that, a random velocity is assigned to each particle so that it can propagate through the search space and move towards an optimum over a number of iterations. Each particle has its own memory, so it can remember the best position it has attained in the past. This best position is called the personal best position, denoted Pbest. Each particle has its own Pbest, and the particle with the best fitness value is called the global best particle (Gbest). Suppose that the search space is D-dimensional; the i-th particle of the population can be represented by a D-dimensional vector X_i = (x_i1, x_i2, ..., x_iD)^T. The velocity of this particle can be represented by another D-dimensional vector V_i = (v_i1, v_i2, ..., v_iD)^T. The previously best visited position of the i-th particle is denoted by P_i = (p_i1, p_i2, ..., p_iD)^T, and the best particle in the swarm is denoted by G = (g_1, g_2, ..., g_D)^T. Particles update their positions in the search space with the help of Equation (8) and Equation (9): on the basis of a particle's previous velocity, Equation (8) calculates a new velocity for each particle, and Equation (9) updates each particle's position in the search space.

v_id^(k+1) = w · v_id^k + c1 · r1 · (p_id − x_id^k) + c2 · r2 · (g_d − x_id^k)   (8)
x_id^(k+1) = x_id^k + v_id^(k+1)   (9)
where k denotes the iteration number and d = 1, 2, ..., D; w denotes the inertia weight, which controls the momentum of the particle by weighing the contribution of the previous velocity; i = 1, 2, ..., N, where N is the swarm size; r1 and r2 are random numbers uniformly distributed in [0, 1]; and c1 and c2 are positive constants called acceleration coefficients.
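One update step of Equations (8) and (9) for a single particle can be sketched as follows; the parameter values and vectors are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

w, c1, c2 = 0.7, 1.5, 1.5          # inertia weight and acceleration coefficients
x = np.array([0.5, -1.0])          # current position x_i^k
v = np.array([0.1, 0.2])           # current velocity v_i^k
pbest = np.array([0.4, -0.8])      # personal best position P_i
gbest = np.array([0.0, 0.0])       # global best position G

r1, r2 = rng.random(2), rng.random(2)
v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Equation (8)
x = x + v                                                   # Equation (9)
```

In a full run, this step is applied to every particle in the swarm at every iteration, with Pbest and Gbest refreshed whenever a better position is found.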
PSO has been extended for solving multi-objective problems (MOPs); the extension is generally known as multi-objective particle swarm optimization (MOPSO) (Coello et al., 2004; Ghodratnama et al., 2015). MOPSO-CD is a swarm-based evolutionary multi-objective optimization technique that uses the crowding distance technique (Raquel and Naval, 2005; Chen et al., 2011). A flowchart showing the procedure of MOPSO-CD is provided in Figure 2 (Hassan et al., 2005; Zhao et al., 2007). Further details of MOPSO can be found in Raquel and Naval (2005) and Ghodratnama et al. (2015).
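The crowding distance used by MOPSO-CD to maintain diversity in the archive can be sketched as follows. This is the standard formulation; it is not the authors' code:

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance for an (n x m) matrix F of objective values:
    for each objective, boundary solutions get infinite distance and
    interior solutions accumulate the normalized gap between their
    nearest neighbors along that objective."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf   # keep boundary solutions
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue
        for k in range(1, n - 1):
            d[order[k]] += (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return d
```

Solutions with larger crowding distance lie in less dense regions of the front and are preferred when the archive is truncated.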

Description of the Problems
This section presents the brief description of the problems investigated in this work.

(a) Two-Equation Systems with Numerical Comparisons
The following systems are considered. Problem 1: In this example, a simple system of equations, having only two equations, is used. It has more than one solution. The ability of the evolutionary approaches to detect several solutions in a single run is illustrated.

(c) Neurophysiology Application
A system having more than two equations is considered (Verschelde et al., 1994); it consists of the following equations.
The constants c_j can be chosen randomly. We have considered c_j = 0 for j = 1, 2, 3, 4.
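As commonly stated in the literature (Verschelde et al., 1994; Grosan and Abraham, 2008), this benchmark has six unknowns and six equations; the residual form below, with c_j = 0 as in the text, is a hedged reconstruction of that standard statement rather than a verbatim copy of the paper's equations:

```python
import numpy as np

def neurophysiology_residuals(x, c=(0.0, 0.0, 0.0, 0.0)):
    """Residuals of the neurophysiology benchmark system
    (standard formulation; c_j = 0 as considered in the text)."""
    x1, x2, x3, x4, x5, x6 = x
    return np.array([
        x1**2 + x3**2 - 1.0,
        x2**2 + x4**2 - 1.0,
        x5 * x3**3 + x6 * x4**3 - c[0],
        x5 * x1**3 + x6 * x2**3 - c[1],
        x5 * x1 * x3**2 + x6 * x2 * x4**2 - c[2],
        x5 * x1**2 * x3 + x6 * x2**2 * x4 - c[3],
    ])
```

With c_j = 0, the point (1, 0, 0, 1, 0, 0) makes all six residuals vanish, which illustrates why the system admits multiple solutions.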

(d) Hydrocarbon Combustion
This example addresses the equilibrium of the products of a hydrocarbon combustion process. The problem is reformulated in the 'element variables' space (Maranas and Floudas, 1995; Platt, 2015).

Experiments and Results
In this section, we present numerical results for the problems described in the previous section. The parameter values used by the evolutionary approaches in the various problems are given in Table 1 and Table 2.

(a) Two-Equation Systems with Numerical Comparisons
The results for Problem 1 and Problem 2 obtained by applying Newton's, secant, Broyden's, and Effati's methods (Grosan and Abraham, 2008) and the grey wolf optimizer algorithm are presented in Table 3 and Table 4, respectively. Figure 3 and Figure 4 depict the grey wolf optimization algorithm's search history for Problem 1 and Problem 2, respectively.

(b) Illustrative Example
After applying the evolutionary approach (MOPSO) to Problem 3, the nondominated solutions obtained are tabulated in Table 5, and the corresponding Pareto front is presented in Figure 5. The summation of the absolute values of the objective functions for all nondominated solutions is plotted in Figure 6.

(c) Neurophysiology Application
A few of the nondominated solutions obtained by MOPSO for the neurophysiology application are presented in Table 6, along with the corresponding objective values. Figure 8 shows the summation of the absolute values of the objective functions for all nondominated solutions of the neurophysiology application problem. Figure 9 depicts the GWO search history for the neurophysiology application problem, showing the single solution obtained by the grey wolf optimization algorithm with 30 grey wolves after 200 iterations.

(d) Hydrocarbon Combustion
Some of the nondominated solutions obtained by MOPSO for the hydrocarbon combustion problem are given in Table 7, along with the corresponding objective values. The summation of the absolute values of the objective functions for all nondominated solutions of this problem is depicted in Figure 10.

Conclusion
In this research, the authors have presented a framework based purely on metaheuristics (Gogna and Tayal, 2013) for solving systems of complex nonlinear equations. The proposed framework proves effective even for systems of nonlinear equations arising in various fields of engineering and science. First, we compared our approach on some simple systems having only two equations. The results obtained, presented in Table 3 and Table 4, are very competitive, clearly outperforming some classical as well as newer techniques established in the literature. The framework was then applied to an illustrative example having more than one solution; several nondominated solutions were obtained, a few of which are reported in Table 5. It was also applied to a neurophysiology application and to a hydrocarbon combustion problem. These real-life problems consist of systems with a higher number of equations, and some nondominated solutions for them are reported in Table 6 and Table 7. Since we transformed the system of equations into a single-objective/multi-objective optimization problem, our goal is to obtain values as close to zero as possible for each of the objectives involved. The summation of the absolute values of the objective functions can be considered a measure of solution quality: the closer this summation is to zero, the better the solution. It can be clearly deduced from the graphical illustrations provided in the article that the proposed framework obtains excellent results even for quite complicated systems such as the neurophysiology application and the combustion of hydrocarbons.