Optimal Eighth-Order Solver for Nonlinear Equations with Applications in Chemical Engineering

A new iterative technique for solving nonlinear equations is proposed in this work. The new scheme consists of three steps: the first two are based on the sixth-order modified Halley's method previously presented by the authors, and the third is a Newton step, with suitable approximations for the first derivatives that appear in the scheme. The eighth order of convergence of the new method is proved via Mathematica code. Every iteration of the presented scheme requires the evaluation of three functions and one first derivative. Therefore, the scheme is optimal in the sense of the Kung-Traub conjecture. Several nonlinear test problems are considered to compare the performance of the proposed method against other optimal methods of the same order. As an application, we apply the new scheme to some nonlinear problems from the field of chemical engineering, such as a chemical equilibrium problem (conversion in a chemical reactor), the azeotropic point of a binary solution, and the volume from the van der Waals equation. Comparisons and examples show that the presented method is efficient and comparable to the existing techniques of the same order.


Introduction
Searching for a solution of $g(x) = 0$, where $g(x)$ is nonlinear, is highly significant in mathematics. Newton's iterative technique for solving such equations is defined as
$$x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)}. \qquad (1)$$
It was shown by Traub [1] that the scheme given by (1) has second-order convergence. Many researchers have improved Newton's method to obtain better results and to increase the convergence order; for instance, see [2-5] and the references therein. Petković [6] presented a general class of multipoint root-finding methods of arbitrary order $2^n$. Because of the huge number of iterative techniques that appear in the literature, Petković et al. [7] presented a review of the most efficient iterative methods and developed techniques in a general sense. Also, Cordero et al. [8] presented a general survey on optimal iterative schemes and how to design optimal methods of different orders.
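As a concrete illustration, scheme (1) can be sketched in a few lines of Python; the test equation $x^3 - 2 = 0$, the starting point, and the tolerance are our own illustrative choices, not taken from the paper:

```python
def newton(g, dg, x0, tol=1e-13, max_iter=50):
    """Newton's method (1): x_{n+1} = x_n - g(x_n)/g'(x_n)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:          # stop once the residual is negligible
            break
        x = x - gx / dg(x)
    return x

# Illustrative use: solve x^3 - 2 = 0 starting from x0 = 1.5
root = newton(lambda t: t**3 - 2, lambda t: 3*t**2, 1.5)
```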
One of the most famous improvements of Newton's scheme is the third-order technique given by Halley [9]:
$$z_{n+1} = z_n - \frac{2 g(z_n) g'(z_n)}{2 (g'(z_n))^2 - g(z_n) g''(z_n)}.$$
Halley's method has been studied widely and improved in different ways. For example, a two-step Halley scheme with sixth-order convergence was implemented by Noor et al. [10] using a predictor-corrector technique. However, finding the second derivative is not always an easy task. Because of that, Noor et al. [11] improved the previous technique with the help of finite differences and implemented a new second-derivative-free scheme of order five. Very recently, Said Solaiman et al. [12] established two sixth-order modifications of Halley's method, one of them free of second derivatives.
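Halley's third-order iteration above can likewise be sketched in Python; the test equation and starting point are again illustrative choices of ours:

```python
def halley(g, dg, d2g, z0, tol=1e-13, max_iter=50):
    """Halley's method: z_{n+1} = z_n - 2 g g' / (2 (g')^2 - g g'')."""
    z = z0
    for _ in range(max_iter):
        gz = g(z)
        if abs(gz) < tol:
            break
        dgz, d2gz = dg(z), d2g(z)
        z = z - 2.0 * gz * dgz / (2.0 * dgz**2 - gz * d2gz)
    return z

# Cubic convergence on x^3 - 2 = 0 from z0 = 1.5
root = halley(lambda t: t**3 - 2, lambda t: 3*t**2, lambda t: 6*t, 1.5)
```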
One of the most common ways to compare the efficiency of iterative methods is the efficiency index, defined as $q^{1/r}$, where $q$ is the convergence order of the iterative scheme and $r$ is the number of functional evaluations required at each iteration. Kung and Traub [13] conjectured that an iterative scheme requiring $r$ functional evaluations per iteration is optimal if its order of convergence equals $2^{r-1}$. Many authors have constructed optimal iterative methods of different orders. The usual way to construct an optimal method is the composition technique, together with interpolations and approximations that minimize the number of functional evaluations. Different optimal fourth-order iterative methods have been constructed; see for example [14-16]. Optimal eighth-order methods have also been presented by many authors; see [2,17-22]. A comparison using the dynamics of different families of optimal eighth-order methods was proposed by Chun et al. [23].
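Both notions can be checked with a one-liner each: the efficiency index is $q^{1/r}$, and Kung-Traub optimality requires $q = 2^{r-1}$. The sample values below are the classical ones (e.g. Newton: $q = 2$, $r = 2$):

```python
def efficiency_index(q, r):
    """Efficiency index q**(1/r): order q, r functional evaluations per iteration."""
    return q ** (1.0 / r)

def is_kung_traub_optimal(q, r):
    """Kung-Traub conjecture: optimal order for r evaluations is 2**(r-1)."""
    return q == 2 ** (r - 1)

newton_ei = efficiency_index(2, 2)   # about 1.414, and 2 == 2**(2-1): optimal
mh3_ei = efficiency_index(8, 4)      # about 1.682, and 8 == 2**(4-1): optimal
```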
We propose in this work a new optimal eighth-order iterative technique for nonlinear equations. The new method is a modification of the modified Halley method (MH2) introduced by Said Solaiman et al. [12]. We use the composition technique with Hermite interpolation for the first derivative to reach eighth-order convergence with optimality, which can be considered the major motivation of this research. The paper is organized as follows. In Section 2, the new scheme is illustrated. In Section 3, the order of convergence of the new scheme is determined. In Section 4, four chemical engineering problems, in addition to six nonlinear examples, are used to demonstrate the efficiency of the proposed scheme, and tables illustrate the comparison between our optimal method and other techniques of the same order. Lastly, in Section 5, the conclusion is given.

The New Method
Let $g(x) = 0$ be an equation such that $g(x)$ is a nonlinear function defined on some open interval $A$ and sufficiently differentiable. Let $a \in A$ be a simple root of $g(x)$, and consider $x_0$ as an initial guess sufficiently close to $a$. Said Solaiman and Hashim [12] obtained the following iterative scheme using Taylor's expansion of $g(x)$ together with Newton's and Halley's methods.
Said Solaiman et al. [12] named Algorithm 1 as MH1 and proved that MH1 has convergence order six. MH1 needs, at each iteration, the evaluation of two functions, two first derivatives, and one second derivative. So, the efficiency index of MH1 is $(6)^{1/5} \approx 1.431$, which is not as good as that of Halley's method.
To improve the efficiency index of Algorithm 1, a Hermite approximation of the second derivative is used to produce a second-derivative-free method, in which $g''(y_n)$ is approximated by
$$R(x_n, y_n) = \left[ 3\,\frac{g(y_n) - g(x_n)}{y_n - x_n} - 2 g'(y_n) - g'(x_n) \right] \frac{2}{x_n - y_n}.$$
Algorithm 2 is called MH2. Said Solaiman et al. [12] proved that it is also of order six. MH2 needs, at each iteration, the computation of two functions and two first derivatives only. So, MH2 has efficiency index $(6)^{1/4} \approx 1.565$, which is better than $(6)^{1/5}$. In order to reach optimality, we reduce the number of functional evaluations per iteration by using divided differences, Hermite interpolation, and the composition of Algorithm 2 with Newton's method. Using Algorithm 2 as a predictor and Newton's technique as a corrector, one obtains Algorithm 3.

Algorithm 1: Let $x_0$ be an initial guess of the solution of $g(x) = 0$, and let $y_n = x_n - g(x_n)/g'(x_n)$. Then we can approximate $x_{n+1}$ by the iterative method defined by:
$$x_{n+1} = y_n - \frac{g(y_n)}{g'(y_n)} - \frac{2 (g(y_n))^2 g'(y_n) g''(y_n)}{4 (g'(y_n))^4 - 4 g(y_n) (g'(y_n))^2 g''(y_n) + (g(y_n))^2 (g''(y_n))^2}.$$

Algorithm 2: Let $x_0$ be an initial guess of the solution of $g(x) = 0$, and let $y_n = x_n - g(x_n)/g'(x_n)$. Then we can approximate $x_{n+1}$ by the iterative method defined by:
$$x_{n+1} = y_n - \frac{g(y_n)}{g'(y_n)} - \frac{2 (g(y_n))^2 g'(y_n) R(x_n, y_n)}{4 (g'(y_n))^4 - 4 g(y_n) (g'(y_n))^2 R(x_n, y_n) + (g(y_n))^2 (R(x_n, y_n))^2}.$$

Algorithm 3: Let $x_0$ be an initial guess of the solution of $g(x) = 0$, and let $y_n = x_n - g(x_n)/g'(x_n)$. Then we can approximate $x_{n+1}$ by the iterative method defined by:
$$w_n = y_n - \frac{g(y_n)}{g'(y_n)} - \frac{2 (g(y_n))^2 g'(y_n) R(x_n, y_n)}{4 (g'(y_n))^4 - 4 g(y_n) (g'(y_n))^2 R(x_n, y_n) + (g(y_n))^2 (R(x_n, y_n))^2},$$
$$x_{n+1} = w_n - \frac{g(w_n)}{g'(w_n)}. \qquad (5)$$

Algorithm 3 has convergence order twelve, with error term $e_{n+1} = c_2^5 (c_4 - c_2 c_3)^2 e_n^{12} + O(e_n^{13})$.
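The steps of Algorithm 2 (MH2) translate directly into double-precision Python. This is a sketch only: the test equation, guards, and tolerance are our own choices, and the paper itself works in multiple precision:

```python
def mh2(g, dg, x0, tol=1e-13, max_iter=25):
    """Sixth-order MH2 (Algorithm 2): second-derivative-free via R(x_n, y_n)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            return x
        dgx = dg(x)
        y = x - gx / dgx                       # Newton predictor
        gy, dgy = g(y), dg(y)
        if abs(gy) < tol:
            return y
        # Hermite approximation of g''(y_n)
        R = (3.0 * (gy - gx) / (y - x) - 2.0 * dgy - dgx) * 2.0 / (x - y)
        x = (y - gy / dgy
             - 2.0 * gy**2 * dgy * R
               / (4.0 * dgy**4 - 4.0 * gy * dgy**2 * R + gy**2 * R**2))
    return x

# Illustrative use on x^3 - 2 = 0
root = mh2(lambda t: t**3 - 2, lambda t: 3*t**2, 1.3)
```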
At each iteration, Algorithm 3 requires three function evaluations and three first-derivative evaluations. Our goal is to rewrite $g'(y_n)$ and $g'(w_n)$ using a combination of already-evaluated quantities.
Using the second-order polynomial interpolation of the function $g$ at $y_n$, one simply obtains
$$g'(y_n) \approx g[y_n, x_n] + (y_n - x_n)\, g[y_n, x_n, x_n], \qquad (7)$$
where $g[y_n, x_n, x_n] = (g[y_n, x_n] - g'(x_n))/(y_n - x_n)$. Simplifying (7) and writing $q(y_n)$ for the resulting approximation of $g'(y_n)$ gives
$$g'(y_n) = q(y_n) \approx 2 g[y_n, x_n] - g'(x_n). \qquad (8)$$
Now we use the technique proposed by Petković [6] and Petković et al. [7] to approximate $g'(w_n)$. Consider the Hermite interpolating polynomial of degree 3,
$$k(t) = c_1 + c_2 (t - w_n) + c_3 (t - w_n)^2 + c_4 (t - w_n)^3, \qquad (9)$$
where $c_1, c_2, c_3,$ and $c_4$ need to be found. Imposing the conditions $k(x_n) = g(x_n)$, $k(y_n) = g(y_n)$, $k(w_n) = g(w_n)$, and $k'(x_n) = g'(x_n)$, solving the resulting system of linear equations, substituting the coefficients into Eq. (9), and using the approximation $g'(w_n) \approx k'(w_n)$, one can write
$$k'(w_n) = g[w_n, x_n]\left( 2 + \frac{x_n - w_n}{y_n - w_n} \right) - \frac{(x_n - w_n)^2}{(x_n - y_n)(y_n - w_n)}\, g[x_n, y_n] + g'(x_n)\, \frac{y_n - w_n}{x_n - y_n} \approx g'(w_n). \qquad (10)$$
Replacing $g'(y_n)$ and $g'(w_n)$ in Algorithm 3 and in Eq. (5) with the approximations (8) and (10), respectively, the following algorithm is obtained.
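Both approximations can be sanity-checked numerically: (8) reproduces $g'(y_n)$ exactly when $g$ is a quadratic, and (10) reproduces $g'(w_n)$ exactly when $g$ is a cubic, since each is the derivative of the corresponding interpolating polynomial. A small Python check (the test points and polynomials are our own choices):

```python
def dd(g, a, b):
    """First-order divided difference g[a, b]."""
    return (g(a) - g(b)) / (a - b)

# Eq. (8): g'(y) ≈ 2 g[y, x] - g'(x) is exact for quadratics
g2 = lambda t: 3*t**2 - 5*t + 1
dg2 = lambda t: 6*t - 5
x, y = 0.2, 1.7
q = 2.0 * dd(g2, y, x) - dg2(x)        # should equal dg2(1.7) = 5.2

# Eq. (10): k'(w) is exact for cubics
g3 = lambda t: t**3
dg3 = lambda t: 3*t**2
x, y, w = 1.0, 2.0, 3.0
kp = (dd(g3, w, x) * (2.0 + (x - w) / (y - w))
      - (x - w)**2 / ((x - y) * (y - w)) * dd(g3, x, y)
      + dg3(x) * (y - w) / (x - y))    # should equal dg3(3.0) = 27
```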
We call the above scheme the third modified Halley method (MH3); it has convergence order eight, as shown in the next section. Each iteration of Algorithm 4 requires the evaluation of three functions and one first derivative only. Based on the conjecture of Kung and Traub [13], MH3 attains optimality and has efficiency index $(8)^{1/4} \approx 1.682$.

Algorithm 4: Let $x_0$ be an initial guess of the solution of $g(x) = 0$, and let $y_n = x_n - g(x_n)/g'(x_n)$. Then we can approximate $x_{n+1}$ by the iterative method defined by:
$$w_n = y_n - \frac{g(y_n)}{q(y_n)} - \frac{2 (g(y_n))^2 q(y_n) R(x_n, y_n)}{4 (q(y_n))^4 - 4 g(y_n) (q(y_n))^2 R(x_n, y_n) + (g(y_n))^2 (R(x_n, y_n))^2},$$
$$x_{n+1} = w_n - \frac{g(w_n)}{k'(w_n)},$$
where $q(y_n) = 2 g[y_n, x_n] - g'(x_n)$ replaces $g'(y_n)$ (also inside $R(x_n, y_n)$), and $k'(w_n)$ is given by (10).
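For illustration, Algorithm 4 (MH3) can be prototyped in double-precision Python as below. This is a sketch under our reading of the scheme: $q(y_n)$ stands in for $g'(y_n)$ everywhere, including inside $R$, and $k'(w_n)$ replaces $g'(w_n)$; the test equation, guards, and tolerances are our own choices (the paper works in multiple precision with Mathematica):

```python
def mh3(g, dg, x0, tol=1e-13, max_iter=10):
    """Optimal eighth-order MH3 (Algorithm 4): each iteration uses only
    g(x_n), g(y_n), g(w_n), and g'(x_n)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            return x
        dgx = dg(x)
        y = x - gx / dgx                              # Newton predictor
        gy = g(y)
        if abs(gy) < tol:
            return y
        gxy = (gy - gx) / (y - x)                     # g[y_n, x_n]
        q = 2.0 * gxy - dgx                           # Eq. (8): g'(y_n) ~ q(y_n)
        R = (3.0 * gxy - 2.0 * q - dgx) * 2.0 / (x - y)
        w = (y - gy / q
             - 2.0 * gy**2 * q * R
               / (4.0 * q**4 - 4.0 * gy * q**2 * R + gy**2 * R**2))
        gw = g(w)
        if abs(gw) < tol:
            return w
        gwx = (gw - gx) / (w - x)                     # g[w_n, x_n]
        kp = (gwx * (2.0 + (x - w) / (y - w))         # Eq. (10): g'(w_n) ~ k'(w_n)
              - (x - w)**2 / ((x - y) * (y - w)) * gxy
              + dgx * (y - w) / (x - y))
        x = w - gw / kp                               # Newton corrector
    return x

# Illustrative use on x^3 - 2 = 0
root = mh3(lambda t: t**3 - 2, lambda t: 3*t**2, 1.3)
```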

Order of Convergence
In this section we establish the order of convergence of the presented method MH3 (Algorithm 4), using Mathematica code to carry out the symbolic computations. For the next theorem, let $a$ be a root of $g(x)$, and let $e_n = x_n - a$ be the error at the $n$-th iteration. Using Taylor's series expansion of $g(x)$ about $x = a$, the divided difference is defined in Mathematica as

In[2]:= g[x_, y_] := (g[x] - g[y])/(x - y); (*This is the finite difference*)

Applications and Numerical Examples
To show the efficiency of the new optimal eighth-order method MH3, several examples will be tested, including some chemical engineering problems. Comparisons will be made against the following optimal eighth-order schemes: the method proposed by Kung and Traub [13], the method presented by Cordero et al. [2] with $b = 1$, the second case of the first family with $b = 1$ proposed by Sharma et al. [19], the method presented by Behl et al. [21] with $b = 1$, and special case 2 with $b = 1$ of the method presented by Behl et al. [18]. We denote these methods by the abbreviations KT, CLMT, SA, BGMM, and BAM, respectively.
We consider $|x_n - x_{n-1}| < 10^{-30}$ and $|f(x_n) - f(x_{n-1})| < 10^{-30}$, required simultaneously, as the stopping criterion of the computer programs. Mathematica 9 was used to carry out all computations with 10000 significant digits.
Tabs. 1-5 illustrate the comparisons between the iterative methods, where $n$ indicates the number of iterations needed to satisfy the stopping criterion, $x_n$ is the approximate root, $|x_n - x_{n-1}|$ is the absolute difference between two successive approximations of the root such that $|x_n - x_{n-1}| < 10^{-30}$ and $|f(x_n) - f(x_{n-1})| < 10^{-30}$, and $f(x_n)$ is the function value at the approximate root. The approximated computational order of convergence (ACOC) introduced by Cordero et al. [24] is estimated as
$$\mathrm{ACOC} \approx \frac{\ln\left( |x_{n+1} - x_n| \,/\, |x_n - x_{n-1}| \right)}{\ln\left( |x_n - x_{n-1}| \,/\, |x_{n-1} - x_{n-2}| \right)}.$$
Finally, we report the time in seconds required to satisfy the stopping criterion, measured with the built-in function "TimeUsed" in Mathematica 9. All calculations were performed under the same conditions on an Intel Core i7-3770 CPU @ 3.40 GHz with 4 GB RAM, running 64-bit Microsoft Windows 10 on an x64-based processor.
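The ACOC needs only four successive iterates. As a sanity check, the sketch below estimates it for plain Newton iterates on $x^2 - 2 = 0$ (our own test problem), expecting a value near Newton's order 2:

```python
import math

def acoc(x0, x1, x2, x3):
    """Approximated computational order of convergence (Cordero et al. [24])
    from four successive iterates."""
    e1, e2, e3 = abs(x3 - x2), abs(x2 - x1), abs(x1 - x0)
    return math.log(e1 / e2) / math.log(e2 / e3)

# Newton iterates for x^2 - 2 = 0 starting from 2.0
xs = [2.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
order = acoc(*xs[1:5])   # should be close to 2
```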

Consider the following test examples:
Example 1 (A chemical equilibrium problem) Consider the equation from [25] which describes the fraction of the nitrogen-hydrogen feed that is converted to ammonia (this fraction is called the fractional conversion). Assuming a pressure of 250 atm and a temperature of 500 C, the original problem consists of finding the root of the function
$$f_1(x) = \frac{8 (4 - x)^2 x^2}{(6 - 3x)^2 (2 - x)} - 0.186,$$
which can be reduced to the polynomial form
$$f_1(x) = x^4 - 7.79075 x^3 + 14.7445 x^2 + 2.511 x - 1.674.$$
The four roots of this polynomial are $x_1 = 0.27776$, $x_2 = -0.384094$, $x_3 = 3.94854 + 0.316124i$, and $x_4 = 3.94854 - 0.316124i$. By definition, the fractional conversion must lie between 0 and 1, so only the first real root $x_1 = 0.27776$ is acceptable and physically meaningful. We started with $x_0 = 0.3$ as the initial guess. The results are summarized in Tab. 1.
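The physical root of Example 1 can be verified numerically. The quartic below is the expanded polynomial form of $f_1$ (obtained by clearing the denominators of the rational form and dividing by 8), and plain Newton's method (our choice, for brevity) is run from the same initial guess $x_0 = 0.3$:

```python
def f1(x):
    # expanded polynomial form of the fractional-conversion equation
    return x**4 - 7.79075*x**3 + 14.7445*x**2 + 2.511*x - 1.674

def df1(x):
    return 4*x**3 - 23.37225*x**2 + 29.489*x + 2.511

x = 0.3                       # initial guess used in the paper
for _ in range(50):
    fx = f1(x)
    if abs(fx) < 1e-14:       # residual at machine-precision level
        break
    x -= fx / df1(x)
```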

Example 2 (Azeotropic point of a binary solution)
Consider the problem obtained by Shacham et al. [26] to determine the azeotropic point of a binary solution. The results for this example are reported in Tab. 2.

Example 3 (Conversion in a chemical reactor)
Consider the following problem, in which x is the fractional conversion of the species in a chemical reactor; therefore, x must be bounded between 0 and 1.
The solution of this equation is $x = 0.7573962463$. As an initial guess, we selected $x_0 = 0.77$. The results are given in Tab. 3.
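The quoted solution can be cross-checked against the standard fractional-conversion reactor model from the literature, $f_3(x) = \frac{x}{1-x} - 5\ln\frac{0.4(1-x)}{0.4-0.5x} + 4.45977$. This equation is a reconstruction on our part (it is lost in this copy of the text), chosen because its root agrees with the quoted value; a quick Newton run from $x_0 = 0.77$:

```python
import math

def f3(x):
    # assumed reactor model (reconstructed; the equation is not stated in this copy)
    return x/(1.0 - x) - 5.0*math.log(0.4*(1.0 - x)/(0.4 - 0.5*x)) + 4.45977

def df3(x):
    # analytic derivative of the assumed model
    return 1.0/(1.0 - x)**2 - 5.0*(-1.0/(1.0 - x) + 0.5/(0.4 - 0.5*x))

x = 0.77                      # initial guess used in the paper
for _ in range(50):
    fx = f3(x)
    if abs(fx) < 1e-13:
        break
    x -= fx / df3(x)
```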

Example 4 (Volume from the van der Waals equation) The van der Waals equation of state is given by
$$\left( p + \frac{a n^2}{V^2} \right) (V - n b) = n R T,$$
where $p$, $V$, $T$, and $n$ are the pressure, volume, temperature in Kelvin, and number of moles of the gas, and $R$ is the gas constant, $R = 0.0820578$. Finally, $a$ and $b$ are the van der Waals constants, which depend on the type of gas. It is clear that the above equation is nonlinear in $V$, and it can be reduced to a cubic polynomial in $V$.
For instance, to find the volume of 1.4 moles of benzene vapor under a pressure of 40 atm and a temperature of 500 C, given that the van der Waals constants for benzene are $a = 18$ and $b = 0.1154$, the problem reduces to finding the roots of the polynomial
$$f_4(x) = 40 x^3 - 95.26535116 x^2 + 35.28 x - 5.6998368.$$
This equation has three roots: $x = 1.97078$ and $x = 0.205425 \pm 0.173507i$. As $V$ is a volume, only the positive real root $x = 1.97078$ is physically meaningful. We considered the initial approximation $x_0 = 2$ for this problem. The results and comparisons are summarized in Tab. 4.
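The physically meaningful volume can be verified with a short Newton run on $f_4$ from the same starting point $x_0 = 2$ (Newton is our choice here, used only as a check):

```python
def f4(x):
    # cubic in the volume obtained from the van der Waals equation
    return 40.0*x**3 - 95.26535116*x**2 + 35.28*x - 5.6998368

def df4(x):
    return 120.0*x**2 - 190.53070232*x + 35.28

x = 2.0                       # initial approximation used in the paper
for _ in range(50):
    fx = f4(x)
    if abs(fx) < 1e-12:
        break
    x -= fx / df4(x)
```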

Example 5
To study the proposed method on some general nonlinear functions, we consider six test functions. The comparison results for Example 5 are presented in Tab. 5.
It is clear from Tabs. 1-5 that MH3 needs fewer iterations to satisfy the stopping criterion than the other tested methods, or in some cases the same number of iterations. Based on the numerical experiments, the iterative scheme MH3 is comparable to the tested schemes of equal order. Note that even when MH3 needs the same number of iterations to satisfy the convergence criterion, it is still superior to the other schemes considered in this study, since $|x_n - x_{n-1}|$ and $f(x_n)$ are smaller for MH3 than for the other tested methods of the same order. Also, in the last column of Tabs. 1-5, the CPU time required by MH3 to satisfy the convergence condition is smaller for nine out of the ten functions than that of the other tested methods. Overall, based on either the number of iterations or the CPU time needed to satisfy the convergence criterion, the new method is preferable to the tested methods.
For the test functions in Example 5, we also test another convergence condition, namely the number of iterations required such that $|x_n - x_{n-1}| < 10^{-200}$. It is evident from Tab. 6 that MH3 requires a number of iterations that is fewer than or equal to that needed by the tested methods of equal order of convergence. Overall, MH3 is comparable to the other tested methods when taking into account the accuracy of the approximate zero together with the CPU time needed to satisfy the stopping criterion.

Conclusion
A new optimal root-finding scheme for nonlinear equations has been established in this work. The optimality of the proposed method was achieved by using the composition technique together with Hermite polynomials and finite differences. The software Mathematica was used to show that the optimal technique converges with order eight. Several numerical examples, including four real-life problems from the field of chemical engineering, were examined, demonstrating the strength of the proposed method. Overall, the implemented method is comparable to the tested iterative schemes of equal order of convergence.
Funding Statement: We are grateful for the financial support from UKM's research Grant GUP-2019-033.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.