A New High-Order and Efficient Family of Iterative Techniques for Nonlinear Models

In this paper, we construct a new high-order and efficient iterative technique for solving systems of nonlinear equations. For this purpose, we extend the earlier scalar scheme of Kou et al. [19] to systems of nonlinear equations while preserving the same convergence order. Moreover, by adding one additional step, we obtain at least fifth-order convergence for every value of a free parameter θ ∈ R, and for θ = −1, the method reaches maximum sixth-order convergence. We present an extensive convergence analysis of our scheme. The analytical discussion of the work is supported by numerical experiments on some applied-science problems and a large system of nonlinear equations. Furthermore, the numerical results demonstrate the validity and reliability of the suggested methods.


Introduction
The solution of nonlinear equations of the following form: where F : D ⊆ Rⁿ → Rⁿ, is a broader and more challenging problem than the scalar case. Several problems, such as neurophysiology, kinematics synthesis, transport theory, chemical equilibrium, reactor steering, combustion, and economic modeling problems, can be phrased in terms of (1); details can be found in [1–9]. There are different ways to construct iterative schemes for nonlinear systems. We focus on the natural approach: methods are first constructed for scalar equations and then extended to the multidimensional case. Researchers such as Cordero et al. [10], Abad et al. [11], Cordero et al. [12], and Wang and Zhang [13] adopted this approach to construct schemes for nonlinear systems.
There is no doubt that some scalar schemes can be extended to systems of nonlinear equations with the help of suitable transformations or substitutions. The most important and crucial task, however, is to retain the same convergence order at a low computational cost. Therefore, researchers have also tried other approaches and procedures to construct new, higher-order iterative methods for systems of nonlinear equations. Recently, Sharma et al. [14] proposed fourth- and sixth-order iterative methods based on weighted-Newton iteration. Very recently, Artidiello et al. [15] provided fourth-order methods based on the weight-function approach. Other researchers have used approaches such as quadrature formulae, Adomian polynomials, and divided differences to construct iterative schemes for nonlinear systems. For details of these approaches, one can refer to standard textbooks [16–18].
In this paper, our main objective is to construct a new high-order family of iterative methods for systems of nonlinear equations. The development of our scheme is based on the scalar scheme proposed by Kou et al. [19]. First, we extend their scheme to systems of nonlinear equations while retaining the same convergence order. Then, we increase the convergence order up to six by adding one additional substep to the same scheme. The efficiency of the suggested iterative techniques is demonstrated on several applied-science problems and academic numerical examples.
We found that our proposed methods perform better than the existing ones in terms of residual error, difference between two consecutive iterations, and asymptotic error constants.

Higher-Order Scheme for Multidimensional Case
Here, we consider a higher-order scheme for simple zeros of a univariate function proposed by Kou et al. [19], which is defined as follows: where θ ∈ R is a free disposable parameter. Expression (2) has at least cubic convergence for every θ ∈ R, and it attains maximum fourth-order convergence for θ = −1. It is a linear combination of the well-known Potra–Pták and Newton–Steffensen methods; more details can be found in Kou et al.'s study [19]. We easily recover the Potra–Pták [20] and Newton–Steffensen [21] methods from scheme (2) for θ = 1 and θ = 0, respectively. We can rewrite scheme (2) in the following way: or where [y_i, x_i; f] is the well-known divided difference of first order. However, scheme (2) does not apply directly to a system of nonlinear equations. So, our foremost aim is to extend this scheme to systems of nonlinear equations while retaining the same convergence order. Therefore, we rewrite expression (4) in the following way: where θ is a free disposable parameter and [y^(j), x^(j); F] is a divided difference of order one.
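The displayed formulas of scheme (2) are not reproduced in this excerpt, so the sketch below shows one plausible reading of it: a linear blend, with weights θ and 1 − θ, of a Potra–Pták step and a Newton–Steffensen step built on the same Newton point. Only the θ = 1 and θ = 0 endpoints are confirmed by the text; the weighting itself is an assumption.

```python
def kou_step(f, df, x, theta=-1.0):
    """One step of (our reading of) scalar scheme (2): a linear
    combination, with assumed weights theta and 1 - theta, of a
    Potra-Ptak step and a Newton-Steffensen step built on the same
    Newton point y.  Only the theta = 1 (Potra-Ptak) and theta = 0
    (Newton-Steffensen) endpoints are confirmed by the text."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                            # Newton predictor
    fy = f(y)
    potra_ptak = (fx + fy) / dfx                # theta = 1 endpoint
    steffensen = fx * fx / (dfx * (fx - fy))    # theta = 0 endpoint
    return x - theta * potra_ptak - (1.0 - theta) * steffensen

# usage: x^3 + 4x^2 - 10 = 0 has a root near 1.36523; start from 1.5
f = lambda x: x**3 + 4.0 * x**2 - 10.0
df = lambda x: 3.0 * x**2 + 8.0 * x
x = 1.5
for _ in range(10):
    if abs(f(x)) < 1e-12:       # stop before fx - fy underflows to zero
        break
    x = kou_step(f, df, x)
```

The early-exit guard matters: once the iterate is accurate to machine precision, the Newton point coincides with x in floating point and the Steffensen denominator vanishes.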
Now, by adding one extra substep, we have the following higher-order scheme: In Theorem 1, we demonstrate that the minimum convergence orders of methods (5) and (6) are three and five, respectively. In the proof of this result, we use the tools and procedure introduced in [10], which we recall briefly. Let From the above properties, we can use the following notation: On the other hand, for ξ + h ∈ Rⁿ lying in a neighborhood of a solution ξ of F(x) = 0, we can apply Taylor's expansion and, assuming that the Jacobian matrix F′(ξ) is nonsingular, we have where We observe that C_q h^q ∈ Rⁿ since F^(q)(ξ) ∈ L(Rⁿ × ··· × Rⁿ, Rⁿ), and In addition, we can express F′ as where I is the identity matrix. Therefore, qC_q h^(q−1) ∈ L(Rⁿ). From (8), we obtain where X₂ = −2C₂. We denote e^(j) = x^(j) − ξ as the error in the jth iteration in the multidimensional case. The equation e^(j+1) = M e^(j)p, where M is a p-linear function, M ∈ L(Rⁿ × ··· × Rⁿ, Rⁿ), is called the error equation, and p is the order of convergence. Let us observe that e^(j)p stands for (e^(j), e^(j), ..., e^(j)).
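The first-order divided difference operator [y, x; F] that replaces the scalar divided difference in scheme (5) has a standard componentwise realization; a minimal numpy sketch (the helper name is ours):

```python
import numpy as np

def divided_difference(F, y, x):
    """First-order divided difference [y, x; F] for a system, via the
    standard componentwise formula
      [y, x; F]_{ij} = ( F_i(y_1,..,y_j, x_{j+1},..,x_n)
                       - F_i(y_1,..,y_{j-1}, x_j,..,x_n) ) / (y_j - x_j),
    which satisfies the secant identity [y, x; F](y - x) = F(y) - F(x)."""
    n = x.size
    M = np.zeros((n, n))
    for j in range(n):
        z_hi = np.concatenate((y[:j + 1], x[j + 1:]))  # y up to index j
        z_lo = np.concatenate((y[:j], x[j:]))          # y up to index j-1
        M[:, j] = (F(z_hi) - F(z_lo)) / (y[j] - x[j])
    return M

# usage on a small 2x2 system
F = lambda z: np.array([z[0]**2 + z[1], z[0] * z[1] - 3.0])
x = np.array([3.0, 5.0])
y = np.array([1.0, 2.0])
M = divided_difference(F, y, x)
```

The columns telescope, so the secant identity holds exactly; the construction requires y_j ≠ x_j in every component.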

Theorem 1.
Let F : D ⊆ Rⁿ → Rⁿ be a sufficiently differentiable function in an open neighborhood D of its zero ξ. Assume that F′(x) is continuous and nonsingular in a neighborhood of ξ, and that the initial guess x^(0) is close enough to ξ to guarantee convergence. Then, the iterative schemes (5) and (6) have at least third and fifth order of convergence, respectively.
Proof. Let e^(j) = x^(j) − ξ be the error of the jth iteration. Developing F(x^(j)) and F′(x^(j)) in a neighborhood of ξ, we write and where I is the identity matrix of size n × n and With the help of expression (12), we have where Ω_i = Ω_i(C₂, C₃, ..., C₆), given by From expressions (11) and (2), we obtain where Λ_j = Λ_j(C₂, C₃, ..., C₆), i.e., By inserting expression (15) in the first substep of (5), we obtain which further produces and By using expressions (11)–(19) in scheme (5), we have where By using the values Λ₀ = −C₂ and Λ₁ = 2C₂² − 2C₃ from (15) in Γ₀ and Γ₁, we have Γ₀ = 0, and adopting (22) in (21), we obtain From expression (23), it is clear that scheme (5) has at least third-order convergence for every θ. Now, expanding F(z^(j)) in a neighborhood of ξ, we have which further produces, with the help of expression (18), By using equations (24) and (25) in the final substep of (6), we obtain It is clear from expression (26) that we have at least fifth-order convergence for every θ ∈ R \ {−1}, and maximum sixth-order convergence for θ = −1. Hence, schemes (5) and (6) have at least third- and fifth-order convergence, respectively, for every θ ∈ R. □
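Orders such as those established in Theorem 1 are commonly checked numerically with the approximated computational order of convergence (ACOC) of Cordero and Torregrosa, built from distances between consecutive iterates. A small sketch, illustrated on a quadratically convergent Newton sequence:

```python
import math
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence (ACOC):
        rho_k = ln(d_{k+1} / d_k) / ln(d_k / d_{k-1}),
    where d_k = ||x^(k) - x^(k-1)||.  Needs at least four iterates."""
    d = [np.linalg.norm(np.atleast_1d(np.asarray(b) - np.asarray(a)))
         for a, b in zip(iterates, iterates[1:])]
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# usage: Newton on x^2 - 2 = 0 is quadratic, so ACOC should approach 2
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
rhos = acoc(xs)
```

The same estimator applied to schemes (5) and (6) should approach 3 (or 4 for θ = −1) and 5 (or 6 for θ = −1), respectively.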

Numerical Experiments
Here, we check the efficiency and effectiveness of our methods on four real-life problems, some of which have been taken from the paper of Behl et al. (see [22]). Additionally, we consider a large 150 × 150 system of nonlinear equations to verify the theoretical results. The details of all numerical problems can be seen in Examples 1–5, together with the starting points and solutions of the considered problems. We employ our fourth- and sixth-order schemes (5) and (6) with θ = −1, called OS1₄ and OS2₆, respectively, in order to compare their computational performance with existing methods. We compare them with the sixth-order methods suggested by Mona et al. [23] and Lotfi et al. [24]; we consider their methods (27) (for λ = 1, β = 2, p = 1, and q = 3/2) and (5), called MS and LS, respectively. In addition, we also compare our scheme with the sixth-order family of iterative methods proposed by Sharma and Arora [25], from which we choose method (13), called SS. Finally, we compare with the sixth-order families of iterative methods recently designed by Abbasbandy et al. [26] and Hueso et al. [27].
In Tables 1–5, we report the iteration index (k), the residual error of the corresponding function (‖F(x^(j))‖), the error between two consecutive iterations, and the asymptotic error constant (where p is either 4 or 6); η is its last calculated value.
During the numerical experiments, performed in the programming language Mathematica (Version 9), all computations were carried out in multiple-precision arithmetic with 1000 digits of mantissa, which minimizes round-off errors. The notation a(±b) stands for a × 10^(±b) in all tables. We adopted the command "AbsoluteTiming" to measure the CPU time; we ran our programs three times and report the average CPU time in Table 6. The time used by each iterative method on each problem can also be observed in Figure 1; we point out that, for large problems, the method OS2₆ uses the minimum time, so it is very competitive. The configuration of the computer used is as follows:

Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz
Make: HP
RAM: 8.00 GB
System type: 64-bit operating system, x64-based processor

Example 1. Let us consider a boundary value problem described in the book of Ortega and Rheinboldt [16], which is defined as follows: In addition, we consider the following partition of the interval [0, 1]: Moreover, we use the notation y_i = y(x_i), for i = 0, 1, ..., n. We discretize problem (27) with the help of the following numerical approximations for the first and second derivatives: Hence, we obtain a nonlinear system of size (n − 1) × (n − 1), given as follows: We consider the initial approximation y_i^(0) = (1.8, ..., 1.8, 1.8)^T. In particular, we solve this problem for n = 7, so that we obtain a 6 × 6 system of nonlinear equations. The solution of this problem is

Table 1: Convergence behavior of different methods on differential equation (27). The lowest residual errors, lowest differences between two consecutive iterations, and lowest asymptotic error constants belong to our scheme OS2₆, so we can say that OS2₆ has better convergence behavior than the other mentioned methods.
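The central-difference discretization described in Example 1 can be sketched generically. Since the right-hand side of problem (27) is not reproduced in this excerpt, the function g below is supplied by the caller; the g ≡ 0 check in the usage is a hypothetical stand-in, not the problem from the paper.

```python
import numpy as np

def bvp_residual(y, g, a=0.0, b=1.0, ya=0.0, yb=0.0):
    """Residual of the (n-1)-dimensional system obtained from the
    central-difference discretization used in Example 1:
        y''(t_i) ~ (y_{i-1} - 2 y_i + y_{i+1}) / h^2,
        y'(t_i)  ~ (y_{i+1} - y_{i-1}) / (2 h),
    for a BVP y'' = g(t, y, y') on [a, b] with y(a) = ya, y(b) = yb.
    The concrete right-hand side of (27) is not reproduced in the
    excerpt, so g is caller-supplied."""
    n = y.size + 1                           # number of subintervals
    h = (b - a) / n
    yy = np.concatenate(([ya], y, [yb]))     # attach boundary values
    t = a + h * np.arange(1, n)              # interior nodes
    ypp = (yy[:-2] - 2.0 * yy[1:-1] + yy[2:]) / h**2
    yp = (yy[2:] - yy[:-2]) / (2.0 * h)
    return ypp - g(t, yy[1:-1], yp)

# sanity check with a hypothetical g == 0: y'' = 0, y(0) = 0, y(1) = 1
# is solved exactly by the linear grid function y_i = t_i
n = 7
y_lin = np.arange(1, n) / n
res = bvp_residual(y_lin, lambda t, y, yp: 0.0 * t, ya=0.0, yb=1.0)
```

With n = 7 this yields the 6 × 6 system size used in the paper; the residual function is the map F whose zero the iterative schemes approximate.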
The numerical results appear in Table 1. Moreover, in Figure 2 we represent the difference between two consecutive iterations in logarithmic scale and absolute value, |log(‖x^(j+1) − x^(j)‖)|, so that, by the properties of the logarithmic function, the biggest value corresponds to the smallest error; that is, the OS2₆ iterative method presents the best results. The scheme OS2₆ has the lowest residual error and asymptotic error constant compared with the other mentioned methods. In addition, the lowest difference between two consecutive iterations also belongs to OS2₆. From this table, it is clear that there is a big difference in residual errors between OS2₆ and the other existing methods. In addition, OS2₆ has the lowest difference between two consecutive iterations and the lowest asymptotic error constants.
Table 5: Convergence behavior of different methods on Example 5. Our scheme OS2₆ has the minimum residual error among the mentioned methods. It also has the lowest ‖x^(4) − x^(3)‖ and the lowest asymptotic error constants compared with the other existing methods.

Our scheme OS1₄ has the lowest average CPU time. Since OS1₄ has fourth-order convergence and the others have sixth, it is natural that OS1₄ shows weaker convergence behavior in terms of residual error, asymptotic error constant, etc.; however, it is better than the others in terms of CPU time. The second-lowest average CPU time belongs to our sixth-order convergent scheme OS2₆, which also shows better convergence performance than the other mentioned methods.

Example 2. Consider another typical nonlinear problem, Fisher's equation [28], with homogeneous Neumann boundary conditions and diffusion coefficient D: Again, using a finite-difference discretization, equation (32) reduces to a system of nonlinear equations. Let u_{i,j} = u(x_i, t_j) be its approximate solution at the grid points of the mesh, let M and N be the number of steps in the x and t directions, and let h and k be the respective step sizes. We apply the following discretization: For the solution of the system, we consider M = 5 and N = 5, which yields a nonlinear system of size 25, with the initial vector u_0(x_i, t_j) = (1 + i + j/M²)^T, i, j = 1, 2, ..., M, converging towards the following solution: The numerical results comparing the behavior of the different methods are shown in Table 2.
Example 3. Consider the 2D Bratu problem [29, 30]: The approximate solution of this nonlinear partial differential equation can be found using a finite-difference discretization, which reduces it to a system of nonlinear equations. Let u_{i,j} = u(x_i, t_j) be its approximate solution at the grid points of the mesh, let M and N be the number of steps in the x and t directions, and let h and k be the respective step sizes. To solve the given PDE, apply central differences to u_xx and u_tt, i.e., u_xx(x_i, t_j) = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h², with C = 0.1 and t ∈ [0, 1]. We look for the solution of the system of size 100 obtained for M = 11 and N = 11, with the initial sinusoidal approximation u_0(x_i, t_j) = 0.1 sin(πih)sin(πjk); the results obtained with the different methods are reported in Table 3. The solution of this problem is shown in Figure 3.
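The residual assembly for Example 3 can be sketched as follows, assuming the Bratu equation takes the form u_xx + u_tt + C·exp(u) = 0 with homogeneous Dirichlet boundary values (the PDE display and boundary conditions are not reproduced in this excerpt, so both are assumptions):

```python
import numpy as np

def bratu_residual(u, C=0.1, M=11, N=11):
    """Residual of the 100-dimensional system of Example 3 for the 2D
    Bratu problem, assumed here to be u_xx + u_tt + C*exp(u) = 0 on the
    unit square with zero boundary values.  u holds the (M-1)*(N-1)
    interior unknowns; central differences, steps h = 1/M, k = 1/N."""
    h, k = 1.0 / M, 1.0 / N
    U = np.zeros((M + 1, N + 1))                 # assumed zero boundary
    U[1:M, 1:N] = u.reshape(M - 1, N - 1)
    uxx = (U[2:, 1:N] - 2.0 * U[1:M, 1:N] + U[:-2, 1:N]) / h**2
    utt = (U[1:M, 2:] - 2.0 * U[1:M, 1:N] + U[1:M, :-2]) / k**2
    return (uxx + utt + C * np.exp(U[1:M, 1:N])).ravel()

# the paper's initial sinusoidal guess u_0 = 0.1 sin(pi i h) sin(pi j k)
M = N = 11
h, k = 1.0 / M, 1.0 / N
i, j = np.arange(1, M), np.arange(1, N)
u0 = 0.1 * np.outer(np.sin(np.pi * h * i), np.sin(np.pi * k * j)).ravel()
r0 = bratu_residual(u0)
```

With M = N = 11 the interior grid is 10 × 10, reproducing the system size of 100 stated in the example.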
Example 4. In this example, we consider the following Hammerstein integral equation (see Ortega and Rheinboldt [16], pp. 19–20) to check the effectiveness and applicability of our proposed methods and to compare them with other existing methods. The nonlinear integral equation is given by where x(s) ∈ C[0, 1], s, t ∈ [0, 1], and the kernel F can be written as We transform the above equation into a finite-dimensional problem by using the Gauss–Legendre quadrature formula ∫₀¹ f(t)dt ≈ Σ_{j=1}^{n} w_j f(t_j), where the abscissas t_j and the weights w_j are determined for n = 8. Denoting the approximation of x(t_i) by x_i, one gets the system of nonlinear equations: where the abscissas t_j and the weights w_j are known and are depicted in Table 7.

Example 5. To check the theoretical results on a large system, we choose n = 150, which yields a 150 × 150 system of nonlinear equations. We take the initial guess x^(0) = (1/2, ..., 1/2)^T for this problem. The obtained solution of this problem is given in (39), and the obtained results can be observed in Table 5.
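The quadrature discretization of Example 4 can be sketched in a few lines. Since the kernel and nonlinearity are not reproduced in this excerpt, the sketch uses the Green's-function kernel and cubic nonlinearity that are standard in this literature; both are assumptions, not the paper's confirmed problem data.

```python
import numpy as np

def hammerstein_system(n=8):
    """Discretized Hammerstein equation of Example 4 via n-point
    Gauss-Legendre quadrature, int_0^1 f(t) dt ~ sum_j w_j f(t_j).
    Assumed stand-in problem (common in this literature):
        x(s) = 1 + (1/5) int_0^1 G(s, t) x(t)^3 dt,
        G(s, t) = s(1 - t) if s <= t, else t(1 - s)."""
    z, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (z + 1.0)                         # map nodes to [0, 1]
    w = 0.5 * w                                 # rescale weights
    s, tt = t[:, None], t[None, :]
    G = np.where(s <= tt, s * (1.0 - tt), tt * (1.0 - s))
    def F(x):
        return x - 1.0 - 0.2 * (G * w) @ x**3   # F(x) = 0 is the system
    return F, t, w

# the fixed-point map x -> 1 + (1/5) * quadrature term is a contraction
# here, so plain Picard iteration already locates the discrete solution
F, t, w = hammerstein_system(8)
x = np.full(8, 0.5)                             # initial guess (1/2,...,1/2)
for _ in range(60):
    x = x - F(x)                                # i.e. x <- 1 + 0.2*(G*w)@x^3
```

Replacing `n=8` by `n=150` reproduces the large-system setting of Example 5 with the same code path.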

Concluding Remarks
In this paper, we proposed a new high-order and efficient family of iterative techniques for systems of nonlinear equations. The suggested methods extend the scheme of Kou et al. [19] to the multidimensional case. In addition, our scheme attains at least fifth-order convergence after adding one additional substep and, for the particular value θ = −1, it reaches the maximum sixth order. The numerical results in Tables 1–5 confirm that the lowest residual errors, differences between two consecutive iterations, and asymptotic error constants belong to our method OS2₆. Finally, our schemes OS1₄ and OS2₆ also consume, respectively, the first- and second-lowest average CPU times compared with the other existing methods in Examples 1–5. In future work, we will try to propose higher-order versions of these or new methods with lower computational cost that yield more accurate solutions.

Data Availability

Complexity

Conflicts of Interest
The authors declare that they have no conflicts of interest.