A New Class of Halley’s Method with Third-Order Convergence for Solving Nonlinear Equations

In this paper, we present a new family of methods for finding simple roots of nonlinear equations. The convergence analysis shows that the order of convergence of all these methods is three. The originality of this family lies in the fact that the sequences are defined by an explicit expression depending on a parameter p, where p is a nonnegative integer. A first study of the global convergence of these methods is performed. The power of this family is illustrated analytically by justifying that, under certain conditions, the speed of convergence of the methods increases with the parameter p. The efficiency of the family is tested on a number of numerical examples. It is observed that our new methods require fewer iterations than many other third-order methods. In comparison with methods of the sixth and eighth order, the new ones behave similarly in the examples considered.


Introduction
Many problems in science and engineering [1][2][3] can be expressed in the form of the following nonlinear scalar equation:

f(x) = 0,   (1)

where f(x) is a real analytic function. To approximate the solution α of Equation (1), assumed to be simple, we can use a fixed-point iteration method in which we find a function F, called an iteration function (I.F.) for f, and, from a starting value x_0 [4][5][6], define a sequence x_{n+1} = F(x_n) for n = 0, 1, 2, ⋯. A point α is called a fixed point of F if F(α) = α. Under suitable conditions, the convergence of the sequence (x_n) towards α can be guaranteed.
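As a minimal illustration (our own sketch, not code from the paper), the fixed-point scheme can be written in a few lines of Python. Here the iteration function F is Newton's, F(x) = x − f(x)/f'(x), applied to f(x) = x² − 2, whose fixed point is the simple root α = √2; the tolerance and iteration cap are illustrative choices.

```python
import math

# Minimal fixed-point iteration x_{n+1} = F(x_n); stops when two consecutive
# iterates agree to within tol (an illustrative stopping rule).
def fixed_point(F, x0, tol=1e-15, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Newton's iteration function for f(x) = x^2 - 2 (fixed point alpha = sqrt(2))
F = lambda x: x - (x * x - 2.0) / (2.0 * x)
alpha = fixed_point(F, 1.5)
```

Any convergent iteration function for f can be substituted for F; the methods constructed in this paper are precisely such iteration functions.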
The purpose of this paper is to construct, from Halley's method and Taylor's polynomial, a new family of methods with cubic convergence for finding simple roots of nonlinear equations. We will show that the weight functions of these methods have particular expressions depending on a parameter p, where p is a nonnegative integer, and that, if certain conditions are satisfied, the speed of convergence of these sequences improves as p increases. Moreover, from this study, we will examine the global convergence of these methods. Finally, the efficiency of some methods of the proposed family will be tested on a number of numerical examples, and a comparison with many third-order methods will be carried out.

Development of the New Family of Halley's Method
One of the best-known third-order methods is Halley's method, given by

x_{n+1} = x_n − W_0(L_n) f(x_n)/f'(x_n),  where W_0(L_n) = 2/(2 − L_n) and L_n = f(x_n) f''(x_n)/f'(x_n)².   (4)

Using the second-order Taylor polynomial of f at x_n, we obtain

y(x) = f(x_n) + f'(x_n)(x − x_n) + (1/2) f''(x_n)(x − x_n)²,

where x_n is an approximate value of α. The graph of y intersects the x-axis at some point (x_{n+1}, 0), which is the solution of the equation

f(x_n) + f'(x_n)(x_{n+1} − x_n) + (1/2) f''(x_n)(x_{n+1} − x_n)² = 0.

Factoring (x_{n+1} − x_n) out of the last two terms, we obtain

f(x_n) + (x_{n+1} − x_n)[f'(x_n) + (1/2) f''(x_n)(x_{n+1} − x_n)] = 0,

so

x_{n+1} = x_n − f(x_n)/[f'(x_n) + (1/2) f''(x_n)(x_{n+1} − x_n)].   (9)

This scheme is implicit because it does not directly give x_{n+1} as a function of x_n. It can be made explicit by replacing (x_{n+1} − x_n), remaining in the right-hand side of Equation (9), by Halley's correction given in Equation (4); we get

x_{n+1} = x_n − W_1(L_n) f(x_n)/f'(x_n),  where W_1(L_n) = (2 − L_n)/(2(1 − L_n)).   (10)

This is known as the super-Halley method. By repeating the same scenario many times and replacing (x_{n+1} − x_n), each time, with the last correction found, we derive the following iterative process, which represents a general family of Halley's method (Hp) for finding simple roots:

x_{n+1} = x_n − W_p(L_n) f(x_n)/f'(x_n),  W_p(L_n) = S_p(L_n)/S_{p+1}(L_n),   (11)

where S_0(L_n) = 1, S_1(L_n) = 1 − L_n/2, and S_{p+1}(L_n) = S_p(L_n) − (L_n/2) S_{p−1}(L_n) for p ≥ 1, p being a parameter which is a nonnegative integer. We can show that S_p can be explicitly expressed as a function of L_n on the interval I = ]−∞, 1/2[ as follows:

S_p(L_n) = (t_+^{p+2} − t_−^{p+2})/(t_+ − t_−),

where

t_± = (1 ± √(1 − 2L_n))/2.

Finally, the general family of Halley's method (Hp) is generated by

x_{n+1} = x_n − (S_p(L_n)/S_{p+1}(L_n)) f(x_n)/f'(x_n),  n = 0, 1, 2, ⋯.   (16)

This scheme is simple and interesting because it regenerates both well-known and new methods. For example:

For p = 0, formula (16) corresponds to the classical Halley method (H0).
For p = 1, formula (16) corresponds to the famous super-Halley method (H1).
For p = 2, ⋯, 5, we obtain the methods (H2), (H3), (H4), and (H5) given, respectively, by the following sequences:

x_{n+1} = x_n − [(1 − L_n)/(1 − (3/2)L_n + (1/4)L_n²)] f(x_n)/f'(x_n),   (17)
x_{n+1} = x_n − [(1 − (3/2)L_n + (1/4)L_n²)/(1 − 2L_n + (3/4)L_n²)] f(x_n)/f'(x_n),   (18)
x_{n+1} = x_n − [(1 − 2L_n + (3/4)L_n²)/(1 − (5/2)L_n + (3/2)L_n² − (1/8)L_n³)] f(x_n)/f'(x_n),   (19)
x_{n+1} = x_n − [(1 − (5/2)L_n + (3/2)L_n² − (1/8)L_n³)/(1 − 3L_n + (5/2)L_n² − (1/2)L_n³)] f(x_n)/f'(x_n).   (20)

Analysis of Convergence
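To make the construction concrete, here is a small Python sketch of the family (Hp). It is our own illustration, not code from the paper: the weight W_p(L) = S_p(L)/S_{p+1}(L) is evaluated with the recurrence S_{k+1} = S_k − (L/2)S_{k−1}, S_0 = 1, S_1 = 1 − L/2, which reproduces W_0 = 2/(2 − L) (Halley) and W_1 = (2 − L)/(2(1 − L)) (super-Halley); the test equation f(x) = x³ − 2x − 5 and the tolerances are illustrative choices.

```python
def weight(p, L):
    """W_p(L) = S_p(L)/S_{p+1}(L) via S_{k+1} = S_k - (L/2) S_{k-1}, for L < 1/2."""
    s_prev, s = 1.0, 1.0 - L / 2.0          # S_0, S_1
    for _ in range(p):
        s_prev, s = s, s - (L / 2.0) * s_prev
    return s_prev / s                        # S_p / S_{p+1}

def halley_family(f, df, d2f, x0, p, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - W_p(L_n) f(x_n)/f'(x_n), L_n = f f''/f'^2."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x, n
        dfx = df(x)
        Ln = fx * d2f(x) / dfx ** 2
        x = x - weight(p, Ln) * fx / dfx
    return x, max_iter

# Example: f(x) = x^3 - 2x - 5, with a simple root near 2.0945514815423265
f, df, d2f = (lambda x: x**3 - 2*x - 5), (lambda x: 3*x**2 - 2), (lambda x: 6*x)
root0, n0 = halley_family(f, df, d2f, 3.0, p=0)   # classical Halley (H0)
root5, n5 = halley_family(f, df, d2f, 3.0, p=5)   # (H5)
```

For p = 0 the loop inside weight never runs and the scheme reduces to Halley's method; larger p only adds a few arithmetic operations per step, so every member of the family keeps d = 3 function evaluations per iteration.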

Order of Convergence
Theorem 1. Let p be a nonnegative integer. Suppose that the function f(x) has at least three continuous derivatives in a neighborhood of a zero α of f. Further, assuming that f'(α) ≠ 0 and that x_0 is sufficiently close to α, the methods defined by Equation (16) converge cubically to α.

Proof. Let α be a simple root of f(x) and e_n = x_n − α be the error in approximating α by x_n. We use the Taylor expansions [22] about α:

f(x_n) = f'(α)[e_n + c_2 e_n² + c_3 e_n³ + c_4 e_n⁴ + O(e_n⁵)],
f'(x_n) = f'(α)[1 + 2c_2 e_n + 3c_3 e_n² + 4c_4 e_n³ + O(e_n⁴)],
f''(x_n) = f'(α)[2c_2 + 6c_3 e_n + 12c_4 e_n² + O(e_n³)],   (23)

where c_k = f^(k)(α)/(k! f'(α)), k = 2, 3, ⋯. Using (23), we obtain

L_n = 2c_2 e_n + (6c_3 − 6c_2²) e_n² + O(e_n³),
f(x_n)/f'(x_n) = e_n − c_2 e_n² + (2c_2² − 2c_3) e_n³ + O(e_n⁴).

Using Taylor's series expansion [22] of W_p(L_n) about L(α), and knowing that L(α) = 0, W_p(0) = 1, and W_p'(0) = 1/2 for every p, we have

W_p(L_n) = 1 + (1/2)L_n + a_p L_n² + O(L_n³),  with a_0 = 1/4 and a_p = 1/2 for p ≥ 1.

Thus, this expansion becomes

W_p(L_n) = 1 + c_2 e_n + (3c_3 − 3c_2² + 4a_p c_2²) e_n² + O(e_n³).

Substituting the last two expansions in formula (16), we obtain the error equation

e_{n+1} = e_n − W_p(L_n) f(x_n)/f'(x_n) = ((2 − 4a_p) c_2² − c_3) e_n³ + O(e_n⁴),   (21)

that is, e_{n+1} = (c_2² − c_3) e_n³ + O(e_n⁴) for p = 0 and e_{n+1} = −c_3 e_n³ + O(e_n⁴) for p ≥ 1, which shows cubic convergence and completes the proof of the theorem.
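The cubic error behaviour can be checked numerically. The sketch below is our own illustration, not one of the paper's examples: it runs the classical Halley method (H0) on f(x) = x³ − x, whose simple root α = 1 has c_2 = 3/2 and c_3 = 1/2, so the standard Halley error equation e_{n+1} ≈ (c_2² − c_3) e_n³ predicts an error constant of 7/4. Exact rational arithmetic keeps the tiny errors free of rounding noise.

```python
from fractions import Fraction
import math

# Halley (H0) on f(x) = x^3 - x, simple root alpha = 1, in exact arithmetic.
f   = lambda x: x**3 - x
df  = lambda x: 3 * x**2 - 1
d2f = lambda x: 6 * x

x = Fraction(6, 5)                 # x0 = 1.2
errors = []
for _ in range(3):
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    x = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)   # Halley step
    errors.append(abs(x - 1))      # e_n = |x_n - alpha|, exact rational

e1, e2, e3 = errors
ratio = float(e3 / e2**3)                                   # ~ c2^2 - c3 = 1.75
rho = math.log(float(e3 / e2)) / math.log(float(e2 / e1))   # ~ 3
```

The measured ratio e_{n+1}/e_n³ settles near 7/4 and the observed order near 3, in agreement with the theorem.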

Global Convergence of the Methods of the Family (Hp).
We will carry out a first study of the global convergence of six selected methods from the proposed family (Hp), in the case where they converge towards the root in a monotone fashion [6, 11, 13-15, 17, 19, 20].

Lemma 2.
Let us write the iteration functions F_p of f for the sequences (H0) to (H5):

F_p(x) = x − W_p(L_f(x)) f(x)/f'(x),  where L_f(x) = f(x) f''(x)/f'(x)²,  p ∈ {0, 1, ⋯, 5}.

Then the derivatives F_p' of these iteration functions are given by formula (39).

Monotonic Convergence of the Sequences (Hp).
Let p be an integer between 0 and 5. We consider the functions g_p of a real variable defined on the interval I = ]−∞, 1/2[, where β ∈ [α, x_0] and p ∈ {0, 1, ⋯, 5}. Knowing that L_{f'}(x) ≤ g_p(L_f(x)) and that the derivatives F_p' of the iteration functions are given by (39), we deduce that F_p'(x) ≥ 0 on [α, b]. So x_1 ≥ α and, by induction, x_n ≥ α for all n ∈ ℕ.

Theorem 3. Let [a, b] be an interval containing the root α of f. Then, under the above conditions, the sequences (Hp) given by Equation (16) are decreasing (resp., increasing) and converge to α from any starting point x_0 ∈ [a, b].

Proof. Let us consider the case where f(x_0) f'(x_0) > 0; then x_0 > α. The application of the mean value theorem gives f(x_0) = f'(β)(x_0 − α) for some β ∈ ]α, x_0[. Furthermore, from Equation (11) we have

x_1 = x_0 − W_p(L_0) f(x_0)/f'(x_0),  where L_0 = L_f(x_0).

As L_0 ∈ ]−∞, 1/2[, then S_p(L_0) > 0, so W_p(L_0) > 0 for all p ∈ {0, 1, ⋯, 5}. In addition, we have f(x_0)/f'(x_0) > 0, so we deduce that x_1 < x_0. Now, it is easy to prove by induction that x_{n+1} ≤ x_n for all n ∈ ℕ. Thereby, the sequences (16) are decreasing and converge towards a limit r ∈ [a, b], where r ≥ α. So, by taking the limit in Equation (16), we obtain

r = r − W_p(L_f(r)) f(r)/f'(r).

We have L_f(r) < 1/2 and S_p(L_f(r)) > 0 for every real L_f(r) ∈ ]−∞, 1/2[. So W_p(L_f(r)) ≠ 0 and consequently f(r) = 0. As α is the unique root of f in [a, b], therefore r = α. This completes the proof of Theorem 3.
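As an illustration of this monotone behaviour (our own toy example, not one of the paper's test problems), consider the super-Halley iterates (H1) for f(x) = x² − 2 on [1, 2], where f' > 0, f'' > 0, and α = √2: starting from x_0 > α, the iterates decrease monotonically towards the root.

```python
import math

f, df, d2f = (lambda x: x * x - 2), (lambda x: 2 * x), (lambda x: 2.0)

x = 1.9                                   # x0 > alpha, with f(x0) f'(x0) > 0
iterates = [x]
for _ in range(6):
    Ln = f(x) * d2f(x) / df(x) ** 2       # L_f(x) < 1/2 everywhere for this f
    x = x - (2 - Ln) / (2 * (1 - Ln)) * f(x) / df(x)   # step with W_1(L_n)
    iterates.append(x)

alpha = math.sqrt(2)
decreasing = all(a + 1e-12 >= b for a, b in zip(iterates, iterates[1:]))
above_root = all(v >= alpha - 1e-12 for v in iterates)
```

The two checks mirror the induction in the proof: x_{n+1} ≤ x_n and x_n ≥ α at every step (up to a small floating-point tolerance).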

Principal Advantage of the New Family
As the family (Hp) is governed by formula (16) and depends on the parameter p, where p is a nonnegative integer, it is natural to ask for which values of p, and under which conditions, the convergence is fastest.
On the other hand, comparing the sequence (u_n) generated by (H_{p+1}) with the sequence (v_n) generated by (Hp) from the same starting point, we deduce that F_{p+1}(u_n) ≤ F_p(v_n). So u_{n+1} ≤ v_{n+1}, and the induction is complete. The case f(x_0) f'(x_0) < 0 is similar to the previous one.
Consequently, the power of the proposed family has been shown analytically by justifying that, under certain conditions, the speed of convergence of its methods increases with the parameter p, where p is a nonnegative integer. As the methods of Halley and super-Halley are particular cases of this family in which the parameter p is smallest, their convergence speeds are lower than those of the new, higher-parameter methods.

Numerical Results
For the numerical results, we use a fixed tolerance ε = 10^{−15} and the stopping criterion |f(x_n)| ≤ 10^{−15}. The computations were performed using MATLAB R2015b.
In order to compare two methods, we give the number of iterations (N) required to satisfy the stopping criterion, the number (d) of function (and derivative) evaluations per step, and the order of convergence (q) of the method. Based on (q) and (d), there is an efficiency measure defined by E = q^{1/d} (efficiency index).
Unfortunately, for methods of the same order (q) demanding the same number of function evaluations (d), the efficiency index (E) is the same. In this case, the comparison is made on the basis of the number of iterations (N). This number depends on how far the starting point x_0 is from α and on the value of the asymptotic constant. For two methods of the same order (q), the one having the smallest asymptotic constant will converge faster than the other, for a starting point x_0 sufficiently close to α. But if x_0 is too far from α (with x_0 in the basin of attraction of α), it is possible that a method with a higher asymptotic constant converges faster [18]. Thus, in order to make the comparison more realistic and fairer, it is preferable to use an approximate asymptotic constant at step n, defined by

C_n = |x_{n+1} − α| / |x_n − α|^q,   (53)

where x_n and x_{n+1} are two consecutive iterates. In general, choosing x_0 close enough to α and a very high precision (300 significant digits or more), and taking x_n and x_{n+1} closer to the root, (C) will tend towards the theoretical asymptotic constant.

Furthermore, we cannot compare two methods of different q-order demanding the same number of function evaluations (d) on the basis of the asymptotic constant. It is quite obvious that the method with the highest q is the fastest, for a starting point x_0 sufficiently close to the solution. But, if x_0 is too far from α (with x_0 in the basin of attraction of α), the "order" of convergence is not necessarily q, especially for the first iterations of the method [10]. Thus, it is more correct and judicious to use the computational order of convergence (ρ) at step n, which can be approximated using the formula [37]:

ρ_n = ln(|x_{n+1} − α| / |x_n − α|) / ln(|x_n − α| / |x_{n−1} − α|),   (54)

where x_{n−1}, x_n, and x_{n+1} are three consecutive iterates.
In general, choosing x 0 close enough to α and a very high precision (300 significant digits or more), then, taking x n−1 , x n , and x n+1 closer to the root, ðρÞ will tend towards the theoretical order of convergence ðqÞ.
Here, the values of ρ and C will be calculated using the same total number of function evaluations (or, if that is not possible, the same total number of iterations) for all methods.
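To make these measures concrete, the following sketch (our own illustration; formulas (53) and (54) are taken here in the standard forms C_n = |x_{n+1} − α|/|x_n − α|^q and ρ_n = ln(e_{n+1}/e_n)/ln(e_n/e_{n−1}) with e_n = |x_n − α|) runs Halley's method on the illustrative equation f(x) = x² − 2 in high-precision decimal arithmetic, standing in for the 300-digit computations mentioned above:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100                      # high working precision
alpha = Decimal(2).sqrt()

x = Decimal("1.7")                           # Halley (H0) on f(x) = x^2 - 2
errors = []
for _ in range(4):
    fx, dfx, d2fx = x * x - 2, 2 * x, Decimal(2)
    Ln = fx * d2fx / dfx ** 2
    x = x - (2 / (2 - Ln)) * fx / dfx        # W_0(L_n) = 2/(2 - L_n)
    errors.append(abs(x - alpha))            # e_n = |x_n - alpha|

E = float(Decimal(3) ** (Decimal(1) / 3))    # efficiency index q^(1/d), q = d = 3
e2, e3, e4 = errors[1:]
C = float(e4 / e3 ** 3)                      # approximate asymptotic constant
rho = float((e4 / e3).ln() / (e3 / e2).ln()) # computational order of convergence
```

For this particular f, the measured C approaches the theoretical constant 1/8 and ρ approaches 3, while E = 3^{1/3} ≈ 1.442 exceeds Newton's efficiency index 2^{1/2} ≈ 1.414.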
The test functions used in Tables 1-7 and their roots α are displayed in Table 2.
In this case, it is not easy to compare the chosen methods because they have the same order of convergence (q = 3), the same number of function evaluations per step (d = 3), the same efficiency index E = 3^{1/3}, and the same theoretical asymptotic constant (see formula (21)). In Table 1, the comparison will thus be made on the basis of the number of iterations (N), the approximate asymptotic constants (C) defined by (53), and/or the computational order of convergence (ρ) defined by (54); the method with the smallest values of (N) and (C) and/or the highest values of (ρ) is locally the fastest.
The example reported in Table 1 illustrates the great importance of Theorem 6, which stipulates that, under certain conditions, the higher the parameter p (p a nonnegative integer), the faster the convergence of the methods (Hp) becomes.

Comparison with Other Third-Order Methods
In Table 3, we present the numerical results obtained by employing various cubically convergent iterative methods and Newton's method. We compare Newton's method (NM) defined by formula (3), Chebyshev's method (CB) defined by (16) in [16], Chun's method (CH) defined by Equation (30) with a_n = 1 in [16], Sharma's method (SM) defined by Equation (26) with a_n = 1 in [17], Halley's method (H0) defined by Equation (4), and the super-Halley method (H1) given by Equation (10), with the four new methods designated as H2, H3, H4, and H5, defined, respectively, by Equations (17), (18), (19), and (20).
In Table 3, all the methods have the same order of convergence (q = 3) and require the same number of function evaluations (d = 3). Consequently, they have the same efficiency index E = 3^{1/3}. Thus, the comparison in Table 3 can be made on the basis of the number of iterations (N) and the approximate asymptotic constants (C) defined by (53). We know that the method with the smallest values of (N) and (C) is locally faster.
From the numerical results given in Table 3, we see that the four proposed methods (H2, H3, H4, and H5) of the new family appear more interesting and effective than the other chosen third-order methods because, in the majority of the selected examples, our methods converge with fewer iterations and smaller approximate asymptotic constants.
In Table 5, we exhibit the absolute values of the error e_n = |x_n − α|, the differences between consecutive iterates |x_{n+1} − x_n|, the absolute values of the function |f(x_n)|, the computational order of convergence (ρ), and the efficiency index (E). In Table 4, the comparison with several fifth-, sixth-, and even eighth-order methods confirms the efficiency and power of the new proposed family. In fact, in most of the considered examples, Table 4 shows that our methods behave in a similar way to the higher-order ones, as they require an equal or smaller number of function evaluations. Table 5 confirms the power of higher-order methods, which generally show higher values of the computational order of convergence (ρ) when the starting point x_0 is sufficiently close to the root.

This is no longer the case in Tables 6 and 7, which show that, for our methods (H2, H3, H4, and H5), the choice of an x_0 far from the root α only results in a small variation of (ρ) compared to its theoretical value (q = 3). On the contrary, for the high-order methods, the further x_0 is from the root, the more the value of (ρ) decreases during the first iterations. This leads us to think that these high-order methods start the first iterations with a speed lower than the nominal one and then, over the course of the iterations, progressively regain speed, reaching a maximum at the last iteration. Thus, the delay in the first iterations can lead to a decrease in the average speed of convergence, and consequently to an increase in the number of iterations (N). This would explain why, in some cases (such as the example f_6), our methods, which are of order 3, show values of (N) and (d) similar to or even smaller than those of higher-order methods, contrary to the predictions.

Conclusion
In this paper, we built a new family of Halley's method with third-order convergence for solving nonlinear equations with simple roots. The proposed scheme is interesting because it regenerates Halley's method, the super-Halley method, and an infinity of new methods. The originality of this family lies, on the one hand, in the fact that these sequences are governed by a single formula depending on a parameter p, where p is a nonnegative integer, and, on the other hand, in the fact that, under certain conditions, the convergence speed of its methods improves when the value of p increases. In order to reveal the quality of the new family, we focused on four of its methods. A first study of the global convergence of these selected methods was carried out. To test the new techniques, several numerical examples were considered. The performance of our methods was compared with well-known methods of similar or higher order. The numerical results clearly illustrate the efficiency of the techniques of the new family proposed in this article.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.