Convergence analysis for a fast class of multi-step Chebyshev-Halley-type methods under weak conditions

Abstract: In this study, a convergence analysis for a fast multi-step Chebyshev-Halley-type method for solving nonlinear equations involving Banach space valued operators is presented. We introduce a more precise convergence region containing the iterates, leading to tighter Lipschitz constants and functions. In this way, advantages are obtained in both the local and the semi-local convergence case under the same computational cost, namely: an extended convergence domain, tighter error bounds on the distances involved and more precise information on the location of the solution. The new technique can be used to extend the applicability of other iterative methods. Numerical examples further validate the theoretical results.


Introduction
In this study, we consider the problem of approximating a locally unique solution x* of the equation

F(x) = 0, (1)

where F : Ω ⊆ B_1 −→ B_2 is a Fréchet-differentiable operator, B_1 and B_2 are Banach spaces and Ω is a nonempty convex subset of B_1. Numerous problems in mathematics and the computational sciences can be written in the form (1) using mathematical modeling [1][2][3][4][5][6][7][8][9]. Most solution methods for these equations are iterative, since closed form solutions can rarely be found. We study the local convergence of the multi-step Chebyshev-Halley-type method [9], defined for each n = 0, 1, 2, . . . by (2), where x_0 is an initial point, γ, δ ∈ S, S = R or S = C,

u_n = x_n − (2/3) F'(x_n)^{-1} F(x_n),
K(x_n) = F'(x_n)^{-1} F''(x_n) F'(x_n)^{-1} F(x_n),
v_n = x_n − F'(x_n)^{-1} F(x_n).
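The full display of iteration (2) is given in [9]; only its building blocks u_n, K(x_n) and v_n are listed above. As a minimal sketch (assuming a scalar equation, so that applying F'(x_n)^{-1} is ordinary division), these quantities can be evaluated as follows; the test problem F(x) = e^x − 1 reused later in the remarks is chosen here for concreteness:

```python
import math

# Scalar test problem F(x) = e^x - 1 with solution x* = 0
F = lambda x: math.exp(x) - 1.0
dF = lambda x: math.exp(x)    # F'(x)
d2F = lambda x: math.exp(x)   # F''(x)

def building_blocks(x):
    """Scalar specializations of the quantities used by method (2):
    u_n = x_n - (2/3) F'(x_n)^{-1} F(x_n),
    K(x_n) = F'(x_n)^{-1} F''(x_n) F'(x_n)^{-1} F(x_n),
    v_n = x_n - F'(x_n)^{-1} F(x_n)  (a full Newton step)."""
    c = F(x) / dF(x)              # Newton correction F'(x)^{-1} F(x)
    u = x - (2.0 / 3.0) * c
    K = d2F(x) * c / dF(x)
    v = x - c
    return u, K, v

u, K, v = building_blocks(0.5)
```

For x_0 = 0.5 the Newton substep gives v_0 ≈ 0.1065, already much closer to x* = 0 than x_0; the extra substeps of (2) accelerate this contraction further.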
Similar methods have been considered by other authors using conditions (C_1)-(C_4), but the convergence order is smaller [4][5][6][7][8]. The convergence order of method (2) was shown to be 4 + 3a using recurrence relations [9]. Notice that method (2) uses the second Fréchet-derivative, which is expensive to compute in general. However, there are cases, e.g., when F is a bilinear operator [1][2][3]7,8], when method (2) is very useful, since it is very fast. Condition (C_4) may be hard to verify or may not hold even in simple cases. Let us consider ϕ(t) = bt for some b > 0, together with a motivational example: define a function F on Ω = [−1/2, 5/2] and choose x* = 1. Then, condition (C_4) is not satisfied, since the third Fréchet derivative does not exist at x* = 1.
In this study, we present the local convergence analysis not given in [9] and drop condition (C_4). In this way we expand the applicability of method (2). Moreover, we refine Theorem 1 in [9], leading to a new semi-local convergence analysis for method (2), a wider convergence region, tighter error bounds on the distances ‖x_n − x*‖ and at least as precise information on the location of the solution. These advantages are obtained because we find a more precise region where the iterates lie, resulting in tighter Lipschitz constants. The new constants are special cases of the old ones, so the advantages are obtained under the same computational cost.
The study is structured as follows: Section 2 contains the local convergence analysis, followed by the semi-local convergence analysis in Section 3. The numerical examples in Section 4 conclude this study.

Remark 1. (a)
Let w_0(t) = L_0 t, w(t) = Lt and w*(t) = L* t (w* replacing w in (a_4)). In [9], the authors used, instead of (a_4), condition (33). But using (a_4), we see that (33) holds, since Ω_0 ⊆ Ω. If L < L*, then the new convergence analysis is better than the old one.
Notice also that, by (a_3) and (33), we have L_0 ≤ L* and L ≤ L*. The advantages are obtained under the same computational cost as before, since in practice the computation of the constant L* requires the computation of L_0 and L as special cases. In the literature (with the exception of our works), (33) is only used for the computation of the upper bounds of the inverses of the operators involved. (b) The radius r_A = 2/(2L_0 + L) was obtained in [1][2][3] as the convergence radius for Newton's method under conditions (a_1)-(a_4). Notice that the convergence radius for Newton's method given independently by Rheinboldt (see [1]) and Traub (see [2]) is ρ = 2/(3L*). As an example, let us consider the function f(x) = e^x − 1. Then x* = 0. Set Ω = U(0, 1). Then, we have that L_0 = e − 1 < L* = e and L = e^{1/(e−1)}, so ρ = 0.24252961 < r_A = 0.3827.
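These radii are easy to check numerically. The sketch below assumes ρ = 2/(3L*) (the classical Rheinboldt/Traub radius) and r_A = 2/(2L_0 + L) as quoted above, with the constants of the example f(x) = e^x − 1 on Ω = U(0, 1):

```python
import math

e = math.e
L0 = e - 1.0                    # center-Lipschitz constant on U(0, 1)
L = math.exp(1.0 / (e - 1.0))   # Lipschitz constant on the restricted region Omega_0
L_star = e                      # Lipschitz constant on the full region Omega

rho = 2.0 / (3.0 * L_star)      # Rheinboldt/Traub radius for Newton's method
r_A = 2.0 / (2.0 * L0 + L)      # enlarged radius from [1]-[3]

print(f"rho = {rho:.6f}, r_A = {r_A:.6f}")  # r_A is the larger radius
```

The restricted constants satisfy L_0 < L* and L < L*, which is exactly why r_A exceeds ρ at no extra computational cost.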
Moreover, the new error bounds [1][2][3] are tighter than the old ones [7,8]. Clearly, the new error bounds are more precise if L_0 < L. Moreover, the radius of convergence r of method (2) is smaller than r_A (see (6)). (c) The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMREM) and the generalized conjugate method (GCM), for combined Newton/finite projection methods, and in connection with the mesh independence principle in order to develop the cheapest and most efficient mesh refinement strategy [1][2][3]7,8]. (d) The results can also be used to solve equations where the operator F satisfies the autonomous differential equation [1][2][3]

F'(x) = p(F(x)),

where p is a known continuous operator. Since F'(x*) = p(F(x*)) = p(0), we can apply the results without actually knowing the solution x*. As an example, let F(x) = e^x − 1. Then we can choose p(x) = x + 1 and x* = 0. (e) It is worth noticing that the convergence conditions for method (2) do not change if we use the new conditions instead of the old ones [9]. Moreover, for the error bounds in practice we can use the computational order of convergence (COC)

ξ = ln(‖x_{n+1} − x*‖ / ‖x_n − x*‖) / ln(‖x_n − x*‖ / ‖x_{n−1} − x*‖), for each n = 1, 2, . . . ,

or the approximate computational order of convergence (ACOC)

ξ* = ln(‖x_{n+2} − x_{n+1}‖ / ‖x_{n+1} − x_n‖) / ln(‖x_{n+1} − x_n‖ / ‖x_n − x_{n−1}‖), for each n = 1, 2, . . . ,

instead of the error bounds obtained in Theorem 1. (f) In view of (a_3) and the corresponding estimate, the second condition in (a_4) can be dropped and replaced by a weaker one.
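As a sketch of how COC and ACOC are used in practice (assuming the standard definitions recalled above; Newton's method on F(x) = e^x − 1 serves as the test sequence, so the estimated order should be close to 2):

```python
import math

def newton_iterates(x0, steps):
    """Newton's method for F(x) = e^x - 1: x_{n+1} = x_n - (e^{x_n} - 1)/e^{x_n}."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - (math.exp(x) - 1.0) / math.exp(x))
    return xs

def coc(xs, x_star):
    """COC from the last three iterates and the exact solution x*."""
    e2, e1, e0 = (abs(x - x_star) for x in xs[-3:])
    return math.log(e0 / e1) / math.log(e1 / e2)

def acoc(xs):
    """ACOC from the last four iterates; no knowledge of x* is needed."""
    d2, d1, d0 = (abs(b - a) for a, b in zip(xs[-4:], xs[-3:]))
    return math.log(d0 / d1) / math.log(d1 / d2)

xs = newton_iterates(0.5, 4)
print(coc(xs, 0.0), acoc(xs))  # both close to 2 (quadratic convergence)
```

ACOC is the practical choice here, since it replaces the unknown errors ‖x_n − x*‖ by the computable differences ‖x_{n+1} − x_n‖.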

Semi-local convergence analysis
Let us modify the (C) conditions (given in a non-affine invariant form in [9]) so as to be given in affine invariant form, and introduce the notion of the restricted convergence region. The conditions (C) are listed next.

Definition 1. The set T = T(F, x_0, y_0) belongs to the class K = K(L_0, L, L_1, L_2, η_0, η) if the conditions (C) hold for each x, y ∈ Ω_0, where φ̄ is defined as ϕ.
We shall compare the old conditions (C), assuming they are given in affine invariant form, with the new conditions (C). Clearly, the new constants are at least as tight as the old ones, since Ω_0 ⊆ Ω.

Numerical examples
We present the following example to test the convergence criteria.

Conclusion
Major concerns in the study of the convergence of iterative methods (local or semi-local) are: the size of the convergence domain, the selection of the initial point and the uniqueness of the solution. We address these problems for method (2) under sufficient convergence conditions which are weaker than the ones in [9] for the semi-local convergence case. In this way we extend the convergence domain, require fewer iterates to achieve a desired error tolerance and provide more precise information on the location of the solution. We also examine the local convergence case, not studied in [9]. In the future we will employ our technique to extend the applicability of other iterative methods.