Ball convergence of a novel Newton-Traub composition for solving equations

We present a local convergence analysis of a Newton-Traub composition method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. The method was shown to be of convergence order five, if defined on the m-dimensional Euclidean space, using Taylor's expansion and hypotheses reaching up to the fifth derivative (Hueso, Martínez, & Teruel, 2015; Sharma, 2014). We expand the applicability of this method using contractive techniques and hypotheses only on the first Fréchet-derivative of the operator involved. Moreover, we provide a computable radius of convergence and error estimates on the distances involved, not given in the earlier studies (Hueso et al., 2015; Sharma, 2014). Numerical examples illustrating the theoretical results are also presented in this study.

Subjects: Advanced Mathematics; Analysis Mathematics; Functional Analysis; Mathematics & Statistics; Operator Theory; Science


PUBLIC INTEREST STATEMENT
The most commonly used solution methods for finding zeros of the equation F(x) = 0 are iterative; i.e., starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. In the present paper, we present a local convergence analysis of a Newton-Traub composition method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. The method was shown to be of convergence order five, if defined on the m-dimensional Euclidean space, using Taylor's expansion and hypotheses reaching up to the fifth derivative. We expand the applicability of this method using contractive techniques and hypotheses only on the first Fréchet-derivative of the operator involved. Moreover, we provide a computable radius of convergence and error estimates on the distances involved, not given in the earlier studies.

Introduction
In this paper, we are concerned with the problem of approximating a solution x* of the equation F(x) = 0 (Equation 1.1), where F is a Fréchet-differentiable operator defined on a convex subset D of a Banach space X with values in a Banach space Y.
These results show that if the initial point x0 is sufficiently close to the solution x*, then the sequence {xn} converges to x*. But how close to the solution x* should the initial guess x0 be? Such local results give no information on the radius of the convergence ball for the corresponding method.
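The practical importance of the convergence ball can be illustrated numerically. The sketch below (our own illustrative choice, not taken from the paper: Newton's method applied to f(x) = arctan(x), whose unique root is x* = 0) shows that the iteration converges from a starting point close to the solution but diverges from one that is too far away:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration; returns an approximate root, or None on divergence."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fprime(x)
        if d == 0.0:
            return None          # derivative vanished: breakdown
        x = x - fx / d
        if not math.isfinite(x) or abs(x) > 1e12:
            return None          # iterates escaped: divergence
    return None                  # no convergence within max_iter

# f(x) = arctan(x) has the unique root x* = 0 (illustrative choice).
f, fp = math.atan, lambda x: 1.0 / (1.0 + x * x)

x_near = newton(f, fp, 0.5)      # starts close to x*: converges
x_far = newton(f, fp, 2.0)       # starts too far from x*: Newton diverges
```

For this function the iterates satisfy x_{n+1} = x_n − arctan(x_n)(1 + x_n²), so beyond a critical distance from the root each step overshoots by more than the previous one, which is exactly the behavior a computable convergence radius rules out.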
Moreover, notice that the convergence ball of high convergence order methods is usually very small and, in general, decreases as the convergence order increases. Our approach establishes the local convergence result under hypotheses only on the first derivative, and it can give a larger convergence ball than the earlier studies, under weaker hypotheses. The same technique can be applied to other methods.
The rest of the paper is organized as follows. In Section 2, we present the local convergence analysis of method (Equation 1.2). The numerical examples are given in the concluding Section 3.

Local convergence
We present the local convergence analysis of method (Equation 1.2) in this section. Let L0 > 0, L > 0, M ≥ 1, and scalar parameters in S (one of which is required to be nonzero) be given. It is convenient for the local convergence analysis of method (Equation 1.2) that follows to define some scalar functions and parameters. Define functions g1, g2, h2, g3, and h3 on the interval [0, 1/L0), together with parameters r1 and rA. Then, we have by (Equation 2.1) that 0 < r1 < rA < 1/L0 and 0 ≤ g1(t) < 1 for each t ∈ [0, r1). Using the definitions of the functions g1, g2, h2, g3, and h3, we obtain that h2(0) = h3(0) = −1 < 0, while h2(t) and h3(t) tend to +∞ as t approaches the right endpoints of their domains; hence, by the intermediate value theorem, h2 and h3 have smallest zeros in the interior of their domains. Denote by U(v, ρ) and Ū(v, ρ) the open and closed balls in X, respectively, with center v ∈ X and radius ρ > 0. Using this notation, we next present the local convergence result for method (Equation 1.2).

Remark 2.2 (a) The radius rA was obtained by Argyros (cf. Argyros, 2004, 2007) as the convergence radius for Newton's method, rA = 2/(2L0 + L).
As an example, let us consider the function f(x) = e^x − 1. Then x* = 0. Set D = U(0, 1). Then we have that L0 = e − 1 < L = e, so that the corresponding radius 0.24252961 is smaller than rA = 0.324947231.
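The value of rA in this example can be reproduced numerically. The sketch below assumes the formula rA = 2/(2L0 + L) stated in the remark above, with the Lipschitz constants L0 = e − 1 and L = e for f(x) = e^x − 1 on D = U(0, 1):

```python
import math

# Constants for f(x) = e^x - 1 on D = U(0, 1), as in the example above.
L0 = math.e - 1.0    # center-Lipschitz constant for f'
L = math.e           # Lipschitz constant for f'

# Assumed Argyros radius formula for Newton's method: rA = 2/(2*L0 + L).
r_A = 2.0 / (2.0 * L0 + L)
print(r_A)           # ≈ 0.324947231, matching the value quoted in the text
```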
Moreover, the new error bounds (Argyros, 2004, 2007; Argyros & Hilout, 2012, 2013) are

||x_{n+1} − x*|| ≤ L ||x_n − x*||^2 / (2 (1 − L0 ||x_n − x*||)),

whereas the old ones (Kantorovich & Akilov, 1982; Petkovic et al., 2013) are

||x_{n+1} − x*|| ≤ L ||x_n − x*||^2 / (2 (1 − L ||x_n − x*||)).

Clearly, the new error bounds are more precise if L0 < L. Note that we do not expect the radius of convergence of method (Equation 1.2), given by r, to be larger than rA (see Equation 2.3).
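The advantage of the new bounds when L0 < L can be checked directly. This sketch assumes the two Newton-type bound functions stated above, L·e²/(2(1 − L0·e)) (new) and L·e²/(2(1 − L·e)) (old), with the constants from the e^x − 1 example:

```python
import math

L0 = math.e - 1.0    # center-Lipschitz constant (here L0 < L)
L = math.e           # Lipschitz constant

def new_bound(err):
    """New bound on ||x_{n+1} - x*||, using the tighter constant L0 in the denominator."""
    return L * err**2 / (2.0 * (1.0 - L0 * err))

def old_bound(err):
    """Old (Kantorovich-type) bound, using L in the denominator."""
    return L * err**2 / (2.0 * (1.0 - L * err))

# The new bound is strictly smaller for every positive error err with 1 - L*err > 0.
gaps = [old_bound(e) - new_bound(e) for e in (0.01, 0.05, 0.1)]
```

Since 1 − L0·err > 1 − L·err whenever L0 < L and err > 0, the new denominator is larger and the bound is tighter, exactly as claimed.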
(b) The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMREM), and the generalized conjugate method (GCM) for combined Newton/finite projection methods, and, in connection with the mesh independence principle, to develop the cheapest and most efficient mesh refinement strategy (Argyros, 2004, 2007; Argyros & Hilout, 2012, 2013). (c) The results can also be used to solve equations where the operator F′ satisfies the autonomous differential equation (Argyros, 2007; Argyros & Hilout, 2013; Kantorovich & Akilov, 1982; Petkovic et al., 2013)

F′(x) = P(F(x)),  (2.23)

where P is a known continuous operator. Since F′(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing the solution x*. As an example, let F(x) = e^x − 1. Then we can choose P(x) = x + 1 and x* = 0.
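The autonomous-equation example above can be verified directly: for F(x) = e^x − 1 and P(x) = x + 1, the identity F′(x) = P(F(x)) holds for every x, so F′(x*) = P(0) is available without knowing x*. A minimal check:

```python
import math

# Example from the text: F(x) = e^x - 1, P(x) = x + 1, solution x* = 0.
F = lambda x: math.exp(x) - 1.0
F_prime = lambda x: math.exp(x)
P = lambda y: y + 1.0

# F'(x) = P(F(x)) for every x, since P(F(x)) = (e^x - 1) + 1 = e^x.
checks = [abs(F_prime(x) - P(F(x))) for x in (-1.0, 0.0, 0.5, 2.0)]

# The value F'(x*) = P(F(x*)) = P(0) = 1 is computable without solving F(x) = 0.
```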
(d) It is worth noticing that method (Equation 1.2) does not change if we use the new conditions instead of the old ones (Hueso et al., 2015; Sharma, 2014).