Local convergence for deformed Chebyshev-type method in Banach space under weak conditions

We present a local convergence analysis for deformed Chebyshev methods in order to approximate a solution of a nonlinear equation in a Banach space setting. Our analysis covers the Chebyshev and other high-order methods under hypotheses only on the first Fréchet derivative, in contrast to earlier studies using hypotheses on up to the second or third Fréchet derivative. The convergence ball and error estimates are given for these methods. Numerical examples are also provided in this study. Subjects: Applied Mathematics; Computer Mathematics; Mathematics & Statistics; Non-Linear Systems; Science


PUBLIC INTEREST STATEMENT
A large number of problems in applied mathematics, mathematical physics, mathematical economics, engineering and other areas are formulated as equations using mathematical modelling. The unknowns of these equations can be functions, vectors or real or complex numbers. In the present paper, we study the convergence of a fast sequence to the solution of an equation. The new results improve the results of earlier works. Solutions of these equations are very important, since they have a physical meaning.
The semilocal convergence analysis of DCM (1.2) was given in Wu and Zhao (2007) under Lipschitz continuity conditions on up to the second Fréchet derivative, in the special case when the first method parameter equals 1 and the second is positive. In particular, the third order of convergence of DCM was shown in Wu and Zhao (2007) for these values of the parameters. The usual conditions for the semilocal convergence of these methods are:

(C): There exist constants β, η, M1, M2 such that
(C1) Γ0 = F′(x0)−1 exists and ‖Γ0‖ ≤ β;
(C2) ‖Γ0 F(x0)‖ ≤ η;
(C3) ‖F″(x)‖ ≤ M1 for each x ∈ D;
(C4) ‖F″(x) − F″(y)‖ ≤ M2‖x − y‖ for each x, y ∈ D.

The local convergence conditions are similar, but with x0 replaced by x* in (C1) and (C2). There is a plethora of local and semilocal convergence results under the (C) conditions (Amat et al., 2003; Argyros, 1985, 2004, 2007; Argyros & Hilout, 2012, 2013; Candela & Marquina, 1990a, 1990b; Chun et al., 2011; Gutiérrez & Hernández, 1997, 1998; Hernández, 2001; Hernández & Salanova, 2000; Kantorovich, 1982; Ortega & Rheinboldt, 1970; Parida & Gupta, 2008; Wu & Zhao, 2007). Conditions (C3) and (C4) limit the applicability of these methods, even though only the first Fréchet derivative appears in the methods themselves. Consequently, these useful methods cannot be applied according to the earlier results. The motivation for this study is therefore to use the methods of Wu and Zhao (2007) in cases when (C3) and (C4) are not satisfied.
As a motivational example, let us define function f on D = [−1/2, 5/2] by

f(x) = x³ ln x² + x⁵ − x⁴ for x ≠ 0, f(0) = 0,

and choose x* = 1. We have that

f′(x) = 3x² ln x² + 5x⁴ − 4x³ + 2x²,
f″(x) = 6x ln x² + 20x³ − 12x² + 10x,
f‴(x) = 6 ln x² + 60x² − 24x + 22.

Notice that f‴ is unbounded on D, so f″(x) does not satisfy (C4) on D. Hence, the results depending on (C4) cannot apply in this case. However, using Equations 2.7–2.10 that follow, we have f′(x*) = 3, f(x*) = 0, p = 1, L0 = L = 146.6629073 and M = 101.5578008. Hence, the results of our Theorem 2.1 that follows can apply to solve equation f(x) = 0 using DCM. Hence, the applicability of DCM is expanded under our new conditions.
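These claims are easy to check numerically. The sketch below assumes the standard motivational example f(x) = x³ ln x² + x⁵ − x⁴ (with f(0) = 0) and its hand-computed derivatives; it verifies f(1) = 0 and f′(1) = 3, and illustrates that f‴ blows up near 0, so f″ cannot be Lipschitz on D.

```python
import math

def f(x):
    # f(x) = x^3 ln(x^2) + x^5 - x^4, extended by f(0) = 0
    return 0.0 if x == 0 else x**3 * math.log(x**2) + x**5 - x**4

def fp(x):
    # first derivative: 3x^2 ln(x^2) + 5x^4 - 4x^3 + 2x^2
    return 0.0 if x == 0 else 3 * x**2 * math.log(x**2) + 5 * x**4 - 4 * x**3 + 2 * x**2

def fppp(x):
    # third derivative: 6 ln(x^2) + 60x^2 - 24x + 22, unbounded as x -> 0
    return 6 * math.log(x**2) + 60 * x**2 - 24 * x + 22

print(f(1.0), fp(1.0))   # the solution x* = 1 satisfies f(x*) = 0, f'(x*) = 3
print(fppp(1e-6))        # large in magnitude: f''' is unbounded near 0
```

Since f‴ is unbounded on D, no Lipschitz constant for f″ exists there, which is exactly why conditions of type (C4) fail for this example.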
In the rest of this study, U(w, q) and Ū(w, q) stand, respectively, for the open and closed balls in X with center w ∈ X and of radius q > 0.
The paper is organized as follows: In Section 2, we present the local convergence of these methods. The numerical examples are given in the concluding Section 3.

Local convergence
In this section, we present the local convergence analysis of DCM. Let L0 > 0, L > 0, M > 0, λ ∈ ℝ, α ∈ (0, 1] and p ∈ [0, 1] be given parameters. It is convenient for the local convergence analysis that follows to introduce some functions and parameters.
Define functions g1, g2, g3 and ḡ4 on the interval [0, 1/L0). Then, it follows from the intermediate value theorem that function ḡ4 has zeros in (0, 1/L0). Denote by r4 the smallest such zero. Set r = min{r1, r2, r3, r4}. Then, we have that 0 ≤ gi(t) < 1 for each t ∈ [0, r). Next, we present the local convergence result for DCM.
Proof. We shall show estimates Equations 2.12–2.15 using mathematical induction. First, we show that y0, z0, x1 exist and lie inside U(x*, r). In order to achieve this, we must show that the inverses appearing in method DCM exist for n = 0. By hypothesis, x0 ∈ U(x*, r). Using the definition of the radius r and Equation 2.8, we obtain Equation 2.16. It follows from Equation 2.16 and the Banach lemma on invertible operators (Argyros, 2007; Argyros & Hilout, 2013; Kantorovich, 1982; Ortega & Rheinboldt, 1970) that F′(x0)−1 ∈ L(Y, X) and Equation 2.17 holds. Moreover, y0 and z0 are well defined by the first and second substeps of DCM for n = 0. Using the first substep of DCM for n = 0, we also obtain Equation 2.18. Then, by the definition of function g1 and Equations 2.3, 2.9, 2.17 and 2.18, we obtain Equation 2.12 for n = 0, so y0 ∈ U(x*, r). Similarly, using the second substep of DCM for n = 0, we obtain Equation 2.19. Then, by Equations 2.4, 2.10, 2.17 and 2.19, the definition of function g2 and Equation 2.12 (for n = 0), we obtain, since F(x*) = 0, Equation 2.13 for n = 0, so z0 ∈ U(x*, r). By the definition of α and Equations 2.12 and 2.13 (for n = 0), we have that x0 + α(z0 − x0) ∈ U(x*, r), so H0 is well defined. We need an estimate on ‖H0‖. Using the definitions of H0 and g3 and Equations 2.17 and 2.9, we obtain in turn Equation 2.14 for n = 0. Then, using the last substep of DCM for n = 0, we obtain Equation 2.15 for n = 0, so x1 ∈ U(x*, r). By simply replacing x0, y0, z0, x1 by xk, yk, zk, xk+1 in the preceding estimates, we arrive at Equations 2.12–2.15 for all n. Finally, using the estimate ‖xk+1 − x*‖ ≤ c‖xk − x*‖ < r, where c = g4(‖x0 − x*‖) ∈ [0, 1), we deduce that limk→∞ xk = x* and xk+1 ∈ U(x*, r). To show the uniqueness part, let y* ∈ Ū(x*, r) with F(y*) = 0 and set B = ∫01 F′(y* + θ(x* − y*)) dθ. Using Equation 2.8, one shows that B is invertible, and from 0 = F(x*) − F(y*) = B(x* − y*) we conclude x* = y*. □

Remark 2.2 (a) In view of Equations 2.8 and 2.9, we have L0 ≤ L, and the ratio L/L0 can be arbitrarily large (Argyros, 1985, 2004, 2007; Argyros & Hilout, 2012, 2013) (see also the examples).
(b) In view of condition Equation 2.8 and the estimate ‖F′(x*)−1 F′(x)‖ ≤ 1 + L0‖x − x*‖, condition Equation 2.10 can be dropped and M can be replaced by M(t) = 1 + L0 t. (c) The convergence ball of radius r1 was given by us in Argyros (1985, 2004, 2007) and Argyros and Hilout (2013) for Newton's method under conditions Equations 2.8 and 2.9. The estimate r2 < r1 shows that the convergence ball of the higher-than-quadratic-order DCM is smaller than the convergence ball of radius r1 of the quadratically convergent Newton's method. The convergence ball given by Ortega and Rheinboldt (1970) for Newton's method is rR = 2/(3L). Hence, we do not expect r to be larger than r1, no matter how we choose L0, L, M and the remaining parameters.
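The radius comparison in remark (c) can be illustrated numerically. The sketch below assumes the Newton radius r1 = 2/(2L0 + L) from Argyros's earlier work cited above and the classical radius 2/(3L) of Ortega and Rheinboldt; the two coincide when L0 = L, and r1 is strictly larger whenever L0 < L.

```python
def argyros_radius(L0, L):
    # Newton convergence ball radius under the center-Lipschitz/Lipschitz pair (L0, L)
    return 2.0 / (2.0 * L0 + L)

def rheinboldt_radius(L):
    # classical radius, which uses the Lipschitz constant L only
    return 2.0 / (3.0 * L)

# constants from the motivational example of Section 1
L0 = L = 146.6629073
print(argyros_radius(L0, L), rheinboldt_radius(L))   # equal when L0 = L

# when L0 < L (the ratio L/L0 can be arbitrarily large), r1 is strictly larger
print(argyros_radius(L / 10.0, L) > rheinboldt_radius(L))
```

This is why using a separate center-Lipschitz constant L0 enlarges the convergence ball whenever L0 < L.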
(d) The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method, the generalized conjugate method, for combined Newton/finite projection methods, and in connection with the mesh independence principle, in order to develop the cheapest and most efficient mesh refinement strategy (Argyros, 1985, 2004, 2007; Argyros & Hilout, 2013; Kantorovich, 1982; Ortega & Rheinboldt, 1970).
(e) The results can also be used to solve equations where the operator F′ satisfies the autonomous differential equation (Argyros, 1985, 2004, 2007; Argyros & Hilout, 2013; Kantorovich, 1982; Ortega & Rheinboldt, 1970)

F′(x) = T(F(x)),

where T is a known continuous operator. Since F′(x*) = T(F(x*)) = T(0), we can apply the results without actually knowing the solution x*. As an example, let F(x) = ex − 1. Then, we can choose T(x) = x + 1 and x* = 0.
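A minimal numerical check of this example (nothing here is specific to DCM; it only verifies the identity F′(x) = T(F(x)) for F(x) = eˣ − 1 and T(x) = x + 1 at a few sample points):

```python
import math

F = lambda x: math.exp(x) - 1.0     # F(x) = e^x - 1, with solution x* = 0
T = lambda x: x + 1.0               # known operator satisfying F'(x) = T(F(x))
Fprime = lambda x: math.exp(x)      # F'(x) = e^x

for x in (-1.0, 0.0, 0.5, 2.0):
    # T(F(x)) = (e^x - 1) + 1 = e^x = F'(x), so F'(x*) = T(0) = 1 is known
    # without knowing x* itself
    assert abs(Fprime(x) - T(F(x))) < 1e-12
```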
(f) It is worth noticing that DCM does not change when we use the conditions of Theorem 2.1 instead of the stronger (C) conditions used in the earlier mentioned works. We can also use the computational order of convergence (COC) (Argyros & Hilout, 2013; Hernández & Salanova, 2000), defined by

ξ = ln(‖xn+1 − x*‖/‖xn − x*‖) / ln(‖xn − x*‖/‖xn−1 − x*‖),

or the approximate computational order of convergence (ACOC)

ξ1 = ln(‖xn+1 − xn‖/‖xn − xn−1‖) / ln(‖xn − xn−1‖/‖xn−1 − xn−2‖),

since the bounds given in Theorem 2.1 may be very pessimistic. This way, we also avoid the computation of the error bounds given in the earlier studies, where bounds on the Fréchet derivatives of order higher than one are used.
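The COC and ACOC formulas above are straightforward to evaluate in code. The sketch below is a generic illustration, not tied to DCM: it computes both quantities from a list of iterates, using Newton's method for x² − 2 = 0 (where the expected order is 2) as sample data.

```python
import math

def coc(xs, x_star):
    """Computational order of convergence from the last four iterates and the known solution."""
    e = [abs(x - x_star) for x in xs[-4:]]
    return math.log(e[3] / e[2]) / math.log(e[2] / e[1])

def acoc(xs):
    """Approximate COC: same formula with |x_{n+1} - x_n| replacing the (unknown) errors."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# Newton's method for f(x) = x^2 - 2; quadratic convergence is expected
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

print(coc(xs, math.sqrt(2.0)))  # close to 2
print(acoc(xs))                 # close to 2
```

Note that ACOC needs no knowledge of x*, which is what makes it useful in practice; only four iterations are used here because further Newton steps hit machine precision and the error ratios degenerate.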

Numerical examples
We present numerical examples where we compute the radii of the convergence balls.
Then, the Fréchet derivative is given accordingly. Consider next the nonlinear integral equation (Argyros, 2007; Argyros & Hilout, 2013; Kantorovich, 1982; Ortega & Rheinboldt, 1970) given by Equation 3.4, where w is a given continuous function such that w(s) > 0 for each s ∈ [a, b], λ ∈ ℝ, p ∈ (0, 1], and the kernel G is a nonnegative continuous function on [a, b] × [a, b]. Equation 3.4 appears in many studies in mathematical physics (radiative transfer, kinetic theory of gases, neutron transport) and other applied areas (Argyros, 1985, 2007; Argyros & Hilout, 2013; Gutiérrez & Hernández, 1997, 1998; Hernández & Salanova, 2000; Hernández, 2001; Kantorovich, 1982; Ortega & Rheinboldt, 1970; Wu & Zhao, 2007). The kernel G is the Green's function, and it is well known that Equation 3.4 is equivalent to the two-point boundary value problem Equation 3.5. Next, we shall solve Equation 3.5 by first discretizing it and using DCM for λ = 0, p = 1/2, a = 0, b = 1 and y(0) = y(1) = 0. We divide the interval [0, 1] into m subintervals and let l = 1/m. Let us denote the points of subdivision by u0 = 0 < u1 < ⋯ < um = 1, with the corresponding values of the function y0 = y(u0), y1 = y(u1), ⋯, ym = y(um). (3.6) A simple approximation for the second derivative at these points is yi″ ≈ (yi−1 − 2yi + yi+1)/l², i = 1, 2, …, m − 1. Notice that y0 = 0 and ym = 0. Therefore, we obtain the system of nonlinear equations Equation 3.7. Hence, we have an operator F : ℝm−1 → ℝm−1 whose Fréchet derivative is tridiagonal. Notice that we may not be able to form the second Fréchet derivative, since it would involve terms of the form yi−1/2 (as p = 1/2), which need not exist. Therefore, the famous Chebyshev method Equation 1.3 cannot be used. However, the DCM method Equation 1.2, obtained from method Equation 1.3, can be used, since only the first Fréchet derivative appears in this method. Moreover, the theory introduced in this paper applies. Indeed, let x ∈ ℝm−1 and define the norm ‖x‖ = max1≤j≤m−1 |xj|. The corresponding norm on A ∈ ℝ(m−1)×(m−1) is ‖A‖ = max1≤j≤m−1 Σi=1m−1 |aji|.
Then, for all y, z ∈ ℝm−1 for which yi > 0, zi > 0, i = 1, 2, …, m − 1, the required Lipschitz conditions hold. That is, we can choose L0 = L = (3/2) l² ‖F′(x*)−1‖. We choose m = 10, leading to nine equations. Since a solution would vanish at the endpoints and be positive in the interior, a reasonable choice of initial approximation seems to be 130 sin πu. Hence, we obtain the corresponding initial vector for solving system Equation 3.7.
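The discretization above can be sketched in code. Since the exact boundary value problem and the DCM parameterization are not fully reproduced in this excerpt, the sketch below makes two labeled assumptions: it takes the BVP to be y″ + y^{3/2} = 0 with y(0) = y(1) = 0 (consistent with p = 1/2 and a positive interior solution), and it uses plain Newton iteration as a stand-in for DCM, since both use only the tridiagonal first Fréchet derivative.

```python
import numpy as np

m = 10                                    # number of subintervals; nine unknowns
l = 1.0 / m                               # mesh size
u = np.linspace(0.0, 1.0, m + 1)[1:-1]    # interior nodes u_1, ..., u_{m-1}

def F(y):
    # discretized operator: (y_{i-1} - 2 y_i + y_{i+1})/l^2 + y_i^{3/2} = 0
    # (the BVP y'' + y^{3/2} = 0 is an assumption; sign(y)*|y|^1.5 guards
    # against transient negative components during iteration)
    yl = np.concatenate(([0.0], y[:-1]))  # y_{i-1}, with boundary y_0 = 0
    yr = np.concatenate((y[1:], [0.0]))   # y_{i+1}, with boundary y_m = 0
    return (yl - 2.0 * y + yr) / l**2 + np.sign(y) * np.abs(y) ** 1.5

def Fprime(y):
    # tridiagonal Fréchet derivative; the diagonal term (3/2)|y_i|^{1/2} is not
    # differentiable at 0, which is why the second Fréchet derivative may fail
    # to exist and the classical Chebyshev method cannot be applied
    J = (np.diag(np.full(m - 1, -2.0))
         + np.diag(np.ones(m - 2), 1)
         + np.diag(np.ones(m - 2), -1)) / l**2
    return J + np.diag(1.5 * np.abs(y) ** 0.5)

y = 130.0 * np.sin(np.pi * u)             # initial guess vanishing at the endpoints
for _ in range(20):                       # Newton iteration as a stand-in for DCM
    y = y - np.linalg.solve(Fprime(y), F(y))

print(np.max(np.abs(F(y))))               # residual of the computed solution
```

The initial vector 130 sin πuᵢ matches the choice described above: it is zero at both boundaries and positive in the interior, where the solution is expected to live.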