On some iterative methods with frozen derivatives for solving equations

We determine a radius of convergence for an efficient iterative method with frozen derivatives for solving equations defined on Banach spaces. Our convergence analysis uses ω-continuity conditions only on the first derivative. Earlier studies have used hypotheses on derivatives up to order seven, limiting the applicability of the method. Numerical examples complete the article.


Introduction
We consider solving the equation

F(x) = 0, (1)

where F : D ⊂ X −→ Y is continuously Fréchet differentiable, X, Y are Banach spaces, and D is a nonempty convex set. Iterative methods are used to generate a sequence converging to a solution x* of Equation (1) under certain conditions [1][2][3][4][5][6][7][8][9][10][11][12]. Recently a surge has been noticed in the development of efficient iterative methods with frozen derivatives. The convergence order is obtained using Taylor expansions and conditions on high-order derivatives not appearing in the method. These conditions limit the applicability of the methods. For example, let X = Y = ℝ and D = [−1/2, 3/2]. Define f on D by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, and f(0) = 0.

Then, we have t* = 1, and

f′(t) = 3t² log t² + 5t⁴ − 4t³ + 2t²,
f″(t) = 6t log t² + 20t³ − 12t² + 10t,
f‴(t) = 6 log t² + 60t² − 24t + 22.

Obviously f‴(t) is not bounded on D. So, the convergence of these methods is not guaranteed by the analysis in these papers. Moreover, no comparable error estimates on the distances involved, nor uniqueness-of-solution results, are given in [6,8,10,11]. That is why we develop a general technique that can be used on iterative methods, and we address these problems by using only the first derivative, which is the only derivative appearing in these methods.
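The unboundedness claim can be checked directly. Assuming the motivating function is f(t) = t³ log t² + t⁵ − t⁴ with f(0) = 0 (an assumption consistent with the derivative displayed above), a short script evaluates the third derivative near t = 0:

```python
import math

# Motivating example: assuming f(t) = t^3 log t^2 + t^5 - t^4 (f(0) = 0),
# consistent with the derivative displayed above.
def f(t):
    return 0.0 if t == 0 else t**3 * math.log(t**2) + t**5 - t**4

def f3(t):
    # third derivative for t != 0: f'''(t) = 6 log t^2 + 60 t^2 - 24 t + 22
    return 6 * math.log(t**2) + 60 * t**2 - 24 * t + 22

# f has the zero t* = 1 and is bounded on D = [-1/2, 3/2],
# but f''' tends to -infinity as t -> 0:
for t in (1e-1, 1e-3, 1e-6):
    print(t, f3(t))
```

Since f‴ is unbounded on D, any analysis that assumes a bounded third (or higher) derivative cannot guarantee convergence for this f.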
We demonstrate this technique on the method of convergence order 3(i − 1), defined for all n = 0, 1, 2, . . . by (2), where k is a fixed natural number and the parameter α is as in [10]. The efficiency, convergence order and comparisons with other methods using similar information were given in [6,8,10,11] when X = Y = ℝᵏ. The convergence was shown using the seventh derivative. We include computable error bounds on ‖xₙ − x*‖ and uniqueness results that are not given in [6,8,10,11]. Our technique is so general that it can be used to extend the usage of other methods [1][2][3][4][5][6][7][8][9][10][11][12]. The method was developed in [10], where detailed comparisons to other methods were given.
The motivation of this paper is not to repeat this work, but to introduce a technique that expands the applicability of this and other methods that rely on high-order derivatives not appearing in the methods themselves. Only the first derivative is used in the convergence hypotheses; notice that this is the only derivative appearing in the method. We also provide a computable radius of convergence, which is not given in [10]. This way we locate a set of initial points that guarantee convergence of the method. The numerical examples are chosen to show how the theoretically predicted radii are computed. In particular, the last example shows that earlier results cannot be used to show convergence of the method. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The article contains the local convergence analysis in Section 2 and the numerical examples in Section 3.
Local convergence analysis

Suppose that the equation has a least solution r₃ ∈ (0, ρ₂). We shall show that r is a radius of convergence for method (2). It follows that for each t ∈ [0, r) the defining functions take values in [0, 1). Let U(x, β) and Ū(x, β) denote, respectively, the open and closed balls in X with center x ∈ X and radius β > 0.
The following hypotheses (A) shall be used: There exists a continuous and nondecreasing function ω₀ : T −→ T such that for each x ∈ D,
‖F′(x*)⁻¹(F′(x) − F′(x*))‖ ≤ ω₀(‖x − x*‖).
There exist continuous and nondecreasing functions ω : T −→ T such that for each x, y ∈ D,
‖F′(x*)⁻¹(F′(x) − F′(y))‖ ≤ ω(‖x − y‖).
(A5) There exists r* ≥ r such that
∫₀¹ ω₀(θr*) dθ < 1.
Set D₁ = D ∩ Ū(x*, r*).
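Hypothesis (A5) can be checked numerically once ω₀ is chosen. The sketch below, assuming a hypothetical Lipschitz-type choice ω₀(t) = L₀t (not taken from this paper), locates by bisection the supremum of the radii r* satisfying ∫₀¹ ω₀(θr*) dθ < 1:

```python
# Sketch: verifying hypothesis (A5) numerically.  The choice
# omega_0(t) = L0 * t below is a hypothetical Lipschitz-type example;
# any continuous nondecreasing omega_0 works the same way.

def a5_integral(omega0, r, n=1000):
    # composite midpoint rule for the integral of omega0(theta * r)
    # over theta in [0, 1]
    h = 1.0 / n
    return h * sum(omega0((i + 0.5) * h * r) for i in range(n))

def sup_rstar(omega0, hi=100.0, tol=1e-10):
    # bisection for the largest r with a5_integral(omega0, r) < 1,
    # valid because the integral is increasing in r (omega0 nondecreasing)
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if a5_integral(omega0, mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return lo

L0 = 1.7182818  # e - 1, a constant often quoted for F(x) = exp(x) - 1
print(sup_rstar(lambda t: L0 * t))  # closed form here: 2 / L0
```

For the linear choice the bisection agrees with the closed form r* < 2/L₀; for genuinely nonlinear ω₀ only the numerical route is available.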
In the next theorem, the local convergence of method (2) is given using the hypotheses (A) and the preceding notation. Theorem 1. Suppose that the hypotheses (A) hold. Then, for any starting point x₀ ∈ U(x*, r) − {x*}, the sequence {xₙ} generated by method (2) is well defined in U(x*, r), remains in U(x*, r) and converges to x*. Moreover, the following items hold for all i = 3, 4, . . . , k and n = 0, 1, 2, . . . Furthermore, x* is the only solution of the equation F(x) = 0 in the set D₁ given in (A5).

Remark 1.
1. In view of (11) and the estimate
‖F′(x*)⁻¹F′(x)‖ ≤ 1 + ‖F′(x*)⁻¹(F′(x) − F′(x*))‖ ≤ 1 + L₀‖x − x*‖,
the condition (14) can be dropped and M can be replaced by M(t) = 1 + L₀t or M(t) = M = 2, since t ∈ [0, 1/L₀).
2. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form
F′(x) = P(F(x)),
where P is a continuous operator. Then, since F′(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing x*. For example, let F(x) = eˣ − 1. Then, we can choose P(x) = x + 1.
3. Let ω₀(t) = L₀t and ω(t) = Lt. In [2,3] we showed that
r_A = 2/(2L₀ + L)
is the convergence radius of Newton's method
xₙ₊₁ = xₙ − F′(xₙ)⁻¹F(xₙ) for each n = 0, 1, 2, · · · (33)
under the conditions (12) and (13). It follows from the definition of r in (9) that the convergence radius r of method (2) cannot be larger than the convergence radius r_A of the second-order Newton's method (33). As already noted in [2,3], r_A is at least as large as the convergence radius given by Rheinboldt [9],
r_R = 2/(3L₁),
where L₁ is the Lipschitz constant on D. The same value for r_R was given by Traub [10]. In particular, for L₀ < L₁ we have r_R < r_A. That is, the radius of convergence r_A can be at most three times larger than Rheinboldt's.
4. It is worth noticing that method (2) does not change when we use the conditions of Theorem 1 instead of the stronger conditions used in [6,8,11]. Moreover, we can compute the computational order of convergence (COC) defined by
ξ = ln(‖xₙ₊₂ − x*‖/‖xₙ₊₁ − x*‖) / ln(‖xₙ₊₁ − x*‖/‖xₙ − x*‖),
or the approximate computational order of convergence (ACOC)
ξ₁ = ln(‖xₙ₊₂ − xₙ₊₁‖/‖xₙ₊₁ − xₙ‖) / ln(‖xₙ₊₁ − xₙ‖/‖xₙ − xₙ₋₁‖).
This way we obtain in practice the order of convergence in a way that avoids bounds involving derivatives higher than the first Fréchet derivative of the operator F. Note also that the computation of ξ₁ does not require the usage of the solution x*.
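Items 3 and 4 of the remark can be illustrated together on the function F(x) = eˣ − 1 from item 2. In this sketch the constants L₀ = e − 1, L = e^(1/(e−1)) and L₁ = e are assumptions (values commonly quoted for this example in the literature), not quantities derived in this paper:

```python
import math

# Illustration of Remark 1, items 3 and 4, on F(x) = exp(x) - 1 (x* = 0).
# The Lipschitz constants below are assumed values commonly used for this
# example, not constants computed in this paper.
L0 = math.e - 1        # center-Lipschitz constant for F' on the unit ball
L = math.exp(1 / L0)   # Lipschitz constant on the restricted domain
L1 = math.e            # Lipschitz constant on all of D

r_A = 2 / (2 * L0 + L)  # convergence radius from item 3
r_R = 2 / (3 * L1)      # Rheinboldt/Traub radius
print(r_A, r_R)         # r_A exceeds r_R since L0 < L1

# ACOC (item 4) for Newton's method (33) applied to f(x) = exp(x) - 1,
# started inside U(x*, r_A); no knowledge of x* is needed.
xs = [0.3]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (math.exp(x) - 1) / math.exp(x))

d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
acoc = math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])
print(acoc)  # close to 2, the order of Newton's method
```

The ACOC uses only consecutive iterate differences, which is exactly why item 4 notes that ξ₁ requires no knowledge of x*.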