Ball convergence for Steffensen-type fourth-order methods

— We present a local convergence analysis for a family of Steffensen-type fourth-order methods in order to approximate a solution of a nonlinear equation. We use hypotheses only up to the first derivative, in contrast to earlier studies such as [1], [5]-[28], which use hypotheses up to the fifth derivative. In this way the applicability of these methods is extended under weaker hypotheses. Moreover, the radius of convergence and computable error bounds on the distances involved are also given in this study. Numerical examples are presented as well.


I. Introduction
In this study we are concerned with the problem of approximating a locally unique solution x* of the equation

F(x) = 0, (1)

where F : D ⊆ S → S is a nonlinear function, D is a convex subset of S, and S is R or C.
Newton-like methods are popular for finding a solution of (1). These methods are usually studied on the basis of two types of convergence: semi-local and local. The semi-local convergence analysis starts from information around an initial point and gives conditions ensuring the convergence of the iterative procedure, while the local analysis starts from information around a solution and yields estimates of the radii of the convergence balls [3,4,20,21,22,24,26].
Third-order methods such as Euler's, Halley's, super-Halley's and Chebyshev's [1]-[28] require the evaluation of the second derivative F'' at each step, which in general is very expensive. That is why many authors have turned to higher-order multipoint methods [1]-[28]. In this paper, we study the local convergence of a fourth-order Steffensen-type method (2), defined for each n = 0, 1, 2, ..., where x_0 is an initial point. Method (2) was studied in [11] under hypotheses reaching up to the fifth derivative of the function F. Other single- and multi-point methods can be found in [2,3,20,25] and the references therein. The local convergence of the preceding methods has been established under hypotheses up to the fifth derivative (or even higher). Such hypotheses restrict the applicability of these methods. As a motivational example, consider a function f defined on a set D whose third derivative f''' is unbounded on D; then convergence results requiring bounds on the higher derivatives do not apply, even though the method itself may converge. In the present paper we only use hypotheses on the first Fréchet derivative. In this way we expand the applicability of method (2).
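Steffensen-type methods replace derivative evaluations by divided differences. As a hedged illustration only, the following sketch implements the classical second-order Steffensen iteration, the derivative-free building block that fourth-order variants such as method (2) refine; it is not method (2) itself, and the test function and starting point are illustrative assumptions:

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Classical Steffensen iteration:
        x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)).
    The derivative f'(x_n) is replaced by the divided difference
    [x_n, x_n + f(x_n); f], so no derivative evaluations are needed."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = f(x + fx) - fx  # divided difference times f(x)
        if denom == 0.0:
            break
        x_new = x - fx * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative use: solve cos(x) - x = 0 starting from x0 = 0.5
root = steffensen(lambda t: math.cos(t) - t, 0.5)
```

Like method (2), this iteration uses only function values; the fourth-order family adds a second step per iteration to raise the convergence order.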
The rest of the paper is organized as follows: Section 2 contains the local convergence analysis of method (2). Numerical examples are presented in the concluding Section 3.

II. Local convergence for method (2)
We present the local convergence analysis of method (2). Let U(v, ρ) and Ū(v, ρ) stand for the open and closed balls in S, respectively, with center v ∈ S and of radius ρ > 0. Let α > 0 be a given parameter. It is convenient for the local convergence analysis of method (2) that follows to define some scalar functions on a positive interval. Notice that g_A(r_A) = 0 and g(0) = -1 < 0, so each of these functions has a smallest positive zero, and these zeros determine the convergence radius r. Next, using the above notation, we present the local convergence analysis of method (2).
Theorem 2.1. Let F : D ⊆ S → S be a differentiable function. Suppose that there exist x* ∈ D and positive constants satisfying the hypotheses stated above, where r is the convergence radius defined previously. Then, the sequence {x_n} generated by method (2) for x_0 ∈ U(x*, r) \ {x*} is well defined for each n = 0, 1, 2, ..., remains in U(x*, r) and converges to x*. Moreover, estimates (12) and (13) hold for each n = 0, 1, 2, ..., where the "g" functions are defined above.

Furthermore, if there exists R ≥ r such that the corresponding uniqueness hypothesis holds, then the limit point x* is the only solution of equation (1) in D ∩ Ū(x*, R).

Proof. We shall use induction to show estimates (12) and (13).

For n = 0, we can write the error x_1 - x* in a form suitable for estimation. Using (9), (17), (18), (24) and (28), we bound the intermediate quantities; then, using (5), (15), (20), (22) and (26)-(30) in the preceding estimates, we arrive at estimates (12) and (13). Using (6), we get that ||I - Q|| < 1, and it follows from (30) and the Banach lemma on invertible operators that Q is invertible. Finally, the identity for x_{n+1} - x*, in view of (8) and (9), completes the induction. It follows from the definition of r that the convergence radius r of method (2) cannot be larger than the convergence radius r_A of the second-order Newton's method (31).
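The Banach lemma on invertible operators invoked in the proof is the standard perturbation result; for reference, in the scalar or bounded-linear-operator setting it reads:

```latex
\textbf{Lemma (Banach).} Let $A$ be a bounded linear operator with
$\|I - A\| \le q < 1$. Then $A$ is invertible and
\[
  \|A^{-1}\| \le \frac{1}{1 - \|I - A\|} \le \frac{1}{1 - q}.
\]
```

In the proof it is applied with A = Q, after (30) provides the bound ||I - Q|| < 1.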
However, r may still be smaller than r_A. As already noted in [3,4], r_A is at least as large as the radius of the convergence ball given by Rheinboldt [25]. That is, our convergence ball of radius r_A can be at most three times larger than Rheinboldt's. The same value r_R was given by Traub [26].
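The comparison with Rheinboldt's radius can be made concrete. With the usual center-Lipschitz constant L_0 and Lipschitz constant L for F', the Argyros-type radius is r_A = 2/(2L_0 + L), while the Rheinboldt/Traub radius is r_R = 2/(3L); since L_0 ≤ L, one has r_A ≥ r_R, and the ratio r_A/r_R tends to 3 as L_0/L → 0. A minimal numerical check, with illustrative constants:

```python
def radius_newton(L0, L):
    """Radius r_A = 2 / (2*L0 + L) for Newton's method, using the
    center-Lipschitz constant L0 and Lipschitz constant L for F'."""
    return 2.0 / (2.0 * L0 + L)

def radius_rheinboldt(L):
    """Classical Rheinboldt/Traub radius r_R = 2 / (3*L), using only L."""
    return 2.0 / (3.0 * L)

# Since L0 <= L, r_A >= r_R; the ratio approaches 3 as L0/L -> 0.
L = 1.0
for L0 in (1.0, 0.5, 0.1, 0.01):
    print(L0, radius_newton(L0, L) / radius_rheinboldt(L))
```

When L_0 = L the two radii coincide, which is the worst case for the comparison; any strict inequality L_0 < L enlarges r_A.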

III. Numerical examples
We present numerical examples in this section. The parameters are given in Tables 1 and 3.
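A standard way to validate such examples is to estimate the computational order of convergence (COC) from successive errors. As a hedged illustration (the test problem, Newton's method on f(x) = x**2 - 2, and the starting point are assumptions for demonstration, not entries from Table 1):

```python
import math

def coc(xs, x_star):
    """Computational order of convergence from iterates xs and known root x_star:
        p ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}),  e_n = |x_n - x_star|."""
    e = [abs(x - x_star) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)
            if e[n - 1] > 0 and e[n] > 0 and e[n + 1] > 0]

# Illustrative run: Newton iterates for f(x) = x**2 - 2 from x0 = 2
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
xs = [2.0]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
orders = coc(xs, math.sqrt(2.0))  # should approach 2 for Newton's method
```

For the fourth-order method (2) the same estimator should approach 4 on the test problems, which is how the theoretical order is typically confirmed numerically.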

Example 3.3
Returning to the motivational example from the introduction of this study, we have that condition (10) can be dropped and M can be replaced accordingly.