Local Convergence for a Regula Falsi-Type Method under Weak Convergence



Introduction
In this study we are concerned with the problem of approximating a locally unique solution x* of the equation F(x) = 0, (1.1) where F : D ⊆ S → S is a nonlinear function, D is a convex subset of S, and S is R or C. Newton-like methods are used for finding solutions of (1.1). These methods are usually studied from two viewpoints: semi-local and local convergence. The semi-local convergence analysis uses information around an initial point to give conditions ensuring convergence of the iterative procedure, while the local analysis uses information around a solution to find estimates of the radii of convergence balls [1, 2]. The classical regula falsi method [3-7] is an efficient way of generating a sequence approximating x*. However, it has some disadvantages: one end-point is kept after each step once a concave or convex region of F(x) has been reached, and the asymptotic convergence rate of the iterative sequence is low in general. To overcome these problems, [8] proposed the regula falsi-type method (1.2), defined for each n = 0, 1, 2, … from an initial point x_0. The quadratic convergence of method (1.2) was shown in [6] under the assumption that F′′ is bounded. However, this is a restrictive condition in many cases. As a motivational example, one can define a function f on D = (−1, 2) through a term involving ln x²; a direct computation then shows that f′′ is unbounded on D. Therefore, the results in [9-12] cannot be applied to solve equation (1.1). In the present study, we use hypotheses only on the first derivative in our local convergence analysis. Moreover, we provide a radius of convergence and computable error bounds on the distances |x_n − x*|, which are not given in [13, 14].
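The stagnating-endpoint drawback mentioned above is easy to observe numerically. The following sketch of the classical regula falsi method (not the method studied in this paper) retains one bracket endpoint at every step when F is convex or concave on the bracket; the test function and interval are illustrative choices, not taken from the text.

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """Classical regula falsi on a bracket [a, b] with f(a) * f(b) < 0."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "the root must be bracketed"
    c = a
    for _ in range(max_iter):
        # Secant point of the current bracket.
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc   # root lies in [a, c]; endpoint a is retained
        else:
            a, fa = c, fc   # root lies in [c, b]; endpoint b is retained
    return c

# f(x) = x**2 - 2 is convex on [1, 2], so the endpoint b = 2 is never
# replaced and convergence is only linear, as noted above.
root = regula_falsi(lambda x: x**2 - 2.0, 1.0, 2.0)
```

On this convex example only the left endpoint moves, so the error is reduced by a roughly constant factor per step rather than superlinearly.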
In order to include more general methods, we shall study, instead of (1.2), the method (1.3), defined for each n = 0, 1, 2, … from an initial point x_0, where γ ∈ S − {−1} and α ∈ S are given parameters. Notice that if α = γ = 1, we obtain Chen's method (1.2), and if α = γ = 0, we obtain Steffensen's method [15]. In particular, both of these methods are special cases of method (1.3). Other choices of α and γ are possible [16-21]. The rest of the paper is organized as follows: Section 2 contains the local convergence analysis of method (1.3); numerical examples are given in the concluding Section 3.
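As an illustration of the α = γ = 0 special case named above, a minimal sketch of Steffensen's derivative-free iteration follows; the test equation and starting point are illustrative assumptions, not values from the text.

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method, the alpha = gamma = 0 special case of (1.3).

    Derivative-free: F'(x_n) is replaced by the divided difference
    (F(x_n + F(x_n)) - F(x_n)) / F(x_n), which gives local quadratic
    convergence without evaluating any derivative.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        denom = f(x + fx) - fx
        if denom == 0.0:
            raise ZeroDivisionError("divided difference vanished")
        x_new = x - fx * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative use: solve x**3 - 2 = 0 near x0 = 1.2.
root = steffensen(lambda x: x**3 - 2.0, 1.2)
```

As with all methods of this family, the starting point must be close enough to x*; the radius of such a neighbourhood is exactly what the local analysis in Section 2 quantifies.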

Local Convergence
We present the local convergence analysis of method (1.3) in this section. Let U(v, r) and Ū(v, r) denote the open and closed balls in S, respectively, with center v ∈ S and radius r > 0.
It is convenient for the local convergence analysis of method (1.3) that follows to define some scalar functions and parameters. Let L_0 > 0, L > 0, M > 0, α ≥ 0, γ ∈ R − {−1} and p ∈ R be given parameters, and define the function h_0 as in (2.1). Then, we have by (2.1) that h_0(0) = |α|M − 1 < 0 and that h_0(t) → +∞ as t → +∞. Hence, by the intermediate value theorem, the function h_0 has at least one zero in the interval (0, +∞); denote by r_0 the smallest such zero. Moreover, define the functions h_1 and g on the interval [0, r_0). One checks that h_1(0) < 0 and that h_1 takes positive values in (0, r_0), so the function h_1 has zeros in the interval (0, r_0); denote by r_1 the smallest such zero, and define the parameter r accordingly. Then, we have that L_0 r < 1 and g(r) > 0, and we conclude that the estimates used in the convergence analysis hold for each t ∈ [0, r_1).
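In practice the radius r_0 (and likewise r_1) is the smallest positive zero of a scalar function that is negative at 0 and eventually positive, so it can be computed by bracketing the first sign change and bisecting. The sketch below uses a hypothetical stand-in for h_0 that merely reproduces the properties stated above (h_0(0) = |α|M − 1 < 0, increasing, unbounded); it is not the paper's actual h_0.

```python
def smallest_zero(h, t_max=1e6, tol=1e-12):
    """Smallest zero of a continuous, eventually positive h with h(0) < 0.

    Mirrors the intermediate-value-theorem argument in the text: grow an
    upper bracket until the sign changes, then bisect.  For a monotone h
    (as in the stand-in below) the zero found is the smallest one.
    """
    assert h(0.0) < 0.0
    hi = 1.0
    while h(hi) < 0.0:
        hi *= 2.0
        if hi > t_max:
            raise ValueError("no sign change found on (0, t_max]")
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical stand-in for h_0 with h_0(0) = |alpha|*M - 1 < 0:
# h_0(t) = |alpha|*M*(1 + L_0*t) - 1, with |alpha|*M < 1.
alpha, M, L0 = 0.5, 1.5, 2.0
r0 = smallest_zero(lambda t: abs(alpha) * M * (1.0 + L0 * t) - 1.0)
```

For the stand-in above the zero can be checked by hand: |α|M(1 + L_0 t) = 1 gives t = (1 − |α|M) / (|α|M L_0) = 1/6 for the chosen values.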
Next, using the above notation we can show the local convergence result for method (1.3).

Theorem 2.1 Suppose that the hypotheses listed above hold, where r_1 is given above, and let x_0 ∈ U(x*, r_1) − {x*}. Then, the sequence {x_n} generated by method (1.3) is well defined, remains in U(x*, r_1) for each n = 0, 1, 2, …, and converges to x*. Moreover, the estimates (2.15) and (2.16) hold, where the "g" functions are defined above Theorem 2.1. Furthermore, suppose that there exists R ≥ r_1 satisfying the uniqueness condition; then x* is the only solution of equation (1.1) in D ∩ Ū(x*, R).
Proof. It follows from (2.17) and the Banach lemma on invertible functions [2, 3, 21, 25] that F′(x_0) is invertible, and the corresponding bound on |F′(x_0)⁻¹| holds. Notice that by (2.9), (2.12) and (2.14), we obtain the intermediate estimates, where we also used the definition of r_1. We can write the divided difference A_0 in a convenient form. Using (2.11), the definition of the function g, (2.13) and (2.19), we get the bound (2.20). As in (2.20), we can obtain a corresponding bound using (2.6). It follows from (2.21) that A_0 is invertible, which shows (2.15) for n = 0 and that x_1 is well defined. Using method (1.3) for n = 0, we can write x_1 − x* in terms of F(x_0) and A_0⁻¹. Then, by (2.11) and (2.18), we obtain the estimate (2.16) for n = 0. By simply replacing x_0, x_1 by x_k, x_{k+1} in the preceding estimates, we arrive at (2.15) and (2.16). Using the estimate |x_{k+1} − x*| < |x_k − x*| < r_1, we deduce that x_{k+1} ∈ U(x*, r_1) and that lim_{k→∞} x_k = x*. To show uniqueness, let y* ∈ D ∩ Ū(x*, R) with F(y*) = 0, and set Q = ∫₀¹ F′(y* + θ(x* − y*)) dθ. Using (2.6), we get (2.23), from which it follows that Q is invertible. Finally, from the identity 0 = F(x*) − F(y*) = Q(x* − y*), we deduce that x* = y*. □
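The qualitative content of Theorem 2.1, that iterates started inside the convergence ball stay in it and the errors |x_n − x*| decrease to zero, can be checked empirically. The sketch below uses the Steffensen special case (α = γ = 0) on an illustrative test equation; the function, root and starting point are assumptions, not values from the text.

```python
def error_history(f, root, x0, steps=4):
    """Record |x_n - x*| for a few Steffensen steps started at x0.

    If x0 lies inside the convergence ball, Theorem 2.1 predicts that
    each error is strictly smaller than the previous one.
    """
    errors = [abs(x0 - root)]
    x = x0
    for _ in range(steps):
        fx = f(x)
        denom = f(x + fx) - fx
        if fx == 0.0 or denom == 0.0:
            break
        x = x - fx * fx / denom
        errors.append(abs(x - root))
    return errors

# Illustrative check: f(x) = x**2 - 2 with x* = sqrt(2), started at 1.5.
errs = error_history(lambda x: x * x - 2.0, 2.0 ** 0.5, 1.5)
monotone = all(e1 < e0 for e0, e1 in zip(errs, errs[1:]) if e0 > 0)
```

For this starting point the errors shrink at every step, consistent with the estimate |x_{k+1} − x*| < |x_k − x*| < r_1 used in the proof.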

Remark 2.2
1. In view of (2.10) and the estimate