Convergence Analysis of an Accelerated Iteration for Monotone Generalized α-Nonexpansive Mappings with a Partial Order

In this paper, we introduce a new accelerated iteration for finding a fixed point of a monotone generalized α-nonexpansive mapping in an ordered Banach space. We establish some weak and strong convergence theorems for fixed points of monotone generalized α-nonexpansive mappings in a uniformly convex Banach space with a partial order. Further, we provide a numerical example to illustrate the convergence behavior and effectiveness of the proposed iteration process.


Introduction
Let (X, ≤) be an ordered Banach space endowed with the partial order ≤ and let C be a nonempty closed convex subset of X. A mapping T : C → C is called monotone if Tx ≤ Ty whenever x ≤ y for all x, y ∈ C. Moreover, T is said to be as follows: (1) monotone nonexpansive if T is monotone and ‖Tx − Ty‖ ≤ ‖x − y‖ for all x ≤ y.
(3) Monotone α-nonexpansive if T is monotone and there exists a constant α < 1 such that ‖Tx − Ty‖² ≤ α‖Tx − y‖² + α‖Ty − x‖² + (1 − 2α)‖x − y‖² for all x ≤ y. This class is an interesting generalization of nonexpansive mappings because the condition is weaker than nonexpansiveness and stronger than quasi-nonexpansiveness [1].
In 1965, Browder [3] proved that every nonexpansive self-mapping of a closed convex bounded subset of a uniformly convex Banach space has a fixed point. Since then, a number of iteration methods have been developed to approximate fixed points of nonexpansive mappings and to solve related problems; see [4–16] and the references therein. Among these algorithms, the Mann iteration is a fundamental method for approximating fixed points of nonexpansive mappings; it is defined by

x_{n+1} = (1 − α_n)x_n + α_n T x_n, (6)

where α_n ∈ (0, 1) and T is a nonexpansive mapping. The other important iteration widely used to approximate fixed points of nonexpansive mappings is the Ishikawa iteration:

y_n = (1 − β_n)x_n + β_n T x_n,
x_{n+1} = (1 − α_n)x_n + α_n T y_n, (7)

where α_n, β_n ∈ (0, 1). Note that the Ishikawa iteration (7) improves on the rate of convergence of the Mann iteration for an increasing function, due to Ishikawa [17] and Rhoades [18]. In 2007, Agarwal et al. [19] modified (7) and considered the following two-step iteration: for an arbitrary x_1 ∈ C, the sequence {x_n} is defined by

y_n = (1 − β_n)x_n + β_n T x_n,
x_{n+1} = (1 − α_n)T x_n + α_n T y_n, (8)

where α_n, β_n ∈ (0, 1) and T is a nearly asymptotically nonexpansive mapping. They claimed that this iteration converges faster than the Mann iteration for some contractions. Noor [20] modified (7) and further studied a three-step iteration for solving general variational inequalities: for an arbitrary x_1 ∈ C, the sequence {x_n} is defined by

z_n = (1 − γ_n)x_n + γ_n T x_n,
y_n = (1 − β_n)x_n + β_n T z_n,
x_{n+1} = (1 − α_n)x_n + α_n T y_n, (9)

where α_n, β_n, γ_n ∈ (0, 1) and T is a strongly monotone mapping arising in variational inequalities. Very recently, Abbas–Nazir [21] and Thakur et al. [22] modified the Noor iteration (9) and introduced new faster iteration processes, for solving constrained minimization and feasibility problems and for finding fixed points of Suzuki's generalized nonexpansive mappings, respectively.
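For concreteness, the classical schemes (6)-(9) can be sketched in a few lines of code. This is a minimal illustrative implementation, not code from the paper: the mapping T(x) = x/2 (monotone nonexpansive on [0, 1] with fixed point 0) and the constant parameter choices are stand-ins.

```python
# Minimal sketches of the Mann (6), Ishikawa (7), Agarwal (8), and Noor (9)
# schemes for a generic mapping T; the parameter sequences are passed as
# functions of the step index n.

def mann(T, x1, alpha, n_iter):
    # x_{n+1} = (1 - alpha_n) x_n + alpha_n T(x_n)
    x = x1
    for n in range(1, n_iter + 1):
        a = alpha(n)
        x = (1 - a) * x + a * T(x)
    return x

def ishikawa(T, x1, alpha, beta, n_iter):
    # y_n = (1 - beta_n) x_n + beta_n T(x_n)
    # x_{n+1} = (1 - alpha_n) x_n + alpha_n T(y_n)
    x = x1
    for n in range(1, n_iter + 1):
        a, b = alpha(n), beta(n)
        y = (1 - b) * x + b * T(x)
        x = (1 - a) * x + a * T(y)
    return x

def agarwal(T, x1, alpha, beta, n_iter):
    # y_n = (1 - beta_n) x_n + beta_n T(x_n)
    # x_{n+1} = (1 - alpha_n) T(x_n) + alpha_n T(y_n)
    x = x1
    for n in range(1, n_iter + 1):
        a, b = alpha(n), beta(n)
        y = (1 - b) * x + b * T(x)
        x = (1 - a) * T(x) + a * T(y)
    return x

def noor(T, x1, alpha, beta, gamma, n_iter):
    # z_n = (1 - gamma_n) x_n + gamma_n T(x_n)
    # y_n = (1 - beta_n) x_n + beta_n T(z_n)
    # x_{n+1} = (1 - alpha_n) x_n + alpha_n T(y_n)
    x = x1
    for n in range(1, n_iter + 1):
        a, b, c = alpha(n), beta(n), gamma(n)
        z = (1 - c) * x + c * T(x)
        y = (1 - b) * x + b * T(z)
        x = (1 - a) * x + a * T(y)
    return x

# Stand-in example: T(x) = x/2, fixed point 0.
T = lambda x: x / 2
half = lambda n: 0.5
print(mann(T, 0.95, half, 100))  # approaches the fixed point 0
```

Each scheme is a convex combination of the current iterate and images under T, which is what makes the order-preservation arguments for monotone mappings go through.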
On the other hand, in 2004, Ran–Reurings [23] first proved a fixed point theorem in a partially ordered metric space, with applications to matrix equations. They opened a new field by requiring the contractive (or Lipschitz) condition only for comparable elements of the partially ordered metric space, an approach that has been successfully applied not only to the existence of fixed points but also to positive or negative solutions of ordinary differential equations [24].
In 2015, Bin Dehaish–Khamsi [25] applied the Mann iteration (6) to monotone nonexpansive mappings in a Banach space endowed with a partial order. Moreover, they proved that the sequence {x_n} generated by (6) converges weakly to x* ∈ F(T), where x* and x_1 are comparable.
In 2016, Song et al. [26] further extended the Mann iteration (6) to monotone α-nonexpansive mappings and obtained some weak and strong convergence theorems in an ordered Banach space, complementing the fixed point results for α-nonexpansive mappings in Aoyama–Kohsaka [27]. However, the monotonicity condition on comparable elements is, in general, a weaker assumption. In particular, continuity may fail, which not only reduces the efficiency of numerical approaches but also increases the difficulty of the convergence analysis. This is also the main reason why the Mann iteration has become popular for approximating fixed points of monotone-type mappings [2, 25, 26]. It is therefore important and interesting to construct an accelerated iterative method for the fixed point problem of this class of monotone-type mappings.
Inspired and motivated by the research going on in this area, we modify the iteration processes (6), (8), and (9) for monotone generalized α-nonexpansive mappings and introduce a new accelerated three-step iteration, denoted by (10): for an arbitrary x_1 ∈ C, the sequence {x_n} is generated with parameters α_n, β_n, γ_n ∈ (0, 1). Our purpose is not only to extend the Mann iteration of Bin Dehaish–Khamsi [25] and Song et al. [26] to an accelerated iteration for monotone generalized α-nonexpansive mappings, but also to establish some weak and strong convergence theorems for fixed points of such mappings in a uniformly convex Banach space with a partial order. Furthermore, we provide a numerical example to illustrate the convergence behavior and effectiveness of the proposed iteration. The method and results presented in this paper extend and improve the corresponding results of [2, 17, 19, 20, 25, 26] and others.

Preliminaries
Recall that a Banach space X with norm ‖·‖ is called uniformly convex if, for each ε ∈ (0, 2], there exists a constant δ > 0 such that ‖x‖ ≤ 1, ‖y‖ ≤ 1, and ‖x − y‖ ≥ ε imply ‖(x + y)/2‖ ≤ 1 − δ. A Banach space X is said to satisfy the Opial property [5] if, for each weakly convergent sequence {x_n} in X with weak limit x,

liminf_{n→∞} ‖x_n − x‖ < liminf_{n→∞} ‖x_n − y‖

holds for all y ∈ X with y ≠ x. Let C be a nonempty subset of a Banach space X and let {x_n} be a bounded sequence in X. For each x ∈ X, we define the following: (i) asymptotic radius of {x_n} at x by r(x, {x_n}) := limsup_{n→∞} ‖x_n − x‖; (ii) asymptotic radius of {x_n} relative to C by r(C, {x_n}) := inf{r(x, {x_n}) : x ∈ C}.

Main Results
Lemma 7. Let C be a nonempty closed convex subset of an ordered Banach space (X, ≤) and let T : C → C be a monotone mapping. Assume that the sequence {x_n} is defined by the iteration (10) and x_1 ≤ T x_1. Then (ii) {x_n} has at most one weak cluster point p ∈ C. Moreover, x_n ≤ p for all n ≥ 1 provided {x_n} converges weakly to a point p ∈ C.
from the convexity property defined on order intervals. This allows us to focus only on the proof of x_n ≤ T x_n for all n ≥ 1. Since x_1 ≤ T x_1, we may suppose inductively that x_n ≤ T x_n holds for some n ≥ 2. From (10) and the monotonicity of T, we obtain x_n ≤ z_n ≤ T x_n ≤ T z_n. Using (10) again, we obtain the inequality for n + 1. Then the sequence {x_n} generated by (10) converges weakly to a fixed point p ∈ F(T).
Numerical Results

To illustrate the convergence of the proposed algorithm, we provide some numerical results for Example 1 and a comparison with previously known iterations.
Firstly, we show the convergence behavior of scheme (10) with different initial points. To do this, we take α_n = n/(n + 1), β_n = 1/(n + 5), γ_n = n/√((2n + 9)³) and use ‖x_n − x*‖ < 10⁻⁶ as the stopping criterion. For the initial points x_1 = 0.05, 0.50, 0.75, 0.95, the convergence behavior of scheme (10) is displayed in Figure 1. Figure 1 shows that the choice of x_1 has little effect on convergence and that scheme (10) performs well in terms of strong convergence and operational reliability. Moreover, the numerical results show that increasing the initial point x_1 has only a slight effect on the speed of convergence; that is, the sequence {x_n} generated by (10) converges somewhat faster to a fixed point of Example 1 when x_1 is increased.
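The experimental protocol just described can be sketched as follows. This is only a hedged illustration under stand-in assumptions: since Example 1 and the display of scheme (10) are given elsewhere in the paper, the three-step Noor scheme (9) stands in for scheme (10) and T(x) = x/2 (fixed point x* = 0) stands in for Example 1; the parameter sequences and the stopping rule ‖x_n − x*‖ < 10⁻⁶ follow this section.

```python
import math

# Stand-ins (hypothetical): T(x) = x/2 replaces Example 1, and the Noor
# scheme (9) replaces the proposed scheme (10).
T = lambda x: x / 2
x_star = 0.0  # fixed point of the stand-in mapping

# Parameter sequences from this section.
alpha = lambda n: n / (n + 1)
beta = lambda n: 1 / (n + 5)
gamma = lambda n: n / math.sqrt((2 * n + 9) ** 3)

def noor_step(n, x):
    # One pass of the three-step scheme (9).
    z = (1 - gamma(n)) * x + gamma(n) * T(x)
    y = (1 - beta(n)) * x + beta(n) * T(z)
    return (1 - alpha(n)) * x + alpha(n) * T(y)

def run_until_tol(step, x1, tol=1e-6, max_iter=10_000):
    """Iterate until the stopping criterion |x_n - x*| < tol is met."""
    x = x1
    for n in range(1, max_iter + 1):
        x = step(n, x)
        if abs(x - x_star) < tol:
            return n, x
    return max_iter, x

for x1 in (0.05, 0.50, 0.75, 0.95):
    iters, _ = run_until_tol(noor_step, x1)
    print(f"x1 = {x1:.2f}: tolerance reached after {iters} iterations")
```

The loop over initial points mirrors the setup behind Figure 1: each run records how many iterations the stopping rule requires for a given x_1.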
Secondly, we further examine the stability of scheme (10) under different iteration parameters. To this end, we take α_n, β_n, γ_n ∈ (0, 1) in the following manner.
Table 1 shows that the different parameters α_n, β_n, γ_n have an effect on the iteration and that scheme (10) performs well in terms of strong convergence and stability. Moreover, for the same initial point x_1 = 0.2, the numerical results imply that the sequence {x_n} generated by (10) converges faster to a fixed point of Example 1 when one of the parameters is decreased or another is increased. In addition, the settings (5) and (6) imply that one of the parameters has almost no effect on the convergence and the iteration.
Finally, we compare the iteration counts of the newly proposed method with those of previously known methods. To make the comparison clearer, we use ‖x_n − x*‖ < 10⁻¹⁰ as the stopping criterion. For the initial points x_1 = 0.05, 0.20, 0.50, 0.75, 0.95, the iteration counts of scheme (10) and the known methods are listed in Table 2 for the parameter settings (1), (3), and (5).
Table 2 shows that the different parameters α_n, β_n, γ_n have little effect on the iteration and that scheme (10) performs well in terms of strong convergence and effectiveness. Moreover, for parameter setting (5), the numerical results imply that the computing costs of the Mann, Ishikawa, and Noor iterations are too heavy (a "+" entry means that the number of iterations exceeds 1000). In contrast, our scheme (10) is advantageous over a wide range of parameters. In addition, scheme (10) requires fewer iterations for convergence than Agarwal's scheme when the parameters are decreased. The computations were performed with Matlab R2016b on a desktop PC with an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz and 8.00 GB RAM.
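A comparison of iteration counts in the spirit of Table 2 can be sketched as follows, again under stand-in assumptions: T(x) = x/2 replaces Example 1, the parameter sequences are illustrative, only the classical schemes are compared (scheme (10) is not reproduced here), and a cap of 1000 plays the role of the "+" entries.

```python
T = lambda x: x / 2            # stand-in for Example 1 (hypothetical)
a = lambda n: n / (n + 1)      # illustrative parameter sequences
b = lambda n: 1 / (n + 5)

def mann_step(n, x):
    return (1 - a(n)) * x + a(n) * T(x)

def ishikawa_step(n, x):
    y = (1 - b(n)) * x + b(n) * T(x)
    return (1 - a(n)) * x + a(n) * T(y)

def agarwal_step(n, x):
    y = (1 - b(n)) * x + b(n) * T(x)
    return (1 - a(n)) * T(x) + a(n) * T(y)

def count_iters(step, x1, tol=1e-10, cap=1000):
    # Number of iterations needed to meet |x_n - x*| < tol with x* = 0;
    # returning cap mirrors the '+' convention (more than 1000 iterations).
    x = x1
    for n in range(1, cap + 1):
        x = step(n, x)
        if abs(x) < tol:
            return n
    return cap

for name, step in [("Mann", mann_step), ("Ishikawa", ishikawa_step),
                   ("Agarwal", agarwal_step)]:
    print(f"{name}: {count_iters(step, 0.95)} iterations")
```

For this stand-in mapping the Agarwal-type scheme needs the fewest iterations, which is consistent with the ordering of the classical schemes reported in the text.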