Introduction

In this paper, \(\mathbb {N}\) denotes the set of all positive integers, while F(T) denotes the set of all fixed points of T, i.e., \(F(T)= \{ x \in C : Tx = x\}\).

Let C be a nonempty subset of a normed space X. A mapping \(T:C \rightarrow C\) is said to be

  1. (i)

    nonexpansive, if \(\Vert Tx - Ty \Vert \le \Vert x - y \Vert\), for all \(x, y \in C\),

  2. (ii)

    quasi-nonexpansive, if \(\Vert T x- p \Vert \le \Vert x - p\Vert\), for all \(x\in C\) and \(p\in F(T)\).

Many nonlinear equations are naturally formulated as fixed point problems,

$$\begin{aligned} x =Tx, \end{aligned}$$
(1.1)

where T, the fixed point mapping, may be nonlinear. A solution \(x^{*}\) of problem (1.1) is called a fixed point of the mapping T. Consider the fixed point iteration given by

$$\begin{aligned} x_{n+1}= Tx_{n}, \forall n \in \mathbb{N}. \end{aligned}$$
(1.2)

The iterative method (1.2) is also known as the Picard iteration or the method of successive substitution. By the Banach contraction mapping theorem, the Picard iteration converges to the unique fixed point of T whenever T is a contraction, but it may fail to approximate a fixed point of a nonexpansive mapping, even when the existence of a fixed point of T is guaranteed.

Example 1.1

Consider the self mapping T on [0, 1] defined by \(Tx = 1-x\) for \(0 \le x\le 1\). Then T is nonexpansive with unique fixed point at \(x= \frac{1}{2}\). If we choose a starting value \(x_{1}=a \ne \frac{1}{2}\), then successive iterations of T yield the oscillating sequence \(\{1-a, a, 1-a, \ldots \}\), which does not converge to \(\frac{1}{2}\).
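
The oscillation can be observed with a minimal numerical sketch of the Picard iteration (1.2) (not part of the original argument; the starting value \(a = 0.2\) is an illustrative choice):

```python
# Picard iteration (1.2) for the nonexpansive map T(x) = 1 - x of Example 1.1.
# The iterates oscillate between a and 1 - a and never approach the fixed point 1/2.

def T(x):
    return 1.0 - x

a = 0.2          # any starting value different from 1/2
x = a
for n in range(6):
    x = T(x)     # Picard step: x_{n+1} = T(x_n)
    print(n + 1, x)
# prints 0.8, 0.2, 0.8, 0.2, ...
```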

Thus, when a fixed point of a nonexpansive mapping exists, other approximation techniques are needed to approximate it. Over the last fifty years, numerous researchers have been attracted to this direction and have developed and investigated iterative processes to approximate fixed points not only of nonexpansive mappings, but also of wider classes of generalized nonexpansive mappings (see, e.g., Agarwal et al. [3], Ishikawa [9], Krasnosel’skiǐ [12], Mann [18], Noor [19], Schaefer [23]), and have compared which one is faster.

Sahu [21] introduced the normal S-iteration process, whose rate of convergence is similar to that of the Picard iteration process and faster than that of other fixed point iteration processes (see [21, Theorem 3.6]).

\(\mathrm{(NS)}\)   Normal S-iteration process (see Sahu [21]), defined as follows:

Let C be a convex subset of a normed space X and T a nonlinear mapping of C into itself. For each \(x_{1}\in C\), the sequence \(\{ x_{n}\}\) in C is defined by

$$\begin{aligned} \left\{ \begin{array}{l} x_{n+1} = Ty_{n} \\ y_{n} = ( 1- \alpha _{n}) x_{n} + \alpha _{n} Tx_{n},\quad n \in \mathbb {N}, \end{array} \right. \end{aligned}$$
(1.3)

where \(\{\alpha _{n}\}\) is a real sequence in (0, 1).
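
As an illustration, here is a minimal sketch of the normal S-iteration (1.3); the mapping T (the map of Example 1.1), the starting point, and the constant choice \(\alpha _{n}=\frac{1}{2}\) are illustrative assumptions, not taken from [21]:

```python
# Normal S-iteration (1.3): y_n = (1 - a_n) x_n + a_n T(x_n), x_{n+1} = T(y_n).
# For the nonexpansive map T(x) = 1 - x it converges to the fixed point 1/2,
# whereas the plain Picard iteration oscillates.

def normal_s_iteration(T, x1, alphas, steps=20):
    x = x1
    for n in range(steps):
        a = alphas(n)
        y = (1.0 - a) * x + a * T(x)   # averaged step y_n
        x = T(y)                       # Picard step applied to y_n
    return x

print(normal_s_iteration(lambda x: 1.0 - x, 0.2, lambda n: 0.5))  # -> 0.5
```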

This leads to the following natural question.

Question 1.1

Does there exist an iteration process whose rate of convergence is faster than that of the normal S-iteration process for contraction mappings?

The question has been resolved in the affirmative by Abbas et al. [2], Kadioglu and Yildirim [11, Theorem 5], and Thakur et al. [26, Theorem 2.3], who developed new iteration processes that approximate the fixed point faster than the normal S-iteration process. The following iteration process was developed by Kadioglu and Yildirim [11] to approximate fixed points of nonexpansive mappings; they established some strong and weak convergence theorems in uniformly convex Banach spaces.

\(\mathrm{(PNS)}\)   Picard normal S-iteration process (see Kadioglu and Yildirim [11]), defined as follows: with C, X and T as in (NS), for each \(x_{1}\in C\), the sequence \(\{ x_{n} \}\) in C is defined by

$$\begin{aligned} \left\{ \begin{array}{l} x_{n+1} = Ty_{n} \\ y_{n} = (1-\alpha _{n}) z_{n} + \alpha _{n} Tz_{n} \\ z_{n} = ( 1- \beta _{n}) x_{n} + \beta _{n} Tx_{n},\quad n \in \mathbb {N}, \end{array} \right. \end{aligned}$$
(1.4)

where \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are real sequences in (0, 1).
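
The following is a minimal sketch of the Picard normal S-iteration (1.4); the contraction \(Tx=\cos x\), the starting point, and the constant parameter values are illustrative choices only:

```python
# Picard normal S-iteration (1.4) applied to the contraction T(x) = cos(x) on [0, 1].
import math

def pns_iteration(T, x1, alpha, beta, steps=15):
    x = x1
    for n in range(steps):
        z = (1.0 - beta) * x + beta * T(x)    # z_n
        y = (1.0 - alpha) * z + alpha * T(z)  # y_n
        x = T(y)                              # x_{n+1} = T(y_n)
    return x

# The unique fixed point of cos is approximately 0.739085.
print(pns_iteration(math.cos, 1.0, alpha=0.5, beta=0.5))
```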

Remark 1.1

If \(\beta _{n}=0\), the process (1.4) reduces to the normal S-iteration process (1.3), and if \(\alpha _{n}= \beta _{n}=0\), it reduces to the Picard iteration process (1.2).

The purpose of this paper is to establish strong and \(\Delta\)-convergence theorems for a new iteration process generated by generalized nonexpansive mappings in uniformly convex hyperbolic spaces. The theorems presented in this paper generalize the corresponding theorems of Kadioglu and Yildirim [11] for uniformly convex normed spaces and of Abbas et al. [1] for CAT(0) spaces, as well as many others in this direction (see Itoh [8], Kim et al. [14], Sahu [21], etc.).

Preliminaries

Let (X, d) be a metric space and C be a nonempty subset of X. Suzuki [24] introduced a class of single valued mappings, called Suzuki-generalized nonexpansive mappings (or mappings satisfying condition (C)), defined by the condition

$$\begin{aligned} \frac{1}{2} d( x, T x) \le d( x, y )\; \Longrightarrow \; d( Tx, Ty) \le d (x, y ), \end{aligned}$$

which is weaker than nonexpansiveness and stronger than quasi-nonexpansiveness. The following examples illustrate this fact.

Example 2.1

[24] Define a mapping T on [0, 3] by

$$\begin{aligned} Tx = \left\{ \begin{array}{ll} 0, \quad {\mathrm{if}}\; x \ne 3, \\ 1, \quad {\mathrm{if}}\; x = 3. \end{array} \right. \end{aligned}$$

Then T satisfies condition (C), but T is not nonexpansive.

Example 2.2

[24] Define a mapping T on [0, 3] by

$$\begin{aligned} Tx = \left\{ \begin{array}{ll} 0, \quad {\mathrm{if}}\; x \ne 3, \\ 2, \quad {\mathrm{if}}\; x = 3. \end{array} \right. \end{aligned}$$

Then \(F(T)= \{0\} \ne \varnothing\) and T is quasi-nonexpansive, but T does not satisfy condition (C).
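
The distinction drawn in Examples 2.1 and 2.2 can be probed numerically; the following sketch is an informal grid check (not a proof) of nonexpansiveness, condition (C), and quasi-nonexpansiveness for both mappings:

```python
# Informal grid check (not a proof) of Examples 2.1 and 2.2 on [0, 3].
def T1(x): return 1.0 if x == 3.0 else 0.0   # Example 2.1
def T2(x): return 2.0 if x == 3.0 else 0.0   # Example 2.2

grid = [i / 100 for i in range(301)]         # includes the point x = 3 exactly

def nonexpansive(T):
    return all(abs(T(x) - T(y)) <= abs(x - y) + 1e-12 for x in grid for y in grid)

def condition_C(T):
    # (1/2) d(x, Tx) <= d(x, y)  implies  d(Tx, Ty) <= d(x, y)
    return all(abs(T(x) - T(y)) <= abs(x - y) + 1e-12
               for x in grid for y in grid
               if 0.5 * abs(x - T(x)) <= abs(x - y))

def quasi_nonexpansive(T, p):                # p is a fixed point of T
    return all(abs(T(x) - p) <= abs(x - p) + 1e-12 for x in grid)

print(nonexpansive(T1), condition_C(T1), quasi_nonexpansive(T1, 0.0))  # False True True
print(nonexpansive(T2), condition_C(T2), quasi_nonexpansive(T2, 0.0))  # False False True
```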

In [10], Karapinar and Tas introduced some new definitions, which are modifications of Suzuki's generalized nonexpansive mappings (condition (C)), as follows.

Definition 2.1

Let C be a nonempty subset of a metric space X. The mapping \(T:C \rightarrow C\) is said to be

  1. (i)

    Suzuki-Ciric mapping (SCC) [10] if

    $$\begin{aligned} & \frac{1}{2} d( x, T x)\le d( x, y) \Longrightarrow d( Tx , Ty) \le M(x, y)\\ & \text {where} ~~ M(x, y)= \max \{d(x, y) , d( x, Tx) , d( y, Ty), d( x, Ty), d( y, Tx) \} \end{aligned}$$

    for all \(x, y \in C;\)

  2. (ii)

    Suzuki-KC mapping (SKC) if

    $$\begin{aligned} \frac{1}{2} d( x, T x)\le & {} d( x, y)\; \Longrightarrow \; d( Tx , Ty) \le N(x, y) \\ \text {where}~~ N( x, y)= & {} \max \bigg \{ d(x, y) ,\frac{ d( x, Tx) + d( y, Ty)}{2}, \frac{d( x, Ty)+ d( y, Tx)}{2} \bigg \} \end{aligned}$$

    for all \(x, y \in C;\)

  3. (iii)

    Kannan Suzuki mapping (KSC) if

    $$\begin{aligned} \frac{1}{2} d( x, T x)\le & {} d( x, y) \; \Longrightarrow \; d( Tx , Ty) \le \frac{d( x, Tx) + d( y, Ty)}{2} \end{aligned}$$

    for all \(x, y \in C;\)

  4. (iv)

    Chatterjea–Suzuki mappings (CSC) if

    $$\begin{aligned} \frac{1}{2} d( x, T x)\le & {} d( x, y) \; \Longrightarrow \; d( Tx , Ty) \le \frac{d( y, Tx) + d( x, Ty)}{2} \end{aligned}$$

    for all \(x, y \in C\).

Theorem 2.1

[10] Let T be a mapping on a closed subset C of a metric space X satisfying the condition SKC. Then \(d( x, Ty) \le 5 d( Tx, x) + d( x, y)\) holds for all \(x, y \in C\).

Remark 2.1

Theorem 2.1 holds if one replaces condition SKC by one of the conditions KSC, SCC, and CSC.

Recently, García-Falset et al. [7] introduced two generalizations of nonexpansive mappings which in turn include Suzuki generalized nonexpansive mappings contained in [24].

Definition 2.2

Let T be a mapping defined on a subset C of a metric space X and \(\mu \ge 1\). Then T is said to satisfy condition \((E_\mu )\) if, for all \(x, y \in C\),

$$\begin{aligned} d(x, Ty) \le \mu d(x, Tx) + d(x, y). \end{aligned}$$

T is said to satisfy condition (E) whenever T satisfies condition \((E_\mu )\) for some \(\mu \ge 1\).

Remark 2.2

If T satisfies one of the conditions SKC, KSC, SCC, and CSC, then T satisfies condition \((E_\mu )\) for \(\mu =5\).

Definition 2.3

Let T be a mapping defined on a subset C of a metric space X and \(\lambda \in (0, 1)\). Then T is said to satisfy the condition \((C_\lambda )\) if for all \(x, y \in C\)

$$\begin{aligned} \lambda d(x, Tx) \le d(x, y)\quad \Longrightarrow \quad d(Tx, Ty) \le d(x, y). \end{aligned}$$

For \(0< \lambda _1< \lambda _2 < 1\), the condition \((C_{\lambda _1})\) implies the condition \((C_{\lambda _2})\).

The following example shows that the class of mappings satisfying conditions (E) and \((C_\lambda )\) for some \(\lambda \in (0, 1)\) is larger than the class of mappings satisfying the condition (C).

Example 2.3

[7] For a given \(\lambda \in (0, 1)\), define a mapping T on [0, 1] by

$$\begin{aligned} Tx = \left\{ \begin{array}{ll} \frac{x}{2}, ~~~\quad \mathrm{if}\quad x \ne 1, \\ \frac{1+\lambda }{2+ \lambda }, \quad \mathrm{if}\quad x = 1. \end{array} \right. \end{aligned}$$

The mapping T satisfies the condition \((C_\lambda )\), but it fails the condition \((C_{\lambda _1})\) whenever \(0< \lambda _1 < \lambda\). Moreover, T satisfies the condition \((E_\mu )\) for \(\mu = \frac{2 + \lambda }{2}\).
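
The stated \((E_\mu )\)-inequality can be probed numerically; the sketch below is an informal grid check (not a proof) with the illustrative choice \(\lambda = \frac{1}{2}\):

```python
# Informal grid check (not a proof) of condition (E_mu) for the map of Example 2.3,
# with lam = 0.5 and mu = (2 + lam) / 2.
lam = 0.5
mu = (2.0 + lam) / 2.0

def T(x):
    return (1.0 + lam) / (2.0 + lam) if x == 1.0 else x / 2.0

grid = [i / 200 for i in range(201)]   # grid over [0, 1], includes x = 1 exactly

violations = [(x, y) for x in grid for y in grid
              if abs(x - T(y)) > mu * abs(x - T(x)) + abs(x - y) + 1e-12]
print(violations)                      # expected: [] (no violations on the grid)
```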

Throughout this paper, we work in the setting of hyperbolic spaces introduced by Kohlenbach [15].

A hyperbolic space (X, d, W) is a metric space (X, d) together with a convexity mapping \(W: X^{2} \times [0,1] \rightarrow X\) satisfying

  • \((W_{1})\) \(d( u, W(x, y, \alpha )) \le (1-\alpha ) d( u, x) + \alpha d( u, y);\)

  • \((W_{2})\) \(d(W(x, y, \alpha ), W(x, y,\beta )) = \vert \alpha - \beta \vert d( x, y);\)

  • \((W_{3})\) \(W(x, y, \alpha ) = W( y, x, 1-\alpha );\)

  • \((W_{4})\) \(d( W(x, z, \alpha ), W( y, w, \alpha ))\le (1-\alpha ) d( x, y) + \alpha d( z, w),\)

  for all \(x, y, z, w \in X\) and \(\alpha , \beta \in [0,1]\).

If a triple (X, d, W) satisfies only \((W_{1})\), then it is a convex metric space in the sense of Takahashi [25]. The concept of hyperbolic spaces in [15] is more restrictive than the hyperbolic type introduced by Goebel and Kirk [5], since \((W_{1})\) and \((W_{2})\) together are equivalent to (X, d, W) being a space of hyperbolic type in [5], but it is slightly more general than the hyperbolic space defined by Reich and Shafrir [20] (see [15]). This class of metric spaces in [15] covers all normed linear spaces, \(\mathbb {R}\)-trees in the sense of Tits, the Hilbert ball with the hyperbolic metric (see [6]), Cartesian products of Hilbert balls, Hadamard manifolds (see [20]), and CAT(0) spaces in the sense of Gromov (see [4]). A thorough discussion of hyperbolic spaces and a detailed treatment of examples can be found in [15] (see also [5, 6, 20]).

If \(x, y \in X\) and \(\lambda \in [0,1],\) then we use the notation \((1-\lambda )x \oplus \lambda y\) for \(W(x, y, \lambda )\). The following identities hold even in the more general setting of convex metric spaces [25]: for all \(x, y\in X\) and \(\lambda \in [0,1],\)

$$\begin{aligned} d( x, (1-\lambda )x \, \oplus \, \lambda y) = \lambda d( x, y) \quad \mathrm{and}\quad d( y, (1-\lambda )x \oplus \lambda y)= ( 1-\lambda ) d( x, y). \end{aligned}$$

As consequence,

$$\begin{aligned} 1x \oplus 0 y = x,\quad 0 x\oplus 1 y = y \end{aligned}$$

and

$$\begin{aligned} (1-\lambda )x\oplus \lambda x = \lambda x \oplus (1-\lambda )x = x. \end{aligned}$$
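
In any normed space one may take \(W(x, y, \lambda ) = (1-\lambda )x + \lambda y\). The following sketch (in \(\mathbb {R}^{2}\), an illustrative choice) checks the two identities above numerically:

```python
# In a normed space one can take W(x, y, lam) = (1 - lam) x + lam y.
# This checks d(x, (1-lam)x (+) lam y) = lam d(x, y) and
# d(y, (1-lam)x (+) lam y) = (1 - lam) d(x, y) numerically in R^2.
import math

def d(u, v):
    return math.dist(u, v)

def W(x, y, lam):
    return tuple((1.0 - lam) * xi + lam * yi for xi, yi in zip(x, y))

x, y, lam = (0.0, 1.0), (3.0, 5.0), 0.25
m = W(x, y, lam)
print(abs(d(x, m) - lam * d(x, y)) < 1e-12)          # True
print(abs(d(y, m) - (1.0 - lam) * d(x, y)) < 1e-12)  # True
```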

A hyperbolic space (X, d, W) is uniformly convex [16] if for any \(r > 0\) and \(\varepsilon \in (0, 2],\) there exists \(\delta \in (0, 1]\) such that for all \(a, x, y \in X,\)

$$\begin{aligned} d\bigg (\frac{1}{2} x \oplus \frac{1}{2} y, a \bigg )\le & {} ( 1- \delta ) r, \end{aligned}$$

provided \(d(x, a) \le r,~ d(y, a) \le r\) and \(d(x, y) \ge \varepsilon r.\)

A mapping \(\eta :(0, \infty ) \times (0, 2] \rightarrow (0,1]\) providing such a \(\delta = \eta (r, \varepsilon )\) for given \(r>0\) and \(\varepsilon \in (0, 2]\) is called a modulus of uniform convexity. The function \(\eta\) is said to be monotone if it decreases with r (for fixed \(\varepsilon\)), that is, \(\eta (r_{2} , \varepsilon ) \le \eta (r_{1}, \varepsilon )\) for all \(r_{2} \ge r_{1} >0.\)

In [16], Leuştean proved that CAT(0) spaces are uniformly convex hyperbolic spaces with modulus of uniform convexity \(\eta (r, \varepsilon ) = \frac{\varepsilon ^{2}}{8}\), which is quadratic in \(\varepsilon\). Thus, the class of uniformly convex hyperbolic spaces is a natural generalization of both uniformly convex Banach spaces and CAT(0) spaces.

Now, we give the concept of \(\Delta\)-convergence and some of its basic properties.

Let C be a nonempty subset of a metric space (X, d) and \(\{x_{n}\}\) be any bounded sequence in X, and let diam(C) denote the diameter of C. Consider the continuous functional \(r_{a}(\cdot , \{x_{n}\}): X \rightarrow \mathbb {R}^{+}\) defined by

$$\begin{aligned} r_{a}( x, \{x_{n}\}) = \limsup _{n \rightarrow \infty } d( x_{n} , x), \quad x \in X. \end{aligned}$$

The infimum of \(r_{a} (\cdot , \{x_{n}\})\) over C is said to be the asymptotic radius of \(\{x_{n}\}\) with respect to C and is denoted by \(r_{a}(C, \{x_{n}\})\).

A point \(z \in C\) is said to be an asymptotic center of the sequence \(\{x_{n}\}\) with respect to C if

$$\begin{aligned} r_{a} ( z , \{x_{n}\}) = \inf \{r_{a} ( x, \{x_{n}\}): x \in C \}, \end{aligned}$$

The set of all asymptotic centers of \(\{x_{n}\}\) with respect to C is denoted by \(AC(C, \{x_{n}\})\). This set may be empty, a singleton, or may contain infinitely many points.

If the asymptotic radius and the asymptotic center are taken with respect to X,  then these are simply denoted by \(r_{a}( X, \{x_{n}\}) = r_{a}( \{x_{n}\})\) and \(AC(X, \{x_{n}\})= AC(\{x_{n}\}),\) respectively. We know that for \(x \in X\), \(r_{a}( x, \{x_{n}\}) = 0\) if and only if \(\lim _{n \rightarrow \infty } x_{n} = x\). It is known that every bounded sequence has a unique asymptotic center with respect to each closed convex subset in uniformly convex Banach spaces and even CAT(0) spaces.
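
As a concrete illustration (not taken from the original text), the oscillating sequence of Example 1.1 with \(a = 0.2\) has asymptotic center \(\frac{1}{2}\) with respect to \(C=[0,1]\); the sketch below approximates \(r_{a}(x, \{x_{n}\})\) over a grid and minimizes it:

```python
# Asymptotic radius/center (with respect to C = [0, 1]) of the oscillating sequence
# 0.8, 0.2, 0.8, 0.2, ... from Example 1.1 with a = 0.2.  For this sequence
# limsup_n |x_n - x| = max(|x - 0.2|, |x - 0.8|), which is minimized at x = 1/2.

def r_a(x, tail):
    # approximates limsup_n d(x_n, x) by the maximum over a late tail of the sequence
    return max(abs(t - x) for t in tail)

tail = [0.2 if n % 2 == 0 else 0.8 for n in range(1000, 1100)]
grid = [i / 1000 for i in range(1001)]                  # grid over C = [0, 1]
center = min(grid, key=lambda x: r_a(x, tail))
print(center, r_a(center, tail))                        # approximately 0.5 and 0.3
```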

The following lemma is due to Leuştean [17] and ensures that this property also holds in complete uniformly convex hyperbolic spaces.

Lemma 2.1

[17, Proposition 3.3] Let (X, d, W) be a complete uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\). Then every bounded sequence \(\{x_{n}\}\) in X has a unique asymptotic center with respect to any nonempty closed convex subset C of X.

Recall that a sequence \(\{x_{n}\}\) in X is said to \(\Delta\)-converge to \(x \in X\) if x is the unique asymptotic center of \(\{u_{n}\}\) for every subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\). In this case, we write \(\Delta\)-\(\lim \nolimits _{n} x_{n} = x\) and call x the \(\Delta\)-limit of \(\{x_{n}\}\).

Lemma 2.2

[13] Let (X, d, W) be a uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\). Let \(x \in X\) and \(\{t_{n}\}\) be a sequence in [a, b] for some \(a, b \in (0,1)\). If \(\{x_{n}\}\) and \(\{y_{n}\}\) are sequences in X such that

$$\begin{aligned} \begin{array}{ll} &{} \limsup \limits_{ n \rightarrow \infty }\, d( x_{n} , x) \le c , \quad \limsup \limits _{ n \rightarrow \infty } \, d( y_{n} , x) \le c, \\ &{} \lim \limits _{ n \rightarrow \infty } d(W(x_{n} , y_{n} , t_{n} ), x ) =c, \end{array} \end{aligned}$$

for some \(c \ge 0,\) then \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( x_{n}, y_{n} ) =0.\)

Lemma 2.3

Let (X, d, W) be a complete uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\), C be a nonempty closed convex subset of X, and \(T : C \rightarrow C\) be a mapping which satisfies the conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C. Suppose \(\{x_{n}\}\) is a bounded sequence in C such that

$$\begin{aligned} \lim _{ n \rightarrow \infty } d( x_{n} , Tx_{n}) =0, \end{aligned}$$

then T has a fixed point.

Proof

Since \(\{x_{n}\}\) is a bounded sequence in C, by Lemma 2.1 it has a unique asymptotic center with respect to C, say \(AC(C, \{x_{n}\}) = \{x\}\), and by hypothesis \(\lim _{ n \rightarrow \infty } d( x_{n} , Tx_{n})=0\). Since T satisfies the condition \((E_\mu )\) on C, there exists \(\mu \ge 1\) such that

$$\begin{aligned} d(x_n, Tx) \le \mu d(x_n, Tx_n) + d(x_n, x). \end{aligned}$$

Taking \(\limsup\) as \(n \rightarrow \infty\) on both sides, we have

$$\begin{aligned} r_{a}(Tx, \{x_{n}\})&= \displaystyle \limsup _{ n \rightarrow \infty } d( x_{n} , Tx) \\ &\le \displaystyle \limsup _{ n \rightarrow \infty } [\mu d( x_{n}, Tx_{n}) + d( x_{n} , x) ] \\ &\le \displaystyle \limsup _{ n \rightarrow \infty } d( x_{n} , x)= r_{a}( x, \{x_{n}\} ). \end{aligned}$$

By the uniqueness of the asymptotic center, \(Tx = x\), so x is a fixed point of T. Hence, F(T) is nonempty. \(\square\)

Main results

We begin with the definition of Fejér monotone sequences:

Definition 3.1

Let C be a nonempty subset of a hyperbolic space X and \(\{x_n\}\) be a sequence in X. Then \(\{x_n\}\) is said to be Fejér monotone with respect to C if for all \(x \in C\) and \(n \in \mathbb {N}\),

$$\begin{aligned} d(x_{n+1}, x) \le d(x_n, x). \end{aligned}$$

Example 3.1

Let C be a nonempty subset of X, and \(T :C \rightarrow C\) be a quasi-nonexpansive (in particular, nonexpansive) mapping such that \(F(T) \ne \varnothing\) and \(x_0 \in C\). Then the sequence \(\{x_n\}\) of Picard iterates is Fejér monotone with respect to F(T).

We can easily prove the following proposition.

Proposition 3.1

Let \(\{x_n\}\) be a sequence in X and C be a nonempty subset of X. Suppose that \(\{x_n\}\) is Fejér monotone with respect to C. Then the following hold:

  1. (1)

    \(\{x_n\}\) is bounded.

  2. (2)

    The sequence \(\{d(x_n, p)\}\) is nonincreasing and converges for all \(p \in C\).

We now define Picard Normal S-iteration process (PNS) in hyperbolic spaces:

\(\mathrm{(PNS)}\)   Picard normal S-iteration process: Let C be a nonempty closed convex subset of a hyperbolic space X and \(T :C \rightarrow C\) be a mapping which satisfies the condition \((C_\lambda )\) for some \(\lambda \in (0, 1)\). For any \(x_1 \in C\) the sequence \(\{x_{n}\}\) is defined by

$$\begin{aligned} \left\{ \begin{aligned}&x_{n+1} = W(Ty_n, 0, 0)\\&y_{n} = W(z_{n} , Tz_{n}, \alpha _{n})\\&z_{n} = W(x_n, Tx_n, \beta _n),\quad n \in \mathbb {N}, \end{aligned} \right. \end{aligned}$$
(3.1)

where \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are in \([\epsilon , 1- \epsilon ]\) for all \(n \in \mathbb {N}\) and some \(\epsilon \in (0,1)\).
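
Below is a minimal numerical sketch of the iteration (3.1) in the simplest hyperbolic space, the real line with \(W(x, y, \lambda ) = (1-\lambda )x + \lambda y\), applied to the mapping of Example 2.1 (which satisfies condition (C), and hence \((C_\lambda )\) and (E)); the starting point and the constant choice \(\alpha _{n}=\beta _{n}=\frac{1}{2}\) are illustrative assumptions:

```python
# Iteration (3.1) on X = R with W(x, y, lam) = (1 - lam) x + lam y, applied to the
# map of Example 2.1.  The iterates converge to the fixed point 0.

def W(x, y, lam):
    return (1.0 - lam) * x + lam * y

def T(x):
    return 1.0 if x == 3.0 else 0.0

x = 2.7                                  # starting point x_1 in C = [0, 3]
alpha = beta = 0.5                       # constant sequences in [eps, 1 - eps]
for n in range(10):
    z = W(x, T(x), beta)                 # z_n
    y = W(z, T(z), alpha)                # y_n
    x = T(y)                             # x_{n+1} = W(T(y_n), 0, 0) = T(y_n)
print(x)                                 # 0.0
```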

Lemma 3.1

Let C be a nonempty closed convex subset of a hyperbolic space X and \(T :C \rightarrow C\) be a mapping which satisfies the condition \((C_\lambda )\) for some \(\lambda \in (0, 1)\). If \(\{x_n\}\) is a sequence defined by (3.1), then \(\{x_n\}\) is Fejér monotone with respect to F(T).

Proof

Since T satisfies the condition \((C_ \lambda )\) for some \(\lambda \in (0, 1)\) and \(p \in F(T),\) we have

$$\begin{aligned} \lambda d(p, Tp) = 0 \le d(p, z_n) \end{aligned}$$
$$\begin{aligned} \lambda d(p, Tp) = 0 \le d(p, y_n) \end{aligned}$$

and

$$\begin{aligned} \lambda d(p, Tp) = 0 \le d(p, x_n), \end{aligned}$$

for all \(n \in \mathbb {N}\), so that we have

$$\begin{aligned} d(Tp, Tz_n) \le d(p, z_n) \end{aligned}$$
$$\begin{aligned} d(Tp, Ty_n) \le d(p, y_n) \end{aligned}$$

and

$$\begin{aligned} d(Tp, Tx_n)\le d(p, x_n). \end{aligned}$$

Using (3.1), we have

$$\begin{aligned} d( z_{n}, p)&= d( W(x_n, Tx_n, \beta _n), p) \nonumber \\ &= d( (1-\beta _{n}) x_{n} \oplus \beta _{n} Tx_{n}, p) \nonumber \\ &\le ( 1- \beta _{n}) d( x_{n}, p) + \beta _{n} d(Tx_{n}, p) \nonumber \\ &\le d(x_{n}, p). \end{aligned}$$
(3.2)

From (3.1) and (3.2), we have

$$\begin{aligned} d( y_{n}, p)&= {} d( W(z_n, Tz_n, \alpha _n), p) \nonumber \\& = d((1-\alpha _{n}) z_{n} \oplus \alpha _{n} Tz_{n}, p) \nonumber \\ &\le ( 1- \alpha _{n}) d( z_{n}, p) + \alpha _{n} d(Tz_{n}, p) \nonumber \\&\le ( 1- \alpha _{n}) d( z_{n}, p) + \alpha _{n} d(z_{n}, p) \nonumber \\ &\le d(z_{n}, p) \nonumber \\ &\le d( x_{n} , p). \end{aligned}$$
(3.3)

Again, using (3.2) and (3.3), we have

$$\begin{aligned} d( x_{n+1}, p )&= d(W(Ty_{n} , 0, 0 ), p) \nonumber \\&= d( Ty_{n}, p) \nonumber \\ &\le d( y_{n} , p) \nonumber \\ &\le d(x_{n}, p), \end{aligned}$$
(3.4)

that is, \(d( x_{n+1}, p ) \le d( x_{n}, p )\) for all \(p \in F(T).\) Thus, \(\{x_n\}\) is Fejér monotone with respect to F(T). \(\square\)

Lemma 3.2

Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies the condition \((C_\lambda )\) for some \(\lambda \in (0, 1)\). If \(\{x_n\}\) is a sequence defined by (3.1), then F(T) is nonempty if and only if the sequence \(\{ x_{n} \}\) is bounded and \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( x_{n} , Tx_{n}) =0\).

Proof

Suppose that the fixed point set F(T) is nonempty and \(p \in F(T).\) Then by Lemma 3.1, \(\{x_n\}\) is Fejér monotone with respect to F(T), and hence by Proposition 3.1, \(\{x_n\}\) is bounded and \(\displaystyle \lim \nolimits _{n \rightarrow \infty }d(x_n, p)\) exists; let \(\displaystyle \lim \nolimits _{n \rightarrow \infty }d(x_n, p) = c \ge 0.\)

  1. (i)

    If \(c =0\), we obviously have

    $$\begin{aligned} d( x_{n} , Tx_{n}) &\le \,d( x_{n}, p) + d( Tx_{n} , p) \\ &\le \,2 d( x_{n}, p), \end{aligned}$$

    by taking the limit as \(n \rightarrow \infty\) on both sides of the above inequality, we have

    $$\begin{aligned} \lim _ {n \rightarrow \infty } d( x_{n} , Tx_{n}) = 0. \end{aligned}$$
  2. (ii)

    If \(c > 0\), since T satisfies the condition \((C_ \lambda )\) for some \(\lambda \in (0, 1)\) and \(p \in F(T),\) we have

    $$\begin{aligned} d( Tx_{n} , p) \le d( x_{n} , p), \end{aligned}$$

    taking \(\limsup\) as \(n \rightarrow \infty\) on both sides, we get

    $$\begin{aligned} \displaystyle \limsup _{ n \rightarrow \infty } d( Tx_{n}, p) \le c. \end{aligned}$$

    Taking \(\limsup\) as \(n \rightarrow \infty\) on both sides of (3.2), we have

    $$\begin{aligned} \displaystyle \limsup _{ n \rightarrow \infty } d( z_{n}, p) \le c. \end{aligned}$$
    (3.5)

Since

$$\begin{aligned} d( x_{n+1}, p) \le d( z_{n} , p), \end{aligned}$$

therefore, taking \(\liminf\) as \(n \rightarrow \infty\) on both sides, we get

$$\begin{aligned} \displaystyle \liminf _{ n \rightarrow \infty } d( x_{n+1} , p)\le & {} \displaystyle \liminf _{ n \rightarrow \infty } d( z_{n}, p) \nonumber \\ c\le & {} \displaystyle \liminf _{ n \rightarrow \infty } d( z_{n}, p) \end{aligned}$$
(3.6)

From (3.5) and (3.6), we have

$$\begin{aligned} \displaystyle \lim _{ n \rightarrow \infty } d( z_{n}, p) = c, \end{aligned}$$

it implies that

$$\begin{aligned} c= & {} \displaystyle \limsup _{ n \rightarrow \infty } d( z_{n} , p) \\= & {} \displaystyle \limsup _{ n \rightarrow \infty } [ d( W ( x_{n} , Tx_{n} ,\beta _{n}), p ) ] \\= & {} \displaystyle \limsup _{ n \rightarrow \infty } [ d( (1-\beta _{n}) x_{n} \oplus \beta _{n} Tx_{n}, p ) ] \\\le & {} \displaystyle \limsup _{ n \rightarrow \infty } [( 1- \beta _{n}) d( x_{n}, p) + \beta _{n} d( Tx_{n} , p) ]\\\le & {} \displaystyle \limsup _{ n \rightarrow \infty } d( x_{n} , p)=c, \end{aligned}$$

where the last inequality uses \(d( Tx_{n}, p) \le d( x_{n}, p)\) for all \(n \in \mathbb {N}\). In particular, \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( W(x_{n}, Tx_{n}, \beta _{n}), p ) = c.\)

Hence, since \(\limsup \nolimits _{ n \rightarrow \infty } d( x_{n}, p) \le c\) and \(\limsup \nolimits _{ n \rightarrow \infty } d( Tx_{n}, p) \le c\), it follows from Lemma 2.2 that

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty } d(x_{n}, Tx_{n})= & {} 0. \end{aligned}$$

Conversely, suppose that the sequence \(\{ x_{n} \}\) is bounded and \(\lim _{ n \rightarrow \infty } d( x_{n} , Tx_{n}) =0.\) Then all the assumptions of Lemma 2.3 are satisfied, so T has a fixed point, i.e., F(T) is nonempty. \(\square\)

Theorem 3.1

Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C with \(F(T) \ne \varnothing\). If \(\{x_n\}\) is the sequence defined by (3.1), then the sequence \(\{x_n\}\) \(\Delta\)-converges to a fixed point of T.

Proof

By Lemma 3.2, \(\{x_n\}\) is a bounded sequence and therefore has a \(\Delta\)-convergent subsequence. We now prove that every \(\Delta\)-convergent subsequence of \(\{x_{n}\}\) has a unique \(\Delta\)-limit in F(T). To this end, let u and v be the \(\Delta\)-limits of the subsequences \(\{u_{n}\}\) and \(\{v_{n}\}\) of \(\{x_{n}\}\), respectively. By Lemma 2.1, \(AC(C, \{u_{n}\})= \{u\}\) and \(AC(C, \{v_{n}\})= \{v\}\). By Lemma 3.2, we have \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( u_{n}, Tu_{n}) = 0\) and \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( v_{n}, Tv_{n}) = 0\).

We claim that u and v are fixed points of T and that \(u = v\).

By Lemma 2.3, u and v are fixed points of T. Now we show that \(u =v\). Since \(u, v \in F(T)\), Lemma 3.1 and Proposition 3.1 ensure that \(\lim \nolimits _{n \rightarrow \infty } d(x_{n}, u)\) and \(\lim \nolimits _{n \rightarrow \infty } d(x_{n}, v)\) exist. If \(u \ne v\), then by the uniqueness of asymptotic centers,

$$\begin{aligned} \limsup _{n \rightarrow \infty } d(x_{n},u)= & {} \displaystyle \limsup _{n \rightarrow \infty } d(u_n, u) \\<& {} \displaystyle \limsup _{n \rightarrow \infty } d(u_n, v) \\= & {} \displaystyle \limsup _{n \rightarrow \infty } d(x_n, v) \\= & {} \displaystyle \limsup _{n \rightarrow \infty } d(v_n, v) \\< & {} \displaystyle \limsup _{n \rightarrow \infty } d(v_n, u) \\= & {} \displaystyle \limsup _{n \rightarrow \infty } d ( x_{n} , u), \end{aligned}$$

which is a contradiction. Hence \(u =v\), and the sequence \(\{x_n\}\) \(\Delta\)-converges to a fixed point of T. \(\square\)

Theorem 3.2

Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1))\) and (E) on C with \(F(T) \ne \varnothing\). Then the sequence \(\{x_n\}\) defined by (3.1) converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits_{n \rightarrow \infty } D(x_n, F(T)) = 0\), where \(D(x_{n} , F(T))= \inf _{x \in F(T)} d( x_{n} , x)\).

Proof

Necessity is obvious, so we only need to prove sufficiency. First, we show that the fixed point set F(T) is closed. Let \(\{q_n\}\) be a sequence in F(T) which converges to some point \(z \in C\). As

$$\begin{aligned} \lambda d(q_n, Tq_n) = 0 \le d(q_n, z), \end{aligned}$$

in view of the condition \((C_\lambda )\), we have

$$\begin{aligned} d(q_n, Tz) = d(Tq_n, Tz) \le d(q_n, z). \end{aligned}$$

By taking the limit of both sides we obtain

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty }d(q_n, Tz) \le \displaystyle \lim _{n \rightarrow \infty }d(q_n, z) = 0. \end{aligned}$$

In view of the uniqueness of the limit, we have \(z = Tz\), so that F(T) is closed. Suppose

$$\begin{aligned} \displaystyle \liminf _{n \rightarrow \infty } D(x_n, F(T)) = 0. \end{aligned}$$

From (3.4)

$$\begin{aligned} D(x_{n+1}, F(T)) \le D(x_{n}, F(T)), \end{aligned}$$

it follows that the sequence \(\{D(x_{n}, F(T))\}\) is nonincreasing and hence \(\displaystyle \lim \nolimits _{n \rightarrow \infty }D(x_n, F(T))\) exists. Since \(\displaystyle \liminf \nolimits _{n \rightarrow \infty } D(x_n, F(T)) = 0\), we conclude that \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } D( x_{n} , F(T))=0\).

Consider a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that

$$\begin{aligned} d(x_{n_k}, p_{k}) < \frac{1}{2^k}, \end{aligned}$$

for all \(k\ge 1\), where \(\{p_k\}\) is a sequence in F(T). By Lemma 3.1, we have

$$\begin{aligned} d(x_{n_{k+1}}, p_{k}) \le d(x_{n_k}, p_{k}) < \frac{1}{2^k}, \end{aligned}$$

which implies that

$$\begin{aligned} d(p_{k+1}, p_k)\le & {} d(p_{k+1}, x_{n_{k+1}}) + d(x_{n_{k+1}}, p_k) \\<&\frac{1}{2^{k+1}} + \frac{1}{2^k} \\< & {} \frac{1}{2^{k-1}}. \end{aligned}$$

This shows that \(\{p_k\}\) is a Cauchy sequence. Since X is complete and F(T) is closed, \(\{p_k\}\) converges to some point \(p \in F(T)\). We now show that \(\{x_n\}\) converges to p. Indeed, since

$$\begin{aligned} d(x_{n_k} , p) \le d(x_{n_k} , p_k) + d(p_k, p) \rightarrow 0,~~ \mathrm{as}~~ k \rightarrow \infty , \end{aligned}$$

we have \(\displaystyle \lim \nolimits _{k \rightarrow \infty } d(x_{n_k} , p) = 0\). Since \(\displaystyle \lim \nolimits _{n \rightarrow \infty } d(x_{n} , p)\) exists, the sequence \(\{x_{n}\}\) converges to p. \(\square\)

We recall the definition of condition (I) due to Senter and Dotson [22]:

Definition 3.2

[22] Let C be a nonempty subset of a metric space X. A mapping \(T :C \rightarrow C\) with nonempty fixed point set F(T) in C is said to satisfy Condition (I) if there is a nondecreasing function \(f:[0, \infty ) \rightarrow [0, \infty )\) with \(f(0)=0, f(t)>0\) for all \(t \in (0, \infty )\), such that \(d(x, Tx) \ge f(D(x, F(T)))\) for all \(x \in C\), where \(D (x, F(T)) = \inf \{d(x, p) : p \in F(T)\}\).

Theorem 3.3

Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C. Moreover, T satisfies the condition (I) with \(F(T) \ne \varnothing\). If \(\{x_n\}\) is the sequence defined by (3.1), then the sequence \(\{x_n\}\) converges strongly to some fixed point of T.

Proof

As in the proof of Theorem 3.2, it can be shown that F(T) is closed. Observe that, by Lemma 3.2, we have \(\displaystyle \lim \nolimits _{n \rightarrow \infty }d(x_n, Tx_n) = 0\). It follows from the condition (I) that

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty } f(D(x_n, F(T))) \le \displaystyle \lim _{n \rightarrow \infty }d(x_n, Tx_n) = 0. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty } f(D(x_n, F(T))) = 0. \end{aligned}$$

Since \(f :[0, \infty ) \rightarrow [0, \infty )\) is a nondecreasing mapping satisfying \(f (0) = 0\) and \(f(t)>0\) for all \(t \in (0, \infty )\), we have \(\displaystyle \lim \nolimits _{n \rightarrow \infty }D(x_n, F(T)) = 0\). The rest of the proof follows along the lines of the proof of Theorem 3.2. \(\square\)

In view of Remark 1.1, the following corollaries are immediate.

Corollary 3.1

Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C with \(F(T) \ne \varnothing\). If, for each \(x_{1} \in C\), \(\{x_n\}\) is the sequence defined by

$$\begin{aligned} \left\{ \begin{array}{l} x_{n+1} = W(Ty_n, 0, 0)\\ y_{n} = W(x_{n} , Tx_{n}, \alpha _{n}) ,\quad \quad n \in \mathbb {N}, \end{array} \right. \end{aligned}$$
(3.7)

then the sequence \(\{x_n\}\) \(\Delta\)-converges to a fixed point of T.

Corollary 3.2

Under the assumptions of Corollary 3.1, the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits _{n \rightarrow \infty } D(x_n, F(T)) = 0,\) where \(D(x_{n} , F(T))= \displaystyle \inf \nolimits _{x \in F(T)} d( x_{n} , x).\)

Corollary 3.3

Under the assumptions of Corollary 3.1, if T additionally satisfies the condition (I), then the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T.

In view of Remark 2.2, we have the following corollaries:

Corollary 3.4

Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be an SKC mapping with \(F(T) \ne \varnothing\). Then the sequence \(\{x_n\}\) defined by (3.1) \(\Delta\)-converges to a fixed point of T.

Corollary 3.5

Under the assumptions of Corollary 3.4, the sequence \(\{x_n\}\) defined by (3.1) converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits_{n \rightarrow \infty } D(x_n, F(T)) = 0\), where \(D(x_{n} , F(T))= \displaystyle \inf \nolimits _{x \in F(T)} d( x_{n} , x).\)

Corollary 3.6

Under the assumptions of Corollary 3.4, if T additionally satisfies the condition (I), then the sequence \(\{x_n\}\) defined by (3.1) converges strongly to some fixed point of T.

Corollary 3.7

Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be an SKC mapping with \(F(T) \ne \varnothing\). Then the sequence \(\{x_n\}\) defined by (3.7) \(\Delta\)-converges to a fixed point of T.

Corollary 3.8

Under the assumptions of Corollary 3.7, the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits _{n \rightarrow \infty } D(x_n, F(T)) = 0,\) where \(D(x_{n} , F(T))= \displaystyle \inf \nolimits _{x \in F(T)} d( x_{n} , x).\)

Corollary 3.9

Under the assumptions of Corollary 3.7, if T additionally satisfies the condition (I), then the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T.