Splitting type viscosity methods for inclusion and fixed point problems on Hadamard manifolds

Abstract: In this article, we suggest and analyze splitting type viscosity methods for the inclusion and fixed point problems of a nonexpansive mapping in the setting of Hadamard manifolds. We derive the convergence of the sequences generated by the proposed iterative methods under suitable assumptions. Several special cases of the proposed iterative methods are also discussed. Finally, some applications to variational inequality, optimization and fixed point problems on Hadamard manifolds are given.


Introduction
Let H be a real Hilbert space, K ⊆ H a nonempty closed convex subset, and M : H ⇒ H a set-valued maximal monotone mapping. The inclusion problem is: find x ∈ K such that x ∈ M^{-1}(0). (1.1) In the recent past, many authors have extended and generalized the inclusion problem (1.1) in different directions using novel and innovative techniques; see for example [1, 4, 7, 9, 11-13, 20, 24] and the references cited therein. The fixed point problem of a nonexpansive self-mapping S : K → K is: find x ∈ K such that x ∈ Fix(S). (1.2) Most of the iterative methods for finding fixed points of nonexpansive mappings go back to Mann [14].
Moudafi [16] proposed the viscosity method by combining the nonexpansive mapping S with a given contraction mapping φ over K: for an arbitrary x_0 ∈ K, compute the sequence {x_n} generated by

x_{n+1} = β_n φ(x_n) + (1 − β_n) S(x_n), n ≥ 0,

where β_n ∈ (0, 1) tends slowly to zero. The sequence {x_n} generated by this iterative method converges strongly to a fixed point of S.

The problem of finding a common solution of the fixed point problem (1.2) of a nonexpansive self-mapping S and a variational inclusion problem was studied by Takahashi et al. [22] in Hilbert spaces: find x ∈ K such that x ∈ Fix(S) ∩ (F + M)^{-1}(0), (1.3) where F is a single-valued monotone mapping and M, S are as defined above. Recently, Ansari et al. [1] extended problem (1.3) to Hadamard manifolds, studied Halpern and Mann type algorithms for solving it, and discussed several applications on Hadamard manifolds. Very recently, Al-Homidan et al. [2] extended the viscosity method to hierarchical variational inequality problems and discussed several of its special cases on Hadamard manifolds. Konrawut et al. [10] studied splitting algorithms for common solutions of equilibrium and inclusion problems on Hadamard manifolds.

In this article, encouraged and inspired by the work of [1, 10, 16], our aim is to introduce and study a splitting type viscosity method to find a common solution of the inclusion problem (1.1) and the fixed point problem (1.2) on Hadamard manifolds, that is: find x ∈ K such that x ∈ Fix(S) ∩ M^{-1}(0), (1.4) where K is a nonempty closed convex subset of a Hadamard manifold D. Our method resembles a double backward method for inclusion and fixed point problems and can be seen as a refinement of the work studied in [1].

The article is organized as follows: the next section consists of preliminaries and some useful results on Riemannian manifolds. Section 3 presents the main results, explaining the splitting type viscosity method and the convergence of the sequences obtained from it.
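Moudafi's viscosity iteration x_{n+1} = β_n φ(x_n) + (1 − β_n) S(x_n) can be illustrated numerically in the plane. The following is a minimal sketch with hypothetical choices (not the manifold algorithm of this paper): S is a rotation by 90°, so it is nonexpansive with Fix(S) = {0}, φ(x) = x/2 is a 1/2-contraction, and β_n = 1/(n+1) tends to zero with Σ β_n = ∞.

```python
# Viscosity approximation in the plane, identified with the complex numbers.
# Hypothetical choices: S(x) = i·x (rotation, nonexpansive, Fix(S) = {0}),
# φ(x) = x/2 (a 1/2-contraction), β_n = 1/(n+1).
def S(x):
    return 1j * x          # rotation by 90 degrees

def phi(x):
    return 0.5 * x         # contraction with constant 1/2

x = 1.0 + 1.0j             # arbitrary starting point x_0
for n in range(1, 20001):
    beta = 1.0 / (n + 1)
    x = beta * phi(x) + (1 - beta) * S(x)
print(abs(x))              # the iterates approach the fixed point 0
```

Because β_n goes to zero slowly, convergence is slow but guaranteed; a Mann-type iteration without the contraction term would only converge weakly in infinite dimensions, which is exactly what the viscosity term repairs.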
In the last section, some applications of the proposed method and its convergence theorem to solve variational inequality, optimization and fixed point problems are given.

Preliminaries and auxiliary results
Let D be a finite dimensional differentiable manifold. For a point p ∈ D, the tangent space of D at p is denoted by T_pD and the tangent bundle by TD = ∪_{p∈D} T_pD. The tangent space T_pD at p is a vector space of the same dimension as D. A tensor ⟨·,·⟩ is called a Riemannian metric on D if, for each p ∈ D, ⟨·,·⟩_p is an inner product on T_pD. We assume that D is endowed with a Riemannian metric ⟨·,·⟩_p with corresponding norm ‖·‖_p. The angle between nonzero x, y ∈ T_pD, denoted by ∠_p(x, y), is defined by cos ∠_p(x, y) = ⟨x, y⟩_p / (‖x‖_p ‖y‖_p). For the sake of simplicity, we denote ‖·‖_p = ‖·‖, ⟨·,·⟩_p = ⟨·,·⟩ and ∠_p(x, y) = ∠(x, y).
For a given piecewise smooth curve γ : [a, b] → D joining p to q (i.e., γ(a) = p and γ(b) = q), the length of γ is defined as L(γ) = ∫_a^b ‖γ'(s)‖ ds. The Riemannian distance d(p, q) is obtained by minimizing this length over the set of all such curves joining p to q, and it induces the original topology on D.
Let ∇ be the Levi-Civita connection associated with the Riemannian manifold D. A vector field U is said to be parallel along a smooth curve γ if ∇_{γ'(s)} U = 0. If γ' is parallel along γ, i.e., ∇_{γ'(s)} γ'(s) = 0, then γ is called a geodesic; in this case ‖γ'‖ is constant, and if ‖γ'‖ = 1, then γ is said to be a normalized geodesic. A geodesic joining p to q in D is called minimal if its length equals d(p, q). A Riemannian manifold is called (geodesically) complete if for any p ∈ D all geodesics emanating from p are defined for all s ∈ (−∞, ∞). By the Hopf-Rinow theorem [21], for a Riemannian manifold D the following are equivalent: (i) D is geodesically complete; (ii) closed and bounded subsets of D are compact; (iii) D is complete as a metric space; moreover, in this case any two points of D can be joined by a minimal geodesic. Assuming D is a complete Riemannian manifold, the exponential mapping exp_p : T_pD → D at p is defined by exp_p(ϑ) = γ_ϑ(1, p) for each ϑ ∈ T_pD, where γ(·) = γ_ϑ(·, p) is the geodesic starting at p with velocity ϑ (i.e., γ(0) = p and γ'(0) = ϑ). Then exp_p(sϑ) = γ_ϑ(s, p) for each real number s. One easily sees that exp_p 0 = γ_ϑ(0, p) = p, where 0 is the zero tangent vector. The exponential mapping exp_p is differentiable on T_pD for any p ∈ D, and the derivative of exp_p at 0 is the identity map of T_pD. Therefore, by the inverse mapping theorem, there exists an inverse exponential mapping exp_p^{-1} : D → T_pD. Moreover, for any p, q ∈ D, we have d(p, q) = ‖exp_p^{-1} q‖. A complete, simply connected Riemannian manifold of non-positive sectional curvature is called a Hadamard manifold. A subset K ⊂ D is said to be convex if for any two points p, q ∈ K, the geodesic joining p to q is contained in K; that is, if γ : [a, b] → D is a geodesic such that p = γ(a) and q = γ(b), then γ((1 − s)a + sb) ∈ K for all s ∈ [0, 1]. From now on, K ⊂ D will denote a nonempty, closed and convex subset of a Hadamard manifold D. The projection onto K is defined by P_K(p) = {q ∈ K : d(p, q) ≤ d(p, r) for all r ∈ K}. A function f : D → R is said to be convex if, for any geodesic γ : [a, b] → D, the composition f ∘ γ is convex, i.e., f(γ((1 − s)a + sb)) ≤ (1 − s) f(γ(a)) + s f(γ(b)), for all s ∈ [0, 1] and for all a, b ∈ R.
In particular, for each p ∈ D, the function d(·, p) : D → R is a convex function.
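The exponential map and its inverse can be made concrete on a one-dimensional model. The following sketch assumes D = (0, ∞) with the metric ⟨u, v⟩_p = uv/p² (the same model used in the example of Section 3) and checks that exp_p ∘ exp_p^{-1} is the identity and that d(p, q) = ‖exp_p^{-1} q‖_p.

```python
import math

# Toy Hadamard manifold: D = (0, ∞) with metric <u, v>_p = u·v / p².
def exp_p(p, v):
    return p * math.exp(v / p)        # exponential map: exp_p(v) = p·e^{v/p}

def log_p(p, q):
    return p * math.log(q / p)        # inverse exponential: exp_p^{-1}(q) = p·ln(q/p)

def dist(p, q):
    return abs(math.log(q / p))       # Riemannian distance d(p, q) = |ln(q/p)|

def norm_p(p, v):
    return abs(v) / p                 # norm of tangent vector v at p

p, q = 2.0, 5.0
v = log_p(p, q)
assert abs(exp_p(p, v) - q) < 1e-12             # exp_p(exp_p^{-1} q) = q
assert abs(norm_p(p, v) - dist(p, q)) < 1e-12   # d(p, q) = ||exp_p^{-1} q||_p
```

The second assertion is exactly the identity d(p, q) = ‖exp_p^{-1} q‖ stated above, specialized to this model.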
If D is an n-dimensional Hadamard manifold, then Proposition 2.1 shows that D is diffeomorphic to the Euclidean space R^n. Thus, D has the same topology and differential structure as R^n. Moreover, Hadamard manifolds and Euclidean spaces share several geometrical properties. We describe some of them in the following results.
Recall that a geodesic triangle ∆(q_1, q_2, q_3) of a Riemannian manifold is a set consisting of three points q_1, q_2, q_3 and the three minimal geodesics γ_j joining q_j to q_{j+1}, where j = 1, 2, 3 (mod 3).
A triangle ∆(q̄_1, q̄_2, q̄_3) in the Euclidean plane R² with ‖q̄_j − q̄_{j+1}‖ = d(q_j, q_{j+1}) for j = 1, 2, 3 (mod 3) is called the comparison triangle of the geodesic triangle ∆(q_1, q_2, q_3); it is unique up to isometry of R². The points q̄_1, q̄_2, q̄_3 are called the comparison points of q_1, q_2, q_3, respectively.
(ii) Let p be a point on the geodesic joining q_1 to q_2 and let p̄ be its comparison point on the segment [q̄_1, q̄_2], so that d(p, q_1) = ‖p̄ − q̄_1‖ and d(p, q_2) = ‖p̄ − q̄_2‖. Then d(p, q_3) ≤ ‖p̄ − q̄_3‖.

Proposition 2.4. [23]
Let K be a nonempty closed convex subset of a Hadamard manifold D. Then, for any p ∈ D, P_K(p) is a singleton and the following inequality holds:

⟨exp_{P_K(p)}^{-1} p, exp_{P_K(p)}^{-1} q⟩ ≤ 0, for all q ∈ K.

We shall also use the following well-known lemma: let {x_n} be a sequence of nonnegative real numbers such that x_{n+1} ≤ (1 − a_n)x_n + a_n b_n for all n ∈ N, where {a_n} ⊂ (0, 1) with Σ_n a_n = ∞ and lim sup_{n→∞} b_n ≤ 0; then lim_{n→∞} x_n = 0.
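The convergence lemma x_{n+1} ≤ (1 − a_n)x_n + a_n b_n ⟹ x_n → 0 can be checked numerically. The following sketch uses the hypothetical choices a_n = b_n = 1/(n+1) (so that a_n ∈ (0, 1), Σ a_n = ∞ and b_n → 0) with equality in the recursion.

```python
# Numerical check of the lemma: x_{n+1} = (1 − a_n)·x_n + a_n·b_n with
# a_n = b_n = 1/(n+1) forces x_n → 0, even though each factor (1 − a_n) → 1.
x = 1.0
for n in range(1, 100001):
    a = b = 1.0 / (n + 1)
    x = (1 - a) * x + a * b
print(x)   # decays like (log n)/n
```

The divergence of Σ a_n is essential: with a summable sequence such as a_n = 1/(n+1)², the product Π(1 − a_n) stays bounded away from zero and x_n need not vanish.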

Main results
We propose the following splitting type viscosity method for problem (1.4) on Hadamard manifolds.

Algorithm 3.1. Let K be a nonempty closed convex subset of a Hadamard manifold D, let M : K ⇒ TD be a set-valued vector field, S : K → K a nonexpansive mapping and φ : K → K a contraction. For an arbitrary x_0 ∈ K, compute the sequences {y_n} and {x_n} as

y_n = exp_{x_n}((1 − α_n) exp_{x_n}^{-1} J_λ^M(x_n)),
x_{n+1} = exp_{φ(x_n)}((1 − β_n) exp_{φ(x_n)}^{-1} S(y_n)),

where α_n, β_n ∈ (0, 1) and λ > 0.

For the convergence of Algorithm 3.1, we require the following conditions on the sequences {α_n} and {β_n}: (A1) lim_{n→∞} β_n = 0 and Σ_{n=0}^{∞} β_n = ∞; (A2) lim_{n→∞} α_n = 0. If S = I, the identity mapping on K, then Algorithm 3.1 reduces to the following algorithm to find the solution of problem (1.1).
Algorithm 3.2. Let K be a nonempty closed convex subset of a Hadamard manifold D, let M : K ⇒ TD be a set-valued vector field and φ : K → K a contraction self-mapping. For an arbitrary x_0 ∈ K, compute the sequences {y_n} and {x_n} as

y_n = exp_{x_n}((1 − α_n) exp_{x_n}^{-1} J_λ^M(x_n)),
x_{n+1} = exp_{φ(x_n)}((1 − β_n) exp_{φ(x_n)}^{-1} y_n),

where α_n, β_n ∈ (0, 1) and λ > 0 are the same as in Algorithm 3.1.
We can obtain the following proposition by substituting A = 0, the zero vector field, in Proposition 3.2 of [3].
Proposition 3.1. For any x ∈ K, the following assertions are equivalent: (i) x ∈ M^{-1}(0); (ii) x = J_λ^M(x) for all λ > 0.
Remark 3.1. It can easily be seen that, for a nonexpansive mapping S, the set Fix(S) is geodesic convex; for more details, see [1, 12]. Since J_λ^M is nonexpansive, it follows from Proposition 3.1 that Fix(J_λ^M) = M^{-1}(0).

Proof. We divide the proof into the following five steps.

Step I. We show that the sequence {x_n} is bounded. Since x_{n+1} = γ_n(1 − β_n), where γ_n is the geodesic joining φ(x_n) to S(y_n), the convexity of the Riemannian distance yields the estimate (3.1), which implies that the sequence {x_n} is bounded; using (3.1), {y_n} is also bounded. Since S is nonexpansive and φ is a contraction, we conclude that the sequences {S(y_n)} and {φ(x_n)} are also bounded.
Step II. We show that lim_{n→∞} d(x_{n+1}, x_n) = 0. Since S is nonexpansive and φ is a contraction, using (2.1), (2.4) and Proposition 2.2 we obtain the required estimates; again, by Algorithm 3.1 and the nonexpansiveness of J_λ^M, we obtain the analogous bounds, and taking the limit as n → ∞ the claim follows.

Step III. Next, we show that lim_{n→∞} d(x_n, y_n) = 0. Since φ is a contraction, using Algorithm 3.1 and (3.1) we obtain a recursive estimate. Let m ≤ n; taking the limit as n → ∞ and using Π_{i=m}^{n} (1 − β_i) → 0, and then letting m → ∞, we get

lim_{n→∞} d(x_n, y_n) = 0. (3.10)

Step IV. Since {x_n} is bounded, there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} → w as k → ∞. Let u_n = J_λ^M(x_n); by Algorithm 3.1, y_n = exp_{x_n}((1 − α_n) exp_{x_n}^{-1} J_λ^M(x_n)). Then d(y_n, u_n) = α_n d(x_n, u_n), so d(y_n, u_n) → 0 as n → ∞. Thus

d(x_n, u_n) ≤ d(x_n, y_n) + d(y_n, u_n) → 0 as n → ∞. (3.11)

By the continuity of J_λ^M, letting k → ∞ we obtain J_λ^M(w) = w, and hence, by Proposition 3.1, w ∈ M^{-1}(0). Again, using the convexity of the Riemannian distance, and since {x_n} is bounded and φ is a κ-contraction, we get (3.14). This, together with condition (A1), implies that lim_{n→∞} d(x_{n+1}, S(y_n)) = lim_{n→∞} β_n C_6 = 0. By (3.10), {y_{n_k}} also converges to w as k → ∞. Then we obtain

d(S(w), w) ≤ d(S(w), S(y_{n_k})) + d(S(y_{n_k}), x_{n_k+1}) + d(x_{n_k+1}, w) ≤ d(w, y_{n_k}) + d(S(y_{n_k}), x_{n_k+1}) + d(x_{n_k+1}, w) → 0 as k → ∞, (3.17)

and so w ∈ Fix(S). Thus w ∈ Fix(S) ∩ M^{-1}(0).

We obtain the corresponding convergence result for Algorithm 3.2 by taking S = I, the identity mapping, in Theorem 3.1. To illustrate the convergence of our algorithms, we extend an example that was also considered in [4].
Let D = R_++ = {x ∈ R : x > 0}. Then D is a Riemannian manifold with Riemannian metric ⟨u, v⟩ := g(x)uv for all u, v ∈ T_xD, where g : R_++ → (0, +∞) is given by g(x) = x^{-2}. It directly follows that the tangent space T_xD at x ∈ D can be identified with R for all x ∈ D. The Riemannian distance d : D × D → R_+ is given by d(x, y) := |ln(x/y)| for all x, y ∈ D.
Therefore, (R_++, ⟨·,·⟩) is a Hadamard manifold, and the unique geodesic γ : R → D with initial point γ(0) = x and terminal point γ(1) = y is given by γ(t) := x^{1−t} y^t. The inverse exponential mapping is given by exp_x^{-1} y = x ln(y/x). Note that M is a monotone vector field with an explicitly computable resolvent. Let φ be a contraction and S a nonexpansive mapping, defined by φ(x) = (1/2)x and S(x) = x for all x ∈ D, respectively. Clearly, the solution set of problem (1.4) is {0}. Choose the initial guess x_0 = 1, λ = 1/2, and either α_n = β_n = 1/√(n+1) or α_n = β_n = 1/(n+1)^{1/3}. Then all the conditions of Theorem 3.2 are satisfied, and hence the sequence {x_n}_{n=0}^∞ generated by Algorithm 3.1 converges to a solution of problem (1.4). The convergence of the sequence is shown in Figure 1.
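The iteration can be made fully explicit on this manifold under stated assumptions: take the monotone vector field M(x) = {x ln x} (so M^{-1}(0) = {1}, with resolvent J_λ^M(x) = x^{1/(1+λ)}), the stand-in contraction φ(x) = √x (a 1/2-contraction with respect to d) in place of the mappings above, S the identity, and assume the two-step scheme y_n = exp_{x_n}((1 − α_n) exp_{x_n}^{-1} J_λ^M(x_n)), x_{n+1} = exp_{φ(x_n)}((1 − β_n) exp_{φ(x_n)}^{-1} S(y_n)). With these stand-ins the common solution is x* = 1 rather than 0. A minimal runnable sketch:

```python
import math

lam = 0.5                    # resolvent parameter λ

def resolvent(x):
    return x ** (1.0 / (1.0 + lam))   # J_λ^M(x) = x^(1/(1+λ)) for M(x) = x·ln x

def phi(x):
    return math.sqrt(x)      # stand-in contraction φ(x) = √x (κ = 1/2 w.r.t. d)

def S(x):
    return x                 # nonexpansive mapping: the identity

def geo(p, q, t):
    return p ** (1.0 - t) * q ** t    # geodesic point γ(t) = p^(1−t)·q^t from p to q

x = 2.0                      # hypothetical starting point x_0
for n in range(200):
    alpha = beta = 1.0 / math.sqrt(n + 2)
    y = geo(x, resolvent(x), 1.0 - alpha)   # y_n between x_n and J_λ^M(x_n)
    x = geo(phi(x), S(y), 1.0 - beta)       # viscosity step toward S(y_n)
print(x)                     # converges to 1, the zero of the assumed M
```

In the log coordinate t = ln x every step is a weighted geometric mean, i.e., exactly the Euclidean convex combination x_{n+1} = β_n φ(x_n) + (1 − β_n) S(y_n), which is why the iterates contract geometrically toward the common solution.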

Applications
By adopting the techniques and methodologies of [1-6], we derive algorithms and convergence results for variational inequality and optimization problems using the proposed iterative methods.

Variational inequality
Let K be a nonempty, closed and convex subset of a Hadamard manifold D and let A : K → TD be a single-valued vector field. Németh [18] introduced the variational inequality problem VI(A, K): find x ∈ K such that

⟨A(x), exp_x^{-1} y⟩ ≥ 0, for all y ∈ K.

It is known that x ∈ K is a solution of VI(A, K) if and only if x satisfies 0 ∈ A(x) + N_K(x) (for more details, see [11]), where N_K(x) denotes the normal cone to K at x ∈ K, defined as N_K(x) = {w ∈ T_xD : ⟨w, exp_x^{-1} y⟩ ≤ 0, ∀ y ∈ K}. Let I_K be the indicator function of K, i.e., I_K(x) = 0 if x ∈ K and I_K(x) = +∞ otherwise. Since I_K is proper and lower semicontinuous, the subdifferential ∂I_K of I_K is maximal monotone, and ∂I_K(x) = N_K(x) for all x ∈ K. Let J_λ^{∂I_K} be the resolvent of ∂I_K for λ > 0; by Proposition 2.4, it coincides with the projection P_K. Thus, for A : K → TD, x ∈ K solves VI(A, K) if and only if x ∈ (A + ∂I_K)^{-1}(0); hence, applying the proposed method with M = A + ∂I_K, the sequences {y_n} and {x_n} converge to the solution of VI(A, K), which is a fixed point of the mapping P_{M^{-1}(0)} φ.
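On the one-dimensional model (R_++, ⟨u, v⟩_x = uv/x²) used in Section 3, the resolvent J_λ^{∂I_K} reduces to the metric projection P_K; for an interval K = [a, b] (a hypothetical choice), this projection is simply clamping, since d(x, y) = |ln(x/y)| is monotone in the ratio y/x. A brute-force sketch checking this:

```python
import math

a, b = 2.0, 5.0                          # hypothetical K = [a, b] ⊂ R_++

def d(p, q):
    return abs(math.log(q / p))          # Riemannian distance on (R_++, x^{-2})

def proj_K(x):
    return min(max(x, a), b)             # claimed P_K = J_λ^{∂I_K}: clamp to [a, b]

# check against brute-force minimization of d(x, ·) over a fine grid of K
grid = [a + i * (b - a) / 100000 for i in range(100001)]
for x in (0.5, 3.0, 7.0):
    best = min(grid, key=lambda z: d(x, z))
    assert abs(best - proj_K(x)) < 1e-3  # clamp is the nearest point in K
```

The assertion instantiates the characterization of P_K in Proposition 2.4: at the clamped point z, the vector exp_z^{-1} x lies in the normal cone N_K(z).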

Optimization
For a proper, lower semicontinuous and geodesic convex function h : D → (−∞, +∞], the minimization problem is min_{p∈D} h(p). (4.5) The subdifferential ∂h(p) of h at p is closed and geodesic convex [1], and is defined as ∂h(p) = {u ∈ T_pD : ⟨u, exp_p^{-1} q⟩ ≤ h(q) − h(p), ∀ q ∈ D}. If Ω denotes the solution set of the minimization problem (4.5), then it is easily seen that p ∈ Ω ⇔ 0 ∈ ∂h(p). Now, we can state some results for the minimization problem (4.5), using Algorithm 3.1 and Algorithm 3.2.
The sequences generated by these algorithms converge to a point of Ω ∩ Fix(S), which is a fixed point of the mapping P_{Ω ∩ Fix(S)} φ.
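The equivalence p ∈ Ω ⇔ 0 ∈ ∂h(p) means minimizers are found by iterating the resolvent of ∂h, i.e., the proximal map. A minimal proximal point sketch on the model manifold (R_++, x^{-2}), with the hypothetical geodesic convex function h(x) = ½(ln x − 1)² whose minimizer is e: minimizing d(x, z)²/(2λ) + h(z) in the log coordinate t = ln z is a one-dimensional quadratic problem, giving J_λ^{∂h}(x) = exp((ln x + λ)/(1 + λ)).

```python
import math

lam = 0.5

def prox(x):
    # resolvent of ∂h for h(x) = 0.5·(ln x − 1)² on (R_++, x^{-2}):
    # minimize d(x, z)²/(2λ) + h(z); in t = ln z the optimality condition
    # (t − ln x)/λ + (t − 1) = 0 gives t = (ln x + λ)/(1 + λ)
    return math.exp((math.log(x) + lam) / (1.0 + lam))

x = 10.0                     # hypothetical starting point
for _ in range(100):         # proximal point iteration x_{n+1} = J_λ^{∂h}(x_n)
    x = prox(x)
print(x)                     # approaches e ≈ 2.71828, the minimizer of h
```

In log coordinates each step contracts the distance to the minimizer by the factor 1/(1 + λ), so the iteration converges linearly, illustrating why resolvent-based splitting steps drive the sequences of Algorithms 3.1 and 3.2 toward Ω.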

Conclusions
In this paper, we studied splitting type viscosity methods for the inclusion and fixed point problems of a nonexpansive mapping on Hadamard manifolds. We proved the convergence of the iterative sequences generated by the proposed methods. Our method is new and can be seen as a refinement of the methods studied in [1]. Applications of the proposed methods to variational inequality, optimization and fixed point problems are given. We expect that the methods presented in this paper can be used to study some generalized inclusion and fixed point problems in geodesic spaces.