The modified proximal point algorithm in Hadamard spaces

The purpose of this paper is to propose a modified proximal point algorithm for solving minimization problems in Hadamard spaces. We prove that the sequence generated by the algorithm converges strongly (i.e., in metric) to a minimizer of a convex objective function. These results extend corresponding results in Hilbert spaces, Hadamard manifolds and non-positive curvature metric spaces.


Introduction
Let (X, d) be a metric space and f : X → (−∞, ∞] be a proper and convex function. One of the most important problems in convex analysis is the convex optimization problem: find x* ∈ X such that

f(x*) = min_{y∈X} f(y).
We denote by argmin y∈X f (y) the set of minimizers of f in X.
Convex optimization provides algorithms for solving a variety of problems arising in science and engineering. One of the most popular methods for approximating a minimizer of a convex function is the proximal point algorithm (PPA), which was introduced by Martinet [1] and Rockafellar [2] in Hilbert spaces. Indeed, let f be a proper, convex and lower semicontinuous function on a real Hilbert space H which attains its minimum. The PPA is defined by x_1 ∈ H and

x_{n+1} = argmin_{y∈H} [ f(y) + (1/(2λ_n)) ‖y − x_n‖² ], λ_n > 0, ∀n ≥ 1.
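To make the iteration concrete, here is a minimal numerical sketch of the PPA on the real line (a Hilbert space). The objective f(y) = (y − 3)² and all function names are illustrative assumptions, not taken from the cited works; for this f the proximal step has a closed form obtained by setting the derivative of y ↦ f(y) + (1/(2λ))(y − x)² to zero.

```python
# Hedged sketch: classical PPA in a Hilbert space (here, the real line),
# applied to the hypothetical convex function f(y) = (y - a)^2 with a = 3.
# The proximal step argmin_y [ f(y) + (1/(2*lam)) * (y - x)^2 ] has the
# closed form y = (2*lam*a + x) / (2*lam + 1).

def prox_step(x, lam, a=3.0):
    """One proximal step for f(y) = (y - a)^2."""
    return (2.0 * lam * a + x) / (2.0 * lam + 1.0)

def ppa(x0, lams):
    """Run the PPA with the given sequence of step sizes lam_n > 0."""
    x = x0
    for lam in lams:
        x = prox_step(x, lam)
    return x

# With sum of lam_n divergent, x_n approaches argmin f = 3.
print(ppa(10.0, [1.0] * 50))
```

Each step contracts the error toward the minimizer by the factor 1/(2λ + 1), which is why divergence of Σλ_n is the natural condition for convergence.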
It was proved that the sequence {x_n} converges weakly to a minimizer of f provided Σ_{n=1}^∞ λ_n = ∞. However, as shown by Güler [3], the PPA does not necessarily converge strongly (i.e., in metric) in general. To obtain strong convergence of the proximal point algorithm, Xu [4] and Kamimura and Takahashi [5] introduced a Halpern-type regularization of the proximal point algorithm in Hilbert spaces. They proved the strong convergence of the Halpern proximal point algorithm under certain conditions on the parameters.
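As a rough illustration of a Halpern-type regularization (a sketch in the spirit of the cited schemes, not the authors' exact algorithm), the following code anchors each proximal step to a fixed point u with vanishing weights α_n = 1/(n + 2); the objective f(y) = (y − 3)² and all names are illustrative assumptions.

```python
def prox(x, lam, a=3.0):
    # closed-form proximal step for the hypothetical f(y) = (y - a)^2
    return (2.0 * lam * a + x) / (2.0 * lam + 1.0)

def halpern_ppa(u, x0, n_iter, lam=1.0):
    """Halpern-type PPA sketch:
    x_{n+1} = alpha_n * u + (1 - alpha_n) * prox(x_n),
    with alpha_n -> 0 and sum alpha_n = infinity."""
    x = x0
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)
        x = alpha * u + (1 - alpha) * prox(x, lam)
    return x

print(halpern_ppa(0.0, 10.0, 5000))  # approaches the minimizer 3
```

In infinite dimensions the anchor term is what upgrades weak convergence of the plain PPA to strong convergence; on the real line the two notions coincide, so this sketch only shows the mechanics of the iteration.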
Recently, many convergence results on the PPA for solving optimization problems have been extended from classical linear spaces, such as Euclidean, Hilbert and Banach spaces, to the setting of manifolds [6–9]. Minimizers of convex objective functionals in such nonlinear spaces play a crucial role in analysis and geometry.
In 2013, Bačák [10] introduced the PPA in a CAT(0) space (X, d) as follows: x_1 ∈ X and

x_{n+1} = argmin_{y∈X} [ f(y) + (1/(2λ_n)) d²(y, x_n) ], λ_n > 0, ∀n ≥ 1.

Based on the concept of Fejér monotonicity, it was shown that if f has a minimizer and Σ_{n=1}^∞ λ_n = ∞, then {x_n} Δ-converges to a minimizer of f (see also [11]). In 2015, Cholamjiak [12] presented a modified PPA based on the Halpern iteration and proved a strong convergence theorem in the framework of CAT(0) spaces.
Very recently, Khatibzadeh et al. [13] presented a Halpern-type regularization of the proximal point algorithm; under suitable conditions, they proved that the sequence generated by the algorithm converges strongly to a minimizer of the convex function in Hadamard spaces.
The purpose of this work is therefore to continue along these lines: using viscosity implicit rules, we introduce a modified PPA in Hadamard spaces for solving minimization problems. We prove that the sequence generated by the algorithm converges strongly to a minimizer of a convex objective function. The results presented in this paper extend and improve the main results of Martinet [1], Rockafellar [2], Bačák [10], Cholamjiak [12], Xu [4], Kamimura and Takahashi [5] and Khatibzadeh et al. [13, Theorem 4.4].

Preliminaries and lemmas
In order to prove the main results, the following notions, lemmas and conclusions will be needed.
Let (X, d) be a metric space and let x, y ∈ X. A geodesic path joining x to y is an isometry c : [0, d(x, y)] → X such that c(0) = x and c(d(x, y)) = y. The image of a geodesic path joining x to y is called a geodesic segment between x and y. The metric space (X, d) is said to be a geodesic space if every two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic joining x and y for each x, y ∈ X.
A geodesic space (X, d) is said to be a CAT(0) space if

d²(z, (1 − t)x ⊕ ty) ≤ (1 − t) d²(z, x) + t d²(z, y) − t(1 − t) d²(x, y)

for all x, y, z ∈ X and all t ∈ [0, 1] [14]. It is well known that any complete and simply connected Riemannian manifold having non-positive sectional curvature is a CAT(0) space. Other examples of CAT(0) spaces include pre-Hilbert spaces [15], R-trees and Euclidean buildings [16]. A complete CAT(0) space is often called a Hadamard space. We write (1 − t)x ⊕ ty for the unique point z on the geodesic segment joining x to y such that d(x, z) = t d(x, y) and d(y, z) = (1 − t) d(x, y). We also denote by [x, y] the geodesic segment joining x to y, that is, [x, y] = {(1 − t)x ⊕ ty : t ∈ [0, 1]}. For a thorough discussion of CAT(0) spaces, their fundamental geometric properties and important related results, we refer the reader to Bridson and Haefliger [15, 16].
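In a Hilbert space, which is a model CAT(0) space, the point (1 − t)x ⊕ ty is simply the affine combination, and the two distance identities above can be verified directly. The following small sketch (all names illustrative) checks them in ℝ²:

```python
import math

def combine(x, y, t):
    """(1 - t)x ⊕ ty in a Hilbert space: the point at parameter t on the
    segment from x to y."""
    return tuple((1 - t) * xi + t * yi for xi, yi in zip(x, y))

def dist(x, y):
    """Euclidean distance."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x, y, t = (0.0, 0.0), (4.0, 3.0), 0.25
z = combine(x, y, t)
# d(x, z) = t d(x, y) and d(y, z) = (1 - t) d(x, y)
print(dist(x, z), t * dist(x, y))
print(dist(y, z), (1 - t) * dist(x, y))
```

In a general Hadamard space no such linear formula is available, but the same two identities characterize the point (1 − t)x ⊕ ty on the unique geodesic.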
The following lemmas play an important role in proving our main results.
Lemma 2.1 ([17]) For all x, y, z ∈ X and t, s ∈ [0, 1], we have the following:
(1) d((1 − t)x ⊕ ty, z) ≤ (1 − t) d(x, z) + t d(y, z);
(2) d²((1 − t)x ⊕ ty, z) ≤ (1 − t) d²(x, z) + t d²(y, z) − t(1 − t) d²(x, y);
(3) d((1 − t)x ⊕ ty, (1 − s)x ⊕ sy) = |t − s| d(x, y).

Berg and Nikolaev [18] introduced the following concept of quasi-linearization in a CAT(0) space X: denoting a pair (a, b) ∈ X × X by ab, define

⟨ab, cd⟩ = (1/2)(d²(a, d) + d²(b, c) − d²(a, c) − d²(b, d)), a, b, c, d ∈ X.

We say that X satisfies the Cauchy-Schwarz inequality if ⟨ab, cd⟩ ≤ d(a, b) d(c, d) for all a, b, c, d ∈ X. It is well known [18, Corollary 3] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy-Schwarz inequality.
• By using quasi-linearization, Ahmadi Kakavandi [19] proved that {x_n} Δ-converges to x ∈ X if and only if lim sup_{n→∞} ⟨xx_n, xy⟩ ≤ 0 for all y ∈ X.
• Let C be a nonempty closed convex subset of a complete CAT(0) space X (i.e., a Hadamard space). The metric projection P_C : X → C assigns to each x ∈ X the unique point P_C(x) ∈ C such that d(x, P_C(x)) = inf{d(x, y) : y ∈ C}.

Some important examples of convex functions can be found in [15]. For r > 0, the Moreau-Yosida resolvent of f in a CAT(0) space is defined by

J_r(x) = argmin_{y∈X} [ f(y) + (1/(2r)) d²(y, x) ]

for all x ∈ X (see [20]). The mapping J_r is well defined for all r > 0 (see [20]).

Lemma 2.3 ([20]) Let (X, d) be a Hadamard space and f : X → (−∞, ∞] be a proper, convex and lower semicontinuous function. Then:
(1) the resolvent J_r is firmly nonexpansive, that is,

d(J_r x, J_r y) ≤ d((1 − λ)x ⊕ λJ_r x, (1 − λ)y ⊕ λJ_r y)

for all x, y ∈ X and for all λ ∈ (0, 1);
(2) the set Fix(J_r) of fixed points of the resolvent J_r associated with f coincides with the set argmin_{y∈X} f(y) of minimizers of f.
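For intuition, on the real line with f(y) = |y| the resolvent J_r has the well-known closed form of soft-thresholding, and its nonexpansiveness can be spot-checked numerically. This is an illustrative sketch (names are assumptions), not part of the cited framework:

```python
def resolvent_abs(x, r):
    """Moreau-Yosida resolvent J_r of f(y) = |y| on the real line:
    argmin_y [ |y| + (1/(2r)) * (y - x)**2 ], which is soft-thresholding
    by the parameter r."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

# Fix(J_r) coincides with argmin f = {0}: the resolvent fixes 0
print(resolvent_abs(0.0, 1.0))

# spot-check nonexpansiveness: |J_r x - J_r y| <= |x - y|
pairs = [(-3.0, 2.5), (0.4, -0.2), (5.0, 1.1)]
print(all(abs(resolvent_abs(x, 1.0) - resolvent_abs(y, 1.0)) <= abs(x - y)
          for x, y in pairs))
```

The same two properties, fixed points of J_r being exactly the minimizers of f and J_r being (firmly) nonexpansive, are what the main results below rely on.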
Remark 2.4 Every firmly nonexpansive mapping is nonexpansive. Hence J r is a nonexpansive mapping.

Lemma 2.5 ([21]) Let X be a CAT(0) space, C be a nonempty closed and convex subset of X and T : C → C be a nonexpansive mapping. For any contraction φ : C → C and t ∈ (0, 1), let x_t ∈ C be the unique fixed point of the contraction x ↦ tφ(x) ⊕ (1 − t)Tx, i.e.,

x_t = tφ(x_t) ⊕ (1 − t)Tx_t.

Then {x_t} converges strongly as t → 0 to a point x* = P_{Fix(T)}φ(x*), which is the unique solution of the following variational inequality:

⟨x*φ(x*), xx*⟩ ≥ 0, ∀x ∈ Fix(T).
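The viscosity path of Lemma 2.5 can be sketched numerically on the real line. Here T is taken to be the metric projection onto [2, 5] (nonexpansive, with Fix(T) = [2, 5]) and φ a contraction; both choices, and all names, are hypothetical illustrations. Since φ(2) = 0.7 < 2, the limit point should be x* = P_{Fix(T)}φ(x*) = 2.

```python
def T(x):
    # nonexpansive mapping: metric projection of the real line onto [2, 5]
    return min(max(x, 2.0), 5.0)

def phi(x):
    # contraction with coefficient k = 0.3 (hypothetical choice)
    return 0.3 * x + 0.1

def path_point(t, n_iter=200):
    """x_t: the unique fixed point of x -> t*phi(x) + (1-t)*T(x).
    The map is a contraction, so Banach iteration finds it."""
    x = 0.0
    for _ in range(n_iter):
        x = t * phi(x) + (1 - t) * T(x)
    return x

# as t -> 0, x_t approaches x* = P_Fix(T)(phi(x*)) = 2
print(path_point(1e-2), path_point(1e-4))
```

Shrinking t moves x_t closer to the boundary point 2 of Fix(T), matching the strong convergence asserted by the lemma.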

The main results
Now we are in a position to give the main results of this paper.

Theorem 3.1 Let C be a nonempty closed and convex subset of a Hadamard space X, f : C → (−∞, ∞] be a proper, convex and lower semicontinuous function with argmin_{y∈C} f(y) ≠ ∅ and J_r (r > 0) be the resolvent of f. Let φ : C → C be a contraction with contractive coefficient k ∈ [0, 1) and, for an arbitrary initial point x_0 ∈ C, let {x_n} be the implicit iterative sequence generated by

x_{n+1} = α_n φ(x_n) ⊕ (1 − α_n) J_r(β_n x_n ⊕ (1 − β_n) x_{n+1}) (3.1)

for all n ≥ 0, where α_n ∈ (0, 1), β_n ∈ [0, 1] satisfy the following conditions:

Then the sequence {x_n} converges strongly to x* = P_{Fix(J_r)}φ(x*), which is a fixed point of J_r (therefore, by Lemma 2.3, it is a minimizer of f) and is also a solution of the following variational inequality:

⟨x*φ(x*), xx*⟩ ≥ 0, ∀x ∈ Fix(J_r). (3.4)

Proof We divide the proof into four steps.
Step 1. First, we prove that the sequence {x_n} defined by (3.1) is well defined. In fact, for an arbitrarily given u ∈ C, the mapping

T_u : x ↦ α_n φ(u) ⊕ (1 − α_n) J_r(β_n u ⊕ (1 − β_n) x)

is a contraction with contractive constant 1 − α_n.
Indeed, it follows from Lemma 2.1 and Lemma 2.3 that, for any x, y ∈ C,

d(T_u x, T_u y) ≤ (1 − α_n) d(J_r(β_n u ⊕ (1 − β_n)x), J_r(β_n u ⊕ (1 − β_n)y))
            ≤ (1 − α_n) d(β_n u ⊕ (1 − β_n)x, β_n u ⊕ (1 − β_n)y)
            ≤ (1 − α_n)(1 − β_n) d(x, y)
            ≤ (1 − α_n) d(x, y).

This implies that the mapping T_u : C → C is a contraction. Hence the implicit iterative sequence {x_n} defined by (3.1) is well defined.
Step 2. Next, we prove that {x_n} is bounded. In fact, taking p ∈ Fix(J_r), it follows from Lemma 2.1 and the nonexpansiveness of J_r that

d(x_{n+1}, p) ≤ max{ d(x_n, p), (1/(1 − k)) d(φ(p), p) }.

By induction, we can prove that

d(x_n, p) ≤ max{ d(x_0, p), (1/(1 − k)) d(φ(p), p) }

for all n ≥ 0. This implies that {x_n} is bounded, and so are {φ(x_n)} and {J_r(β_n x_n ⊕ (1 − β_n) x_{n+1})}.
Step 3. Next, we prove that the sequence {x_n} converges strongly to some point in Fix(J_r). Let

z_n = α_n φ(z_n) ⊕ (1 − α_n) J_r z_n

for all n ≥ 0. By Lemma 2.5 (applied with T = J_r and t = α_n → 0), the sequence {z_n} converges strongly as n → ∞ to a point x* = P_{Fix(J_r)}φ(x*), which is the unique solution of the variational inequality (3.4). On the other hand, it follows from (3.1), Lemma 2.3 and Lemma 2.1 that d(x_{n+1}, z_n) can be estimated in terms of the quantity M = sup_{n≥1} d(φ(z_{n−1}), J_r z_{n−1}), which, together with the condition (c), implies that lim_{n→∞} d(x_{n+1}, z_n) = 0. Since z_n → x* = P_{Fix(J_r)}φ(x*), this implies that x_n → x* ∈ Fix(J_r). By Lemma 2.3, x* ∈ argmin_{y∈C} f(y) and x* is also the unique solution of the variational inequality (3.4). This completes the proof.
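To see a scheme of the form (3.1) in action, here is a small numerical sketch on the real line with f(y) = |y|, so that J_r is soft-thresholding and Fix(J_r) = argmin f = {0}. The contraction φ, the parameter choices and all names are illustrative assumptions; the inner fixed-point equation defining x_{n+1} is solved by Banach iteration, as in Step 1.

```python
def J_r(x, r=1.0):
    # resolvent of f(y) = |y| on the real line: soft-thresholding by r;
    # Fix(J_r) = argmin f = {0}
    return x - r if x > r else (x + r if x < -r else 0.0)

def phi(x):
    # contraction with coefficient k = 0.5 (hypothetical choice)
    return 0.5 * x

def implicit_step(x_n, alpha, beta, inner=100):
    """Solve x = alpha*phi(x_n) + (1-alpha)*J_r(beta*x_n + (1-beta)*x)
    for x by Banach iteration: the right-hand side is a contraction in x
    with constant (1-alpha)*(1-beta) < 1."""
    x = x_n
    for _ in range(inner):
        x = alpha * phi(x_n) + (1 - alpha) * J_r(beta * x_n + (1 - beta) * x)
    return x

def run(x0, n_iter):
    x = x0
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)   # alpha_n -> 0 (illustrative choice)
        beta = 0.5              # illustrative constant choice
        x = implicit_step(x, alpha, beta)
    return x

print(run(10.0, 200))  # expected to approach x* = 0, the minimizer of |y|
```

Here x* = P_{Fix(J_r)}φ(x*) = 0, consistent with the strong convergence asserted by the theorem for this toy instance.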
Looking at the proof of Theorem 3.1, we only use the fact that the resolvent operator J_r is nonexpansive. If we replace the resolvent operator J_r with a nonexpansive mapping T : C → C in Theorem 3.1, then we obtain the following.

Theorem 3.2 Let C be a nonempty closed and convex subset of a Hadamard space X. Let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. Let φ : C → C be a contraction with contractive coefficient k ∈ [0, 1) and, for an arbitrary initial point x_0 ∈ C, let {x_n} be the implicit iterative sequence generated by

x_{n+1} = α_n φ(x_n) ⊕ (1 − α_n) T(β_n x_n ⊕ (1 − β_n) x_{n+1}) (3.7)

for all n ≥ 0, where α_n ∈ (0, 1), β_n ∈ [0, 1] satisfy the following conditions: (a) lim_{n→∞} α_n = 0; (b)