Abstract

We introduce a new general hybrid iterative algorithm for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of a generalized mixed equilibrium problem, and the set of solutions of a variational inclusion for a β-inverse-strongly monotone mapping in a real Hilbert space. We prove that the sequence generated by the algorithm converges strongly to a common element of the above three sets under some mild conditions. Our results improve and extend the corresponding results of Marino and Xu (2006), Yao and Liou (2010), Tan and Chang (2011), and other authors.

1. Introduction

In the theory of variational inequalities, variational inclusions, and equilibrium problems, the development of efficient and implementable iterative algorithms is interesting and important. Variational inclusions, an important generalization of variational inequalities, have been extensively studied and generalized in different directions to treat a wide class of problems arising in mechanics, optimization, nonlinear programming, economics, finance, and the applied sciences.

Equilibrium theory represents an important area of the mathematical sciences, including optimization, operations research, game theory, complementarity problems, financial mathematics, and mechanics. Equilibrium problems include variational inequalities, optimization problems, Nash equilibrium problems, saddle point problems, fixed point problems, and complementarity problems as special cases; for example, see the references cited herein. Let $C$ be a closed convex subset of a real Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers, let $B : C \to H$ be a nonlinear mapping, and let $\varphi : C \to \mathbb{R}$ be a real-valued function. The generalized mixed equilibrium problem is to find $x \in C$ such that
$$F(x, y) + \langle Bx, y - x \rangle + \varphi(y) - \varphi(x) \ge 0, \quad \forall y \in C. \tag{1.1}$$
The set of solutions of (1.1) is denoted by $\mathrm{GMEP}(F, \varphi, B)$, that is,
$$\mathrm{GMEP}(F, \varphi, B) = \{ x \in C : F(x, y) + \langle Bx, y - x \rangle + \varphi(y) - \varphi(x) \ge 0, \ \forall y \in C \}. \tag{1.2}$$
If $B = 0$ and $\varphi = 0$, problem (1.1) reduces to the equilibrium problem [1] of finding $x \in C$ such that
$$F(x, y) \ge 0, \quad \forall y \in C. \tag{1.3}$$
The set of solutions of (1.3) is denoted by $\mathrm{EP}(F)$. This problem contains fixed point problems and includes as special cases numerous problems in physics, optimization, and economics. Several methods have been proposed to solve the equilibrium problem; see, for example, [2–4].

If $F = 0$ and $\varphi = 0$, problem (1.1) reduces to the Hartman-Stampacchia variational inequality [5] of finding $x \in C$ such that
$$\langle Bx, y - x \rangle \ge 0, \quad \forall y \in C. \tag{1.4}$$
The set of solutions of (1.4) is denoted by $\mathrm{VI}(C, B)$. The variational inequality has been extensively studied in the literature [6].

If $F = 0$ and $B = 0$, problem (1.1) reduces to the minimization problem of finding $x \in C$ such that
$$\varphi(y) \ge \varphi(x), \quad \forall y \in C. \tag{1.5}$$
The set of solutions of (1.5) is denoted by $\operatorname{Argmin}(\varphi)$.

A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space $H$:
$$\min_{x \in F(S)} \frac{1}{2} \langle Ax, x \rangle - \langle x, b \rangle,$$
where $A$ is a linear bounded operator, $F(S)$ is the fixed point set of a nonexpansive mapping $S$, and $b$ is a given point in $H$ [7].

Recall that a mapping $S : C \to C$ is said to be nonexpansive if
$$\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C.$$
If $C$ is a bounded closed convex set and $S$ is a nonexpansive mapping of $C$ into itself, then $F(S)$ is nonempty [8]. We denote weak convergence and strong convergence by the notations $\rightharpoonup$ and $\to$, respectively. A mapping $B$ of $C$ into $H$ is called monotone if
$$\langle Bx - By, x - y \rangle \ge 0, \quad \forall x, y \in C.$$
A mapping $B$ of $C$ into $H$ is called $\beta$-inverse-strongly monotone if there exists a positive real number $\beta$ such that
$$\langle Bx - By, x - y \rangle \ge \beta \|Bx - By\|^2, \quad \forall x, y \in C.$$
It is obvious that any $\beta$-inverse-strongly monotone mapping $B$ is monotone and $\frac{1}{\beta}$-Lipschitz continuous. A linear bounded operator $A$ is strongly positive if there exists a constant $\bar{\gamma} > 0$ with the property
$$\langle Ax, x \rangle \ge \bar{\gamma} \|x\|^2, \quad \forall x \in H.$$
A self-mapping $f : C \to C$ is a contraction on $C$ if there exists a constant $\alpha \in (0, 1)$ such that
$$\|f(x) - f(y)\| \le \alpha \|x - y\|, \quad \forall x, y \in C.$$
We use $\Pi_C$ to denote the collection of all contractions on $C$. Note that each $f \in \Pi_C$ has a unique fixed point in $C$.
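For completeness, the Lipschitz continuity claim follows directly from the Cauchy-Schwarz inequality; a short verification in the notation above (this step is added here, not spelled out in the text): for all $x, y \in C$,
$$\beta \|Bx - By\|^2 \le \langle Bx - By, x - y \rangle \le \|Bx - By\| \, \|x - y\|,$$
so that $\|Bx - By\| \le \frac{1}{\beta} \|x - y\|$, while monotonicity is immediate since the middle term dominates a nonnegative quantity.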

Iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems. Convex minimization problems have a great impact and influence on the development of almost all branches of pure and applied sciences. Let $A : H \to H$ be a single-valued nonlinear mapping and let $M : H \to 2^H$ be a set-valued mapping. The variational inclusion problem is to find $x \in H$ such that
$$\theta \in Ax + Mx, \tag{1.12}$$
where $\theta$ is the zero vector in $H$. The set of solutions of problem (1.12) is denoted by $I(A, M)$. The variational inclusion has been extensively studied in the literature; see, for example, [9–12] and the references therein.
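As an illustration of the scope of (1.12) (an observation added here, not taken from the original): if $M$ is the normal cone mapping $N_C$ of a closed convex set $C$ (equivalently, the subdifferential of the indicator function of $C$), then the inclusion reduces to a variational inequality of the form (1.4):
$$\theta \in Ax + N_C x \iff -Ax \in N_C x \iff \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C,$$
so that $I(A, N_C) = \mathrm{VI}(C, A)$.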

A set-valued mapping $M : H \to 2^H$ is called monotone if, for all $x, y \in H$, $f \in Mx$ and $g \in My$ imply $\langle x - y, f - g \rangle \ge 0$. A monotone mapping $M$ is maximal if its graph $G(M)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $M$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for all $(y, g) \in G(M)$ implies $f \in Mx$.
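A standard example of a maximal monotone mapping (included here for orientation; it is not used explicitly in the original text) is the subdifferential of a proper convex lower semicontinuous function $g : H \to (-\infty, +\infty]$,
$$\partial g(x) = \{ w \in H : g(y) \ge g(x) + \langle w, y - x \rangle, \ \forall y \in H \},$$
whose maximal monotonicity is a classical result of Rockafellar.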

Let $B$ be an inverse-strongly monotone mapping of $C$ into $H$, let $N_C v$ be the normal cone to $C$ at $v \in C$, that is, $N_C v = \{ w \in H : \langle v - u, w \rangle \ge 0, \ \forall u \in C \}$, and define
$$Tv = \begin{cases} Bv + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$
Then $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in \mathrm{VI}(C, B)$ [13].
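The equivalence stated above can be checked in one line from the definition of the normal cone (a verification added here):
$$0 \in Tv \iff -Bv \in N_C v \iff \langle v - u, -Bv \rangle \ge 0, \ \forall u \in C \iff \langle Bv, u - v \rangle \ge 0, \ \forall u \in C \iff v \in \mathrm{VI}(C, B).$$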

Let $M : H \to 2^H$ be a set-valued maximal monotone mapping; then the single-valued mapping $J_{M, \lambda} : H \to H$ defined by
$$J_{M, \lambda}(x) = (I + \lambda M)^{-1}(x), \quad x \in H,$$
is called the resolvent operator associated with $M$, where $\lambda$ is any positive number and $I$ is the identity mapping. It is worth mentioning that the resolvent operator $J_{M, \lambda}$ is nonexpansive and 1-inverse-strongly monotone and that a solution of problem (1.12) is a fixed point of the operator $J_{M, \lambda}(I - \lambda A)$ for all $\lambda > 0$; see [14]. That is, $I(A, M) = F(J_{M, \lambda}(I - \lambda A))$ for all $\lambda > 0$.
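The fixed-point characterization quoted above is straightforward to verify from the definition of the resolvent (a verification added here):
$$x = J_{M, \lambda}(x - \lambda A x) \iff x - \lambda A x \in x + \lambda M x \iff -Ax \in Mx \iff \theta \in Ax + Mx,$$
so that $x \in F\big(J_{M, \lambda}(I - \lambda A)\big)$ if and only if $x \in I(A, M)$.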

In 2000, Moudafi [15] introduced the viscosity approximation method for nonexpansive mappings and proved that, if $H$ is a real Hilbert space, the sequence $\{x_n\}$ defined by the iterative method below, with the initial guess $x_0 \in C$ chosen arbitrarily,
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) S x_n, \quad n \ge 0,$$
where $\{\alpha_n\} \subset (0, 1)$ satisfies certain conditions, converges strongly to a fixed point of $S$ (say $\bar{x} \in C$), which is the unique solution of the following variational inequality:
$$\langle (I - f)\bar{x}, x - \bar{x} \rangle \ge 0, \quad \forall x \in F(S).$$

In 2006, Marino and Xu [7] introduced a general iterative method for nonexpansive mappings. They defined the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n A) S x_n, \quad n \ge 0, \tag{1.17}$$
where $\{\alpha_n\} \subset (0, 1)$ and $A$ is a strongly positive linear bounded operator. They proved that, if $0 < \gamma < \bar{\gamma} / \alpha$ and the sequence $\{\alpha_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.17) converges strongly to a fixed point of $S$ (say $\bar{x}$), which is the unique solution of the following variational inequality:
$$\langle (A - \gamma f)\bar{x}, x - \bar{x} \rangle \ge 0, \quad \forall x \in F(S),$$
which is the optimality condition for the minimization problem
$$\min_{x \in F(S)} \frac{1}{2} \langle Ax, x \rangle - h(x),$$
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).
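The link between the variational inequality and the minimization problem can be made explicit as follows (a remark added here, under the customary assumption that $A$ is self-adjoint): the functional $x \mapsto \frac{1}{2}\langle Ax, x \rangle - h(x)$ has gradient $Ax - \gamma f(x)$, and the first-order optimality condition for a minimizer $\bar{x}$ over the closed convex set $F(S)$ reads
$$\langle A\bar{x} - \gamma f(\bar{x}), x - \bar{x} \rangle \ge 0, \quad \forall x \in F(S),$$
which is exactly the variational inequality above.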

In 2010, Yao and Liou [16] introduced the following composite iterative scheme in a real Hilbert space: for all , where . Furthermore, they proved that and converge strongly to the same point , where is the projection of onto .

In 2011, Tan and Chang [11] introduced the following iterative process for a sequence of nonexpansive mappings. Let be the sequence defined by where , and . Then the sequence defined by (1.21) converges strongly to a common element of the set of fixed points of the nonexpansive mappings, the set of solutions of the variational inequality, and the set of solutions of the generalized equilibrium problem.

In this paper, we modify the iterative methods (1.17), (1.20), and (1.21) by proposing the following new general viscosity iterative method: , for all , where , , and satisfy some appropriate conditions. Consequently, we show that, under some control conditions, the sequence converges strongly to a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of the generalized mixed equilibrium problem, and the set of solutions of the variational inclusion in a real Hilbert space.

2. Preliminaries

Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. Recall that the (nearest point) projection $P_C$ from $H$ onto $C$ assigns to each $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$
The following lemma characterizes the projection $P_C$. We recall some lemmas which will be needed in the rest of this paper.

Lemma 2.1. The point $u \in C$ is a solution of the variational inequality (1.4) if and only if $u$ satisfies the relation $u = P_C(u - \lambda B u)$ for all $\lambda > 0$.

Lemma 2.2. For a given $z \in H$ and $u \in C$, $u = P_C z$ if and only if $\langle u - z, v - u \rangle \ge 0$ for all $v \in C$.
It is well known that $P_C$ is a firmly nonexpansive mapping of $H$ onto $C$ and satisfies
$$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle, \quad \forall x, y \in H.$$
Moreover, $P_C x$ is characterized by the following properties: $P_C x \in C$ and, for all $x \in H$, $y \in C$,
$$\langle x - P_C x, y - P_C x \rangle \le 0, \qquad \|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2.$$
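For completeness, the firm nonexpansiveness inequality follows from the first characterizing property applied twice, once at $x$ with the test point $P_C y$ and once at $y$ with the test point $P_C x$, and then adding (a verification added here):
$$\langle x - P_C x, P_C y - P_C x \rangle + \langle y - P_C y, P_C x - P_C y \rangle \le 0 \;\Longrightarrow\; \|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle.$$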

Lemma 2.3 (see [17]). Let $M : H \to 2^H$ be a maximal monotone mapping and let $A : H \to H$ be a monotone and Lipschitz continuous mapping. Then the mapping $M + A : H \to 2^H$ is a maximal monotone mapping.

Lemma 2.4 (see [18]). Each Hilbert space $H$ satisfies Opial's condition; that is, for any sequence $\{x_n\} \subset H$ with $x_n \rightharpoonup x$, the inequality
$$\liminf_{n \to \infty} \|x_n - x\| < \liminf_{n \to \infty} \|x_n - y\|$$
holds for each $y \in H$ with $y \ne x$.

Lemma 2.5 (see [19]). Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=1}^{\infty} \gamma_n = \infty$; (ii) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$. Then $\lim_{n \to \infty} a_n = 0$.
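To see the role of condition (i), consider the special case $\delta_n = 0$ (an illustrative remark added here): the recursion telescopes, and
$$a_{n+1} \le \prod_{k=0}^{n} (1 - \gamma_k)\, a_0 \le \exp\!\Big(-\sum_{k=0}^{n} \gamma_k\Big) a_0 \to 0 \quad \text{as } n \to \infty,$$
since $1 - t \le e^{-t}$ for $t \ge 0$ and $\sum_{n} \gamma_n = \infty$.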

Lemma 2.6 (see [20]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $S : C \to C$ be a nonexpansive mapping. Then $I - S$ is demiclosed at zero; that is, $x_n \rightharpoonup x$ and $x_n - S x_n \to 0$ imply $x = S x$.

For solving the generalized mixed equilibrium problem, let us assume that the bifunction $F : C \times C \to \mathbb{R}$ and the nonlinear mapping $B : C \to H$, which is continuous and monotone, satisfy the following conditions:
(A1) $F(x, x) = 0$ for all $x \in C$;
(A2) $F$ is monotone, that is, $F(x, y) + F(y, x) \le 0$ for any $x, y \in C$;
(A3) for each fixed $y \in C$, $x \mapsto F(x, y)$ is weakly upper semicontinuous;
(A4) for each fixed $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous;
(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,
$$F(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r} \langle y_x - z, z - x \rangle < 0;$$
(B2) $C$ is a bounded set.
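As an illustration of these conditions (an example added here, not part of the original hypotheses), the bifunction associated with a variational inequality of the form (1.4), namely $F(x, y) = \langle Bx, y - x \rangle$ with $B : C \to H$ monotone, satisfies (A1), (A2), and (A4) immediately:
$$F(x, x) = \langle Bx, x - x \rangle = 0, \qquad F(x, y) + F(y, x) = -\langle Bx - By, x - y \rangle \le 0,$$
and $y \mapsto F(x, y)$ is affine, hence convex and lower semicontinuous; condition (A3) holds under suitable continuity assumptions on $B$.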

Lemma 2.7 (see [21]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)–(A4) and let $\varphi : C \to \mathbb{R}$ be convex and lower semicontinuous such that $C \cap \operatorname{dom} \varphi \ne \emptyset$. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, there exists $u \in C$ such that
$$F(u, y) + \varphi(y) - \varphi(u) + \frac{1}{r} \langle y - u, u - x \rangle \ge 0, \quad \forall y \in C.$$
Define a mapping $T_r : H \to C$ as follows:
$$T_r(x) = \Big\{ u \in C : F(u, y) + \varphi(y) - \varphi(u) + \frac{1}{r} \langle y - u, u - x \rangle \ge 0, \ \forall y \in C \Big\}$$
for all $x \in H$. Then the following hold: (i) $T_r$ is single-valued; (ii) $T_r$ is firmly nonexpansive, that is, $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y \rangle$ for any $x, y \in H$; (iii) $F(T_r) = \mathrm{MEP}(F, \varphi)$, the set of solutions of the associated mixed equilibrium problem; (iv) $\mathrm{MEP}(F, \varphi)$ is closed and convex.

Lemma 2.8 (see [7]). Assume $A$ is a strongly positive linear bounded operator on a Hilbert space $H$ with coefficient $\bar{\gamma} > 0$ and $0 < \rho \le \|A\|^{-1}$; then $\|I - \rho A\| \le 1 - \rho \bar{\gamma}$.
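A short justification (added here, under the customary additional assumption that $A$ is self-adjoint): for $\|x\| = 1$,
$$\langle (I - \rho A)x, x \rangle = 1 - \rho \langle Ax, x \rangle \ge 1 - \rho \|A\| \ge 0,$$
so $I - \rho A$ is a positive self-adjoint operator, and therefore
$$\|I - \rho A\| = \sup_{\|x\| = 1} \langle (I - \rho A)x, x \rangle \le \sup_{\|x\| = 1} \big(1 - \rho \bar{\gamma} \|x\|^2\big) = 1 - \rho \bar{\gamma}.$$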

3. Strong Convergence Theorems

In this section, we prove a strong convergence theorem for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of the generalized mixed equilibrium problem, and the set of solutions of the variational inclusion for inverse-strongly monotone mappings in a real Hilbert space.

Theorem 3.1. Let $H$ be a real Hilbert space and let $C$ be a closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4), let and be -inverse-strongly monotone mappings, let $\varphi : C \to \mathbb{R}$ be a convex and lower semicontinuous function, let $f$ be a contraction with coefficient $\alpha$, let $M$ be a maximal monotone mapping, and let be a strongly positive linear bounded operator of $H$ into itself with coefficient $\bar{\gamma}$; assume that . Let $S$ be a nonexpansive mapping of $C$ into itself, and assume that either (B1) or (B2) holds such that . Suppose $\{x_n\}$ is a sequence generated by the following algorithm, with the initial point chosen arbitrarily: where such that , and with satisfy the following conditions: (C1) , , and ; (C2) and .
Then $\{x_n\}$ converges strongly to , which solves the following variational inequality: which is the optimality condition for the minimization problem where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).

Proof. By condition (C1), we may assume, without loss of generality, that for all . By Lemma 2.8, we have . Next, we will assume that .
Step 1. We show that $\{x_n\}$ is bounded.
Since are -inverse-strongly monotone mappings, we have In a similar way, we can obtain It is clear that if , then are all nonexpansive.
Put . It follows that By Lemma 2.7, we have for all . Then, we have Put for all . From (3.2), we deduce that It follows by induction that Therefore $\{x_n\}$ is bounded, and so are , , , , and .
Step 2. We claim that . From (3.2), we have We estimate , so we have Substituting (3.12) into (3.11) gives We note that Next, we estimate ; then we get Substituting (3.15) into (3.14), we obtain that Substituting (3.12) and (3.16) into (3.11), we get where is a constant satisfying This, together with (C1), (C2), and Lemma 2.5, implies that From (3.15), we also have as .

Step 3. We show the following: (i) ; (ii) . For and , we get It follows that By the convexity of the norm, we have Substituting (3.8) and (3.21) into (3.22), we obtain So, we obtain where . By conditions (C1) and (C2), we then obtain that as . We use this inequality in (3.21), so that Substituting (3.20) into (3.25), we have Substituting (3.8) and (3.26) into (3.22), we obtain So, we also have where . By conditions (C1) and (C2), we then obtain that as .

Step 4. We show the following: (i) ; (ii) ; (iii) . Since is firmly nonexpansive, we observe that Hence, we have Since is 1-inverse-strongly monotone, we have which implies that Substituting (3.32) into (3.25), we have Substituting (3.30) and (3.33) into (3.22), we obtain Then, we derive By conditions (C1) and (C2), , and , we have as . It follows that We note that . From , and hence . Since , by (3.37) and , we obtain Therefore, we observe that By condition (C1), we have as . Next, we observe that By (3.39) and (3.40), we have as .

Step 5. We show that and . It is easy to see that is a contraction of into itself. Indeed, since , we have . Since is complete, there exists a unique fixed point such that . By Lemma 2.2, we obtain that for all .
Next, we show that , where is the unique solution of the variational inequality . We can choose a subsequence of such that Since is bounded, there exists a subsequence of which converges weakly to . We may assume without loss of generality that . We claim that , since and by Lemma 2.6, we have .
Next, we show that . Since , we know that It follows by (A2) that Hence, For and , let . From (3.46), we have From , we have . Further, from (A4) and the weak lower semicontinuity of and , we have From (A1), (A4), and (3.48), we have and hence Letting , we have, for each , This implies that .
Lastly, we show that . In fact, since is -inverse-strongly monotone, it is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.3 that is maximal monotone. Let ; since , and again since , we have , that is, . By virtue of the maximal monotonicity of , we have and hence From , we have and that . It follows from the maximal monotonicity of that , that is, . Therefore, . It follows that
Step 6. We prove that . Using (3.2) and , together with the Schwarz inequality, we have where and . It is clear that and . Hence, all the conditions of Lemma 2.5 are satisfied, and we conclude that . This completes the proof.

Corollary 3.2. Let $H$ be a real Hilbert space and let $C$ be a closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4), let and be -inverse-strongly monotone mappings, let $\varphi : C \to \mathbb{R}$ be a convex and lower semicontinuous function, let $f$ be a contraction with coefficient $\alpha$, and let $M$ be a maximal monotone mapping. Let $S$ be a nonexpansive mapping of $C$ into itself, and assume that either (B1) or (B2) holds such that . Suppose $\{x_n\}$ is a sequence generated by the following algorithm, with the initial point chosen arbitrarily: where , such that , and with satisfy the following conditions: (C1) , and ; (C2) and .
Then $\{x_n\}$ converges strongly to , which solves the following variational inequality:

Proof. Putting and in Theorem 3.1, we obtain the desired conclusion immediately.

Corollary 3.3. Let $H$ be a real Hilbert space and let $C$ be a closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4), let and be -inverse-strongly monotone mappings, let $\varphi : C \to \mathbb{R}$ be a convex and lower semicontinuous function, and let $M$ be a maximal monotone mapping. Let $S$ be a nonexpansive mapping of $C$ into itself, and assume that either (B1) or (B2) holds such that . Suppose $\{x_n\}$ is a sequence generated by the following algorithm, with the initial point chosen arbitrarily: where such that , and with satisfy the following conditions: (C1) , and ; (C2) and . Then $\{x_n\}$ converges strongly to , which solves the following variational inequality:

Proof. Putting in Corollary 3.2, we obtain the desired conclusion immediately.

Corollary 3.4. Let $H$ be a real Hilbert space and let $C$ be a closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4), let and be -inverse-strongly monotone mappings, let $\varphi : C \to \mathbb{R}$ be a convex and lower semicontinuous function, let $f$ be a contraction with coefficient $\alpha$, and let be a strongly positive linear bounded operator of $H$ into itself with coefficient $\bar{\gamma}$. Assume that . Let $S$ be a nonexpansive mapping of $C$ into itself, and assume that either (B1) or (B2) holds such that . Suppose $\{x_n\}$ is a sequence generated by the following algorithm, with the initial point chosen arbitrarily: where , such that , and with satisfying the following conditions: (C1) , and ; (C2) and .
Then $\{x_n\}$ converges strongly to , which solves the following variational inequality:

Proof. Taking in Theorem 3.1, we obtain the desired conclusion immediately.

Corollary 3.5. Let $H$ be a real Hilbert space and let $C$ be a closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4), let and be -inverse-strongly monotone mappings, let $\varphi : C \to \mathbb{R}$ be a convex and lower semicontinuous function, and let $f$ be a contraction with coefficient $\alpha$. Let $S$ be a nonexpansive mapping of $C$ into itself, and assume that either (B1) or (B2) holds such that . Suppose $\{x_n\}$ is a sequence generated by the following algorithm, with the initial point chosen arbitrarily: where and with satisfying the following conditions: (C1) , and ; (C2) and .
Then $\{x_n\}$ converges strongly to , which solves the following variational inequality:

Proof. Taking , and in Theorem 3.1, we obtain the desired conclusion immediately.

Remark 3.6. Corollary 3.5 generalizes and improves the result of Yao and Liou [16].

4. Some Applications

In this section, we apply the iterative scheme (1.22) for finding a common fixed point of a nonexpansive mapping and a strictly pseudocontractive mapping, and we also apply Theorem 3.1 for finding a common fixed point of nonexpansive mappings and inverse-strongly monotone mappings.

Definition 4.1. A mapping $T : C \to C$ is called a strict pseudocontraction if there exists a constant $0 \le k < 1$ such that
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + k \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$
If $k = 0$, then $T$ is nonexpansive. In this case, we say that $T$ is a $k$-strict pseudocontraction. Put $A = I - T$. Then, as verified below, $A$ is a $\frac{1 - k}{2}$-inverse-strongly monotone mapping.
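A short verification of this standard fact, in the notation above (the intermediate computation is added here):
$$\|Tx - Ty\|^2 = \|(x - y) - (Ax - Ay)\|^2 = \|x - y\|^2 - 2\langle x - y, Ax - Ay \rangle + \|Ax - Ay\|^2,$$
so the defining inequality $\|Tx - Ty\|^2 \le \|x - y\|^2 + k \|Ax - Ay\|^2$ becomes
$$2 \langle x - y, Ax - Ay \rangle \ge (1 - k) \|Ax - Ay\|^2, \quad \text{that is,} \quad \langle x - y, Ax - Ay \rangle \ge \frac{1 - k}{2} \|Ax - Ay\|^2.$$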

Using Theorem 3.1, we first prove a strong convergence theorem for finding a common fixed point of a nonexpansive mapping and a strict pseudocontraction.

Theorem 4.2. Let $H$ be a real Hilbert space and let $C$ be a closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4), let be a -inverse-strongly monotone mapping, let $\varphi : C \to \mathbb{R}$ be a convex and lower semicontinuous function, let $f$ be a contraction with coefficient $\alpha$, and let be a strongly positive linear bounded operator of $H$ into itself with coefficient $\bar{\gamma}$. Assume that . Let $S$ be a nonexpansive mapping of $C$ into itself and let $T$ be a $k$-strict pseudocontraction of $C$ into itself. Assume that either (B1) or (B2) holds such that . Suppose $\{x_n\}$ is a sequence generated by the following algorithm, with the initial point chosen arbitrarily: for all , where , and . If for some with , is chosen so that for some with , and satisfy conditions (C1) and (C2) in Theorem 3.1.
Then $\{x_n\}$ converges strongly to , which solves the following variational inequality: which is the optimality condition for the minimization problem where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).

Proof. Put $A = I - T$; then $A$ is $\frac{1 - k}{2}$-inverse-strongly monotone and and . So, by Theorem 3.1, we obtain the desired result.

Corollary 4.3. Let $H$ be a real Hilbert space and let $C$ be a closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4), let be a -inverse-strongly monotone mapping, and let $\varphi : C \to \mathbb{R}$ be a convex and lower semicontinuous function. Let $f$ be a contraction with coefficient $\alpha$, let $S$ be a nonexpansive mapping of $C$ into itself, and let $T$ be a $k$-strict pseudocontraction of $C$ into itself. Assume that either (B1) or (B2) holds such that . Suppose $\{x_n\}$ is a sequence generated by the following algorithm, with the initial point chosen arbitrarily: for all , where , and . If is chosen for some with , is chosen so that for some with , and satisfy conditions (C1) and (C2) in Theorem 3.1.

Then $\{x_n\}$ converges strongly to , which solves the following variational inequality:

Proof. Putting and in Theorem 4.2, we obtain the desired result.

Acknowledgments

The authors would like to express their thanks to the referees for their helpful comments and suggestions. The authors would like to thank the Higher Education Research Promotion and National Research University Project of Thailand, Office of the Higher Education Commission, for financial support under the Computational Science and Engineering Research Cluster (CSEC) Grant no. 54000267. Moreover, the second author was supported by the Commission on Higher Education and the Thailand Research Fund under Grant MRG5380044.