Abstract
The purpose of this article is to suggest a modified subgradient extragradient method that combines double inertial extrapolations with a viscosity approach for finding a common solution of the split equilibrium problem and the fixed point problem. The strong convergence of the suggested method is obtained under some standard assumptions on the control parameters. Our method does not require solving two strongly convex optimization problems over the feasible sets per iteration, and the step-sizes do not depend on the bifunctional Lipschitz-type constants. Furthermore, unlike several methods in the literature, our method does not depend on prior knowledge of the norm of the bounded linear operator; instead, the step-sizes are updated self-adaptively. We apply our method to solve the split variational inequality problem. Lastly, we conduct some numerical tests to compare our method with some well-known methods in the literature.
1 Introduction
Let C and Q be two nonempty, closed and convex subsets of two real Hilbert spaces \(H_1\) and \(H_2\), respectively. The split equilibrium problem (SEP) is formulated as follows: Find \({\bar{x}}\in C\) such that
and \({\bar{y}}=T{\bar{x}}\in Q\) solves
where \(f:C\times C\rightarrow {\mathbb {R}}\) and \(g:Q\times Q\rightarrow {\mathbb {R}}\) are equilibrium bifunctions, and \(T:H_1\rightarrow H_2\) is a bounded linear operator. This problem was introduced and studied by Moudafi [18]. It is important to note that the SEP properly includes the multiple-set split feasibility problem. Thus, it contains the split variational inequality problem, which is a generalization of the split zero problem and the split feasibility problem [1, 16, 23, 25, 26]. If \(g=0\) and \(T=0\), then the SEP becomes the following classical equilibrium problem (EP): Find \({\bar{x}}\in C\) such that
The solution set of the EP (1.3) is denoted by SOL(f, C). It is well known that the EP (1.3) has applications to several mathematical problems, such as the variational inequality problem, optimization problem, Nash equilibrium problem, minimax problem, fixed point problem and saddle point problem (see [4, 12, 19] and the references therein). One of the widely used methods for solving the EP (1.3), when the bifunction f is monotone, is the proximal point method [17]. It is worth noting that the weak convergence of this method cannot be guaranteed under weaker assumptions, such as pseudomonotonicity of the bifunction. To surmount this limitation, Tran et al. [28] introduced the extragradient method for the EP (1.3) as follows:
where f is pseudomonotone and Lipschitz-type continuous with positive constants \(L_1\) and \(L_2\), and \(0<\lambda <\min \{\frac{1}{2L_1}, \frac{1}{2L_2}\}.\) The authors showed that the method (1.4) converges weakly to a solution of the EP (1.3).
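To make the extragradient iteration (1.4) concrete, consider the special case \(f(x,y)=\langle Ax, y-x\rangle \) for an operator A, in which each strongly convex subproblem reduces to a projection. The following Python sketch is our own illustration, with a hypothetical monotone operator and a box constraint; it is not the authors' implementation.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box C = [lo, hi]^n
    return np.clip(x, lo, hi)

def extragradient_vip(A, x0, lam=0.1, iters=1500):
    # Extragradient method (1.4) in the special case f(x, y) = <Ax, y - x>,
    # where each argmin subproblem has the closed form of a projection:
    #   y_m = P_C(x_m - lam * A(x_m)),  x_{m+1} = P_C(x_m - lam * A(y_m)).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_box(x - lam * A(x))   # first (forward) projection step
        x = proj_box(x - lam * A(y))   # second (extragradient) step
    return x

# A rotation-plus-shrink operator: monotone and Lipschitz, unique solution x* = 0.
A = lambda x: np.array([x[1], -x[0]]) + 0.1 * x
x_star = extragradient_vip(A, [0.9, -0.7])
```

With \(\lambda \) below the reciprocal of the Lipschitz constant, the iterates spiral into the unique solution at the origin.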
In recent years, inertial-type algorithms have been studied by many researchers. This concept emanated from an implicit discretization (the heavy ball method) of a second-order dynamical system in time [2, 3], and it has been shown by many authors that the inclusion of inertial terms in iterative methods enhances their speed of convergence (see [5, 6, 14, 29] and the references therein). In [29], Vinh and Muu introduced an inertial extragradient method for solving the EP as follows:
where f is pseudomonotone and Lipschitz-type continuous with positive constants \(L_1\) and \(L_2\) such that \(0<\lambda <\min \{\frac{1}{2L_1}, \frac{1}{2L_2}\}\), and \(\theta _{m}\) is a suitable parameter. It is not hard to see that if \(\theta _{m}=0\) for all \(m\in {\mathbb {N}}\), then (1.5) reduces to (1.4). Moreover, they proved a strong convergence result for the following algorithm for solving the EP (1.3):
where f is pseudomonotone and Lipschitz-type continuous with positive constants \(L_1\) and \(L_2\) such that \(0<\lambda <\min \{\frac{1}{2L_1}, \frac{1}{2L_2}\},\) \(\{\beta _{m}\},\{\gamma _{m}\}\subset (0,1)\) such that \(\lim \limits _{m\rightarrow \infty }\gamma _{m}=0\), \(\sum \limits _{m=0}^{\infty }\gamma _{m}=\infty \), and \(\liminf _{m\rightarrow \infty }\beta _{m}(1-\beta _{m}-\gamma _{m})>0\).
On the other hand, the SEP is a combination of a pair of EPs, and aims at finding a solution \({\bar{x}}\) of an EP such that its image \({\bar{y}} = T{\bar{x}}\) under a given bounded linear operator T also solves another equilibrium problem. The solution set of SEP is denoted by
For some years now, many methods have focused on solving the SEP (see, [10, 21, 26] and the references therein). For instance, He [13] proposed the following proximal point method for solving the SEP (1.1)–(1.2):
where C and Q are two nonempty convex and closed subsets of the real Hilbert spaces \(H_1\) and \(H_2\), respectively, \(f:C\times C\rightarrow {\mathbb {R}}\) and \(g:Q\times Q\rightarrow {\mathbb {R}}\) are two monotone bifunctions, \(\eta \in \left( 0,\frac{1}{\Vert T\Vert ^2}\right) \), \(\{r_m\}\subset (0,+\infty )\) such that \(\liminf _{m\rightarrow \infty }r_m>0\), and \(T^*\) is the adjoint of the bounded linear operator \(T:H_1\rightarrow H_2\). The author showed that the sequence generated by (1.8) converges weakly to a solution of the SEP (1.1)–(1.2). In [16], Kim and Dinh introduced the following extragradient method for solving the SEP (1.1)–(1.2):
where the bifunctions \(f:H_1\times H_1\rightarrow {\mathbb {R}}\) and \(g:H_2\times H_2\rightarrow {\mathbb {R}}\) are pseudomonotone and Lipschitz-type continuous with positive constants \(L_1\) and \(L_2\), \(\{\lambda _m\},\{\mu _m\}\subset [\xi ,{\bar{\xi }}]\) with \(0<\xi \le {\bar{\xi }}<\min \{\frac{1}{2L_1}, \frac{1}{2L_2}\},\) and \(\eta \in \left( 0,\frac{1}{\Vert T\Vert ^2}\right) \). They proved that their method converges weakly to a solution of the SEP (1.1)–(1.2).
It is important to outline the limitations of the above methods (1.4)–(1.9) as follows:
-
(1)
The extragradient methods (1.4)–(1.6) require solving two strongly convex optimization problems over the feasible set C per iteration, and \(\lambda \) depends on the bifunctional Lipschitz-type constants. Also, (1.9) requires solving four strongly convex optimization problems over the feasible sets C and Q per iteration, and \(\lambda _m\) and \(\mu _m\) depend on the bifunctional Lipschitz-type constants.
-
(2)
The step-size \(\eta \) of the extragradient methods (1.8) and (1.9) depends on the prior knowledge of the operator norm \(\Vert T\Vert \) of the bounded linear operator.
-
(3)
The method (1.9) cannot be used to solve the problem (1.1)–(1.2) under the setting of \(g:Q\times Q\rightarrow {\mathbb {R}}\), since one cannot guarantee that \(Tz_m\) belongs to the considered closed convex set Q.
The above-mentioned drawbacks limit the applicability of the concerned methods and also affect their computational efficiency. In order to overcome the difficulty of solving two strongly convex optimization problems over the feasible sets per iteration, the subgradient extragradient method (SEM) of [8, 9] was reformulated by Rehman et al. [22] to solve the EP (1.3). The main benefit of the SEM over the extragradient method is that the second convex optimization problem is posed over a half-space and therefore has a closed-form solution. Consequently, compared with the extragradient approach, its computational cost is lower.
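The computational saving of the SEM can be seen concretely: the half-space projection it uses has a simple closed form, unlike a projection onto a general convex set. A minimal sketch (function names are ours):

```python
import numpy as np

def proj_halfspace(x, a, b):
    # Closed-form projection of x onto the half-space H = {z : <a, z> <= b}.
    gap = a @ x - b
    if gap <= 0:
        return np.asarray(x, dtype=float)   # x already lies in H
    return x - (gap / (a @ a)) * a          # step back along the normal a

a = np.array([1.0, 2.0])
y = proj_halfspace(np.array([3.0, 1.0]), a, 1.0)  # lands on the boundary <a, y> = 1
```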
Motivated by the above results, we continue the development of an efficient method for solving the SEP (1.1)–(1.2). We suggest a modified subgradient extragradient method that combines double inertial extrapolations with a viscosity approach for finding a common solution of the SEP and the fixed point problem (FPP). The strong convergence of the suggested method is obtained under some standard assumptions on the control parameters. Our method does not require solving two strongly convex optimization problems over the feasible sets per iteration, and the step-sizes do not depend on the bifunctional Lipschitz-type constants. Furthermore, unlike several existing methods, our method does not depend on prior knowledge of the norm of the bounded linear operator; instead, the step-sizes are updated self-adaptively. We apply our method to solve the split variational inequality problem. Lastly, we conduct some numerical tests to compare our method with some well-known methods in the literature.
2 Preliminaries
In this section, we recall some known and useful results which are needed in the sequel. Let H be a real Hilbert space. We denote strong and weak convergence by “\(\rightarrow \)” and “\(\rightharpoonup \)”, respectively.
Let \(S:H_1\rightarrow H_1\) be a mapping. A point \(x\in H_1\) is called a fixed point of S if \(Sx=x\). Fixed point theory plays a vital role in many areas of applied science and engineering, such as game theory, image processing, physics, chemistry and so on. We denote the set of all fixed points of S by \(F(S)=\{x\in H_1:Sx=x\}.\) For any \(x,y \in H\) and \(\alpha \in [0,1],\) it is well known that
Definition 2.1
[7, 8, 13] Let \(T: H \rightarrow H\) be an operator. Then the operator T is called
-
(a)
L-Lipschitz continuous if there exists \(L>0\) such that
$$\begin{aligned} \Vert Tx-Ty\Vert \le L\Vert x-y\Vert , \end{aligned}$$for all \(x,y\in H.\) If \(L=1,\) then T is called nonexpansive, and T is called quasinonexpansive if
$$\begin{aligned} \Vert x-Ty\Vert =\Vert Tx-Ty\Vert \le \Vert x-y\Vert , \end{aligned}$$for all \(x\in F(T)\) and \(y\in H\);
-
(b)
monotone if
$$\begin{aligned} \langle Tx-Ty, x-y\rangle \ge 0, ~~\forall x,y\in H; \end{aligned}$$ -
(c)
pseudomonotone if
$$\begin{aligned} \langle Tx, y-x\rangle \ge 0 \Rightarrow \langle Ty, y-x\rangle \ge 0, ~~\forall x,y\in H; \end{aligned}$$ -
(d)
\(\beta \)- strongly monotone if there exists \(\beta >0,\) such that
$$\begin{aligned} \langle Tx-Ty, x-y\rangle \ge \beta \Vert x-y\Vert ^2, ~~\forall ~ x,y\in H; \end{aligned}$$ -
(e)
firmly nonexpansive if
$$\begin{aligned} \Vert Tx-Ty\Vert ^2 \le \langle Tx-Ty, x-y\rangle ~~\forall ~ x,y\in H; \end{aligned}$$or equivalently
$$\begin{aligned} \Vert Tx-Ty\Vert ^2 \le \Vert x-y\Vert ^2 -\Vert (I-T)x- (I-T)y\Vert ^2 ~~\forall ~ x,y\in H; \end{aligned}$$ -
(f)
directed (also called firmly quasi-nonexpansive) if \(F(T)\ne \emptyset \) and
$$\begin{aligned} \langle x-p, Tx-p\rangle \ge \Vert Tx-p\Vert ^2 ~~\forall ~ x\in H ~~\text {and}~~p\in F(T); \end{aligned}$$ -
(g)
sequentially weakly continuous if, for each sequence \(\{x_n\}\) converging weakly to x, the sequence \(\{Tx_n\}\) converges weakly to Tx.
Definition 2.2
[7, 11, 13, 16, 21] A bifunction \(g: C\times C \rightarrow {\mathbb {R}}\) is said to be:
-
(a)
Strongly monotone on C if there exists a constant \(\tau >0\) such that
$$\begin{aligned} g(x,y) + g(y,x)\le -\tau \Vert x-y\Vert ^2 \end{aligned}$$(2.6)for all \(x,y\in C;\)
-
(b)
Monotone on C if
$$\begin{aligned} g(x,y) + g(y,x)\le 0 \end{aligned}$$(2.7)for all \(x,y\in C;\)
-
(c)
Strongly pseudomonotone on C if there exists a constant \(\gamma >0\) such that
$$\begin{aligned} g(x,y)\ge 0\Rightarrow g(y,x)\le -\gamma \Vert x-y\Vert ^2, ~~\forall x,y\in C; \end{aligned}$$ -
(d)
Pseudomonotone on C if
$$\begin{aligned} g(x,y)\ge 0 \Rightarrow g(y,x)\le 0, ~~\forall ~ x,y\in C; \end{aligned}$$ -
(e)
Satisfying a Lipschitz-like condition if there exist two positive constants \(L_1, L_2\) such that
$$\begin{aligned} g(x,y)+g(y,z)\ge g(x,z)-L_1\Vert x-y\Vert ^2-L_2\Vert y-z\Vert ^2~~\forall ~ x,y,z\in C.\nonumber \\ \end{aligned}$$(2.8)
Let C be a nonempty, closed and convex subset of H. For any \(u\in H,\) there exists a unique point \(P_{C}u\in C\) such that
The operator \(P_C\) is called the metric projection of H onto C. It is well-known that \(P_C\) is a nonexpansive mapping and that \(P_C\) satisfies
for all \(x,y \in H.\) Furthermore, \(P_C\) is characterized by the property
and
for all \(x\in H\) and \(y\in C.\) A subset C of H is called proximal if for each \(x\in H,\) there exists \(y\in C\) such that
The Hausdorff metric on H is defined as follows
for all subsets A and B of H.
The normal cone \(N_C\) to C at a point \(x\in C\) is defined by \(N_C(x)=\{z\in H:\langle z, x-y\rangle \ge 0~\forall y\in C\}.\)
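The characterizing property of \(P_C\) above can be checked numerically; the sketch below (our own, for a box in \({\mathbb {R}}^5\)) verifies the variational inequality \(\langle x-P_Cx, y-P_Cx\rangle \le 0\) on random feasible points:

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection onto the box C = prod_i [lo_i, hi_i]
    return np.minimum(np.maximum(x, lo), hi)

rng = np.random.default_rng(0)
lo, hi = -np.ones(5), np.ones(5)
x = rng.normal(scale=3.0, size=5)
px = proj_box(x, lo, hi)

# Characterizing property: <x - P_C x, y - P_C x> <= 0 for every y in C.
ys = rng.uniform(lo, hi, size=(200, 5))
max_inner = max(float((x - px) @ (y - px)) for y in ys)
```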
Lemma 2.3
[11] Let C be a convex subset of a real Hilbert space H and \(\phi :C \rightarrow {\mathbb {R}}\) be a subdifferentiable function on C. Then \(x^*\) is a solution to the convex problem: \(\text {minimize}\{\phi (x): x\in C\}\) if and only if \(0\in \partial \phi (x^*) + N_C(x^*),\) where \(\partial \phi (x^*)\) denotes the subdifferential of \(\phi \) and \(N_C(x^*)\) is the normal cone of C at \(x^*.\)
Lemma 2.4
[24] Let \(\{a_m\}\) be a sequence of positive real numbers, \(\{\alpha _{m}\}\) be a sequence of real numbers in (0, 1) such that \(\sum _{m=1}^{\infty }\alpha _m=\infty \) and \(\{d_m\}\) be a sequence of real numbers. Suppose that
If \(\limsup _{k\rightarrow \infty }d_{m_k} \le 0\) for all subsequences \(\{a_{m_k}\}\) of \(\{a_{m}\}\) satisfying the condition
then, \(\lim \limits _{m\rightarrow \infty }a_{m} =0.\)
3 Proposed algorithm
In this section, we present our proposed method for solving problem (1.1)–(1.2).
Assumption 3.1
Condition A. Suppose that \(f: H_2\times H_2 \rightarrow {\mathbb {R}}\) and \(g:H_1\times H_1\rightarrow {\mathbb {R}}\) satisfy the following conditions:
-
(1)
f is pseudomonotone on Q and satisfies the Lipschitz-type condition (2.8) on \(H_2\) with positive constants \(L_1, L_2\);
-
(2)
g is pseudomonotone on C and satisfies the Lipschitz-type condition (2.8) on \(H_1\) with positive constants \(L_3, L_4\);
-
(3)
\(g(\cdot , u)\) and \(f(\cdot , y)\) are sequentially weakly upper semi-continuous on C and Q for each fixed \(y\in C\) and \(u\in Q;\)
-
(4)
\(g(x, \cdot )\) and \(f(y, \cdot )\) are convex, lower semi-continuous on \(H_1\) and \(H_2\) for every fixed \(x\in H_1\) and \(y\in H_2.\)
In addition, we suppose that
-
(1)
\(T: H_1 \rightarrow H_2\) is a bounded linear operator with the adjoint operator \(T^*;\)
-
(2)
\(h: H_1\rightarrow H_1\) is a contraction with constant \(\rho \in [0,1);\)
-
(3)
\(S:H_1\rightarrow H_1\) is a quasinonexpansive mapping;
-
(4)
The solution set \(\Omega =\Gamma \cap F(S)\ne \emptyset ,\) where \(\Gamma \) is as defined in (1.7).
Condition B. Suppose that \(\{\alpha _{m}\}, \{\eta _{m}\}, \{\beta _m\}, \{\epsilon _m\}\) and \(\{\delta _m\}\) are positive sequences such that
-
(1)
\(\{\alpha _{m}\}\subset (0,1),\) \(\lim _{m\rightarrow \infty }\alpha _{m} =0,\) \(\sum _{m=1}^{\infty }\alpha _{m}=\infty ,\) and \(\lim _{m\rightarrow \infty } \frac{\epsilon _m}{\alpha _m}=\lim _{m\rightarrow \infty }\frac{\delta _m}{\alpha _m}=0.\)
-
(2)
\(\{\beta _m\},\{\eta _{m}\} \subset [a,b]\subset (0,1)\) such that \(\alpha _{m}+\eta _{m}+\beta _{m}=1\).
-
(3)
\( \lambda _1,\psi ,\theta ,\tau _1>0\), \(\phi ,\nu \in (0,1)\).
Remark 3.3
The sequences \(\{\lambda _{m}\}\) and \(\{\tau _m\}\) generated by (3.3) and (3.4) are monotonically non-increasing and
Proof
Clearly, \(\{\lambda _{m}\}\) is monotonically non-increasing. Also, since f satisfies condition A (1), we have
Hence \(\{\lambda _{m}\}\) is bounded below by \(\frac{\nu }{2\max \{L_1, L_2\}}.\) This implies that there exists
Using similar approach, we can show that (3.6) holds. \(\square \)
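The mechanism behind Remark 3.3 is the min-type update: each new step size is the minimum of the previous one and a candidate that the Lipschitz-type condition keeps bounded below. The simulation below is schematic only — formula (3.3) itself is not reproduced, and the candidates here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
nu, L1, L2 = 0.8, 2.0, 3.0
c_min = nu / (2 * max(L1, L2))  # lower bound supplied by the Lipschitz-type condition

# Min-type update: lam_{m+1} = min(lam_m, candidate_m) with candidate_m >= c_min.
lam = [2.5]                      # lambda_1, as in the experiments of Section 6
for _ in range(200):
    candidate = c_min + rng.exponential(0.5)
    lam.append(min(lam[-1], candidate))
lam = np.array(lam)
```

By induction the sequence is non-increasing and stays above \(\min \{\lambda _1, c_{\min }\}\), hence it converges to a positive limit.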
Remark 3.4
The choice of the step size \(\{\gamma _m\}\) in Algorithm 3.2 does not require prior knowledge of the operator norm \(\Vert T\Vert .\) In addition, the step size is well defined. To see this, observe that from Algorithm 3.2 we have \(z_m= \text {argmin}_{y\in Q_m}\{\lambda _{m} f(y_m, y)+ \frac{1}{2}\Vert y-Tw_m\Vert ^2\},\) and using Lemma 2.3, we get
Let \(p\in \text {SOL}(g,C).\) Since \(Tp\in \text {SOL}(f,Q)\subset Q\subset Q_m,\) taking \(y:=Tp,\) we get
Since \(Tp, y_m \in Q,\) we have \(f(Tp, y_m)\ge 0,\) and by the pseudomonotonicity of f, we get \(f(y_m, Tp)\le 0.\) Thus, (3.9) becomes
In addition, from \(\chi _m\in \partial f(Tw_m, \cdot )(y_m),\) we obtain
Using the definition of \(Q_m,\) we get
it follows that
From (3.13) and (3.11), we get
Also, adding (3.14) and (3.10), we obtain
From (3.15), we get
and using the step size (3.3), the fact that \(\nu \in (0,1),\) and \(\lim _{m \rightarrow \infty }\lambda _{m+1} = \lim _{m \rightarrow \infty }\lambda _{m}>0,\) so that \(\lim _{m \rightarrow \infty }\big (1- \frac{\nu \lambda _{m}}{\lambda _{m+1}} \big )= 1-\nu >0,\) we obtain
Using the Cauchy-Schwarz inequality and (3.17), we have
Since \(z_m \ne Tw_m,\) we have \(\Vert Tw_m-z_m\Vert > 0,\) and thus \(\Vert T^*(Tw_m-z_m)\Vert \Vert w_m-p\Vert >0.\) Hence \(\Vert T^*(Tw_m-z_m)\Vert \ne 0\) and so \(\gamma _m\) is well defined.
4 Convergence analysis
Lemma 4.1
Let \(\{x_m\}\) be a sequence generated by Algorithm 3.2. Then, under Assumption 3.1, \(\{x_m\}\) is bounded.
Proof
Let \(p\in SOL(g,C),\) using (3.1) and the fact that \(0\le \theta _m \le {\overline{\theta }}_m,\) we have
Therefore, it follows from \(\lim _{m \rightarrow \infty }\frac{\epsilon _m}{\alpha _{m}}=0,\) that
It follows that the sequence \(\{\frac{\theta _m}{\alpha _{m}}\Vert x_m-x_{m-1}\Vert \}\) is bounded. Hence, there exists \(N_1>0\) such that \(\frac{\theta _m}{\alpha _{m}}\Vert x_m-x_{m-1}\Vert \le N_1,\) for all \(m\in {\mathbb {N}}.\) Then, using Algorithm 3.2, we have
Similarly, we get
Furthermore, using Algorithm 3.2 and the step size, we have
which implies that
Also, using same approach as in (3.17), we have
which implies that
Finally, using Algorithm 3.2, (4.2), (4.3), and (4.7), we get
where \(N_3=N_1+N_2.\) Thus, \(\{x_m\}\) is bounded. \(\square \)
Theorem 4.2
Let \(\{x_m\}\) be the sequence generated by Algorithm 3.2. Then, under the Assumption 3.1, \(\{x_m\}\) converges strongly to \(p \in \Omega ,\) where \(p=P_{\Omega }\circ h(p).\)
Proof
Let \(p\in \Omega ,\) and using (3.9), we have
for some \(N_3=\sup _{m\in {\mathbb {N}}}\{ 2\Vert x_m -p\Vert , \alpha _{m}N_1\}>0.\)
Similarly, we get
In addition, we obtain
where \(\Psi _m= \bigg [\frac{\delta _m\Vert x_m -x_{m-1}\Vert N_3}{\alpha _{m}(1-\rho )} + \frac{\theta _m\Vert x_m -x_{m-1}\Vert N_4}{\alpha _{m}(1-\rho )} + \frac{2}{(1-\rho )}\langle h(p)-p, x_{m+1}-p\rangle \bigg ].\) According to Lemma 2.4, to conclude our proof, it is sufficient to establish that \(\limsup _{k\rightarrow \infty } \Psi _{m_k} \le 0\) for every subsequence \(\{\Vert x_{m_k}-p\Vert \}\) of \(\{\Vert x_{m}-p\Vert \}\) satisfying the condition:
From (4.11), (4.6), (4.10) and (4.9), we get
which implies that
Thus, we have
From (4.14), we have
Furthermore, using (4.11), (4.4), (4.10) and (4.9), we get
which implies that
Thus, we have
Thus, using (4.17), (3.19) and the boundedness of \(\{w_m\}\), we obtain
Now, observe that
which implies that
Adding (4.20) and (3.17), we get
Thus, using (4.18), we get
In addition, using Algorithm 3.2 and (2.5), we have
which implies
Thus, we have
Using, Algorithm 3.2, we have
and
Using (4.26), (4.27), (4.25) and (4.14), we get
Now, since \(\{x_{m_k}\}\) is bounded, there exists a subsequence \(\{x_{m_{k_j}}\}\) of \(\{x_{m_k}\}\) such that \(\{x_{m_{k_j}}\}\) converges weakly to some \(x^*\in H_1.\) In addition, using (4.30) and the boundedness of \(\{q_{m_k}\}\), there exists a subsequence \(\{q_{m_{k_j}}\}\) of \(\{q_{m_k}\}\) such that \(\{q_{m_{k_j}}\}\) converges weakly to \(x^*\), and since S is demiclosed, together with (4.25) we have that \(x^*\in F(S).\) Also, from (4.14), we obtain that \(\{u_{m_{k_j}}\}\) converges weakly to \(x^*.\) Next, we claim that \(x^*\in SOL(g,C).\) To see this, from the definition of \(u_{m_k}\) and Lemma 2.3, we get
Thus, there exists \(\mu _{m_k}\in N_{C}(u_{m_k})\) and \(\psi _{m_k}\in \partial g(v_{m_k}, \cdot )(u_{m_k})\) such that
Since \(\mu _{m_k}\in N_{C}(u_{m_k})\), it follows that \(\langle \mu _{m_k}, u-u_{m_k}\rangle \le 0,\) for all \(u\in C.\) From (4.34), we have
Also, since \(\psi _{m_k}\in \partial g(v_{m_k}, \cdot )(u_{m_k}),\) we have
Thus,
Taking the limit as \(k\rightarrow \infty ,\) and using the stated conditions and (4.14), we have
Thus, \(x^* \in SOL(g,C).\) We now establish that \(Tx^*\in SOL(f,Q).\) To see this observe that from the definition of \(y_{m_k}\) and Lemma 2.3, we get
Thus, there exists \(\rho _{m_k}\in N_{Q}(y_{m_k})\) and \(\chi _{m_k}\in \partial f(Tw_{m_k}, \cdot )(y_{m_k})\) such that
Since \(\rho _{m_k}\in N_{Q}(y_{m_k})\), it follows that \(\langle \rho _{m_k}, y-y_{m_k}\rangle \le 0,\) for all \(y\in Q.\) From (4.38), we have
Also, since \(\chi _{m_k}\in \partial f(Tw_{m_k}, \cdot )(y_{m_k}),\) we have
Thus,
From (4.27), we obtain that \(\{w_{m_k}\}\) converges weakly to \(x^{*},\) and since T is a bounded linear operator, \(\{Tw_{m_k}\}\) converges weakly to \(Tx^{*}\). Hence, using (4.22), \(\{y_{m_k}\}\) also converges weakly to \(Tx^{*}.\) Thus, taking the limit as \(k\rightarrow \infty \) in (4.41) and using the fact that \(\lim _{k\rightarrow \infty }\lambda _{m_k}>0\), we have
Thus, \(Tx^* \in SOL(f,Q).\) Hence, we have that \(x^*\in \Omega .\) Furthermore, we get
using (4.33) and (4.42), we have
which implies that
Using our Condition B (1) and the above inequality, we have that \(\limsup _{k\rightarrow \infty }\Psi _{m_k} = \limsup _{k\rightarrow \infty } \bigg [\frac{\delta _{m_k}\Vert x_{m_k}-x_{{m_k}-1}\Vert N_3}{\alpha _{m_k}(1-\rho )} + \frac{\theta _{m_k}\Vert x_{m_k} -x_{{m_k}-1}\Vert N_4}{\alpha _{m_k}(1-\rho )} + \frac{2}{(1-\rho )}\langle h(p)-p, x_{{m_k}+1}-p\rangle \bigg ]\le 0.\) Thus, by Lemma 2.4, we have \(\lim \limits _{m\rightarrow \infty }\Vert x_m-p\Vert =0,\) and so \(\{x_m\}\) converges strongly to \(p\in \Omega .\) \(\square \)
5 Application to split variational inequality problem
In this section, we apply our results to the split variational inequality problem (SVIP). Let H be a real Hilbert space and C be a nonempty closed convex subset of H. The classical variational inequality problem (VIP) for an operator \(B:C\rightarrow C\) is formulated as follows: find \(w^*\in C\) such that
The solution set of the VIP (5.1) is denoted by VI(C, B). Let \(H_1\) and \(H_2\) be two real Hilbert spaces with nonempty closed convex subsets C and Q, respectively. Let \(B:C\rightarrow C\) and \(D:Q\rightarrow Q\) be \(L^*_1\)- and \(L^*_2\)-Lipschitz continuous on C and Q, respectively. Let \(T:H_1\rightarrow H_2\) be a bounded linear operator with its adjoint \(T^*\). The split variational inequality problem is formulated as: Find \(w^*\in C\) such that
and \({z}=T{w^*}\in Q\) solves
We denote the solution set of SVIP (5.2)–(5.3) by \(\Gamma ^*=\{w^*\in VI(C,B):Tw^*\in VI(Q,D)\}\). Now, we consider the following conditions for solving the SVIP (5.2)–(5.3):
- (\(A_1\)):
-
\(B:C\rightarrow C\) and \(D:Q\rightarrow Q\) are pseudomonotone operators, i.e.
$$\begin{aligned} \langle Bw,y-w\rangle \ge 0\implies \langle By,w-y\rangle \le 0,\forall w,y\in C. \end{aligned}$$ - (\(A_2\)):
-
\(B:C\rightarrow C\) and \(D:Q\rightarrow Q\) are \(L^*_1\)- and \(L^*_2\)-Lipschitz continuous operators, i.e. there exists \(L^*_1>0\) such that
$$\begin{aligned} \Vert Bw-By\Vert \le L^*_1\Vert w-y\Vert ,\forall w,y\in C. \end{aligned}$$ - (\(A_3\)):
-
\(B:C\rightarrow C\) and \(D:Q\rightarrow Q\) are sequentially weakly continuous operators.
Set \(f(Tw,y)=\langle DTw,y-Tw\rangle \), \( \forall y\in Q\) and \(g(v,u)=\langle Bv,u-v\rangle \), \( \forall u,v\in C\); then the SEP becomes the SVIP with \(L^*_2=2L_1=2L_2\) and \(L^*_1=2L_3=2L_4\). Moreover, we have
and
where \(P_C\) and \(P_Q\) are the metric projections of \(H_1\) and \(H_2\) onto C and Q, respectively. Hence, we obtain the following result:
Corollary 5.1
Let \(H_1\) and \(H_2\) be two real Hilbert spaces with nonempty closed convex subsets C and Q, respectively. Let \(T:H_1\rightarrow H_2\) be a bounded linear operator with its adjoint \(T^*\). Assume that Condition B and assumptions (\(A_1\))–(\(A_3\)) hold. Let \(h:H_1\rightarrow H_1\) be a contraction mapping with contraction constant \(\rho \in [0,1)\) and \(S:H_1\rightarrow H_1\) be a quasinonexpansive mapping such that the solution set \(\Omega ^*=\Gamma ^*\cap F(S)\ne \emptyset \). Then, the sequence generated by Algorithm 5.2 converges strongly to an element of \(\Omega ^*\).
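The reduction used in this section — that for a linear bifunction \(g(v,u)=\langle Bv,u-v\rangle \) the strongly convex subproblem collapses to a projection — can be verified numerically. The sketch below is our own illustration with random data on a box; it compares the closed form \(P_C(v-\lambda Bv)\) against sampled feasible points:

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def subproblem(u, v, Bv, lam):
    # Objective of the strongly convex subproblem:
    #   lam * g(v, u) + 0.5 * ||u - v||^2  with  g(v, u) = <Bv, u - v>.
    return lam * Bv @ (u - v) + 0.5 * np.sum((u - v) ** 2)

rng = np.random.default_rng(2)
v, Bv, lam = rng.normal(size=4), rng.normal(size=4), 0.7
u_star = proj_box(v - lam * Bv)   # claimed closed-form minimizer over C = [-1, 1]^4

samples = rng.uniform(-1.0, 1.0, size=(500, 4))
best_sampled = min(float(subproblem(u, v, Bv, lam)) for u in samples)
```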
6 Numerical experiments
In this section, we present some numerical examples to validate our main results. We compare the proposed Algorithm 3.2 (briefly, Alg. 3.2) with Algorithm (1.9) (briefly, KD Alg. 1.9), Algorithm 1 (briefly, SPK Alg. 1) of Suantai et al. [26] and Algorithm 2 (briefly, SPK Alg. 2) of Suantai et al. [26]. All numerical simulations were performed using MATLAB R2020b on a desktop PC with an Intel® Core\(^{\textrm{TM}}\) i7-3540M CPU @ 3.00GHz \(\times \) 4 and 400.00GB memory.
Example 6.1
Let \(H_1={\mathbb {R}}^k\) and \(H_2={\mathbb {R}}^m\) be two real Hilbert spaces with the Euclidean norm. We consider the bifunctions \({\tilde{f}}\) and \({\tilde{g}}\) which are generated from Nash-Cournot oligopolistic equilibrium models of electricity markets [10, 21],
where \(U_1,V_1\in {\mathbb {R}}^{k\times k}\) and \(U_2,V_2\in {\mathbb {R}}^{m\times m}\) are matrices such that \(V_1,V_2\) are symmetric positive semidefinite and \(V_1-U_1\), \(V_2-U_2\) are negative semidefinite. Note that \( {\tilde{f}}(x,y)+{\tilde{f}}(y,x)=(x-y)^T(V_1-U_1)(x-y),\forall x,y\in {\mathbb {R}}^k\), and similarly for \({\tilde{g}}\) with \(V_2-U_2\). Furthermore, from the property of \(V_1-U_1\), it follows that the bifunction \({\tilde{f}}\) is monotone; likewise, \({\tilde{g}}\) is monotone.
Now, we consider the two bifunctions f and g which are defined by
and
where \(C=\prod _{i=1}^{k}[-5,5]\) and \(Q=\prod _{i=1}^{m}[-20,20]\) are the constraint boxes. We observe that f and g are Lipschitz-type continuous with constants \(L_1=L_2=\frac{1}{2}\Vert V_1-U_1\Vert \) and \(L_3=L_4=\frac{1}{2}\Vert V_2-U_2\Vert \), respectively (see [28]). Take \(c_1=\max \{L_1,L_3\}\) and \(c_2=\max \{L_2,L_4\}\). Then, the bifunctions f and g are Lipschitz-type continuous with constants \(c_1\) and \(c_2\).
In this numerical test, the matrices \(U_1,V_1,U_2\) and \(V_2\) are generated randomly in the interval \([-5,5]\) such that they fulfill the required conditions above, the linear operator \(T:{\mathbb {R}}^k\rightarrow {\mathbb {R}}^m\) is an \(m\times k\) matrix whose entries are generated randomly in the interval \([-2,2]\), \(h:C\rightarrow C\) is a \(k\times k\) matrix with \(\Vert h\Vert <1\), and \(S=I\) (the identity mapping). Clearly, the solution set \(\Omega =\{0\}\). We consider the following control parameters for KD Alg. 1.9, SPK Alg. 1 and SPK Alg. 2: \(\eta =\frac{1}{2\Vert T\Vert ^2}\), \(\alpha =0.5\), \(\lambda _{m}=\mu _m=\frac{1}{4\max \{c_1,c_2\}}\), \(\epsilon _m=\frac{1}{(m+1)^2}\), \(\gamma _m=\frac{1}{m+1}\), and \(\beta _{m}=0.5(1-\gamma _m)\). For Alg. 3.2, we consider the following control parameters: \(\epsilon _m=\delta _m=\frac{1}{(m+1)^2}\), \(\alpha _m=\frac{1}{m+1}\), \(\eta _{m}=\beta _{m}=0.5(1-\alpha _{m})\), \(\lambda _{1}=2.5\), \(\tau _{1}=1.7\), \(\psi =0.5\), \(\theta =0.4\), \(\phi =0.6\) and \(\nu =0.8\). The starting points \(x_0=x_1\in {\mathbb {R}}^k\) are generated randomly in \([-5,5]\), and the stopping criterion for the concerned algorithms is \(\Vert x_{m+1}-x_m\Vert \le 10^{-9}\). The parameters k and m are picked as follows: \((k=40,m=20)\), \((k=80,m=40)\), \((k=120,m=60).\)
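One simple way to generate matrices with the properties required in Example 6.1 (not necessarily the authors' procedure) is to build \(V_1\) as a Gram matrix, which is automatically symmetric positive semidefinite, and to set \(U_1 = V_1 + SS^T\) so that \(V_1-U_1=-SS^T\) is negative semidefinite:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 5
R = rng.uniform(-5, 5, size=(k, k))
S = rng.uniform(-5, 5, size=(k, k))

V1 = R @ R.T        # Gram matrix: symmetric positive semidefinite
U1 = V1 + S @ S.T   # then V1 - U1 = -S S^T is negative semidefinite

eig_V = np.linalg.eigvalsh(V1)
eig_diff = np.linalg.eigvalsh(V1 - U1)
```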
Example 6.2
Let \(H_1=H_2=(\ell _2({\mathbb {R}}),\Vert \cdot \Vert _{\ell _2})\), where \(\ell _2({\mathbb {R}})=\{x=(x_1,x_2,x_3,\cdots ), x_j\in {\mathbb {R}}:\sum _{j=1}^{\infty }|x_j|^2<\infty \}\) and \(\Vert x\Vert _{\ell _2}=(\sum _{j=1}^{\infty }|x_j|^2)^\frac{1}{2}\), \(\forall x\in \ell _2({\mathbb {R}})\) (Fig. 1). We now define the operator \(T:\ell _2({\mathbb {R}})\rightarrow \ell _2({\mathbb {R}})\) by
Then, T is a bounded and linear operator on \(\ell _2({\mathbb {R}})\) with adjoint
To show this, let \(x=(x_1,x_2,x_3,\cdots )\) and \(v=(v_1,v_2,v_3,\cdots )\) be arbitrary points in \(\ell _2({\mathbb {R}})\) and \(\beta _1\), \(\beta _2\) be arbitrary scalars in \({\mathbb {R}}\). Then,
Thus, T is linear. In addition, \(\Vert Tx\Vert _{\ell _2}\le \Vert x\Vert _{\ell _2},\,\forall x\in \ell _2({\mathbb {R}})\). Hence, the operator T is also bounded. Showing that \(T^*\) is the adjoint of T follows immediately from the definition.
Let \(C=Q=\{x\in \ell _2({\mathbb {R}}):\Vert x-p\Vert _{\ell _2}\le q\}\), where \(p=\left( 1,\frac{1}{2},\frac{1}{3},\cdots \right) \), \(q=4\) for C, and \(p=\left( \frac{1}{2},\frac{1}{4},\frac{1}{8},\cdots \right) \), \(q=1\) for Q. Then, C and Q are obviously nonempty, closed and convex subsets of \(\ell _2({\mathbb {R}})\). We now define the metric projections \(P_C\) and \(P_Q\) as follows:
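These ball projections have the well-known closed form \(P_C(x)=p+\frac{q}{\max \{q,\Vert x-p\Vert \}}(x-p)\). A finite-dimensional sketch of our own, truncating the center of C to four coordinates for illustration:

```python
import numpy as np

def proj_ball(x, p, q):
    # Closed-form metric projection onto the ball C = {z : ||z - p|| <= q}
    d = x - p
    nd = np.linalg.norm(d)
    return np.asarray(x, dtype=float) if nd <= q else p + (q / nd) * d

p = np.array([1.0, 1/2, 1/3, 1/4])    # truncation of the center of C
q = 4.0
x = np.array([10.0, -3.0, 2.0, 7.0])  # a point outside the ball
px = proj_ball(x, p, q)
```

The projected point lands on the boundary of the ball and satisfies the variational characterization of the metric projection.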
Next, let the operators \(B,D:\ell _2({\mathbb {R}})\rightarrow \ell _2({\mathbb {R}})\) be defined by
It is not hard to verify that B and D are pseudomonotone operators. Furthermore, we define the mapping \(S:\ell _2({\mathbb {R}})\rightarrow \ell _2({\mathbb {R}})\) by
We define \(h:\ell _2({\mathbb {R}})\rightarrow \ell _2({\mathbb {R}})\) by
Throughout the numerical experiments, we use
as the stopping criterion, where \(\upsilon =10^{-6}\), and we consider the following cases (Fig. 2):
Case I: \(u_0=\left( \frac{1}{5},\frac{1}{6},\frac{1}{7},\cdots \right) \) and \(u_1=\left( 1,\frac{1}{3},\frac{1}{4},\cdots \right) \).
Case II: \(u_0=\left( \frac{1}{6},\frac{1}{8},\frac{1}{11},\cdots \right) \) and \(u_1=\left( 1,\frac{1}{9},\frac{1}{11},\cdots \right) \).
Case III: \(u_0=\left( 1,\frac{1}{8},\frac{1}{10},\cdots \right) \) and \(u_1=\left( \frac{1}{5},\frac{1}{7},\frac{1}{8},\cdots \right) \).
For this numerical experiment, we compare our Algorithm 3.2 (briefly, Alg. 3.2) with Algorithm (4.8) of Tiang and Jiang [27] (briefly, TT Alg. 4.8), Algorithm 1 of Huy et al. [15] (briefly, HHT Alg. 1) and Algorithm 2 of Ogwo et al. [20] (briefly, OIM Alg. 2). We choose the following control parameters for the various algorithms:
-
(1)
Alg. 3.2: \(\epsilon _m=\delta _m=\frac{1}{(m+1)^2}\), \(\alpha _m=\frac{1}{m+1}\), \(\eta _{m}=\beta _{m}=0.5(1-\alpha _{m})\), \(\lambda _{1}=2.5\), \(\tau _{1}=1.7\), \(\psi =0.5\), \(\theta =0.4\), \(\phi =0.6\) and \(\nu =0.8\).
-
(2)
TT Alg 4.8: \(\alpha _{m}=\frac{1}{m+1}\) and \(\mu =0.66\).
-
(3)
HHT Alg 1: \(\alpha _{m}=\frac{1}{m+1}\), \(\lambda _0=2.5\), \(\mu _0=1.9\), \(\mu =0.66\) and \(\lambda =0.72\).
-
(4)
OIM Alg. 2: \(\delta _{m}=\frac{1}{m+1}\), \(\tau _m=\frac{100}{(m+1)^2}\), \(\theta _m=\frac{1}{2}-\alpha _{m}\), \(a_i=0.84\) \((i=1,2)\), \(\gamma _i=1.84\) \((i=1,2)\), \(\alpha =3\) and \(\eta =0.96\).
7 Conclusion
In this work, we have introduced an efficient method for approximating a common solution of a split equilibrium problem and a fixed point problem in the framework of Hilbert spaces. The strong convergence of the proposed method is achieved under some mild conditions on the control parameters. Furthermore, we applied our results to solve the split variational inequality problem and the fixed point problem. In addition, we presented some numerical experiments to show the efficiency and applicability of our method in comparison with other existing methods. It is easy to see from the numerical experiments that our proposed method performs better than some existing methods in the literature.
References
Ali, R., Kazmi, K.R., Farid, M.: Viscosity iterative method for a split equality monotone variational inclusion problem. Dyn. Contin. Discrete Impuls. Syst. 26(5b), 313–344 (2019)
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 14, 773–782 (2004)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
Antipin, A.S.: The convergence of proximal methods to fixed points of extremal mappings and estimates of their rate of convergence. Comput. Math. Math. Phys., 35 (1995)
Akutsah, F., Mebawondu, A.A., Abass, H.A., Narain, O.K.: A self adaptive method for solving a class of bilevel variational inequalities with split variational inequality and composed fixed point problem constraints in Hilbert spaces. Numer. Algebra Control Optim. (2023). https://doi.org/10.3934/naco.2021046
Akutsah, F., Mebawondu, A.A., Ugwunnadi, G.C., Narain, O.K.: Inertial extrapolation method with regularization for solving monotone bilevel variational inequalities and fixed point problems in real Hilbert space. J. Nonlinear Funct. Anal. 2022, Article ID 5, 15 pp. (2022)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 127–149 (1994)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Meth. Softw. 26, 827–845 (2011)
Contreras, J., Klusch, M., Krawczyk, J.B.: Numerical solution to Nash-Cournot equilibria in coupled constraint electricity markets. IEEE Trans. Power Syst. 19, 195–206 (2004)
Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Model. Kluwer Academic Publisher, Dordrecht (2003)
Farid, M.: Two algorithms for solving mixed equilibrium problems and fixed point problems in Hilbert spaces. Ann. Univ. Ferrara 67, 253–268 (2021)
He, Z.: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. (2012)
Hieu, D.V.: An inertial-like proximal algorithm for equilibrium problems. Math. Meth. Oper. Res. 88, 399–415 (2018)
Huy, P.V., Hien, N.D., Anh, T.V.: A strongly convergent modified Halpern subgradient extragradient method for solving the split variational inequality problem. Vietnam J. Math. 48, 187–204 (2020)
Kim, D.S., Dinh, B.V.: Parallel extragradient algorithms for multiple set split equilibrium problems in Hilbert spaces. Numer. Algorithms 77, 741–761 (2018)
Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15, 91–100 (1999)
Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18, 1159–1166 (1992)
Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms (2021). https://doi.org/10.1007/s11075-021-01081-1
Quoc, T.D., Anh, P.N., Muu, L.D.: Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 52, 139–159 (2012)
Rehman, H.U., Kumam, P., Kumam, W., Shutaywi, M., Jirakitpuwapat, W.: The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 12(3), 1–20 (2020)
Safari, M., Moradlou, F., Khalilzade, A.A.: Hybrid proximal point algorithm for solving split equilibrium problems and its applications. Hacet. J. Math. Stat. 51(4), 932–957 (2022). https://doi.org/10.15672/hujms.1023754
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)
Suantai, S., Petrot, N., Suwannaprapa, M.: Iterative methods for finding solutions of a class of split feasibility problems over fixed point sets in Hilbert spaces. Mathematics 7, 1012 (2019)
Suantai, S., Petrot, N., Khonchaliew, M.: Inertial extragradient methods for solving split equilibrium problems. Mathematics (2021). https://doi.org/10.3390/math9161884
Tian, M., Jiang, B.N.: Viscosity approximation methods for a class of generalized split feasibility problems with variational inequalities in Hilbert space. Numer. Funct. Anal. Optim. 40, 902–923 (2019)
Tran, D.Q., Dung, L.M., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Vinh, N.T., Muu, L.D.: Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam 44, 639–663 (2019)
Funding
Open access funding provided by University of KwaZulu-Natal.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mebawondu, A.A., Ofem, A.E., Akutsah, F. et al. A new double inertial subgradient extragradient algorithm for solving split pseudomonotone equilibrium problems and fixed point problems. Ann Univ Ferrara (2024). https://doi.org/10.1007/s11565-024-00496-7