
A unified framework for three accelerated extragradient methods and further acceleration for variational inequality problems

Published in Soft Computing (Optimization)

Abstract

This paper aims to speed up the convergence of the inertial Mann iterative method, and to accelerate it further through the normal S-iterative method, for a certain class of nonexpansive-type operators linked with variational inequality problems. Our new convergence theory allows us to resolve the difficulty of unifying Korpelevich’s extragradient method, Tseng’s extragradient method, and the subgradient extragradient method for solving variational inequality problems through an auxiliary algorithmic operator associated with the seed operator. The paper establishes the interesting fact that the relaxed inertial normal S-iterative extragradient methods have a pronounced influence on convergence behaviour. Finally, numerical experiments illustrate that the relaxed inertial iterative methods, in particular the relaxed inertial normal S-iterative extragradient methods, may have a number of advantages over other methods in computing solutions to variational inequality problems in many cases.
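As a rough illustration of the two building blocks named in the abstract, the sketch below contrasts the Mann update \(x_{n+1}=(1-\alpha )x_{n}+\alpha Tx_{n}\) with the normal S-iteration update \(x_{n+1}=T((1-\alpha )x_{n}+\alpha Tx_{n})\). The operator `T`, the stepsize `alpha`, and the iteration count are illustrative assumptions (a toy contraction), not the operators studied in the paper.

```python
import numpy as np

# Toy nonexpansive operator with unique fixed point 0 (illustrative only).
T = lambda x: 0.5 * x

def mann(x, alpha=0.5, iters=10):
    """Mann iteration: x_{n+1} = (1 - alpha) x_n + alpha T x_n."""
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * T(x)
    return x

def normal_s(x, alpha=0.5, iters=10):
    """Normal S-iteration: x_{n+1} = T((1 - alpha) x_n + alpha T x_n)."""
    for _ in range(iters):
        x = T((1 - alpha) * x + alpha * T(x))
    return x

x0 = np.array([1.0, 1.0])
print(np.linalg.norm(mann(x0)), np.linalg.norm(normal_s(x0)))
```

On this toy example each Mann step contracts the distance to the fixed point by the factor 0.75, while each normal S-step contracts it by 0.375; the extra application of T is the source of the acceleration the paper pursues in a far more general setting.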


Figures 1–4 (not shown).


Data Availability

Enquiries about data availability should be directed to the authors.

Notes

  1. For the proof of Proposition 3.1, see Appendix-II.

  2. For the proof of Proposition 3.2, see Appendix-III.

  3. For the proof of Proposition 4.3, see Appendix-IV.

  4. For the proof of Proposition 4.4, see Appendix-V.

  5. For the proof of Proposition 4.5, see Appendix-VI.

References

  • Agarwal RP, O’Regan D, Sahu DR (2009) Fixed point theory for Lipschitzian-type mappings with applications, 1st edn. Springer, New York

  • Agarwal RP, O’Regan D, Sahu DR (2007) Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J Nonlinear Convex Anal 8(1):61

  • Alvarez F (2004) Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J Optim 14(3):773–782

  • Alvarez F, Attouch H (2001) An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal 9(1–2):3–11

  • Anh PK, Thong DV, Vinh NT (2020) Improved inertial extragradient methods for solving pseudo-monotone variational inequalities. Optimization 5:1–24

  • Ansari QH, Sahu DR (2014) Some iterative methods for fixed point problems. Top Fixed Point Theory 5:273–300

  • Ansari QH, Sahu DR (2016) Extragradient methods for some nonlinear problems. Fixed Point Theory 6:187–230

  • Censor Y, Gibali A, Reich S (2012) Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9):1119–1132

  • Bauschke HH, Combettes PL (2011) Convex analysis and monotone operator theory in Hilbert spaces, vol 408. Springer, Berlin

  • Boţ RI, Csetnek ER, Hendrich C (2015) Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl Math Comput 256:472–487

  • Boţ RI, Csetnek ER, Vuong PT (2020) The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. Eur J Oper Res 287(1):49–60

  • Cai G, Dong Q-L, Peng Yu (2021) Strong convergence theorems for solving variational inequality problems with pseudo-monotone and non-Lipschitz operators. J Optim Theory Appl 188:447–472

  • Censor Y, Gibali A, Reich S (2011) The subgradient extragradient method for solving variational inequalities in Hilbert space. J Optim Theory Appl 148(2):318–335

  • Chidume C (2009) Geometric properties of Banach spaces and nonlinear iterations, vol 1965. Springer, Berlin

  • Dixit A, Sahu DR, Singh AK, Som T (2019) Application of a new accelerated algorithm to regression problems. Soft Comput 6:1–14

  • Dong QL, Huang J, Li XH, Cho YJ, Rassias TM (2019) MiKM: multi-step inertial Krasnoselskii-Mann algorithm and its applications. J Global Optim 73(4):801–824

  • Dong QL, Lu YY, Yang J (2016) The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 65(12):2217–2226

  • Dong Y (2015) Comments on “The proximal point algorithm revisited”. J Optim Theory Appl 166(1):343–349

  • Goldstein AA (1964) Convex programming in Hilbert space. Bull Am Math Soc 70(5):709–710

  • Jolaoso LO, Aphane M (2022) An explicit subgradient extragradient algorithm with self-adaptive stepsize for pseudomonotone equilibrium problems in Banach spaces. Numer Algorithms 89(2):583–610

  • Kanzow C, Shehu Y (2017) Generalized Krasnoselskii-Mann-type iterations for nonexpansive mappings in Hilbert spaces. Comput Optim Appl 67(3):595–620

  • Kato T (1964) Demicontinuity, hemicontinuity and monotonicity. Bull Am Math Soc 70(4):548–550

  • Khanh PD (2016) A modified extragradient method for infinite-dimensional variational inequalities. Acta Math Vietnam 41(2):251–263

  • Korpelevich GM (1976) The extragradient method for finding saddle points and other problems. Matecon 12:747–756

  • Maingé PE (2008) Convergence theorems for inertial KM-type algorithms. J Comput Appl Math 219(1):223–236

  • Mann WR (1953) Mean value methods in iteration. Proc Am Math Soc 4(3):506–510

  • Malitsky Yu (2015) Projected reflected gradient methods for monotone variational inequalities. SIAM J Optim 25(1):502–520

  • Nachaoui A, Nachaoui M (2022) A hybrid finite element method for a quasi-variational inequality modeling a semiconductor. RAIRO-Oper Res 6:218

  • Polyak BT (1964) Some methods of speeding up the convergence of iteration methods. USSR Comput Math Math Phys 4(5):1–17

  • Sahu DR (2011) Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 12(1):187–204

  • Sahu DR (2020) Applications of accelerated computational methods for quasi-nonexpansive operators to optimization problems. Soft Comput 9:1–25

  • Sahu DR, Ansari QH, Yao JC (2016) Convergence of inexact Mann iterations generated by nearly nonexpansive sequences and applications. Numer Funct Anal Optim 37(10):1312–1338

  • Sahu DR, Pitea A, Verma M (2019) A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer Algorithms. https://doi.org/10.1007/s11075-019-00688-9

  • Sahu DR, Shi L, Wong NC, Yao JC (2020) Perturbed iterative methods for a general family of operators: convergence theory and applications. Optimization 32:1–37

  • Sahu DR, Wong NC, Yao JC (2012) A unified hybrid iterative method for solving variational inequalities involving generalized pseudocontractive mappings. SIAM J Control Optim 50(4):2335–2354

  • Sahu DR, Yao JC, Verma M, Shukla KK (2020) Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization 7:1–26

  • Sahu DR, Yao JC, Singh VK, Kumar S (2017) Semilocal convergence analysis of S-iteration process of Newton-Kantorovich like in Banach spaces. J Optim Theory Appl 172(1):102–127

  • Sahu DR, Singh AK (2021) Inertial iterative algorithms for common solution of variational inequality and system of variational inequalities problems. J Appl Math Comput 65(1):351–378

  • Tan B, Qin X, Cho SY (2022) Revisiting subgradient extragradient methods for solving variational inequalities. Numer Algorithms 90(4):1593–1615

  • Tan B, Sunthrayuth P, Cholamjiak P, Cho YJ (2023) Modified inertial extragradient methods for finding minimum-norm solution of the variational inequality problem with applications to optimal control problem. Int J Comput Math 100(3):525–545

  • Thong DV, Hieu DV (2018) Modified Tseng’s extragradient algorithms for variational inequality problems. J Fixed Point Theory Appl 20(4):1–18

  • Tseng P (2000) A modified forward-backward splitting method for maximal monotone mappings. SIAM J Control Optim 38(2):431–446

  • Verma M, Sahu DR, Shukla KK (2017) VAGA: a novel viscosity-based accelerated gradient algorithm. Appl Intell 3:1–15

  • Vuong PT (2018) On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J Optim Theory Appl 176(2):399–409

  • Xu HK (2002) Iterative algorithms for nonlinear operators. J Lond Math Soc 66(1):240–256

  • Yao Y, Marino G, Muglia L (2014) A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 63(4):559–569

  • Zeidler E (2013) Nonlinear functional analysis and its applications: III: variational methods and optimization. Springer, Berlin

  • Zhang YC, Guo K, Wang T (2019) Generalized Krasnoselskii-Mann-type iteration for nonexpansive mappings in Banach spaces. J Oper Res Soc China 3:1–12


Funding

The authors have not disclosed any funding.

Author information


Corresponding author

Correspondence to D. R. Sahu.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Ethical approval

This article does not contain any studies with human participants or animals performed by the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix-I: Proof of Lemma 2.5

Proof

(a) Set \(K:=\varepsilon -\theta (1+\varepsilon +\max \{1,\varepsilon \})\). Define \(K_{n}=\theta _{n}(1+\theta _{n}+\varepsilon (1-\theta _{n}))\), \( n=1,2,\cdots \) and \(\phi _{n}=\Vert x_{n}-v\Vert ^{2}\), \(n=0,1,2,\cdots .\) From (11), we have

$$\begin{aligned}{} & {} \phi _{n+1} \le (1+\theta _{n})\phi _{n}-\theta _{n}\phi _{n-1} -\varepsilon (1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2}\nonumber \\{} & {} \quad +K_{n}\Vert x_{n}-x_{n-1}\Vert ^{2} \text { for all } n\in \mathbb {N} \end{aligned}$$
(50)

and hence

$$\begin{aligned}{} & {} \phi _{n+1}-(1+\theta _{n})\phi _{n}+\theta _{n}\phi _{n-1} \le -\varepsilon (1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2}\nonumber \\{} & {} \quad +K_{n}\Vert x_{n}-x_{n-1}\Vert ^{2}\text { for all } n\in \mathbb {N}. \end{aligned}$$

Since \(\theta _{n}\le \theta _{n+1}\) for all \(n\in \mathbb {N}\), it follows that

$$\begin{aligned} \varphi _{n+1}-\varphi _{n}= & {} \phi _{n+1}-\theta _{n+1}\phi _{n}+K_{n+1}\Vert x_{n+1}-x_{n}\Vert ^{2} \nonumber \\{} & {} \quad -(\phi _{n}-\theta _{n}\phi _{n-1}+K_{n}\Vert x_{n}-x_{n-1}\Vert ^{2}) \\= & {} \phi _{n+1}-(1+\theta _{n+1})\phi _{n}+\theta _{n}\phi _{n-1}\\{} & {} \quad +K_{n+1}\Vert x_{n+1}-x_{n}\Vert ^{2}-K_{n}\Vert x_{n}-x_{n-1}\Vert ^{2} \\\le & {} -\varepsilon (1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2}+K_{n}\Vert x_{n}-x_{n-1}\Vert ^{2}\\{} & {} \quad +K_{n+1}\Vert x_{n+1}-x_{n}\Vert ^{2}-K_{n}\Vert x_{n}-x_{n-1}\Vert ^{2} \\= & {} -[\varepsilon (1-\theta _{n})-K_{n+1}]\Vert x_{n+1}-x_{n}\Vert ^{2}. \end{aligned}$$

Note \(\theta _{n}\le \theta _{n+1}\le \theta \) for all \(n\in \mathbb {N}\) and \(K_{n}=\theta _{n}\left( 1+\theta _{n}+\varepsilon (1-\theta _{n})\right) \le \theta _{n}\left( 1+\max \{1,\varepsilon \}\right) \text { for all } n\in \mathbb {N}.\) Hence

$$\begin{aligned}{} & {} -[\varepsilon (1-\theta _{n})-K_{n+1}] \\{} & {} \quad = -\varepsilon (1-\theta _{n})+\theta _{n+1}\left( 1+\theta _{n+1}+\varepsilon (1-\theta _{n+1})\right) \\{} & {} \quad \le -\varepsilon (1-\theta _{n})+\theta _{n+1}\left( 1+\max \{1,\varepsilon \}\right) \\{} & {} \quad \le -\varepsilon (1-\theta )+\theta \left( 1+\max \{1,\varepsilon \}\right) \\{} & {} \quad =-[\varepsilon -\theta (1+\varepsilon +\max \{1,\varepsilon \})] =-K. \end{aligned}$$

Note \(K>0\) by the inequality (12). Hence

$$\begin{aligned} \varphi _{n+1}-\varphi _{n}\le -K\Vert x_{n+1}-x_{n}\Vert ^{2}\text { for all }n\in \mathbb {N}, \end{aligned}$$
(51)

which implies that \(\varphi _{n+1}\le \varphi _{n}\) for all \(n\in \mathbb {N}.\) Thus, the sequence \(\{\varphi _{n}\}\) is non-increasing.

(b) For \(n\in \mathbb {N},\) we have

$$\begin{aligned} \varphi _{n}=\phi _{n}-\theta _{n}\phi _{n-1}+K_{n}\Vert x_{n}-x_{n-1}\Vert ^{2}\ge \phi _{n}-\theta _{n}\phi _{n-1} \end{aligned}$$
(52)

and

$$\begin{aligned} \varphi _{n+1}=\phi _{n+1}-\theta _{n+1}\phi _{n}+K_{n+1}\Vert x_{n+1}-x_{n}\Vert ^{2}\ge -\theta _{n+1}\phi _{n}. \end{aligned}$$
(53)

Since \(\varphi _{1}>0,\) from (52), we have

$$\begin{aligned} \phi _{n}\le & {} \theta _{n}\phi _{n-1}+\varphi _{n} \nonumber \\\le & {} \theta \phi _{n-1}+\varphi _{1} \nonumber \\&\cdot \cdot \cdot \nonumber \\\le & {} \theta ^{n}\phi _{0}+(1+\theta +\theta ^{2}+\cdots +\theta ^{n-1})\varphi _{1} \nonumber \\\le & {} \theta ^{n}\phi _{0}+\frac{\varphi _{1}}{1-\theta }\text { for all } n\in \mathbb {N}. \end{aligned}$$
(54)

Combining (53) and (54), we have

$$\begin{aligned} -\varphi _{n+1} \le \theta _{n+1}\phi _{n} \le \theta \phi _{n} \le \theta ^{n+1}\phi _{0}+\frac{\theta \varphi _{1}}{1-\theta }. \end{aligned}$$

From (51), we get

$$\begin{aligned}{} & {} K\sum _{i=1}^{n}\Vert x_{i+1}-x_{i}\Vert ^{2} \le \varphi _{1}-\varphi _{n+1} \\{} & {} \quad \le \varphi _{1}+\theta ^{n+1}\phi _{0}+\frac{\theta \varphi _{1}}{1-\theta } \text { for all } n\in \mathbb {N}. \end{aligned}$$

Taking limit as \(n\rightarrow \infty ,\) we have \( K\sum _{i=1}^{\infty }\Vert x_{i+1}-x_{i}\Vert ^{2}\le \frac{ \varphi _{1}}{1-\theta }<\infty . \)

(c) From (50), we have

$$\begin{aligned} \phi _{n+1}\le & {} (1+\theta _{n})\phi _{n}-\theta _{n}\phi _{n-1}\\{} & {} \quad +\theta \left( 1+\max \{1,\varepsilon \}\right) \Vert x_{n}-x_{n-1}\Vert ^{2}. \end{aligned}$$

From Lemma 2.4, we obtain that \(\lim _{n\rightarrow \infty }\Vert x_{n}-v\Vert \) exists. \(\square \)

Appendix-II: Proof of Proposition 3.1

Proof

(a)–(b). Let \(v\in \textrm{Fix}(S)\). Note that \(\displaystyle Tw_{n}-w_{n}=\frac{1}{\alpha _{n}}(x_{n+1}-w_{n})\) and \(\displaystyle \varepsilon \le \frac{1+\kappa -\alpha _{n}}{\alpha _{n}}\) for all \(n\in \mathbb {N},\) and that T is a \(\kappa \)-strongly quasi-nonexpansive operator. From (13) and Lemma 2.6, we have

$$\begin{aligned} \Vert x_{n+1}-v\Vert ^{2}&=\Vert (1-\alpha _{n})w_{n}+\alpha _{n}Tw_{n}-v\Vert ^{2} \nonumber \\&\le \Vert w_{n}-v\Vert ^{2}-\alpha _{n}(1+\kappa -\alpha _{n})\Vert w_{n}-Tw_{n}\Vert ^{2} \end{aligned}$$
(55)
$$\begin{aligned}&=\Vert w_{n}-v\Vert ^{2}-\frac{1+\kappa -\alpha _{n}}{\alpha _{n}}\Vert x_{n+1}-w_{n}\Vert ^{2}. \end{aligned}$$
(56)

Again, from (13) and Lemma 2.2(ii), we have

$$\begin{aligned} \Vert w_{n}-v\Vert ^{2}= & {} \Vert (1+\theta _{n})(x_{n}-v)-\theta _{n}(x_{n-1}-v)\Vert ^{2} \nonumber \\= & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \nonumber \\{} & {} +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}. \end{aligned}$$
(57)

Combining (56) and (57), we get

$$\begin{aligned} \Vert x_{n+1}-v\Vert ^{2}\le & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \nonumber \\{} & {} +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}\nonumber \\{} & {} -\varepsilon \Vert x_{n+1}-w_{n}\Vert ^{2}. \end{aligned}$$
(58)

From (10), we have

$$\begin{aligned}{} & {} \Vert x_{n+1}-w_{n}\Vert ^{2} = \Vert x_{n+1}-x_{n}-\theta _{n}(x_{n}-x_{n-1})\Vert ^{2} \nonumber \\{} & {} \quad \ge (1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2}-\theta _{n}(1-\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}. \end{aligned}$$

From (58), we obtain

$$\begin{aligned} \Vert x_{n+1}-v\Vert ^{2}\le & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \nonumber \\{} & {} +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}\nonumber \\{} & {} -\varepsilon (1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2}\nonumber \\{} & {} +\varepsilon \theta _{n}(1-\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2} \nonumber \\= & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \nonumber \\{} & {} -\varepsilon (1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2} \nonumber \\{} & {} +\theta _{n}\left( 1+\theta _{n}+\varepsilon (1-\theta _{n})\right) \Vert x_{n}-x_{n-1}\Vert ^{2}.\nonumber \\ \end{aligned}$$
(59)

Notice that \(\{\theta _{n}\}\) is an increasing sequence in [0, 1) by the assumption (C2) and that the condition (12) holds by the assumption (C3). Applying Lemma 2.5 to (59), we conclude that \(\sum _{n=1}^{\infty }\Vert x_{n+1}-x_{n}\Vert ^{2}<\infty \text { and }\lim _{n\rightarrow \infty }\Vert x_{n}-v\Vert ~\text { exists.} \)

Clearly, \(\Vert x_{n+1}-x_{n}\Vert \rightarrow 0\) as \(n\rightarrow \infty \). Hence, from (13), we have

$$\begin{aligned} \Vert w_{n}-x_{n+1}\Vert \le \Vert x_{n}-x_{n+1}\Vert +\theta \Vert x_{n}-x_{n-1}\Vert \rightarrow 0. \end{aligned}$$

It follows that \(\lim _{n\rightarrow \infty }\Vert w_{n}-x_{n}\Vert =0\). We may assume that \(\lim _{n\rightarrow \infty }\Vert x_{n}-v\Vert =\ell >0\). It follows, from \(\lim _{n\rightarrow \infty }\Vert x_{n}-w_{n}\Vert =0\), that \(\lim _{n\rightarrow \infty }\Vert w_{n}-v\Vert =\ell \). In view of condition (C1), we obtain, from (55), that \( \lim _{n\rightarrow \infty }\Vert w_{n}-Tw_{n}\Vert =0.\)

(c) Let \(v\in \textrm{Fix}(S)\). Note \(R_{T}(n):=R_{T,\{w_{n}\}}(n)\le \Vert w_{n}-Tw_{n}\Vert \) for all \(n\in \mathbb {N}\). From (55) and (57), we have

$$\begin{aligned} \Vert x_{n+1}-v\Vert ^{2}\le & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \\{} & {} +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2} \\{} & {} -\alpha _{n}(1+\kappa -\alpha _{n})\Vert w_{n}-Tw_{n}\Vert ^{2} \\\le & {} \Vert x_{n}-v\Vert ^{2}+\left( \theta _{n+1}\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2}\right) \\{} & {} +\theta (1+\theta )\Vert x_{n}-x_{n-1}\Vert ^{2}\\{} & {} -\alpha _{n}(1+\kappa -\alpha _{n})\Vert w_{n}-Tw_{n}\Vert ^{2}. \end{aligned}$$

Hence, from the condition (C1), we have

$$\begin{aligned}{} & {} a(1+\kappa -b)\sum \limits _{i=1}^{n}\Vert w_{i}-Tw_{i}\Vert ^{2} \\{} & {} \quad \le \sum \limits _{i=1}^{n}(\Vert x_{i}-v\Vert ^{2}-\Vert x_{i+1}-v\Vert ^{2})\\{} & {} \quad +\sum \limits _{i=1}^{n}(\theta _{i+1}\Vert x_{i}-v\Vert ^{2}-\theta _{i}\Vert x_{i-1}-v\Vert ^{2}) \\{} & {} \quad +\theta (1+\theta )\sum \limits _{i=1}^{n}\Vert x_{i}-x_{i-1}\Vert ^{2} \\{} & {} \quad =\Vert x_{1}-v\Vert ^{2}-\Vert x_{n+1}-v\Vert ^{2}+\theta _{n+1}\Vert x_{n}-v\Vert ^{2}\\{} & {} \qquad \quad -\theta _{1}\Vert x_{0}-v\Vert ^{2}+\theta (1+\theta )\sum \limits _{i=1}^{n}\Vert x_{i}-x_{i-1}\Vert ^{2} \\{} & {} \quad \le \ \Vert x_{1}-v\Vert ^{2}+\theta _{n+1}\Vert x_{n}-v\Vert ^{2}\\{} & {} \qquad \quad +\theta (1+\theta )\sum \limits _{i=1}^{\infty }\Vert x_{i}-x_{i-1}\Vert ^{2} \\{} & {} \quad \le M \end{aligned}$$

for all \(n\in \mathbb {N}\) and for some \(M>0.\) Thus, \(\sum \limits _{n=1}^{ \infty }(R_{T}(n))^{2}<\infty .\) Note \(\{R_{T}(n)\}\) is decreasing. Hence, from Lemma 2.3, we see that \( R_{T,\{w_{n}\}}(n)=o\left( \frac{1}{\sqrt{n}}\right) . \) \(\square \)
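The recursion just analysed, \(w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1})\) followed by the relaxed Mann step \(x_{n+1}=(1-\alpha _{n})w_{n}+\alpha _{n}Tw_{n}\), can be checked numerically. The sketch below runs it with a toy quasi-nonexpansive operator and constant parameters (all illustrative assumptions, not the paper's data) and accumulates \(\sum _{n}\Vert x_{n+1}-x_{n}\Vert ^{2}\), which the proposition asserts is finite.

```python
import numpy as np

T = lambda x: 0.5 * x            # toy quasi-nonexpansive operator, Fix(T) = {0}
theta, alpha = 0.1, 0.5          # illustrative inertial and relaxation parameters

x_prev = np.array([1.0, -2.0])   # x_0
x = np.array([0.5, 1.0])         # x_1
sum_sq_steps = 0.0
for _ in range(200):
    w = x + theta * (x - x_prev)               # inertial extrapolation w_n
    x_next = (1 - alpha) * w + alpha * T(w)    # relaxed Mann step
    sum_sq_steps += np.linalg.norm(x_next - x) ** 2
    x_prev, x = x, x_next
print(np.linalg.norm(x), sum_sq_steps)
```

The iterates converge to the fixed point and the accumulated squared step lengths stay bounded, mirroring the conclusions \(\sum \Vert x_{n+1}-x_{n}\Vert ^{2}<\infty \) and the existence of \(\lim \Vert x_{n}-v\Vert \).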

Appendix-III: Proof of Proposition 3.2

Proof

(a)–(b). Let \(v\in \textrm{Fix}(S)\). Set \(y_{n}:=(1-\alpha _{n})w_{n}+\alpha _{n}Tw_{n}\). Hence \(\displaystyle Tw_{n}-w_{n}=\frac{1}{ \alpha _{n}}(y_{n}-w_{n})\). From (19), we have \(\displaystyle \mathcal {E}\le \frac{\alpha _{n}(1+\kappa -\alpha _{n})}{2(1+{\alpha _{n}^{2}\beta }^{2})}\) for all \(n\in \mathbb {N}.\) For \(\rho =\frac{1}{2}\), from (9), we obtain

$$\begin{aligned}{} & {} \Vert x_{n+1}-Tw_{n}\Vert ^{2} = \Vert x_{n+1}-w_{n}+w_{n}-Tw_{n}\Vert ^{2} \\{} & {} \quad \ge \frac{1}{2}\Vert x_{n+1}-w_{n}\Vert ^{2}-\Vert w_{n}-Tw_{n}\Vert ^{2}. \end{aligned}$$

Since T is \(\beta \)-Lipschitz continuous, we have

$$\begin{aligned}{} & {} \Vert w_{n}-Tw_{n}\Vert ^{2} = \frac{1}{{\alpha _{n}^{2}}}\Vert y_{n}-w_{n}\Vert ^{2} \\{} & {} \quad \ge \frac{1}{{\alpha _{n}^{2}\beta }^{2}}\Vert Ty_{n}-Tw_{n}\Vert ^{2}= \frac{1}{{\alpha _{n}^{2}\beta }^{2}}\Vert x_{n+1}-Tw_{n}\Vert ^{2} \\{} & {} \quad \ge \frac{1}{{\alpha _{n}^{2}\beta }^{2}}\left( \frac{1}{2}\Vert x_{n+1}-w_{n}\Vert ^{2}-\Vert w_{n}-Tw_{n}\Vert ^{2}\right) . \end{aligned}$$

Thus,

$$\begin{aligned} \Vert w_{n}-Tw_{n}\Vert ^{2}\ge \frac{1}{2(1+{\alpha _{n}^{2}\beta }^{2}){\ }}\Vert x_{n+1}-w_{n}\Vert ^{2}. \end{aligned}$$
(60)

From Lemma 2.6, we get

$$\begin{aligned}{} & {} \Vert y_{n}-v\Vert ^{2}\nonumber \\{} & {} \quad = \Vert (1-\alpha _{n})w_{n}+\alpha _{n}Tw_{n}-v\Vert ^{2} \nonumber \\{} & {} \quad \le \Vert w_{n}-v\Vert ^{2}-\alpha _{n}(1+\kappa -\alpha _{n})\Vert w_{n}-Tw_{n}\Vert ^{2}. \end{aligned}$$
(61)

Using (61) and then (60), we obtain

$$\begin{aligned}{} & {} \Vert x_{n+1}-v\Vert ^{2} = \Vert Ty_{n}-v\Vert ^{2}\nonumber \\{} & {} \quad \le \Vert y_{n}-v\Vert ^{2}-\kappa \Vert y_{n}-Ty_{n}\Vert ^{2} \nonumber \\{} & {} \quad \le \Vert w_{n}-v\Vert ^{2}-\alpha _{n}(1+\kappa -\alpha _{n})\Vert w_{n}-Tw_{n}\Vert ^{2}\nonumber \\{} & {} \qquad -\kappa \Vert y_{n}-Ty_{n}\Vert ^{2} \end{aligned}$$
(62)
$$\begin{aligned}{} & {} \quad =\Vert w_{n}-v\Vert ^{2}-\frac{\alpha _{n}(1+\kappa -\alpha _{n})}{2(1+{ \alpha _{n}^{2}\beta }^{2})}\Vert x_{n+1}-w_{n}\Vert ^{2}\nonumber \\{} & {} \qquad -\kappa \Vert y_{n}-Ty_{n}\Vert ^{2} \nonumber \\{} & {} \quad \le \Vert w_{n}-v\Vert ^{2}-\mathcal {E}\Vert x_{n+1}-w_{n}\Vert ^{2} -\kappa \Vert y_{n}-Ty_{n}\Vert ^{2}. \end{aligned}$$
(63)

From (17) and Lemma 2.2(ii), we have

$$\begin{aligned} \Vert w_{n}-v\Vert ^{2}= & {} \Vert x_{n}+\theta _{n}(x_{n}-x_{n-1})-v\Vert ^{2} \nonumber \\= & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \nonumber \\{} & {} \quad +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}. \end{aligned}$$
(64)

From (10), we have

$$\begin{aligned}{} & {} \Vert x_{n+1}-w_{n}\Vert ^{2} = \Vert x_{n+1}-x_{n}-\theta _{n}(x_{n}-x_{n-1})\Vert ^{2} \\{} & {} \quad \ge (1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2} -\theta _{n}(1-\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}. \end{aligned}$$

From (63), we obtain

$$\begin{aligned} \Vert x_{n+1}-v\Vert ^{2}\le & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \\{} & {} \quad +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2} \\{} & {} -\mathcal {E}\Vert x_{n+1}-w_{n}\Vert ^{2}-\kappa \Vert y_{n}-Ty_{n}\Vert ^{2} \\\le & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \\{} & {} \quad +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}\\{} & {} \qquad -\mathcal {E}(1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2}\\{} & {} +\mathcal {E}\theta _{n}(1-\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2}\\{} & {} \qquad -\kappa \Vert y_{n}-Ty_{n}\Vert ^{2} \\= & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2}\\{} & {} \quad -\mathcal {E}(1-\theta _{n})\Vert x_{n+1}-x_{n}\Vert ^{2}\\{} & {} +\theta _{n}\left( 1+\theta _{n}+\mathcal {E}(1-\theta _{n})\right) \Vert x_{n}-x_{n-1}\Vert ^{2}\\{} & {} \quad -\kappa \Vert y_{n}-Ty_{n}\Vert ^{2}. \end{aligned}$$

Note that \(\{\theta _{n}\}\) is an increasing sequence in [0, 1) by the assumption (C2) and that the condition (12) holds by the assumption (C5) with \(\varepsilon =\mathcal {E}\). Thus, Lemma 2.5 implies that \(\sum _{n=1}^{\infty }\Vert x_{n+1}-x_{n}\Vert ^{2}<\infty \) and \(\lim _{n\rightarrow \infty }\Vert x_{n}-v\Vert \) exists. It immediately follows, from (17), that \( \lim _{n\rightarrow \infty }\Vert w_{n}-x_{n}\Vert =0\).

Since \(\ell :=\lim _{n\rightarrow \infty }\Vert x_{n}-v\Vert \) exists, it follows, from \(\lim _{n\rightarrow \infty }\Vert x_{n}-w_{n}\Vert =0\), that \( \lim _{n\rightarrow \infty }\Vert w_{n}-v\Vert =\ell \). Hence, from (62), we obtain that \(\lim _{n\rightarrow \infty }\Vert w_{n}-Tw_{n}\Vert =0=\lim _{n\rightarrow \infty }\Vert y_{n}-Ty_{n}\Vert . \)

(c) Let \(v\in \textrm{Fix}(S)\). Note \(\{\theta _{n}\}\) is an increasing sequence in [0, 1). From (62) and (64), we get

$$\begin{aligned} \Vert x_{n+1}-v\Vert ^{2}\le & {} (1+\theta _{n})\Vert x_{n}-v\Vert ^{2}-\theta _{n}\Vert x_{n-1}-v\Vert ^{2} \\{} & {} +\theta _{n}(1+\theta _{n})\Vert x_{n}-x_{n-1}\Vert ^{2} \\{} & {} -\alpha _{n}(1+\kappa -\alpha _{n})\Vert w_{n}-Tw_{n}\Vert ^{2}-\kappa \Vert y_{n}-Ty_{n}\Vert ^{2}. \end{aligned}$$

Hence, from the condition (C1), we have

$$\begin{aligned}{} & {} a(1+\kappa -b)\sum \limits _{i=1}^{n}\Vert w_{i}-Tw_{i}\Vert ^{2}+\kappa \sum \limits _{i=1}^{n}\Vert y_{i}-Ty_{i}\Vert ^{2} \\{} & {} \quad \le \sum \limits _{i=1}^{n}(\Vert x_{i}-v\Vert ^{2}-\Vert x_{i+1}-v\Vert ^{2}) +\sum \limits _{i=1}^{n}(\theta _{i+1}\Vert x_{i}-v\Vert ^{2}-\theta _{i}\Vert x_{i-1}-v\Vert ^{2}) \\{} & {} \qquad +\theta (1+\theta )\sum \limits _{i=1}^{n}\Vert x_{i}-x_{i-1}\Vert ^{2} \\{} & {} \quad \le \Vert x_{1}-v\Vert ^{2}-\Vert x_{n+1}-v\Vert ^{2}+\theta _{n+1}\Vert x_{n}-v\Vert ^{2}-\theta _{1}\Vert x_{0}-v\Vert ^{2}+\theta (1+\theta )\sum \limits _{i=1}^{n}\Vert x_{i}-x_{i-1}\Vert ^{2} \\{} & {} \quad \le \Vert x_{1}-v\Vert ^{2}+\theta _{n+1}\Vert x_{n}-v\Vert ^{2}+\theta (1+\theta )\sum \limits _{i=1}^{\infty }\Vert x_{i}-x_{i-1}\Vert ^{2} \\{} & {} \quad \le M\text { for all } n\in \mathbb {N}\text { and for some } M>0. \end{aligned}$$

Since T is \(\beta \)-Lipschitz continuous, we have

$$\begin{aligned} \Vert x_{n+1}-Tx_{n+1}\Vert= & {} \Vert Ty_{n}-Tx_{n+1}\Vert \le \beta \Vert y_{n}-x_{n+1}\Vert \nonumber \\{} & {} \quad =\beta \Vert y_{n}-Ty_{n}\Vert \text { for all } n\in \mathbb {N}. \end{aligned}$$
(65)

Since \(\sum \limits _{n=1}^{\infty }\Vert y_{n}-Ty_{n}\Vert ^{2}<\infty ,\) it follows that \(\sum \limits _{n=1}^{\infty }\Vert x_{n}-Tx_{n}\Vert ^{2}<\infty \). Notice that \(R_{T}(n):=R_{T,\{x_{n}\}}(n)\le \Vert x_{n}-Tx_{n}\Vert \) for all \(n\in \mathbb {N}\) and that \(\{R_{T}(n)\}\) is decreasing. Hence \( \sum \limits _{n=1}^{\infty }(R_{T}(n))^{2}<\infty .\) Therefore, from Lemma 2.3, we conclude that (20) holds. \(\square \)
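The scheme treated in this appendix adds one more application of T to the relaxed inertial Mann step: \(y_{n}=(1-\alpha _{n})w_{n}+\alpha _{n}Tw_{n}\) followed by \(x_{n+1}=Ty_{n}\). The sketch below runs it with the same kind of toy operator and constant parameters (illustrative assumptions only) and tracks the residuals \(\Vert x_{n}-Tx_{n}\Vert \), whose squares are summable by the argument above.

```python
import numpy as np

T = lambda x: 0.5 * x            # toy Lipschitz, quasi-nonexpansive operator
theta, alpha = 0.1, 0.5          # illustrative parameters

x_prev = np.array([1.0, -2.0])   # x_0
x = np.array([0.5, 1.0])         # x_1
sum_sq_residuals = 0.0
for _ in range(200):
    w = x + theta * (x - x_prev)               # inertial extrapolation w_n
    y = (1 - alpha) * w + alpha * T(w)         # relaxed Mann step y_n
    x_next = T(y)                              # extra "normal S" application of T
    sum_sq_residuals += np.linalg.norm(x_next - T(x_next)) ** 2
    x_prev, x = x, x_next
print(np.linalg.norm(x), sum_sq_residuals)
```

Compared with the plain relaxed inertial Mann iteration of the previous appendix, the extra evaluation of T makes each sweep contract more strongly on this toy problem, at the cost of one additional operator evaluation per iteration.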

Appendix-IV: Proof of Proposition 4.2

Proof

Let \(\{x_{n}\}\) be a sequence in X such that \(x_{n}\rightharpoonup z\) and \((I-S_{\lambda })x_{n}\rightarrow 0\) as \(n\rightarrow \infty \). We now show that \((I-S_{\lambda })z=0.\) If \(Fz=0,\) then \(z\in \varOmega [ \textrm{VI}(C,F)],\) i.e., \((I-S_{\lambda })z=0.\) Assume that \(Fz\ne 0.\) Let \( x\in C.\) Set \(y_{n}=S_{\lambda }x_{n}.\) From Lemma 2.1(a), we have

$$\begin{aligned}{} & {} \left\langle x_{n}-\lambda Fx_{n}-y_{n},x-y_{n}\right\rangle \\{} & {} \quad =\left\langle x_{n}-\lambda Fx_{n}-P_{C}(I-\lambda F)x_{n}, x-P_{C}(I-\lambda F)x_{n}\right\rangle \\{} & {} \quad \le 0 \text { for all } n\in \mathbb {N}. \end{aligned}$$

Hence \( \left\langle x_{n}-y_{n},x-y_{n}\right\rangle \le \lambda \left\langle Fx_{n},x-y_{n}\right\rangle , \) which implies that

$$\begin{aligned} \left\langle x_{n}-y_{n},x-y_{n}\right\rangle -\lambda \left\langle Fx_{n},x_{n}-y_{n}\right\rangle \le \lambda \left\langle Fx_{n},x-x_{n}\right\rangle . \nonumber \\ \end{aligned}$$
(66)

Since \(x_{n}\rightharpoonup z,\) \(\Vert x_{n}-y_{n}\Vert \rightarrow 0\) and F is sequentially weak-to-weak continuous, we have \(y_{n}\rightharpoonup z\) and \(Fx_{n}\rightharpoonup Fz.\) Hence, from (66), we have

$$\begin{aligned} 0\le \liminf _{n\rightarrow \infty }\left\langle Fy_{n},x-y_{n}\right\rangle . \end{aligned}$$
(67)

Since the norm is sequentially weakly lower semicontinuous, we get

$$\begin{aligned} 0<\Vert Fz\Vert \le \liminf _{n\rightarrow \infty }\Vert Fy_{n}\Vert . \end{aligned}$$
(68)

It follows that there exists \(n_{0}\in \mathbb {N}\) such that \(Fy_{n}\ne 0\) for all \(n\ge n_{0}.\) Now we choose a strictly decreasing sequence \(\{\varepsilon _{k}\}_{k\ge 0}\) in \((0,\infty )\) with \(\varepsilon _{k}\rightarrow 0.\) For \(\{\varepsilon _{k}\}_{k\ge 0}\), from (67) and (68), there exists a strictly increasing sequence \(\{n_{k}\}_{k\ge 0}\) of positive integers such that

$$\begin{aligned} \left\langle Fy_{n_{k}},x-y_{n_{k}}\right\rangle +\varepsilon _{k}\ge 0 \text { and }Fy_{n_{k}}\ne 0\text { for all }k\ge 0. \end{aligned}$$
(69)

Set \(p_{k}=\frac{1}{\Vert Fy_{n_{k}}\Vert ^{2}}Fy_{n_{k}}.\) Then \( \left\langle Fy_{n_{k}},p_{k}\right\rangle =1.\) From (69), we obtain

$$\begin{aligned}{} & {} \left\langle Fy_{n_{k}},x-y_{n_{k}}+\varepsilon _{k}p_{k}\right\rangle = \left\langle Fy_{n_{k}},x-y_{n_{k}}\right\rangle \\{} & {} \quad + \varepsilon _{k}\left\langle Fy_{n_{k}},p_{k}\right\rangle \ge 0\text { for all }k\ge 0. \end{aligned}$$

By the pseudo-monotonicity of F on X, we have

$$\begin{aligned} \left\langle F(x+\varepsilon _{k}p_{k}),x-y_{n_{k}}+\varepsilon _{k}p_{k}\right\rangle \ge 0\text { for all }k\ge 0. \end{aligned}$$
(70)

Note that F satisfies the condition \((\mathscr {B})\). Then \(\{Fy_{n_{k}}\}\) is bounded and, by (68), bounded away from zero, so there exist constants \(m,M>0\) such that \(m\le \Vert Fy_{n_{k}}\Vert \le M\) for all \(k\ge 0.\) Hence \(\Vert \varepsilon _{k}p_{k}\Vert =\varepsilon _{k}\Vert p_{k}\Vert =\frac{\varepsilon _{k}}{ \Vert Fy_{n_{k}}\Vert }\rightarrow 0\) as \(k\rightarrow \infty .\) Thus, letting \(k\rightarrow \infty \) in (70), we get \( \left\langle Fx,x-z\right\rangle \ge 0. \) Since x is an arbitrary element of C, we infer that \(z\in \varOmega [\textrm{DVI}(C,F)]\). Therefore, from Lemma 4.2, we conclude that \( z\in \varOmega [\textrm{VI}(C,F)],~\)i.e., \((I-S_{\lambda })z=0.\) \(\square \)
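The operator \(S_{\lambda }=P_{C}(I-\lambda F)\) that this proof revolves around can be illustrated on a toy variational inequality. In the sketch below, C is the nonnegative orthant and \(F(x)=x-b\) (both illustrative assumptions, not the paper's data); for this F the VI solution is \(P_{C}(b)\), and it coincides with the fixed point of \(S_{\lambda }\), which is the equivalence the proof exploits.

```python
import numpy as np

# Toy VI(C, F): C = nonnegative orthant, F(x) = x - b (strongly monotone).
# Illustrative data only.
P_C = lambda x: np.maximum(x, 0.0)        # projection onto C
b = np.array([1.0, -1.0])
F = lambda x: x - b
S = lambda x, lam: P_C(x - lam * F(x))    # S_lambda = P_C (I - lam F)

x_star = P_C(b)                           # VI solution for this toy F
x = np.array([3.0, 2.0])
for _ in range(100):                      # fixed-point iteration of S_lambda
    x = S(x, 0.5)
print(x_star, x)
```

For this strongly monotone toy operator even the plain fixed-point iteration of \(S_{\lambda }\) converges; the extragradient refinements become necessary for merely monotone or pseudo-monotone F.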

Appendix-V: Proof of Proposition 4.3

Proof

(a) Let \(u,v\in X.\) Then, from Remark 4.1, we have

$$\begin{aligned} \Vert E_{\lambda }(u)-E_{\lambda }(v)\Vert\le & {} \Vert u-v-\lambda (FS_{\lambda }(u)-FS_{\lambda }(v))\Vert \\\le & {} \Vert u-v\Vert +\lambda L\Vert S_{\lambda }(u)-S_{\lambda }(v)\Vert \\\le & {} (1+\lambda L+\lambda ^{2}L^{2})\Vert u-v\Vert . \end{aligned}$$

(b) Let \(\lambda \in (0,1/L)\), \(x\in X\) and \(v\in \varOmega [\textrm{VI}(C,F)]\). Set \( y=P_{C}(x-\lambda Fx)\) and \(z=P_{C}(x-\lambda Fy)\). Then \(\left\langle Fv,y-v\right\rangle \ge 0.\) By the pseudo-monotonicity of F, we have \( \left\langle Fy,y-v\right\rangle \ge 0.\) From Lemma 2.1(d), we have

$$\begin{aligned}{} & {} \Vert P_{C}(x-\lambda Fy)-v\Vert ^{2}\nonumber \\{} & {} \quad \le \Vert x-\lambda Fy-v\Vert ^{2}-\Vert x-\lambda Fy-P_{C}(x-\lambda Fy)\Vert ^{2} \nonumber \\{} & {} \quad =\Vert x-v-\lambda Fy\Vert ^{2}-\Vert x-z-\lambda Fy\Vert ^{2} \nonumber \\{} & {} \quad =\Vert x-v\Vert ^{2}-2\lambda \left\langle Fy,x-v\right\rangle +\lambda ^{2}\Vert Fy\Vert ^{2} -\Vert x-z\Vert ^{2}\nonumber \\{} & {} \qquad +2\lambda \left\langle Fy,x-z\right\rangle -\lambda ^{2}\Vert Fy\Vert ^{2} \nonumber \\{} & {} \quad =\Vert x-v\Vert ^{2}-\Vert x-z\Vert ^{2}+2\lambda \left\langle Fy,v-z\right\rangle \nonumber \\{} & {} \quad =\Vert x-v\Vert ^{2}-\Vert x-z\Vert ^{2}+2\lambda [ \left\langle Fy,v-y\right\rangle +\left\langle Fy,y-z\right\rangle ] \nonumber \\{} & {} \quad \le \Vert x-v\Vert ^{2}-\Vert x-z\Vert ^{2}+2\lambda \left\langle Fy,y-z\right\rangle . \end{aligned}$$
(71)

Substituting x by \(x-\lambda Fx\) and y by z in Lemma 2.1(a), we get

$$\begin{aligned} \left\langle x-\lambda Fx-P_{C}(x-\lambda Fx),z-P_{C}(x-\lambda Fx)\right\rangle \le 0, \end{aligned}$$

which gives us that

$$\begin{aligned} \left\langle x-\lambda Fx-y,z-y\right\rangle \le 0. \end{aligned}$$
(72)

Thus, from (72), we have

$$\begin{aligned}{} & {} \left\langle x-\lambda Fy-y,z-y\right\rangle \nonumber \\{} & {} \quad =\left\langle x-\lambda Fx-y,z-y\right\rangle +\left\langle \lambda Fx-\lambda Fy,z-y\right\rangle \nonumber \\{} & {} \quad \le \lambda \left\langle Fx-Fy,z-y\right\rangle \nonumber \\{} & {} \quad \le \lambda L\left\| x-y\right\| \left\| z-y\right\| \nonumber \\{} & {} \quad \le \frac{\lambda L}{2}(\Vert x-y\Vert ^{2}+\Vert y-z\Vert ^{2}). \end{aligned}$$
(73)

Hence, from (71) and (73), we get

$$\begin{aligned} \Vert z-v\Vert ^{2}\le & {} \Vert x-v\Vert ^{2}-\Vert x-y+(y-z)\Vert ^{2} \\{} & {} +2\lambda \left\langle Fy,y-z\right\rangle \\= & {} \Vert x-v\Vert ^{2}-\Vert x-y\Vert ^{2}-\Vert y-z\Vert ^{2}\\{} & {} -2\left\langle x-y,y-z\right\rangle +2\lambda \left\langle Fy,y-z\right\rangle \\= & {} \Vert x-v\Vert ^{2}-\Vert x-y\Vert ^{2}-\Vert y-z\Vert ^{2}\\{} & {} -2\left\langle x-\lambda Fy-y,y-z\right\rangle \\\le & {} \Vert x-v\Vert ^{2}-\Vert x-y\Vert ^{2}-\Vert y-z\Vert ^{2}\\{} & {} +\lambda L(\Vert x-y\Vert ^{2}+\Vert y-z\Vert ^{2}) \\= & {} \Vert x-v\Vert ^{2}-(1-\lambda L)\Vert x-y\Vert ^{2}\\{} & {} -(1-\lambda L)\Vert y-z\Vert ^{2}. \end{aligned}$$

For \(\rho =1/2\), from (9), we have

$$\begin{aligned} \Vert y-z\Vert ^{2}=\Vert y-x+x-z\Vert ^{2}\ge \frac{1}{2}\Vert x-z\Vert ^{2}-\Vert y-x\Vert ^{2}. \nonumber \\ \end{aligned}$$
(74)

Hence, from (74), we have

$$\begin{aligned} \Vert z-v\Vert ^{2}\le & {} \Vert x-v\Vert ^{2}-(1-\lambda L)\Vert x-y\Vert ^{2}\\{} & {} -\frac{1-\lambda L}{2}\Vert x-z\Vert ^{2}+(1-\lambda L)\Vert y-x\Vert ^{2}\\= & {} \Vert x-v\Vert ^{2}-\frac{1-\lambda L}{2}\Vert x-z\Vert ^{2}. \end{aligned}$$

(c) Let \(\lambda \in (0,1/L)\). Lemma 4.1 shows that \(\textrm{Fix}(S_{\lambda })=\varOmega [\textrm{VI}(C,F)].\) Note \(\emptyset \ne \textrm{Fix}(S_{\lambda })\subseteq \textrm{Fix}(E_{\lambda }).\) We now show that \(\textrm{Fix}(E_{\lambda })\subseteq \textrm{Fix}(S_{\lambda }).\) Let \(u\in \textrm{Fix}(E_{\lambda })\) and \(v\in \textrm{Fix}(S_{\lambda }).\) From Part (b)(i), we get

$$\begin{aligned}{} & {} \Vert u-v\Vert ^{2} = \Vert E_{\lambda }(u)-v\Vert ^{2} \le \Vert u-v\Vert ^{2}\\{} & {} \quad -(1-\lambda L)\Vert u-S_{\lambda }(u)\Vert ^{2} -(1-\lambda L)\Vert S_{\lambda }(u)-E_{\lambda }(u)\Vert ^{2} \\{} & {} \quad =\Vert u-v\Vert ^{2}-2(1-\lambda L)\Vert u-S_{\lambda }(u)\Vert ^{2}, \end{aligned}$$

which implies that \(u\in \textrm{Fix}(S_{\lambda }).\)

(d) Let \(\lambda \in (0,1/L)\). Let \(\{x_{n}\}\) be a bounded sequence in X such that \( \lim _{n\rightarrow \infty }\Vert x_{n}-E_{\lambda }(x_{n})\Vert =0.\) Let \(v\in \varOmega [\textrm{VI}(C,F)].\) From Part (b)(i), we have

$$\begin{aligned} \Vert E_{\lambda }(x_{n})-v\Vert ^{2}\le & {} \Vert x_{n}-v\Vert ^{2}\\{} & {} \quad -(1-\lambda L)\Vert x_{n}-S_{\lambda }(x_{n})\Vert ^{2}\text { for all } n\in \mathbb {N}, \end{aligned}$$

from which it follows that

$$\begin{aligned}{} & {} (1-\lambda L)\Vert x_{n}-S_{\lambda }(x_{n})\Vert ^{2}\\{} & {} \quad \le \Vert x_{n}-v\Vert ^{2}-\Vert E_{\lambda }(x_{n})-v\Vert ^{2} \\{} & {} \quad =(\Vert x_{n}-v\Vert -\Vert E_{\lambda }(x_{n})-v\Vert ) (\Vert x_{n}-v\Vert +\Vert E_{\lambda }(x_{n})-v\Vert ) \\{} & {} \quad \le 2\Vert x_{n}-v\Vert \Vert x_{n}-E_{\lambda }(x_{n})\Vert \text { for all }n\in \mathbb {N}. \end{aligned}$$

By the boundedness of \(\{x_{n}\}\), we have \(\lim _{n\rightarrow \infty }\Vert x_{n}-S_{\lambda }(x_{n})\Vert =0.\) \(\square \)
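Part (b) can be probed numerically. The sketch below, on a hypothetical toy instance (the matrix M, the vector q, the solution v, and the step size are illustrative choices, not from the paper), checks the contraction inequality \(\Vert E_{\lambda }(x)-v\Vert ^{2}\le \Vert x-v\Vert ^{2}-\frac{1-\lambda L}{2}\Vert x-E_{\lambda }(x)\Vert ^{2}\) at random points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: C = nonnegative orthant, F(x) = M x + q (monotone, L-Lipschitz).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = -M @ np.array([1.0, 1.0])          # v = (1, 1) solves VI(C, F) since F(v) = 0
v = np.array([1.0, 1.0])
L = np.linalg.norm(M, 2)               # Lipschitz constant of F
lam = 0.4 / L                          # lambda in (0, 1/L), so lam * L = 0.4

F = lambda x: M @ x + q
P_C = lambda x: np.maximum(x, 0.0)

def E(x):
    """Extragradient operator E_lambda: y = P_C(x - lam F(x)), then P_C(x - lam F(y))."""
    y = P_C(x - lam * F(x))
    return P_C(x - lam * F(y))

for _ in range(1000):
    x = rng.uniform(-5, 5, size=2)
    z = E(x)
    lhs = np.linalg.norm(z - v) ** 2
    rhs = np.linalg.norm(x - v) ** 2 - 0.5 * (1 - lam * L) * np.linalg.norm(x - z) ** 2
    assert lhs <= rhs + 1e-10          # Part (b) descent inequality holds at every sample
```

The assertions pass for every sample, as the proposition guarantees for any \(x\in X\) and any solution v once \(\lambda \in (0,1/L)\).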

Appendix-VI: Proof of Proposition 4.5

Proof

(a) Let \(u,v\in X.\) Then

$$\begin{aligned} \Vert T_{\lambda }(u)-T_{\lambda }(v)\Vert\le & {} \Vert S_{\lambda }(u)-S_{\lambda }(v)\Vert +\lambda \Vert FS_{\lambda }(u)\\{} & {} -FS_{\lambda }(v)\Vert +\lambda \Vert Fu-Fv\Vert \\\le & {} (1+3\lambda L+\lambda ^{2}L^{2})\Vert u-v\Vert . \end{aligned}$$

(b) Let \(\lambda \in (0,1/L),\) \(x\in X\) and \(z\in \varOmega [\textrm{VI}(C,F)].\) Set \(y:=P_{C}(x-\lambda Fx)\). Then

$$\begin{aligned}{} & {} \Vert T_{\lambda }(x)-x\Vert =\Vert y-x-\lambda (Fy-Fx)\Vert \\{} & {} \quad \le \Vert y-x\Vert +\lambda \Vert Fy-Fx\Vert \le (1+\lambda L)\Vert y-x\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert S_{\lambda }(x)-x\Vert&=\Vert T_{\lambda }x-x+\lambda (Fy-Fx)\Vert \le \Vert T_{\lambda }x-x\Vert \\&+\lambda L\Vert y-x\Vert , \end{aligned}$$

which implies that \((1-\lambda L)\Vert x-S_{\lambda }(x)\Vert \le \Vert x-T_{\lambda }x\Vert .\) Since \(z\in C\) and \(y=P_{C}(x-\lambda Fx)\), Lemma 2.1(a) gives

$$\begin{aligned} \langle y-x+\lambda Fx,y-z\rangle \le 0,\quad \text {i.e.,}\quad \langle y-x,y-z\rangle \le -\lambda \langle Fx,y-z\rangle . \end{aligned}$$

Thus,

$$\begin{aligned} \Vert T_{\lambda }(x)-z\Vert ^{2}= & {} \Vert y-\lambda (Fy-Fx)-z\Vert ^{2}\\= & {} \Vert y-x+x-z\Vert ^{2}+\lambda ^{2}\Vert Fy-Fx\Vert ^{2}\\{} & {} -2\lambda \langle Fy-Fx,y-z\rangle \\= & {} \Vert y-x\Vert ^{2}+\Vert x-z\Vert ^{2}+2\langle y-x,x-z\rangle \\{} & {} +\lambda ^{2}\Vert Fy-Fx\Vert ^{2}-2\lambda \langle Fy-Fx,y-z\rangle \\= & {} \Vert y-x\Vert ^{2}+\Vert x-z\Vert ^{2}+2\langle y-x,x-y\rangle \\{} & {} +2\langle y-x,y-z\rangle \\{} & {} +\lambda ^{2}\Vert Fy-Fx\Vert ^{2}-2\lambda \langle Fy-Fx,y-z\rangle \\= & {} \Vert x-z\Vert ^{2}-\Vert y-x\Vert ^{2}+2\langle y-x,y-z\rangle \\{} & {} +\lambda ^{2}\Vert Fy-Fx\Vert ^{2}-2\lambda \langle Fy-Fx,y-z\rangle \\\le & {} \Vert x-z\Vert ^{2}-\Vert y-x\Vert ^{2}+\lambda ^{2}\Vert Fy-Fx\Vert ^{2}\\{} & {} -2\lambda \langle Fx,y-z\rangle -2\lambda \langle Fy-Fx,y-z\rangle \\\le & {} \Vert x-z\Vert ^{2}-\Vert y-x\Vert ^{2}+\lambda ^{2}L^{2}\Vert y-x\Vert ^{2}\\{} & {} -2\lambda \langle Fy,y-z\rangle . \end{aligned}$$
(75)

Since \(z\in \varOmega [\text {VI}(C,F)],\) we have \(\langle Fz,y-z\rangle \ge 0.\) By pseudo-monotonicity of F,  we have \(\langle Fy,y-z\rangle \ge 0.\) From (75), we have

$$\begin{aligned} \Vert T_{\lambda }(x)-z\Vert ^{2}\le & {} \Vert x-z\Vert ^{2}-(1-\lambda ^{2}L^{2})\Vert x-S_{\lambda }(x)\Vert ^{2}\\{} & {} \quad \le \Vert x-z\Vert ^{2}-\kappa \Vert x-T_{\lambda }(x)\Vert ^{2}. \end{aligned}$$

(c) Let \(\lambda \in (0,1/L)\). Note \(\textrm{Fix}(S_{\lambda })=\varOmega [\textrm{VI}(C,F)].\) Observe that \(\textrm{Fix}(S_{\lambda })\subseteq \textrm{Fix}(T_{\lambda })\). We now show that \(\textrm{Fix}(T_{\lambda })\subseteq \textrm{Fix}(S_{\lambda }).\) Let \(u\in \textrm{Fix}(T_{\lambda })\) and \(v\in \textrm{Fix}(S_{\lambda }).\) From Part (b)(iii), we get

$$\begin{aligned} \Vert u-v\Vert ^{2}\!=\!\Vert T_{\lambda }(u)\!-\!v\Vert ^{2}\!\le \! \Vert u-v\Vert ^{2}\!-\!(1\!-\!\lambda ^{2}L^{2})\Vert u\!-\!S_{\lambda }(u)\Vert ^{2}, \end{aligned}$$

which implies that \(u\in \textrm{Fix}(S_{\lambda }).\)

(d) Let \(\lambda \in (0,1/L)\). Let \(\{x_{n}\}\) be a bounded sequence in X such that \(\lim _{n\rightarrow \infty }\Vert x_{n}-T_{\lambda }(x_{n})\Vert =0.\) Using Part (b)(ii), we obtain \( (1-\lambda L)\Vert x_{n}-S_{\lambda }(x_{n})\Vert \le \Vert x_{n}-T_{\lambda }(x_{n})\Vert \text { for all }n\in \mathbb {N}. \) Thus, we conclude that the operator \(S_{\lambda }\) has property \((\mathscr {A} )\) with respect to the operator \(T_{\lambda }\). \(\square \)
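The two inequalities of Part (b) for Tseng's operator can likewise be checked numerically. The sketch below uses a hypothetical toy instance (M, q, the solution, and the step size are illustrative choices, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: C = nonnegative orthant, F(x) = M x + q monotone and L-Lipschitz.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = -M @ np.array([1.0, 1.0])
z_sol = np.array([1.0, 1.0])           # F(z_sol) = 0, so z_sol solves VI(C, F)
L = np.linalg.norm(M, 2)
lam = 0.4 / L                          # lambda in (0, 1/L)

F = lambda x: M @ x + q
P_C = lambda x: np.maximum(x, 0.0)
S = lambda x: P_C(x - lam * F(x))      # the operator S_lambda

def T(x):
    """Tseng's forward-backward-forward operator: T_lambda(x) = y - lam (F(y) - F(x))."""
    y = S(x)
    return y - lam * (F(y) - F(x))

for _ in range(1000):
    x = rng.uniform(-5, 5, size=2)
    # Part (b)(ii): (1 - lam L) ||x - S(x)|| <= ||x - T(x)||
    assert (1 - lam * L) * np.linalg.norm(x - S(x)) <= np.linalg.norm(x - T(x)) + 1e-10
    # Part (b)(iii): ||T(x) - z||^2 <= ||x - z||^2 - (1 - lam^2 L^2) ||x - S(x)||^2
    lhs = np.linalg.norm(T(x) - z_sol) ** 2
    rhs = np.linalg.norm(x - z_sol) ** 2 - (1 - (lam * L) ** 2) * np.linalg.norm(x - S(x)) ** 2
    assert lhs <= rhs + 1e-10
```

Both bounds hold at every sampled point, in line with the proof above; note that T avoids the second projection used by the extragradient operator, which is the practical appeal of Tseng's method.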

Appendix-VII: Proof of Example 5.2

To prove the pseudo-monotonicity and Lipschitz continuity of the operator F defined by (46), we define \(g:X\rightarrow \mathbb {R}\) by \( g(x)=e^{-\langle x,Ux\rangle }+\alpha \ \text {for all}\ x\in X. \) Let \(x,y\in X\) be such that \(\langle Fx,y-x\rangle \ge 0.\) Since \(g(x)>0,\) it follows that \(\langle Vx+q,y-x\rangle \ge 0.\) Hence

$$\begin{aligned} \langle Fy,y-x\rangle= & {} g(y)\langle Vy+q,y-x\rangle \\\ge & {} g(y)(\langle Vy+q,y-x\rangle -\langle Vx+q,y-x\rangle )\\= & {} g(y)\langle V(y-x),y-x\rangle \\\ge & {} 0. \end{aligned}$$

Thus, F is pseudo-monotone.

Now, let \(x,h\in X.\) Then \( \nabla F(x)(h) =-2e^{-\langle x,Ux\rangle }\langle Ux,h\rangle (Vx+q)+\left( e^{-\langle x,Ux\rangle }+\alpha \right) Vh \) and

$$\begin{aligned}{} & {} \Vert \nabla F(x)(h)\Vert \le 2e^{-\eta \Vert x\Vert ^{2}}\Vert U\Vert \Vert x\Vert (\Vert V\Vert \Vert x\Vert +\Vert q\Vert )\Vert h\Vert \\{} & {} \quad +\left( e^{-\eta \Vert x\Vert ^{2}} +\alpha \right) \Vert V\Vert \Vert h\Vert . \end{aligned}$$

Hence

$$\begin{aligned} \Vert \nabla F(x)\Vert\le & {} 2e^{-\eta \Vert x\Vert ^{2}}\Vert U\Vert \Vert x\Vert (\Vert V\Vert \Vert x\Vert +\Vert q\Vert )\\{} & {} +\left( e^{-\eta \Vert x\Vert ^{2}}+\alpha \right) \Vert V\Vert . \end{aligned}$$

Since the functions \(t\mapsto te^{-\eta t^{2}}\) and \(t\mapsto t^{2}e^{-\eta t^{2}}\) are bounded on \([0,\infty )\), the right-hand side is bounded in x; hence \(\sup _{x\in X}\Vert \nabla F(x)\Vert <\infty \) and, by the mean value theorem, F is Lipschitz continuous on X. \(\square \)
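The pseudo-monotonicity argument above can be probed numerically. The sketch below uses hypothetical data consistent with the structural assumptions of the proof (U positive definite, V positive semidefinite, \(\alpha >0\)); the specific U, V, q, and \(\alpha \) are illustrative choices, not the instance (46) from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data matching the proof's assumptions:
U = np.eye(2)                           # positive definite: <x, Ux> >= eta ||x||^2 with eta = 1
V = np.array([[2.0, 0.0], [0.0, 0.0]])  # positive semidefinite
q = np.array([1.0, -1.0])
alpha = 0.1

def F(x):
    # F(x) = g(x) (V x + q) with g(x) = exp(-<x, Ux>) + alpha > 0, as in the proof
    return (np.exp(-x @ (U @ x)) + alpha) * (V @ x + q)

# Pseudo-monotonicity: <F(x), y - x> >= 0  implies  <F(y), y - x> >= 0.
checked = 0
for _ in range(2000):
    x, y = rng.uniform(-3, 3, size=2), rng.uniform(-3, 3, size=2)
    if F(x) @ (y - x) >= 0:             # premise of the implication
        assert F(y) @ (y - x) >= -1e-12 # conclusion, up to roundoff
        checked += 1
assert checked > 0                      # the premise was actually triggered
```

Every sampled pair satisfying the premise also satisfies the conclusion, as the proof guarantees since \(g(y)>0\) and \(\langle V(y-x),y-x\rangle \ge 0\) for positive semidefinite V.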


Cite this article

Sahu, D.R. A unified framework for three accelerated extragradient methods and further acceleration for variational inequality problems. Soft Comput 27, 15649–15674 (2023). https://doi.org/10.1007/s00500-023-08806-5
