Convergence Analysis of the Approximation Problems for Solving Stochastic Vector Variational Inequality Problems

In this paper, we consider stochastic vector variational inequality problems (SVVIPs). Because of the presence of the stochastic variable, an SVVIP generally has no solution that holds for every realization. To solve this problem, we take the regularized gap function of the SVVIP as the loss function and then formulate a low-risk conditional value-at-risk (CVaR) model. However, this low-risk CVaR model is difficult to solve by general constrained optimization algorithms, because the objective function is nonsmooth and contains an expectation that is not easy to compute. By using the sample average approximation technique and a smoothing function, we present approximation problems of the low-risk CVaR model that deal with these two difficulties. In addition, for the given approximation problems, we prove convergence results for global optimal solutions and for stationary points, respectively. Finally, a numerical experiment is given.


Introduction
Variational inequality problems (VIPs) are a class of equilibrium optimization problems. They are powerful tools for solving large-scale optimization problems and equilibrium problems, and they have been widely applied in economic equilibrium, optimal control, game theory, and transportation planning. Recall that the variational inequality problem, denoted by VI(X, f), is to find a vector x* ∈ X such that

(x − x*)^T f(x*) ≥ 0, ∀x ∈ X, (1)

where X ⊆ R^n is a nonempty closed convex set and f: X → R^n is a given function. However, some elements may involve uncertain data in practice. For example, demands are generally difficult to determine in supply chain networks because they vary with changes in income level. In addition, in traffic equilibrium problems, the users' attempts to minimize travel cost make the equilibrium flows uncertain.
Because of the existence of stochastic elements, the solutions of stochastic variational inequality problems change with the stochastic elements. In order to meet practical needs, many authors began to consider the following stochastic variational inequality problem, denoted by SVI(X, f), which requires an x* ∈ X such that

(x − x*)^T f(x*, ξ(ω)) ≥ 0, ∀x ∈ X, ξ(ω) ∈ Q, a.s., (2)

where ξ(ω): Ω → Q is a stochastic vector defined on the probability space (Ω, F, P), Q is the support of the probability space, f: X × Q → R^n is a given function, and a.s. is the abbreviation for "almost surely" under the given probability measure. For simplicity, we use ξ instead of ξ(ω) in this paper.
At present, scholars have obtained many theoretical results in the study of stochastic variational inequality problems. Because of the existence of stochastic variables, it is difficult to obtain a solution of a stochastic variational inequality problem that is valid for all realizations of the stochastic variables. In order to solve such problems, scholars proposed reasonable deterministic models of stochastic variational inequality problems and regarded the solutions of these deterministic models as the solutions of the stochastic variational inequality problems. For example, the expected value (EV) method [1,2,3] and the expected residual minimization (ERM) method [4,5,6] focus on minimizing the average or expected residual. In addition, a conditional value-at-risk model was presented by Chen and Lin [7]. This model can better avoid risks.
In real life, many problems have multiple evaluation indicators. In order to make these evaluation indicators comprehensively optimal, scholars began to study vector variational inequality problems. Vector variational inequality problems are powerful tools for studying optimization problems, quantitative economics, and equilibrium problems. As a result, in-depth research on vector variational inequality problems has been carried out (see [8,9,10,11,12,13] for details).
Similar to the practical applications of variational inequality problems, vector variational inequality problems also encounter situations in which the practical problems contain stochastic variables. Hence, this paper considers stochastic vector variational inequality problems (SVVIPs). The stochastic vector variational inequality problem is defined as follows: find x* ∈ X such that

(F_1(x*, ξ(ω))^T (x − x*), . . . , F_m(x*, ξ(ω))^T (x − x*))^T ∉ −int R^m_+, ∀x ∈ X, ξ(ω) ∈ Q, a.s., (3)

where X ⊆ R^n is a nonempty closed convex set, F_j: R^n × Q → R^n is a vector-valued function for each j = 1, . . . , m, ξ(ω): Ω → Q is a stochastic vector defined on the probability space (Ω, F, P) with support set Q ⊂ R^r, and a.s. is the abbreviation for "almost surely" under the given probability measure. For simplicity, in this paper, we use ξ to denote ξ(ω).
Because of the existence of the stochastic variable ξ, there is generally no vector x* satisfying (3) for all ξ ∈ Q. In order to give reasonable solutions of (3), we present a low-risk deterministic model and regard the solutions of this model as the solutions of the SVVIP.
In this paper, we assume that Q ⊂ R^r is a nonempty compact set, F(x, ξ) is continuously differentiable with respect to x and with respect to ξ ∈ Q, and ρ(·) is a continuous probability density function. For convenience, we use ∇_x g(x) to denote the gradient of g: R^n → R with respect to x.
The remainder of this paper is organized as follows. In Section 2, we give a definition, a lemma, and the equivalent formulation of the SVVIP along with its regularized gap function. In Section 3, we present a CVaR deterministic model for solving the SVVIP and regard the solutions of this CVaR model as the solutions of the SVVIP. Furthermore, we employ sample average approximation and a smoothing method to solve this CVaR model. In Section 4, the convergence of the sequence of global optimal solutions of the CVaR approximation problems is proved. In Section 5, the convergence of the sequence of stationary points of the corresponding CVaR approximation problems is considered. In Section 6, a numerical experiment is presented. In Section 7, conclusions are given.

Preliminaries
Definition 1 (see [14]). The Clarke generalized gradient of f(x) with respect to x is defined as

∂_x f(x) ≔ conv{ lim_{y → x, y ∈ D_f} ∇_x f(y) }, (4)

where D_f denotes the set of points near x at which f: R^n → R is Fréchet differentiable, ∇_x f(y) denotes the gradient of f(y) with respect to x, and conv denotes the convex hull of a set.
Lemma 1 (see [15]). The SVVIP (3) is equivalent to the following stochastic variational inequality with simple constraints: find (x*, λ*) ∈ X × Λ such that

(x − x*)^T Σ_{j=1}^m λ*_j F_j(x*, ξ) ≥ 0, ∀x ∈ X, ξ ∈ Q, a.s. (5)

Here, Λ ≔ {λ ∈ R^m : λ_j ≥ 0, Σ_{j=1}^m λ_j = 1}.

Based on the regularized gap function proposed by Fukushima in [16], in this paper, we present the corresponding regularized gap function of problem (5) as follows:

g(x, λ, ξ) ≔ max_{y ∈ X} { Σ_{j=1}^m λ_j F_j(x, ξ)^T (x − y) − (C/2)(x − y)^T G (x − y) }, (6)

where C > 0 is a given parameter and G is an n × n symmetric positive definite matrix. It follows from [16] that, for any (x, λ) ∈ X × Λ, g(x, λ, ξ) ≥ 0, and g(x*, λ*, ξ) = 0 if and only if (x*, λ*) solves (5). Therefore, (5) can be transformed into the following constrained optimization problem:

min_{(x, λ) ∈ X × Λ} g(x, λ, ξ). (9)

Due to the existence of stochastic variables, problems (3) and (5) usually have no solutions. However, based on the needs of such problems in practical applications, we need to give a reasonable solution that can be regarded as the solution of problem (3) or (5). Therefore, in the next section, we will present the conditional value-at-risk (CVaR) model for solving stochastic vector variational inequality problems.
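The inner maximization in the regularized gap function (6) is a strongly concave quadratic program, so when G is the identity matrix and X is a simple set the maximizer has a closed form via projection. The following Python sketch evaluates g(x, λ, ξ) under these illustrative assumptions (G = I and X a box); the function names and the box constraint are ours, not from the paper.

```python
import numpy as np

def regularized_gap(x, lam, F_vals, X_lo, X_hi, C=1.0):
    """Regularized gap g(x, lam, xi) of the scalarized SVVIP, assuming G = I
    and X = [X_lo, X_hi] (a box). The inner maximizer then has the closed
    form y* = Proj_X(x - F_lam / C)."""
    # F_lam = sum_j lam_j * F_j(x, xi) (the scalarized map from Lemma 1)
    F_lam = sum(l * F for l, F in zip(lam, F_vals))
    y_star = np.clip(x - F_lam / C, X_lo, X_hi)  # projection onto the box
    d = x - y_star
    return float(F_lam @ d - 0.5 * C * (d @ d))
```

Taking y = x in (6) shows g ≥ 0, and g vanishes exactly when x solves the scalarized inequality, which the sketch reproduces numerically.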

Low-Risk CVaR Model and Its Approximation Problems
We first introduce the definition of value-at-risk (VaR) before giving the low-risk CVaR model of (3). For a variable x and a parameter u ∈ R_+, the probability that the loss does not exceed the level u is defined as

P(x, u) ≔ ∫_{q(x, ξ) ≤ u} ρ(ξ) dξ, (10)

where ρ(ξ) denotes the probability density function of ξ and q(x, ξ): R^n × Q → R is the loss function. For a given confidence level α, VaR_α(x) is defined as follows:

VaR_α(x) ≔ min{ u ∈ R : P(x, u) ≥ α }. (11)

Even though VaR is a popular risk measure, it has many disadvantages: it lacks subadditivity and convexity and fails to be a coherent risk measure. Fortunately, CVaR remedies these shortcomings of VaR; that is, CVaR is coherent and convex. Moreover, Rockafellar and Uryasev indicated in [17,18] that, when minimizing CVaR, VaR can be obtained as a by-product. CVaR is a low-risk model. Many scholars have used the CVaR model to solve stochastic complementarity problems [19], stochastic variational inequality problems [9], and stochastic second-order cone complementarity problems [20]. There are also scholars who use CVaR to deal with stochastic constraints, such as [21,22].
For a given confidence level α, CVaR is defined as the conditional expectation of the loss associated with x, given that this loss is VaR_α(x) or greater; that is,

CVaR_α(x) ≔ E[ q(x, ξ) | q(x, ξ) ≥ VaR_α(x) ]. (12)

Hence, the optimization model minimizing CVaR can be written as

min_x CVaR_α(x). (13)

We then introduce the function Q_α(x, u): R^n × R_+ → R as follows:

Q_α(x, u) ≔ u + (1/(1 − α)) E[ q(x, ξ) − u ]_+, (14)

where E denotes mathematical expectation and [t]_+ = max(t, 0). It follows from the conclusions given in [17,18] that minimizing CVaR_α(x) is equivalent to minimizing the function Q_α(x, u). So the CVaR model (13) is equivalent to the following constrained optimization problem:

min_{x, u ≥ 0} Q_α(x, u). (15)

In this paper, we apply the regularized gap function (6) and combine it with the optimization problem (9) to give the loss function in (15). Then, the low-risk CVaR model for solving the SVVIP is given as follows:

min_{x ∈ X, λ ∈ Λ, u ≥ 0} u + (1/(1 − α)) E[ g(x, λ, ξ) − u ]_+. (16)

Here, α ∈ (0, 1) denotes a given confidence level.
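To illustrate the Rockafellar–Uryasev equivalence between (13) and (15) on sampled data, the following sketch minimizes u ↦ u + (1/((1 − α)N)) Σ_i [ℓ_i − u]_+ for a fixed loss sample; the minimizer is an empirical α-quantile (the VaR) and the minimum value is the empirical CVaR. The function name is ours.

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Minimize u + 1/((1-alpha)*N) * sum_i [loss_i - u]_+ over u.
    The minimizer u* is an empirical alpha-quantile of the losses
    (VaR_alpha); the minimum value is the empirical CVaR_alpha."""
    losses = np.sort(np.asarray(losses, dtype=float))
    n = len(losses)
    # empirical alpha-quantile: the ceil(alpha*n)-th smallest loss
    u_star = losses[min(int(np.ceil(alpha * n)) - 1, n - 1)]
    cvar = u_star + np.maximum(losses - u_star, 0.0).sum() / ((1.0 - alpha) * n)
    return u_star, cvar
```

For a uniform loss sample {1, ..., 10} and α = 0.9, the minimizer is 9 (the 0.9-quantile) and the minimum is 10, the average of the worst 10% of losses, matching (12).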
In fact, problem (16) is difficult to compute. This is because the model contains an expectation, which is difficult to calculate in general, and [·]_+ is not differentiable everywhere. Hence, we will employ the sample average approximation method and use the smoothing function presented by Li et al. in [23] to obtain the corresponding approximation problem.
The sample average approximation (SAA) method uses the sample average to approximate the expectation [24]. Let Q_k ≔ {ξ^1, . . . , ξ^{N_k}} denote a set of independently and identically distributed stochastic samples from Q, and suppose that N_k tends to infinity as k increases. For an integrable function ψ: Q → R, the strong law of large numbers guarantees that this procedure converges with probability one (abbreviated by "w.p.1."); that is,

(1/N_k) Σ_{ξ^i ∈ Q_k} ψ(ξ^i) → E[ψ(ξ)], w.p.1., as k → ∞. (17)

We then present the SAA problem of the low-risk CVaR model by applying the sample average approximation method as follows:

min_{x ∈ X, λ ∈ Λ, u ≥ 0} u + (1/((1 − α) N_k)) Σ_{ξ^i ∈ Q_k} [ g(x, λ, ξ^i) − u ]_+. (18)

The smoothing form of the function [t]_+ presented by Li et al. in [23] is a function φ_μ(t) that is continuously differentiable for every μ > 0 and converges to [t]_+ as μ → 0. For ease of notation, for any μ > 0, let

h_μ(x, λ, ξ, u) ≔ φ_μ(g(x, λ, ξ) − u). (24)

Finally, we combine the sample average approximation method and the smoothing function to give the smoothing sample average approximation (SSAA) problem of the low-risk CVaR model as follows:

min_{x ∈ X, λ ∈ Λ, u ≥ 0} u + (1/((1 − α) N_k)) Σ_{ξ^i ∈ Q_k} h_μ(x, λ, ξ^i, u). (25)
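The specific smoothing function of [23] is not reproduced here; as an illustrative stand-in, the sketch below uses a CHKS-type smoothing φ_μ(t) = (t + √(t² + 4μ²))/2, which is smooth for μ > 0 and tends to [t]_+ as μ → 0, to show how the SSAA objective (25) might be evaluated on a fixed sample. All names, and the choice of smoothing function, are assumptions.

```python
import numpy as np

def smooth_plus(t, mu):
    """CHKS-type smoothing of [t]_+ (an illustrative stand-in for the
    smoothing function of [23]): smooth for mu > 0, -> max(t, 0) as mu -> 0."""
    return 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))

def ssaa_objective(u, gap_samples, alpha, mu):
    """SSAA objective (25) for fixed (x, lam): gap_samples holds the values
    g(x, lam, xi_i), so the objective is
    u + 1/((1-alpha)*N) * sum_i phi_mu(g_i - u)."""
    g = np.asarray(gap_samples, dtype=float)
    return u + smooth_plus(g - u, mu).sum() / ((1.0 - alpha) * len(g))
```

As μ → 0 the smoothed objective approaches the SAA objective (18), which is the source of the two convergence questions studied in the next sections.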

Convergence of Global Optimal Solutions
We next consider the convergence of the smoothing sample average approximation problem (25). In this section and the next, we consider only the most complex case, in which μ_k → 0 as k → ∞; that is, μ varies as k increases.
Theorem 1. Suppose that, for each k, (x_k, λ_k, u_k) is a global optimal solution of problem (25) and (x̄, λ̄, ū) is an accumulation point of the sequence {(x_k, λ_k, u_k)}. Then, (x̄, λ̄, ū) is a global optimal solution of the low-risk CVaR model (16) with probability one.
Proof. Without loss of generality, we assume that lim_{k → ∞} (x_k, λ_k, u_k) = (x̄, λ̄, ū). Let B be a compact set containing the whole sequence {(x_k, λ_k, u_k)}, and let Z ⊆ R_+ be a compact set containing {μ_k}. Because h_μ(x, λ, ξ, u) is continuously differentiable on the compact set B × Q × Z, there exists a constant D > 0 such that, for any (x, λ, ξ, u) ∈ B × Q × Z,

‖∇ h_μ(x, λ, ξ, u)‖ ≤ D. (26)

In addition, by the mean value theorem, for each (x_k, λ_k, u_k), ξ^i, and μ_k, there exists α_ki ∈ (0, 1) such that (x_ki, λ_ki, ξ^i, u_ki) = α_ki (x_k, λ_k, ξ^i, u_k) + (1 − α_ki)(x̄, λ̄, ξ^i, ū) ∈ B, and we then obtain that

h_{μ_k}(x_k, λ_k, ξ^i, u_k) − h_{μ_k}(x̄, λ̄, ξ^i, ū) = ∇ h_{μ_k}(x_ki, λ_ki, ξ^i, u_ki)^T ((x_k, λ_k, u_k) − (x̄, λ̄, ū)). (27)

Hence, by (26) and (27), we obtain

| (1/N_k) Σ_{ξ^i ∈ Q_k} h_{μ_k}(x_k, λ_k, ξ^i, u_k) − (1/N_k) Σ_{ξ^i ∈ Q_k} h_{μ_k}(x̄, λ̄, ξ^i, ū) | ≤ D ‖(x_k, λ_k, u_k) − (x̄, λ̄, ū)‖ → 0, as k → ∞. (28)

On the other hand, following from (17) and (24), we have, w.p.1.,

(1/N_k) Σ_{ξ^i ∈ Q_k} h_{μ_k}(x̄, λ̄, ξ^i, ū) → E[ g(x̄, λ̄, ξ) − ū ]_+, as k → ∞. (29)

Similar to (29), we have, for any feasible (x, λ, u), w.p.1.,

(1/N_k) Σ_{ξ^i ∈ Q_k} h_{μ_k}(x, λ, ξ^i, u) → E[ g(x, λ, ξ) − u ]_+, as k → ∞. (30)

It then follows from (28) and (29) that, w.p.1.,

(1/N_k) Σ_{ξ^i ∈ Q_k} h_{μ_k}(x_k, λ_k, ξ^i, u_k) → E[ g(x̄, λ̄, ξ) − ū ]_+, as k → ∞. (32)

Moreover, for every k, since (x_k, λ_k, u_k) is a global optimal solution of problem (25), for any feasible (x, λ, u) we have

u_k + (1/((1 − α) N_k)) Σ_{ξ^i ∈ Q_k} h_{μ_k}(x_k, λ_k, ξ^i, u_k) ≤ u + (1/((1 − α) N_k)) Σ_{ξ^i ∈ Q_k} h_{μ_k}(x, λ, ξ^i, u). (33)

Taking limits in (33), according to (30) and (32), we have, w.p.1.,

ū + (1/(1 − α)) E[ g(x̄, λ̄, ξ) − ū ]_+ ≤ u + (1/(1 − α)) E[ g(x, λ, ξ) − u ]_+, (34)

for any feasible (x, λ, u), which shows that (x̄, λ̄, ū) is a global optimal solution of (16) with probability one. □

Convergence of Stationary Points
In fact, the SSAA problem (25) is usually a nonconvex optimization problem. Hence, when solving this problem, we usually obtain stationary points instead of global optimal solutions. Therefore, it is necessary to consider the convergence of the stationary points of (25). In this section, suppose that

X ≔ { x ∈ R^n : c_i(x) ≤ 0, i = 1, . . . , p, l_j(x) = 0, j = 1, . . . , q }, (35)

where c_i(x), l_j(x), i = 1, . . . , p, j = 1, . . . , q, are all continuously differentiable functions. We first give the definitions of stationary points of (16) and (25), respectively.
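As one plausible sketch (not the paper's exact definition), a KKT-type stationarity condition for the SSAA problem (25) under the constraint representation (35), with multipliers δ ∈ R^p_+ for the inequality constraints and β ∈ R^q for the equality constraints, and with θ_k denoting the SSAA objective, could take the form:

```latex
% Hypothetical KKT-type stationarity system for (25); theta_k denotes the
% SSAA objective and delta, beta are multipliers for c_i, l_j in (35).
\begin{aligned}
0 &= \nabla_x \theta_k(x,\lambda,u)
   + \sum_{i=1}^{p} \delta_i \,\nabla c_i(x)
   + \sum_{j=1}^{q} \beta_j \,\nabla l_j(x),\\
0 &\le \delta_i \perp -c_i(x) \ge 0, \quad i = 1,\dots,p,
  \qquad l_j(x) = 0, \quad j = 1,\dots,q,\\
0 &\le u \perp \nabla_u \theta_k(x,\lambda,u) \ge 0,
  \qquad \lambda \in \Lambda.
\end{aligned}
```

For the limiting problem (16), whose objective is nonsmooth, the gradient in the first line would be replaced by the Clarke generalized gradient of Definition 1, which is why the set-valued mapping ζ_μ below matters for the convergence analysis.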
Note that X is a compact set and g(x, λ, ξ) is continuously differentiable with respect to x; hence, ζ_μ(x, λ, ξ, u, δ, β) is a bounded, compact-valued mapping on B × Λ × Q × C × D × T.

Numerical Experiment
In this section, we apply the approach introduced in the previous sections to solve a vector variational inequality problem. That is, we first formulate the vector variational inequality problem as the CVaR model (16) and then employ the SAA method and the smoothing method to solve the CVaR model. In the following example, we use the function "fmincon" in MATLAB R2018b to solve the constrained optimization problem (25), and we use the command "random" to obtain the independent and identically distributed samples.
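Since the paper's MATLAB code is not reproduced here, the following is a hypothetical SciPy analogue of the experiment's workflow: it draws i.i.d. samples, evaluates the regularized gap of a toy scalarized map with made-up F_1(x, ξ) = x − ξ and F_2(x, ξ) = x + ξ on X = [0, 2] (with G = 1), smooths [·]_+ with a CHKS-type function, and minimizes the SSAA objective (25) with `scipy.optimize.minimize` in place of `fmincon`. Every concrete choice below (maps, set, parameters) is ours.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.uniform(0.0, 1.0, size=200)   # i.i.d. samples of xi ~ U(0, 1)
alpha, C, mu = 0.8, 1.0, 1e-3
lo, hi = 0.0, 2.0                      # X = [0, 2]

def gap(x, lam, s):
    """Regularized gap of the scalarized map F_lam = lam*F1 + (1-lam)*F2
    with hypothetical F1(x, xi) = x - xi, F2(x, xi) = x + xi and G = 1."""
    F_lam = lam * (x - s) + (1.0 - lam) * (x + s)
    y = np.clip(x - F_lam / C, lo, hi)  # closed-form inner maximizer
    d = x - y
    return F_lam * d - 0.5 * C * d * d

def objective(z):
    """SSAA objective (25) in the variables z = (x, lam, u)."""
    x, lam, u = z
    t = gap(x, lam, xi) - u
    phi = 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))  # smoothed [t]_+
    return u + phi.sum() / ((1.0 - alpha) * len(xi))

res = minimize(objective, x0=[1.0, 0.5, 0.1],
               bounds=[(lo, hi), (0.0, 1.0), (0.0, None)])
```

The bound constraints play the role of x ∈ X, λ ∈ Λ (here m = 2, so λ is a scalar with λ_2 = 1 − λ_1), and u ≥ 0; `fmincon` handles the same constraints in the MATLAB formulation.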

Conclusions
In this paper, we consider stochastic vector variational inequality problems (SVVIPs). Since there is generally no vector satisfying the SVVIP for all ξ ∈ Q, the SVVIP may have no solutions. Therefore, we give a reasonable deterministic model, namely, the conditional value-at-risk (CVaR) model, and regard the solutions of this model as the solutions of the SVVIP. For solving this CVaR model, the sample average approximation method and the smoothing method are employed to deal, respectively, with the difficulty that the expectation is hard to calculate and the fact that the objective function is nonsmooth. In addition, convergence analyses of the sequences of global optimal solutions and of stationary points of the SSAA problems are given. Finally, a numerical experiment indicates that the proposed approach is applicable.

Data Availability
No data were used in this article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Authors' Contributions
Kun Zhang is responsible for the calculation of the numerical experiment and some theoretical derivations. Meiju Luo is responsible for the overall work.