Error bounds for generalized vector inverse quasi-variational inequality problems with point-to-set mappings

Abstract: The goal of this paper is to further study a class of generalized vector inverse quasi-variational inequality problems and to obtain error bounds in terms of the residual gap function, the regularized gap function, and the global gap function, by utilizing relaxed monotonicity and Hausdorff Lipschitz continuity. These error bounds provide effective estimates of the distance between an arbitrary feasible point and the solution set of generalized vector inverse quasi-variational inequality problems.


Introduction
In 2014, Li et al. [1] introduced a new class of inverse mixed variational inequalities in Hilbert spaces, with applications to traffic network equilibrium control, market equilibrium problems in economics, and telecommunication network problems. The concept of a gap function plays an important role in the development of iterative algorithms, the evaluation of their convergence properties, and the design of useful stopping rules for iterative algorithms; see [2][3][4][5]. Error bounds are important and useful because they provide a measure of the distance between an arbitrary feasible point and the solution set. Solodov [6] developed some merit functions associated with a generalized mixed variational inequality and used them to obtain error bounds for mixed variational inequalities. Aussel et al. [7] introduced a new inverse quasi-variational inequality (IQVI), obtained local (global) error bounds for IQVI in terms of certain gap functions, and, to demonstrate the applicability of IQVI, provided an example of road pricing problems; see also [8,9]. Sun and Chai [10] introduced regularized gap functions for generalized vector variational inequalities (GVVI) and obtained error bounds for GVVI in terms of these regularized gap functions. Wu and Huang [11] employed the generalized f-projection operator to deal with mixed variational inequalities. Using the generalized f-projection operator, Li and Li [12] investigated a restricted mixed set-valued variational inequality in Hilbert spaces, proposed four merit functions for it, and obtained error bounds through these functions.
Our goal in this paper is to study a class of generalized vector inverse quasi-variational inequality problems. We propose three gap functions, namely the residual gap function, the regularized gap function, and the global gap function, and we obtain error bounds for the generalized vector inverse quasi-variational inequality problem in terms of these gap functions and the generalized f-projection operator, under suitable monotonicity and Lipschitz continuity assumptions on the underlying mappings.

Preliminaries
Throughout this article, R_+ denotes the set of non-negative real numbers, 0 denotes the origin of every finite-dimensional space, and ‖·‖ and ⟨·,·⟩ denote the norm and the inner product in finite-dimensional spaces, respectively. Let Ω, F, P : R^n → 2^{R^n} be set-valued mappings with nonempty closed convex values, let N_i : R^n × R^n → R^n (i = 1, 2, ..., m) be bi-mappings, let B : R^n → R^n be a single-valued mapping, and let f_i : R^n → R (i = 1, 2, ..., m) be real-valued convex functions. We put N = (N_1, N_2, ..., N_m), f = (f_1, f_2, ..., f_m), and, for any x, w ∈ R^n,

⟨N(x, x), w⟩ = ( ⟨N_1(x, x), w⟩, ⟨N_2(x, x), w⟩, ..., ⟨N_m(x, x), w⟩ ).
In this paper, we consider the following generalized vector inverse quasi-variational inequality: find x̄ ∈ Ω(x̄), ū ∈ F(x̄) and v̄ ∈ P(x̄) such that the vector inequality (2.1) holds. Its solution set is denoted by Γ.
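A formulation of (2.1) consistent with the special cases listed below can be written schematically as follows (a sketch only; cf. the corresponding inverse quasi-variational inequality models in [13,14]):

find x̄ ∈ Ω(x̄), ū ∈ F(x̄), v̄ ∈ P(x̄) such that

⟨N(ū, v̄), y − B(x̄)⟩ + f(y) − f(B(x̄)) ∉ −int R^m_+, ∀ y ∈ Ω(x̄),

where ⟨N(ū, v̄), y − B(x̄)⟩ = ( ⟨N_1(ū, v̄), y − B(x̄)⟩, ..., ⟨N_m(ū, v̄), y − B(x̄)⟩ ) and f = (f_1, ..., f_m).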

Special cases:
(i) If P is the zero mapping and N(·, ·) = N(·), then (2.1) reduces to the problem, studied in [13], of finding x̄ ∈ Ω(x̄) and ū ∈ F(x̄) satisfying (2.2); its solution set is denoted by Γ_1.
(ii) If F is a single-valued mapping, then (2.2) reduces to the vector inverse mixed quasi-variational inequality, studied in [14], of finding x̄ ∈ Ω(x̄) satisfying (2.3); its solution set is denoted by Γ_2.
(iii) If C ⊂ R^n is a nonempty closed and convex subset, B(x) = x and Ω(x) = C for all x ∈ R^n, then (2.3) collapses to the generalized vector variational inequality (2.4) considered in [10], namely that of finding x̄ ∈ C satisfying (2.4).
(iv) If f(x) = 0 for all x ∈ R^n, then (2.4) reduces to the vector variational inequality studied in [15]: find x̄ ∈ C such that

⟨N(x̄), y − x̄⟩ ∉ −int R^m_+, ∀ y ∈ C. (2.5)

(v) If m = 1, so that R^m_+ = R_+, then (2.5) reduces to the variational inequality studied in [16]: find x̄ ∈ C such that

⟨N(x̄), y − x̄⟩ ≥ 0, ∀ y ∈ C. (2.6)
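The chain of reductions among the special cases (i)-(v) can be summarized as:

(2.1) → (2.2) [P ≡ 0, N(·,·) = N(·)] → (2.3) [F single-valued] → (2.4) [B(x) = x, Ω(x) ≡ C] → (2.5) [f ≡ 0] → (2.6) [R^m_+ = R_+],

each arrow indicating the specialization stated in the corresponding item above.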
Definition 2.1 [7] Let G : R^n → R^n and g : R^n → R^n be two maps.
(i) (G, g) is said to be strongly monotone if there exists a constant μ_g > 0 such that

⟨G(x) − G(y), g(x) − g(y)⟩ ≥ μ_g ‖x − y‖², ∀ x, y ∈ R^n;

(ii) g is said to be L_g-Lipschitz continuous if there exists a constant L_g > 0 such that

‖g(x) − g(y)‖ ≤ L_g ‖x − y‖, ∀ x, y ∈ R^n.

For any fixed γ > 0, let G : R^n × Ω → (−∞, +∞] be the function defined as follows (a standard form is recalled in the display below), where Ω ⊂ R^n is a nonempty closed and convex subset and f : R^n → R is convex.

Definition 2.2 [11] We say that Π^f_Ω : R^n → 2^Ω is a generalized f-projection operator if it assigns to each ϕ ∈ R^n the set of minimizers of G(ϕ, ·) over Ω. In particular, if f(x) = 0 for all x ∈ R^n, then the generalized f-projection operator Π^f_Ω is equivalent to the metric projection operator onto Ω.

Lemma 2.4 [1,11] The following statements hold:
(i) for any given ϕ ∈ R^n, Π^f_Ω ϕ is nonempty and single-valued;
(ii) for any given ϕ ∈ R^n, x = Π^f_Ω ϕ if and only if the variational characterization recalled in the display below holds;
(iii) Π^f_Ω : R^n → Ω is nonexpansive, that is, ‖Π^f_Ω x − Π^f_Ω y‖ ≤ ‖x − y‖ for all x, y ∈ R^n.

Lemma 2.5 [17] Let m_0 be a positive number and B ⊂ R^n be a nonempty subset such that ‖w‖ ≤ m_0 for all w ∈ B. Let Ω : R^n → 2^{R^n} be a set-valued mapping such that, for each x ∈ R^n, Ω(x) is a closed convex set, and let f : R^n → R be a convex function on R^n. Assume that
(i) there exists a constant τ > 0 such that D(Ω(x), Ω(y)) ≤ τ ‖x − y‖ for all x, y ∈ R^n, where D(·, ·) is the Hausdorff metric on R^n;
(ii) 0 ∈ ⋂_{w ∈ R^n} Ω(w);
(iii) f is ℓ-Lipschitz continuous on R^n.
Then there exists a constant κ = 6τ(m_0 + γℓ) such that

‖Π^f_{Ω(x)} z − Π^f_{Ω(y)} z‖ ≤ κ ‖x − y‖, ∀ x, y ∈ R^n, z ∈ B.

Definition 2.6 A function r : R^n → R is said to be a gap function for the generalized vector inverse quasi-variational inequality (2.1) on a set S ⊂ R^n if it satisfies the following properties:
(i) r(x) ≥ 0 for any x ∈ S;
(ii) for x̄ ∈ S, r(x̄) = 0 if and only if x̄ is a solution of (2.1).
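For the reader's convenience, a standard finite-dimensional form of the function G and of the generalized f-projection operator, as introduced in [11] and used in [1], is the following sketch (stated with the fixed parameter γ from above; the exact normalization should be checked against those references):

G(ϕ, x) = ‖ϕ‖² − 2⟨ϕ, x⟩ + ‖x‖² + 2γ f(x), ϕ ∈ R^n, x ∈ Ω,

Π^f_Ω(ϕ) = { u ∈ Ω : G(ϕ, u) = inf_{x ∈ Ω} G(ϕ, x) }, ϕ ∈ R^n.

With this form, the characterization in Lemma 2.4(ii) reads: x = Π^f_Ω ϕ if and only if

⟨x − ϕ, y − x⟩ + γ f(y) − γ f(x) ≥ 0, ∀ y ∈ Ω,

and, when f ≡ 0, G(ϕ, x) = ‖ϕ − x‖², so Π^f_Ω reduces to the metric projection onto Ω.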
Definition 2.7 Let B : R^n → R^n be a single-valued mapping and N : R^n × R^n → R^n be a bi-mapping.
(i) (N, B) is said to be strongly monotone with respect to the first argument of N and B if there exists a constant μ_B > 0 such that

⟨N(x, w) − N(y, w), B(x) − B(y)⟩ ≥ μ_B ‖x − y‖², ∀ x, y, w ∈ R^n;

(ii) (N, B) is said to be relaxed monotone with respect to the second argument of N and B if there exists a constant ζ_B > 0 such that

⟨N(w, x) − N(w, y), B(x) − B(y)⟩ ≥ −ζ_B ‖x − y‖², ∀ x, y, w ∈ R^n;

(iii) N is said to be σ-Lipschitz continuous with respect to the first argument with constant σ > 0 and ℘-Lipschitz continuous with respect to the second argument with constant ℘ > 0 if

‖N(x, w) − N(y, w)‖ ≤ σ ‖x − y‖ and ‖N(w, x) − N(w, y)‖ ≤ ℘ ‖x − y‖, ∀ x, y, w ∈ R^n;

(iv) B is said to be L_B-Lipschitz continuous if there exists a constant L_B > 0 such that ‖B(x) − B(y)‖ ≤ L_B ‖x − y‖ for all x, y ∈ R^n.

Example 2.8 The variational inequality (2.6) can be solved by transforming it into an equivalent optimization problem through the so-called merit function r(·; τ) : C → R defined by

r(x; τ) = sup_{y ∈ C} { ⟨N(x), x − y⟩ − (τ/2) ‖x − y‖² },

where τ is a nonnegative parameter. The function r(·; 0) is usually called the gap function, and the function r(·; τ) for τ > 0 is called the regularized gap function.

Example 2.9 Assume that N : R^n → R^n is a given mapping and C is a closed convex set in R^n. Let α and β be given scalars satisfying α > β > 0. Then (2.6) has a D-gap function given by r(·; β) − r(·; α), where D stands for difference.
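To make Example 2.8 concrete, the following minimal numerical sketch evaluates r(x; τ) for an affine map N and a box-shaped C, for which the inner maximization has the closed-form solution y* = P_C(x − N(x)/τ). All names and data here (project_box, regularized_gap, the matrix A, the vector b, the box bounds) are illustrative choices, not taken from the paper.

    import numpy as np

    def project_box(z, lo, hi):
        # Euclidean projection onto the box [lo, hi] (componentwise clamping).
        return np.clip(z, lo, hi)

    def regularized_gap(x, N, lo, hi, tau=1.0):
        # Fukushima-type regularized gap function r(x; tau) for the VI
        #   <N(x), y - x> >= 0 for all y in C,  with C = [lo, hi] (illustrative choice).
        # For tau > 0 the supremum in Example 2.8 is attained at y* = P_C(x - N(x)/tau).
        Nx = N(x)
        y_star = project_box(x - Nx / tau, lo, hi)
        d = x - y_star
        return float(Nx @ d - 0.5 * tau * (d @ d))

    # Toy affine map N(x) = Ax + b on the box [0, 1]^2 (hypothetical data).
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    b = np.array([-1.0, -0.5])
    N = lambda x: A @ x + b
    lo, hi = np.zeros(2), np.ones(2)

    x = np.array([0.3, 0.7])
    print(regularized_gap(x, N, lo, hi, tau=1.0))

For x ∈ C one has r(x; τ) ≥ 0, and r(x; τ) = 0 exactly when x solves (2.6); these are the two gap-function properties listed in Definition 2.6.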

The residual gap functions
In this section, we discuss the residual gap function for the generalized vector inverse quasi-variational inequality problem by using strong monotonicity, relaxed monotonicity and Hausdorff Lipschitz continuity, and we prove error bounds related to the residual gap function. We define the residual gap function r_γ for (2.1) as follows (one natural candidate for its form is sketched after the proof of Theorem 3.1).

Theorem 3.1 Suppose that F, P : R^n → 2^{R^n} are set-valued mappings and N_i : R^n × R^n → R^n (i = 1, 2, ..., m) are bi-mappings. Assume that B : R^n → R^n is a single-valued mapping. Then, for any γ > 0, r_γ(x) is a gap function for (2.1) on R^n.

Proof. For any x ∈ R^n and any γ > 0, it follows directly from the definition that r_γ(x) ≥ 0.
On the other hand, if r_γ(x̄) = 0, then, by the characterization of the generalized f-projection operator in Lemma 2.4, it follows that x̄ is a solution of (2.1).
Conversely, if x̄ is a solution of (2.1), then there exists 1 ≤ i_0 ≤ m such that the defining inequality holds for the index i_0. By Lemma 2.4, this yields r_γ(x̄) = 0. The proof is completed.
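One natural candidate for r_γ, patterned on the projection residuals used for inverse quasi-variational inequalities in [7] and built from the generalized f-projection operator above, is the sketch

r_γ(x) = max_{1 ≤ i ≤ m} inf_{u ∈ F(x), v ∈ P(x)} ‖ B(x) − Π^{f_i}_{Ω(x)}( B(x) − γ N_i(u, v) ) ‖, x ∈ R^n;

the way the set-valued images F(x) and P(x) enter (here through an infimum over selections u, v) is an assumption of this sketch. With such a residual, r_γ(x̄) = 0 amounts to the fixed-point relation B(x̄) = Π^{f_i}_{Ω(x̄)}( B(x̄) − γ N_i(ū, v̄) ), which is the situation handled through Lemma 2.4 in the proof of Theorem 3.1.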
Next we give error bounds for (2.1) in terms of the residual gap function r_γ.

Theorem 3.2 Let F, P : R^n → 2^{R^n} be D-ϑ_F-Lipschitz continuous and D-ϑ_P-Lipschitz continuous mappings, respectively. Let N_i : R^n × R^n → R^n (i = 1, 2, ..., m) be σ_i-Lipschitz continuous with respect to the first argument and ℘_i-Lipschitz continuous with respect to the second argument, let B : R^n → R^n be L_B-Lipschitz continuous, and let (N_i, B) be strongly monotone with respect to the first argument of N_i and B with positive constant μ_{B_i} and relaxed monotone with respect to the second argument of N_i and B with positive constant ζ_{B_i}. Assume that there exists κ_i ∈ (0, (μ_{B_i} − ζ_{B_i})/(ϑ_F σ_i + ℘_i ϑ_P)) such that

‖Π^{f_i}_{Ω(x)} z − Π^{f_i}_{Ω(y)} z‖ ≤ κ_i ‖x − y‖, ∀ x, y ∈ R^n, u ∈ F(x), v ∈ P(x), z ∈ {w | w = B(x) − γ N_i(u, v)}.

Then, for any x ∈ R^n and any i ∈ {1, 2, ..., m}, the residual gap function r_γ provides an error bound for (2.1); here ‖x − x̄‖ denotes the distance between the point x and the solution set Γ.
Proof. Let x̄ ∈ Ω(x̄) be a solution of (2.1). Then, for any i ∈ {1, ..., m}, the defining inequality holds and, using Lemma 2.4, we obtain (3.4). Replacing y by B(x) in (3.4), we get (3.5). Utilizing (3.5) and (3.6), we obtain an inequality which implies the key estimate. Since F is D-ϑ_F-Lipschitz continuous, P is D-ϑ_P-Lipschitz continuous, and N_i is σ_i-Lipschitz continuous with respect to the first argument and ℘_i-Lipschitz continuous with respect to the second argument, we arrive at (3.7). Again, since for each i = 1, 2, ..., m the pair (N_i, B) is strongly monotone with respect to the first argument of N_i and B with positive constant μ_{B_i} and relaxed monotone with respect to the second argument of N_i and B with positive constant ζ_{B_i}, and using the Cauchy-Schwarz inequality together with the triangle inequality, we obtain a lower estimate for the same quantity. Combining it with (3.7) and condition (3.2), we conclude that, for any x ∈ R^n and i ∈ {1, 2, ..., m}, the distance ‖x − x̄‖ is controlled by r_γ(x), which is the asserted error bound. The proof is completed.

The regularized gap function
The regularized gap function φ_γ for (2.1) is defined for all x ∈ R^n as follows (a Fukushima-type form consistent with the arguments of this section is sketched after the proof of Theorem 4.2 below). The first result of this section relates φ_γ to the residual gap function r_γ.

Theorem 4.1 For any x ∈ B^{-1}(Ω) and any γ > 0, φ_γ(x) is bounded below in terms of the residual gap function r_γ(x).

Proof. For given x ∈ R^n, u ∈ F(x), v ∈ P(x) and i ∈ {1, 2, ..., m}, let ψ_i(x, ·) be the concave function appearing in the definition (4.1) of φ_γ, and consider the problem of maximizing ψ_i(x, ·) over Ω(x). Since ψ_i(x, ·) is a strongly concave function and Ω(x) is nonempty, closed and convex, this optimization problem has a unique solution z ∈ Ω(x). Invoking the optimality condition at z, which involves the normal cone N_{Ω(x)}(z) to Ω(x) at z and the subdifferential ∂f_i(z) of f_i at z, we deduce that z is the generalized f-projection of the corresponding point, for u ∈ F(x), v ∈ P(x). Hence g_i(x) can be rewritten in terms of Π^{f_i}_{Ω(x)}. From the characterization of the generalized f-projection, we obtain the inequality (4.5), valid for all y ∈ Ω(x). For any x ∈ B^{-1}(Ω), we have B(x) ∈ Ω(x).
Therefore, putting y = B(x) in (4.5), we obtain the corresponding estimate; combining it with the definition of r_γ(x) and (4.1), we get the asserted lower bound for φ_γ(x) in terms of r_γ(x). The proof is completed.

Theorem 4.2 For any γ > 0, φ_γ is a gap function for (2.1) on B^{-1}(Ω).

Proof. From the definition of φ_γ, we have φ_γ(x) ≥ ψ_i(x, y) for all y ∈ Ω(x), u ∈ F(x), v ∈ P(x) and each i.
Conversely, if x̄ is a solution of (2.1), then there exists 1 ≤ i_0 ≤ m such that the defining inequality holds for the index i_0, which means that φ_γ(x̄) ≤ 0. Together with the preceding claim, which gives φ_γ(x̄) ≥ 0, this implies that φ_γ(x̄) = 0.
The proof is completed.
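For reference, a Fukushima-type form of φ_γ consistent with the arguments above (the strong concavity of ψ_i(x, ·), the optimality condition involving N_{Ω(x)}(z) and ∂f_i(z), and the rewriting of g_i through the generalized f-projection) is the following sketch; the exact placement of the parameter γ should be checked against the original definition (4.1):

ψ_i(x, y) = ⟨N_i(u, v), B(x) − y⟩ + f_i(B(x)) − f_i(y) − (1/(2γ)) ‖B(x) − y‖²,

g_i(x) = sup_{y ∈ Ω(x)} ψ_i(x, y), φ_γ(x) = max_{1 ≤ i ≤ m} g_i(x),

with u ∈ F(x), v ∈ P(x). Under this convention the unique maximizer of ψ_i(x, ·) over Ω(x) is z = Π^{f_i}_{Ω(x)}( B(x) − γ N_i(u, v) ), which is the projection point used in the proof of Theorem 4.1, and for x ∈ B^{-1}(Ω) one obtains a lower bound of the type φ_γ(x) ≥ (1/(2γ)) r_γ(x)², which is the kind of relation established there.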
Since, by Theorem 4.2, φ_γ is a gap function for (2.1), it is natural to investigate the error bound properties that can be obtained from φ_γ. The following corollary is obtained directly from Theorem 3.2 and (3.5).
Corollary 4.3 Let F, P : R^n → 2^{R^n} be D-ϑ_F-Lipschitz continuous and D-ϑ_P-Lipschitz continuous mappings, respectively. Let N_i : R^n × R^n → R^n (i = 1, 2, ..., m) be σ_i-Lipschitz continuous with respect to the first argument and ℘_i-Lipschitz continuous with respect to the second argument, let B : R^n → R^n be L_B-Lipschitz continuous, and let (N_i, B) be strongly monotone with respect to the first argument of N_i and B with constant μ_{B_i} > 0 and relaxed monotone with respect to the second argument of N_i and B with constant ζ_{B_i} > 0. Assume that there exists κ_i ∈ (0, (μ_{B_i} − ζ_{B_i})/(ϑ_F σ_i + ℘_i ϑ_P)) such that the condition of Theorem 3.2 on the generalized f-projection operators holds. Then, for any x ∈ B^{-1}(Ω) and any admissible γ > 0, the regularized gap function φ_γ provides an error bound for (2.1).

The global gap functions
The regularized gap function φ_γ does not provide global error bounds for (2.1) on R^n. In this section, we first discuss the D-gap function (see [6]) for (2.1), which yields a global error bound for (2.1) on R^n. For (2.1) and parameters α > β > 0, the D-gap function G_{αβ} is defined as follows. From (4.1), we know that G_{αβ} can be rewritten in terms of the functions g_i, with u ∈ F(x), v ∈ P(x).

Theorem 5.1 For any x ∈ R^n and α > β > 0, the D-gap function G_{αβ}(x) is bounded below and above in terms of the residual gap functions.

Proof. From the definition of G_{αβ}(x), it follows that (5.2) holds. For any given i ∈ {1, 2, ..., m}, the corresponding projection point belongs to Ω(x), so by Lemma 2.4 we know that (5.3) holds. Combining (5.2) and (5.3), we get (5.4). Since the other projection point also belongs to Ω(x), Lemma 2.4 yields a further inequality, and so, together with (5.3), we obtain (5.5). From (5.4) and (5.5), for any i ∈ {1, 2, ..., m}, we get the desired componentwise estimates. Hence the asserted bounds follow. The proof is completed.
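With the convention for φ_γ sketched in the previous section, and writing the two parameters as α > β > 0 as in Example 2.9, the D-gap function takes the standard difference form

G_{αβ}(x) = φ_α(x) − φ_β(x), x ∈ R^n,

which is nonnegative on all of R^n because the quadratic penalty 1/(2γ) decreases as γ increases (the order of the difference depends on how the quadratic term is parameterized). This nonnegativity on the whole space, rather than only on B^{-1}(Ω), is what allows G_{αβ} to serve as a global gap function in Theorem 5.2.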
Now we are in a position to prove that G_{αβ} is a global gap function for (2.1) on R^n.

Theorem 5.2 For 0 < β < α, G_{αβ} is a gap function for (2.1) on R^n.
Proof. By Theorem 5.1, G_{αβ}(x) ≥ 0 for all x ∈ R^n. If G_{αβ}(x̄) = 0, then Theorem 5.1 forces the corresponding residual to vanish at x̄, and from Theorem 3.1 we know that x̄ is a solution of (2.1).
Conversely, if x̄ is a solution of (2.1), then from Theorem 3.1 it follows that the residual vanishes at x̄, and hence, by Theorem 5.1, G_{αβ}(x̄) = 0.
The proof is completed.
Using Theorem 3.2 and (5.2), we immediately obtain a global error bound on R^n for (2.1).

Corollary 5.3 Let F, P : R^n → 2^{R^n} be D-ϑ_F-Lipschitz continuous and D-ϑ_P-Lipschitz continuous mappings, respectively. Let N_i : R^n × R^n → R^n (i = 1, 2, ..., m) be σ_i-Lipschitz continuous with respect to the first argument and ℘_i-Lipschitz continuous with respect to the second argument, and let B : R^n → R^n be L_B-Lipschitz continuous. Let (N_i, B) be strongly monotone with respect to the first argument of N_i and B with constant μ_{B_i} and relaxed monotone with respect to the second argument of N_i and B with constant ζ_{B_i}. Assume that there exists κ_i ∈ (0, (μ_{B_i} − ζ_{B_i})/(ϑ_F σ_i + ℘_i ϑ_P)) such that

‖Π^{f_i}_{Ω(x)} z − Π^{f_i}_{Ω(y)} z‖ ≤ κ_i ‖x − y‖, ∀ x, y ∈ R^n, u ∈ F(x), v ∈ P(x), z ∈ {w | w = B(x) − γ N_i(u, v)}.

Then, for any x ∈ R^n and 0 < β < α, the D-gap function G_{αβ} provides a global error bound for (2.1).

Conclusions
One of the traditional approaches to solving a variational inequality (VI) and its variants is to transform it into an equivalent optimization problem by means of a gap function. In addition, gap functions play a pivotal role in deriving so-called error bounds, which provide a measure of the distance between an arbitrary feasible point and the solution set. Motivated and inspired by the research going on in this direction, the main purpose of this paper has been to further study the generalized vector inverse quasi-variational inequality problem (2.1) and to obtain error bounds in terms of the residual gap function, the regularized gap function, and the global gap function by utilizing relaxed monotonicity and Hausdorff Lipschitz continuity. These error bounds provide effective estimates of the distance between an arbitrary feasible point and the solution set of (2.1).