Abstract
We enrich the theory of variational inequalities for the case of an aggregative structure by implementing recent results obtained via the Selten–Szidarovszky technique. We derive existence, semi-uniqueness and uniqueness results for solutions and provide a computational method. As an application, we derive powerful practical equilibrium results for Nash equilibria of sum-aggregative games and illustrate them with Cournot oligopolies.
1 Introduction
When dealing with optimisation, equilibrium or related problems, a usual program is to study existence, semi-uniqueness (i.e. there is at most one solution), uniqueness and computation of solutions. For such problems, variational inequalities provide a unifying, natural and simple setting. The systematic study of this subject began in the early 1960s with the influential work of Hartman and Stampacchia in [9] on infinite-dimensional partial differential equations. The present theory of (finite-dimensional) variational inequalities has found applications in mathematical programming, engineering, economics and finance.Footnote 1 In particular this theory applies to Nash equilibria of games in strategic form. However, various quite sophisticated recent results for sum-aggregative games with pseudo-concave conditional payoff functions do not follow from this theory. The results we have in mind here concern uniqueness results as in [11], which are derived by what was called ‘the Selten–Szidarovszky technique’ (SS-technique) in [26].
The origin of the SS-technique can be found in the book [21] of Selten dealing with aggregative games and in the article [22] of Szidarovszky dealing with Cournot oligopolies.Footnote 2 The aim of the present article is to go a theoretical step further by integrating an advanced version of the SS-technique into the theory of variational inequalities. For more on the SS-technique, see [4, 11].
We consider two types of variational inequalities that are special cases of the following quite general form
$$\begin{aligned} \textbf{F}(\textbf{x}^{\star }) \cdot (\textbf{x} - \textbf{x}^{\star }) \ge 0 \;\; (\textbf{x} \in X), \end{aligned}$$
(1)
where X is a non-empty subset of \({\mathbb {R}}^n\) and \(\textbf{F} = (F_1,\ldots ,F_n): X \rightarrow {\mathbb {R}}^n\) is a function. A solution of \(\textrm{VI}(X,\textbf{F})\) is defined as an \(\textbf{x}^{\star } \in X\) that satisfies all inequalities in (1).Footnote 3 Both cases relate to the aggregative variational inequality \(\textrm{VI}(\textbf{X},\textbf{T})\) with \(\textbf{X} = {\mathbb {R}}^n_+\) or \(\textbf{X} = \textsf {X}_{l=1}^n [0,m_l]\) where, with \(N {{\; \mathrel {\mathop {:}}= \;}}\{1,\ldots ,n\}\), letting \(x_N {{\; \mathrel {\mathop {:}}= \;}}\sum _{l \in N} x_l\),
$$\begin{aligned} T_i(\textbf{x}) {{\; \mathrel {\mathop {:}}= \;}}- t_i(x_i,x_N) \;\; (i \in N). \end{aligned}$$
(2)
So here \(T_i\) depends on \(x_i\) and the aggregate \(x_N\). (A precise definition concerning \(t_i\) is in order.) One may refer to this problem as an ‘aggregative variational inequality’. In case \(X = {\mathbb {R}}^n_+\) this variational inequality specialises to a nonlinear complementarity problem and in the other case to a mixed nonlinear complementarity problem. We shall study the complete set of solutions and do not exclude boundary or degenerateFootnote 4 ones.
In Sect. 2 the results are obtained by applying standard theory to these aggregative variational inequalities. Although the results in this section are not really new, they may contribute to the literature in the sense that the presentation is efficient and self-contained and, in addition, critically reviews and repairs a result in [19]. The new and much more powerful results are obtained by the Selten–Szidarovszky technique in Sects. 3 and 4, assuming \(X = {\mathbb {R}}^n_+\). In Sect. 3, contrary to Sect. 4, there are no differentiability assumptions for the \(t_i\); just continuity is assumed. However, a discontinuity at (0, 0) is always allowed.
A vast part of the ideas behind the proofs of the results in Sects. 3 and 4 is based on [3, 11, 29], dealing with sum-aggregative games, and [26], dealing with so-called abstract games. In particular Sect. 4.5 provides necessary and sufficient conditions for the variational inequality (2) to have a unique solution. As we shall see, the mathematics used in the SS-technique is quite elementary (although technical): for example, no deep results like Brouwer’s fixed point theorem or the Gale–Nikaido theorem and no advanced theories like topological fixed point index theory are needed. The fundamental idea behind the SS-technique is the transformation of the n-dimensional problem for the aggregative variational inequality into a 1-dimensional fixed point problem for the correspondence \( b {{\; \mathrel {\mathop {:}}= \;}}\sum _i b_i\), where \(b_i: {\mathbb {R}}_+ \multimap \mathbb {R}\) is given byFootnote 5
$$\begin{aligned} b_i(y) {{\; \mathrel {\mathop {:}}= \;}}\{ x_i \in [0,y] \; | \; x_i t_i(x_i,y) = 0 \, \wedge \, t_i(x_i,y) \le 0 \}; \end{aligned}$$
see Theorem 3.2. Various assumptions made on the \(t_i\) relate to the so-called At Most Single Crossing From Above property; see Definition 3.1. In the differentiable case checking these assumptions may be straightforward. Theorem 3.2 also is the basis for computational methods, as shown in [1, 24] for the Cournot oligopoly context.
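To make this reduction concrete, here is a minimal numerical sketch. The linear specification \(t_i(x_i,y) = a_i - x_i - y\) and the bisection routine are our own illustrative (hypothetical) choices, not the methods of [1, 24]:

```python
# Sketch of the SS-reduction: instead of solving the n-dimensional problem,
# find a fixed point of the 1-dimensional aggregate correspondence
# b(y) = sum_i b_i(y).  Illustrative (hypothetical) specification:
# t_i(x_i, y) = a_i - x_i - y, strictly decreasing in both arguments.
a = [3.0, 2.0, 1.0]

def b_i(a_i, y):
    # Backward reply: the unique x_i in [0, y] with x_i * t_i(x_i, y) = 0
    # and t_i(x_i, y) <= 0; here the zero of t_i(., y) is at a_i - y.
    # (The clamp to [0, y] is harmless on the search interval used below.)
    return min(max(a_i - y, 0.0), y)

def b(y):
    return sum(b_i(a_i, y) for a_i in a)

# y -> b(y) - y is decreasing for this family, so bisection finds the
# unique fixed point of b.
lo, hi = max(a) / 2, sum(a)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if b(mid) - mid > 0 else (lo, mid)
y_star = (lo + hi) / 2
x_star = [b_i(a_i, y_star) for a_i in a]   # solution of the n-dim problem
# For a = (3, 2, 1): y_star = 5/3 and x_star = (4/3, 1/3, 0).
```

The point of the sketch is that only a one-dimensional root search is needed, however large n is; the n coordinates are recovered afterwards from the backward replies.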
Section 5 explains how the theory of (aggregative) variational inequalities applies to Nash equilibria of (sum-aggregative) games in strategic form. Especially economic games in strategic form often have an aggregative structure. Among others, this concerns oligopolistic, public good, cost-sharing, common resource, contest and rent-seeking games (e.g. see [3, 27]). The most important results concerning Nash equilibria of sum-aggregative games are Theorem 5.1, which provides a very practical uniqueness result, and Theorem 4.3, which is, as illustrated in Sect. 5.4, the basis for handling games with a possible discontinuity at the origin. The latter is especially important for contest and rent-seeking games and in fact provides a (very abstract) generalisation and improvement of the results in [10, 23]. Both theorems do not use explicit pseudo-concavity conditions for conditional payoff functions (which may not be so easy to verify in various applications); in fact these conditions hold implicitly. In doing so, the game theoretic results in [11] are improved upon.
When one looks at the articles on Cournot oligopoly theory it becomes clear that generalised convexity properties of the price function play an important role in the more sophisticated results; also Assumption (c) for the \(t_i\) in Theorem 5.1 is closely related to such properties.Footnote 6 In this context it may be interesting to note that minima of various (pre)invex functions (see [17, 18]) can be characterised by so-called variational-like inequalities.
There are three appendices: on variational inequalities, on smoothness issues and on various types of matrices.
2 Standard Technique
2.1 Setting
With \(\textrm{VI}(X,\textbf{F})\) the general variational inequality as in (1), we consider in this section
$$\begin{aligned} \overline{\textrm{AVI}} {{\; \mathrel {\mathop {:}}= \;}}\textrm{VI}(\textbf{X},\textbf{T}), \end{aligned}$$
where \(\textbf{X} = {\mathbb {R}}_+^n\) (unbounded case) or \(\textbf{X} = \textsf {X}_{l=1}^n [0,m_l]\) with \(m_l > 0\) (bounded case) and
$$\begin{aligned} T_i(\textbf{x}) {{\; \mathrel {\mathop {:}}= \;}}- t_i(x_i,x_N) \;\; (i \in N), \end{aligned}$$
with \(t_i: {\mathbb {R}}_+ \times {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) (unbounded case) and \(t_i: [0,m_i] \times [0, \sum _{l=1}^n m_l] \rightarrow {\mathbb {R}}\) (bounded case). Further we suppose \(n \ge 2\). Let
$$\begin{aligned} N_> {{\; \mathrel {\mathop {:}}= \;}}\{ i \in N \; | \; t_i(0,0) > 0 \}. \end{aligned}$$
(3)
As the results in this section are not really new, we shall not use the designation ‘theorem’ for them.
2.2 Assumptions
In this section the following assumptions will occur.
- \(\overline{\textrm{CONT}}\). \(t_i\) is continuous.
- \(\overline{\textrm{DIFF}}\). \(T_i\) and \(t_i\) are continuously differentiable.
- \(\overline{\textrm{EC}}\). (For unbounded case) There exists \(\overline{x}_i > 0\) such that \(t_i(x_i,y) < 0\) for every \((x_i,y) \in {\mathbb {R}}^2_+\) with \(\overline{x}_i \le x_i \le y\).Footnote 7 Let \(K_i {{\; \mathrel {\mathop {:}}= \;}}[0,\overline{x}_i]\).
For the unbounded case, with \(K_i\) as in Assumption \(\overline{\textrm{EC}}\), let \(\textbf{K} {{\; \mathrel {\mathop {:}}= \;}}\textsf {X}_{l=1}^n K_l\).
These assumptions are supposed to hold for every \(i \in N\).Footnote 8 Below we often consider situations where such an assumption just holds for a specific i; then we add [i] to the assumption; for example, \(\overline{\textrm{EC}}\)[i].
Some comments concerning \(\overline{\textrm{DIFF}}\) are in order. Of course, in \(\overline{\textrm{DIFF}}\), the properties of \(T_i\) and \(t_i\) are related. However, it is convenient to present them here as stated. As the domain of \(T_i \; (t_i)\) is not open, we interpret continuous differentiability in \(\overline{\textrm{DIFF}}\) as usual: there exists a continuously differentiable extension of \(T_i \; (t_i)\) to an open set.
If Assumption \(\overline{\textrm{DIFF}}\) holds, then the Jacobi matrix \(\textbf{J}(\textbf{x})\) of \(\textbf{T}: \textbf{X} \rightarrow {\mathbb {R}}^n\) is given by
$$\begin{aligned} J_{ij}(\textbf{x}) = {\left\{ \begin{array}{ll} - (D_1 + D_2) t_i(x_i,x_N) &{} \text{ if } j = i, \\ - D_2 t_i(x_i,x_N) &{} \text{ if } j \ne i. \end{array}\right. } \end{aligned}$$
(4)
2.3 Existence
Proposition 2.1
\(\textbf{0} \) is a solution of \(\mathrm {\overline{AVI}}\) if and only if \(N_> = \emptyset \). \(\diamond \)
Proof
\(\textbf{0}\) is a solution if and only if \(\textbf{T}(\textbf{0}) \cdot \textbf{x} \ge 0 \; (\textbf{x} \in \textbf{X})\), i.e. if and only if \(\sum _{i=1}^n t_i(0,0) x_i \le 0 \; (\textbf{x} \in \textbf{X})\). And this is equivalent to \(t_i(0,0) \le 0 \; (i \in N)\), i.e. to \(N_> = \emptyset \). \(\square \)
Lemma 2.1
Consider the unbounded case. Suppose \(\overline{\textrm{EC}}\) holds. Let B be a subset of \({\mathbb {R}}^n_+\) with \(\textbf{K} \subseteq B\). Each solution of \(\textrm{VI}(B,\textbf{T})\) is a solution of \(\textrm{VI}(\textbf{K},\textbf{T})\).Footnote 9\(\diamond \)
Proof
Suppose \(\textbf{x}^{\star }\) is a solution of \( \textrm{VI}(B,\textbf{T})\). As \(\textbf{K} \subseteq B\), it is sufficient to show that \(\textbf{x}^{\star } \in \textbf{K}\). By contradiction, suppose \(x^{\star }_j > \overline{x}_j\) for some j. \(\overline{\textrm{EC}}\) implies \(t_j(x_j^{\star }, x_N^{\star }) < 0\). We have \(\textbf{x}^{\star } \in B\) and \(\sum _i t_i(x_i^{\star }, x_N^{\star }) (x_i - x^{\star }_i) \le 0\) for all \(\textbf{x} \in B\). By taking \(x_i = x^{\star }_i \; (i \ne j)\) and \(x_j =0\), \( t_j(x_j^{\star }, x_N^{\star }) x^{\star }_j \ge 0\) follows. Thus, \(t_j(x_j^{\star }, x_N^{\star }) \ge 0\), a contradiction. \(\square \)
Proposition 2.2
Suppose Assumption \(\overline{\textrm{CONT}}\) holds.
1. Consider the bounded case. The set of solutions of \(\overline{\textrm{AVI}}\) is a non-empty compact subset of \({\mathbb {R}}^n\).
2. Consider the unbounded case. If Assumption \(\overline{\textrm{EC}}\) holds, then the set of solutions of \(\overline{\textrm{AVI}}\) is a non-empty compact subset of \({\mathbb {R}}^n\) and each solution belongs to \(\textbf{K}\). \(\diamond \)
Proof
1. \(\overline{\textrm{CONT}}\) implies that \(\textbf{T}\) is continuous. Now apply Lemma A.9 in “Appendix A”.
2. By Lemma 2.1 with \(B = {\mathbb {R}}^n_+\) each solution of \(\overline{\textrm{AVI}}\) is a solution of \(\textrm{VI}(\textbf{K},\textbf{T})\) and thus belongs to \(\textbf{K}\). Next we are going to apply Lemmas A.9 and A.10 with \(X= {\mathbb {R}}^n_+\). Fix an \(r > 0\) such that \(\textbf{K} \subseteq X_{r/2} \subset X_r \subseteq 137 \textbf{K}\).Footnote 10 As \(137 \textbf{K}\) is compact, Lemma A.9 guarantees that \(\textrm{VI}(137 \textbf{K}, \textbf{T})\) has a solution, say \(\textbf{x}^{\star }\). Lemma 2.1 guarantees that \(\textbf{x}^{\star } \in \textbf{K}\). So also \(\textbf{x}^{\star } \in X_{r/2} \subset X_r\). This implies that \(\textbf{x}^{\star }\) also is a solution of \(\textrm{VI}(X_r,\textbf{T})\) and that \({{\parallel \textbf{x}^{\star } \parallel }} \le r/2 < r\). By Lemma A.10 in “Appendix A”, \(\textbf{x}^{\star }\) is a solution of \(\overline{\textrm{AVI}}\).
In order to prove that the set of solutions of \(\overline{\textrm{AVI}}\) is compact, it is, as this set is bounded, sufficient to show that it is closed. Well, this is guaranteed by Lemma A.7. \(\square \)
2.4 Semi-uniqueness
Suppose Assumption \(\overline{\textrm{DIFF}}\) holds. Thus, by (4), in short notations, \(J_{ii} = - (D_1 + D_2) t_i\) and \(J_{ij} = - D_2 t_i \; (j \ne i)\).
It is important to realise that \(\textbf{J}(\textbf{x})\) may not be symmetric.
Proposition 2.3
Consider the unbounded case. Suppose Assumption \(\overline{\textrm{DIFF}}\) holds. Each of the following two conditions separately is sufficient for \(\mathrm {\overline{AVI}}\) to have at most one solution.
(a) The matrix \(\textbf{J}(\textbf{x})\) is positive quasi-definite for every \(\textbf{x} \in {\mathbb {R}}^n_+\).Footnote 11
(b) The matrix \(\textbf{J}(\textbf{x})\) is a P-matrix for every \(\textbf{x} \in {\mathbb {R}}^n_+\). \(\diamond \)
Proof
In order for \(\mathrm {\overline{AVI}}\) to have at most one solution, it is, by Lemma A.4 in “Appendix A”, sufficient to show that \(\textbf{T}: \textbf{X} \rightarrow {\mathbb {R}}^n\) is strictly monotone on \(\textbf{X}\) or a P-function on \(\textbf{X}\).
(a). Suppose \(\textbf{J}(\textbf{x})\) is positive quasi-definite for every \(\textbf{x} \in {\mathbb {R}}^n_+\). By Lemma A.5, \(\textbf{T}\) is strictly monotone.
(b). Suppose \(\textbf{J}(\textbf{x})\) is for every \(\textbf{x} \in {\mathbb {R}}^n_+\) a P-matrix. By Lemma A.6, \(\textbf{T}\) is a P-function.
\(\square \)
Now results for \(\overline{\textrm{AVI}}\) for the unbounded case are implied by conditions that guarantee that each matrix \(\textbf{J}(\textbf{x})\) is positive quasi-definite or a P-matrix. Such conditions can be found in “Appendix C”. The next proposition presents such a result.
Proposition 2.4
Consider the unbounded case. Suppose Assumption \(\overline{\textrm{DIFF}}\) holds. Sufficient for \(\mathrm {\overline{AVI}}\) to have at most one solution is that \((D_1 + D_2) t_i(x_i,x_N) < - (n-1) \, | D_2 t_i(x_i,x_N) | \) for every \(i \in N\) and \(\textbf{x} \in \textbf{X}\). \(\diamond \)
Proof
By Proposition 2.3(b), the proof is complete if \(\textbf{J}(\textbf{x})\) is a P-matrix for every \(\textbf{x} \in \textbf{X}\). Well, if \(\textbf{J}(\textbf{x})\) is row diagonally dominant with positive diagonal entries, then it is a P-matrix. By (4), this specialises to the requirement that for every \(\textbf{x} \in \textbf{X}\) and \(i \in N\): \( (D_1 + D_2) t_i(x_i,x_N) < 0\) and \( (D_1 + D_2) t_i(x_i,x_N) < - (n-1) \mid D_2 t_i(x_i,x_N) \mid \), i.e. to \( (D_1 + D_2) t_i(x_i,x_N) < - (n-1) \mid D_2 t_i(x_i,x_N) \mid \). \(\square \)
Clearly, as n grows, the inequality in this proposition becomes more difficult to satisfy. And note that, by Proposition 2.1, if in addition \(N_> \ne \emptyset \), then we obtain the result that if \(\mathrm {\overline{AVI}}\) has a solution, then this solution is unique and nonzero.
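To illustrate the role of row diagonal dominance here, consider the hypothetical specification \(t_i(x_i,y) = a_i - 2x_i - y\), so \(D_1 t_i = -2\), \(D_2 t_i = -1\) and the displayed inequality reads \(-3 < -(n-1)\), which holds for \(n \le 3\). The sketch below (our own construction, pure Python) verifies directly that the corresponding Jacobian for \(n = 3\) is a P-matrix, i.e. has all principal minors positive:

```python
from itertools import combinations

# Jacobian of T for the hypothetical t_i(x_i, y) = a_i - 2*x_i - y (n = 3):
# diagonal entries -(D1 + D2) t_i = 3, off-diagonal entries -D2 t_i = 1.
n = 3
J = [[3.0 if i == j else 1.0 for j in range(n)] for i in range(n)]

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def is_P_matrix(M):
    # A P-matrix has all principal minors positive.
    idx = range(len(M))
    return all(det([[M[r][c] for c in S] for r in S]) > 0
               for k in range(1, len(M) + 1)
               for S in combinations(idx, k))

# is_P_matrix(J) -> True: the semi-uniqueness condition is satisfied.
```

Brute-force enumeration of principal minors is exponential in n, but for checking a concrete low-dimensional specification it is the most transparent test.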
2.5 Uniqueness
Combining Proposition 2.4 (or a variant of it) with Proposition 2.2, we obtain a uniqueness result for the aggregative variational inequality \(\mathrm {\overline{AVI}}\). In Sects. 3 and 4 we shall obtain more interesting results by using the SS-technique.
2.6 Application: Cournot Oligopoly
In this subsection, we critically reconsider and, with Proposition 2.5 below, repair an equilibrium uniqueness result in [19].Footnote 12 This result is, as far as we know, the first one analysing equilibria of Cournot oligopolies by means of nonlinear complementarity problems. The setting for this result is a Cournot oligopoly game \(\varGamma \) with \(n \ge 2\) firms without capacity constraints, with a price function \(p: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) and with a cost function \(c_i: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) for firm \(i \in N\). With these notations the profit function \(f_i: {\mathbb {R}}^n_+ \rightarrow {\mathbb {R}}\) for firm i is given by
$$\begin{aligned} f_i(\textbf{x}) {{\; \mathrel {\mathop {:}}= \;}}p(x_N) x_i - c_i(x_i). \end{aligned}$$
This defines a game in strategic form \(\varGamma \) with N as player set and with \({\mathbb {R}}_+\) as strategy set for each player and with \(f_i\) as payoff function of firm i.
If p and every \(c_i\) is twice continuously differentiable, then the aggregative variational inequality \(\mathrm {\overline{AVI}}\) where \(t_i: {\mathbb {R}}^2_+ \rightarrow {\mathbb {R}}\) is given by
$$\begin{aligned} t_i(x_i,y) {{\; \mathrel {\mathop {:}}= \;}}p'(y) x_i + p(y) - c'_i(x_i) \end{aligned}$$
is referred to here as the ‘oligopolistic variational inequality’ and will be denoted by OVI. In fact this aggregative variational inequality concerns what we call, in Definition 5.1 in Sect. 5 for a more general setting, the associated variational inequality VI(\(\varGamma )\). Proposition 2.5 deals with the solution set of the oligopolistic variational inequality and the Nash equilibrium set of \(\varGamma \). Concerning the latter we have to refer in the proof of Proposition 2.5 to results which are developed in Sect. 5.
Proposition 2.5
Consider a Cournot oligopoly \(\varGamma \) where \(p: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) and every \(c_i: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) are twice continuously differentiable and the following two conditions hold.
(a) For every \(i \in N\) and \(\textbf{x} \in {\mathbb {R}}^n_+\):
$$\begin{aligned} 2 p'(x_N) + p''(x_N) x_i - c''_i(x_i) < - (n-1)\, | p'(x_N) + p''(x_N) x_i |. \end{aligned}$$
(b) For every \(i \in N\) there exists an \(\overline{x}_i >0 \) such that for every \(\textbf{x} \in {\mathbb {R}}^n_+\) with \(x_i \ge \overline{x}_i\),
$$\begin{aligned} p'(x_N) x_i + p(x_N) - c'_i(x_i) < 0. \end{aligned}$$
The following results hold.
1. Under condition (a), the oligopolistic variational inequality OVI has at most one solution and \(\varGamma \) has at most one Nash equilibrium.
2. Under condition (b), OVI has a solution.
3. Under conditions (a) and (b), OVI has a unique solution and \(\varGamma \) has a Nash equilibrium. \(\diamond \)
Proof
1. Note that \(\overline{\textrm{DIFF}}\) holds. The inequality in Proposition 2.4 specialises to the inequality of the statement and so guarantees that OVI has at most one solution. By Proposition 5.1(1) the set of Nash equilibria is contained in the set of solutions of OVI. So the game has at most one Nash equilibrium.
2. Note that \(\overline{\textrm{EC}}\) holds. Apply Proposition 2.2(2).
3. By parts 1 and 2, OVI has a unique solution, say \(\textbf{e}\). We prove that all conditional profit functions \(f_i^{(\textbf{z})}\) are pseudo-concave. Then Proposition 5.1(2) implies that \(\textbf{e}\) is a Nash equilibrium, and next, with Proposition 5.1(1), it follows that \(\textbf{e}\) is the unique Nash equilibrium. Well, as \({( f_i^{(\textbf{z})} )}''(x_i) = p''(x_i + \sum _j z_j) x_i + 2 p'(x_i + \sum _j z_j) - c''_i(x_i)\), condition (a) implies that \({( f_i^{(\textbf{z})} )}'' < 0\) and thus \(f_i^{(\textbf{z})}\) is even strictly concave. \(\square \)
For more on Cournot oligopolies, see, for example, [20, 25, 27].
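To see Proposition 2.5 at work, consider the textbook linear duopoly (an illustrative example of our own, not the setting of [19]): \(n = 2\), \(p(y) = a - y\) and \(c_i(x_i) = c_i x_i\) with \(c_i > 0\). Condition (a) reduces to \(-2 < -(n-1)\), which holds for \(n = 2\), and condition (b) holds with \(\overline{x}_i = a\). The sketch below solves OVI from the interior complementarity conditions:

```python
# Linear Cournot duopoly (hypothetical parameter values):
# p(y) = a - y, c_i(x) = c[i] * x, so
# t_i(x_i, y) = p'(y) * x_i + p(y) - c_i'(x_i) = a - c[i] - x_i - y.
a = 10.0
c = [1.0, 2.0]

def t(i, x_i, y):
    return a - c[i] - x_i - y

# Interior solution: t_i(x_i, y) = 0 gives x_i = a - c[i] - y, and
# y = x_1 + x_2 then yields y = (2a - c[0] - c[1]) / 3.
y = (2 * a - c[0] - c[1]) / 3
x = [a - c[i] - y for i in range(2)]
# x = (10/3, 7/3): the unique solution of OVI and hence the unique
# Cournot-Nash equilibrium; it matches the textbook formula
# x_i = (a - 2 c_i + c_j) / 3.
```

For these parameters the candidate is interior, so solving the two linear equations suffices; with larger cost asymmetries one would have to check the boundary branches of the complementarity conditions as well.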
3 SS-Technique; Without Differentiability Assumptions
3.1 Setting
Let us fix the setting again. Let
$$\begin{aligned} \varDelta {{\; \mathrel {\mathop {:}}= \;}}\{ (x,y) \in {\mathbb {R}}^2_+ \; | \; x \le y \}. \end{aligned}$$
With \(\textrm{VI}(X,\textbf{F})\) being the general variational inequality (1), the special case that we consider in this section is
$$\begin{aligned} \textrm{AVI} {{\; \mathrel {\mathop {:}}= \;}}\textrm{VI}({\mathbb {R}}^n_+,\textbf{T}), \end{aligned}$$
where \(T_i(\textbf{x}) {{\; \mathrel {\mathop {:}}= \;}}- t_i(x_i,x_N) \) with \(t_i: \varDelta \rightarrow {\mathbb {R}}\). Further we suppose \(n \ge 2\).
Comparing \(\textrm{AVI}\) with \(\overline{\textrm{AVI}}\), note that for \(\textrm{AVI}\) we only consider the unbounded case. The reason for this is that for the bounded case an analysis with the SS-technique becomes much more technical. Also note that the setting uses a smaller domain for \(t_i\) than that in Sect. 2.1: \(\varDelta \) instead of \({\mathbb {R}}^2_+\). Of course, \(\varDelta \) is all that matters, as \((x_i,x_N) \in \varDelta \) for every \(\textbf{x} \in {\mathbb {R}}^n_+\).
We always assume in this section that each \(t_i\) is continuous at every point of \(\varDelta \), with the possible exception of (0, 0), and denote the set of solutions of \(\textrm{AVI}\) by \(\textrm{AVI}^{\bullet }\).
3.2 AMSCFA-Property
The following definition is very important for assumptions on the \(t_i\) in the following subsection.
Definition 3.1
A function \(g: I \rightarrow \mathbb {R}\), where I is a real interval, has the AMSCFA-property (‘At Most Single Crossing From Above’ property) if the following holds: if z is a zero of g, then \(g(x) > 0 \; (x \in I \text{ with } x < z)\) and \(g(x) < 0 \; (x \in I \text{ with } x > z)\). \(\diamond \)
Thus, a function with the AMSCFA-property has at most one zero. Sufficient for a function to have the AMSCFA-property is that it is strictly decreasing. Two other simple results, which we freely use throughout the article, are the following: suppose \(g: I \rightarrow {\mathbb {R}}\) is continuous where I is a proper real interval. Then:
– If g is at every \(x \in I\) with \(g(x) =0\) differentiable with \(g'(x) < 0\), then g has the AMSCFA-property.
– If g has the AMSCFA-property, then for all \(x, x' \in I\)
$$\begin{aligned} x< x' \, \wedge \, g(x) \le 0 \; \Rightarrow \; g(x') < 0. \end{aligned}$$
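Numerically, the AMSCFA-property can at best be refuted on a finite grid: once a sample of g is \(\le 0\), every later sample must be \(< 0\). The following checker, together with its two test functions, is a small illustrative sketch of our own:

```python
def violates_amscfa(values):
    # values: samples g(x_0), g(x_1), ... on an increasing grid.
    # AMSCFA implies: if g(x) <= 0 and x < x', then g(x') < 0.
    seen_nonpositive = False
    for v in values:
        if seen_nonpositive and v >= 0:
            return True
        if v <= 0:
            seen_nonpositive = True
    return False

grid = [x / 10 for x in range(0, 41)]          # grid on [0, 4]
g1 = [1 - x for x in grid]                     # strictly decreasing: OK
g2 = [(x - 1) * (x - 3) for x in grid]         # crosses zero twice: violation
# violates_amscfa(g1) -> False, violates_amscfa(g2) -> True
```

A return value of False is of course no proof: a grid test can detect a violation but never certify the property on the whole interval.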
3.3 Assumptions
For \(i \in N\) and \(\mu \in [0,1]\), defining the function \(\overline{t}_i^{(\mu )}: {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\) by
$$\begin{aligned} \overline{t}_i^{(\mu )}(\lambda ) {{\; \mathrel {\mathop {:}}= \;}}t_i(\mu \lambda , \lambda ), \end{aligned}$$
the following assumptions appear in the analysis.Footnote 13
- AMSV. For every \(y >0\), the function \(t_i(\cdot ,y): [0,y] \rightarrow {\mathbb {R}}\) has at most one zero, and if it has a positive zero, then \(t_i(0,y) > 0\).
- LFH’. For every \(y >0\), the function \(t_i(\cdot ,y): [0,y] \rightarrow {\mathbb {R}}\) has the AMSCFA-property.
- RA. For every \(\mu \in {] {0},{1} ] }\), the function \(\overline{t}_i^{(\mu )}\) has the AMSCFA-property.
- RA1. The function \(\overline{t}_i^{(1)}\) has the AMSCFA-property.
- RA0. For every \(0< y < y'\): \(t_i(0,y) \le 0 \; \; \Rightarrow \; t_i(0,y') \le 0\).
- EC. There exists \(\overline{x}_i > 0\) such that \(t_i(x_i,y) < 0\) for every \((x_i,y) \in {\mathbb {R}}^2_+\) with \(\overline{x}_i \le x_i \le y\).
These assumptions are supposed to hold for every \(i \in N\). Below we very often consider situations where such an assumption just holds for a specific i; then we add [i] to the assumption; for example, RA[i]. Note that the above assumptions do not depend on the value of \(t_i\) at (0, 0). In fact this value is not important for results on \(\textrm{AVI}^{\bullet } \setminus \{ \textbf{0} \}\); the reader also may see Lemma A.1.
Of course, RA[i] \(\Rightarrow \) RA1[i], and LFH’[i] \( \Rightarrow \) AMSV[i]. In addition to these assumptions, we use the following terminology. We call \(i \in N\) of
type \(I^+\) if \(\overline{t}_i^{(1)}(\lambda ) > 0\) for \(\lambda > 0\) small enough;
type \(I^-\) if \(\overline{t}_i^{(1)}(\lambda ) < 0\) for \(\lambda > 0\) small enough;
type \(II^-\) if \(\overline{t}_i^{(0)}(\lambda ) < 0\) for \(\lambda > 0\) large enough.
Lemma 3.1
[LFH’[i] \(\wedge \) RA[i] ] \( \Rightarrow \) RA0[i]. \(\diamond \)
Proof
By contradiction. So suppose LFH’[i] and RA[i] hold, and \(0< y < y'\), with \(t_i(0,y) \le 0\) and \(t_i(0,y') > 0\). The continuity of \(t_i(0,\cdot ): {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\) implies that there exists \(y'' \in {[ {y},{y'} \,[}\) with \(t_i(0,y'') =0\). Also \(t_i(x_i,y') > 0\) for \(x_i > 0\) small enough. LFH’[i] implies \(t_i(x_i,y'')< 0 \; (0 < x_i \le y'')\). Now take \(\mu > 0\) so small that \( \overline{t}_i^{(\mu )}(y') = t_i(\mu y', y') > 0\). As \( \overline{t}_i^{(\mu )}(y'') = t_i(\mu y'', y'') < 0\) and \( \overline{t}_i^{(\mu )}\) is continuous, there exists \(y''' \in {] {y''},{y'} \, [ }\) with \( \overline{t}_i^{(\mu )}(y''') =0\). But this is impossible as by virtue of RA[i], \(\overline{t}_i^{(\mu )}\) has the AMSCFA-property. \(\square \)
Lemma 3.2
Suppose Assumption RA1[i] holds.
1. i is of type \(I^+\) or of type \(I^-\).
2. If i is of type \(I^-\), then \(\overline{t}_i^{(1)} < 0\). \(\diamond \)
Proof
1. In the case when \(\overline{t}_i^{(1)}\) has a zero, say m, we have \(\overline{t}_i^{(1)}(x_i) >0\) for \(x_i \in {] {0},{m} \, [ }\) and thus i is of type \(I^+\). Now suppose that \(\overline{t}_i^{(1)}\) does not have a zero. As \(\overline{t}_i^{(1)}\) is continuous, we have \(\overline{t}_i^{(1)} > 0\) or \(\overline{t}_i^{(1)} <0\). In the first case i is of type \(I^+\) and in the second of type \(I^-\).
2. By contradiction. So suppose i is of type \(I^-\) and \(\overline{t}_i^{(1)}(a_i) \ge 0\) for some \(a_i > 0\). As \(\overline{t}_i^{(1)}(x_i) < 0\) for \(x_i > 0\) small enough, the continuity of \(\overline{t}_i^{(1)}\) implies the existence of an \(l_i \in {] {0},{a_i} ] }\) with \(\overline{t}_i^{(1)}(l_i) = 0\). Assumption RA1[i] implies \(\overline{t}_i^{(1)}(x_i) > 0\) for \(0< x_i < l_i\), a contradiction with i being of type \(I^-\). \(\square \)
3.4 Classical Nonlinear Complementarity Problem
Lemma A.2 in “Appendix A” implies: \(\textbf{x}^{\star } \in {\mathbb {R}}^n_+\) is a solution of \(\textrm{AVI}\) if and only if \(\textbf{x}^{\star }\) satisfies
$$\begin{aligned} t_i(x_i^{\star },x_N^{\star }) \le 0 \, \wedge \, x_i^{\star } \, t_i(x_i^{\star },x_N^{\star }) = 0 \;\; (i \in N). \end{aligned}$$
(9)
A solution \(\textbf{x}^{\star }\) of \(\textrm{AVI}\) is degenerate if there exists \(i \in N\) such that
$$\begin{aligned} x_i^{\star } = 0 \, \wedge \, t_i(x_i^{\star },x_N^{\star }) = 0. \end{aligned}$$
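These complementarity conditions are easy to check numerically for a candidate solution. The sketch below, with the symmetric illustrative choice \(t_i(x_i,y) = 1 - x_i - y\) (our own example), tests them up to a tolerance:

```python
def t(i, x_i, y):
    # Illustrative symmetric choice: t_i(x_i, y) = 1 - x_i - y.
    return 1.0 - x_i - y

def satisfies_nlcp(x, t, tol=1e-9):
    # Check, for all i: x_i >= 0, t_i(x_i, x_N) <= 0 and
    # x_i * t_i(x_i, x_N) = 0 (up to the tolerance tol).
    y = sum(x)
    return all(
        x_i >= -tol and t(i, x_i, y) <= tol and abs(x_i * t(i, x_i, y)) <= tol
        for i, x_i in enumerate(x)
    )

# The symmetric point (1/3, 1/3) satisfies t_i(x_i, x_N) = 0, so it is a
# solution; (1, 1) is not, since there x_i * t_i(x_i, x_N) != 0.
```

Such a verifier is useful as a sanity check on any candidate produced by the fixed point computations of Sect. 3.6.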
3.5 Solution \(\textbf{0}\)
Besides \(N_>\) in (3), let
$$\begin{aligned} \tilde{N} {{\; \mathrel {\mathop {:}}= \;}}\{ i \in N \; | \; t_i(0,y) > 0 \text{ for } \text{ some } y > 0 \}. \end{aligned}$$
Proposition 3.1
1. \(\textbf{0} \in \textrm{AVI}^{\bullet } \; \Leftrightarrow \; N_> = \emptyset \).
2. Suppose Assumption AMSV[i] holds. If \(\textbf{e} \in \textrm{AVI}^{\bullet }\) and \(i \not \in \tilde{N}\), then \(e_i = 0\). Thus, if \(\tilde{N} = \emptyset \), then \( \textrm{AVI}^{\bullet } \subseteq \{ \textbf{0} \}\).
3. Suppose \(\tilde{N} = N_> = \emptyset \) and Assumption AMSV holds. Then \(\textrm{AVI}^{\bullet } = \{ \textbf{0} \}\). \(\diamond \)
Proof
1. Exactly the same proof as in Proposition 2.1 (with \(\textbf{X} = {\mathbb {R}}^n_+)\).
2. By contradiction. So suppose \(\textbf{e}\) is a solution of AVI, \(i \not \in \tilde{N}\) and \(e_i > 0\). Now \(e_N > 0\). By (9), \(t_i(e_i,e_N) = 0\). AMSV implies \(t_i(0,e_N) > 0\). So \(i \in \tilde{N}\), a contradiction.
3. By parts 1 and 2. \(\square \)
Proposition 3.2
Suppose Assumption RA1 holds and every \(i \in N\) is of type \(I^-\). Then \(\textbf{x}^{\star } \in \textrm{AVI}^{\bullet } {\setminus } \{ \textbf{0} \} \; \Rightarrow \# \{ j \in N \; | \; x_j^{\star } > 0 \} \ge 2\). \(\diamond \)
Proof
By contradiction. So suppose \(\textbf{x}^{\star } \in \textrm{AVI}^{\bullet }\) with \(\textbf{x}^{\star } \ne \textbf{0}\) and \(\# \{ j \in N \; | \; x_j^{\star } \ne 0 \} \le 1\). Let \(x_i^{\star } \ne 0\) and \(x_j^{\star } = 0 \; (j \ne i)\). By (9), \(\overline{t}_i^{(1)}(x_i^{\star }) = t_i(x_i^{\star },x_i^{\star }) = 0\). As RA1[i] holds, Lemma 3.2(2) gives a contradiction. \(\square \)
3.6 Computation
Definition 3.2
1. For \(i \in N\), define the correspondence \(b_i: {\mathbb {R}}_+ \multimap \mathbb {R}\) by
$$\begin{aligned} b_i(y) {{\; \mathrel {\mathop {:}}= \;}}\{ x_i \in {\mathbb {R}}_+ \; | \; x_i \in [0,y] \, \wedge \, x_i t_i(x_i,y) = 0 \, \wedge \, t_i(x_i,y)\le 0 \}. \end{aligned}$$
And define the correspondences \(\textbf{b}: {\mathbb {R}}_+ \multimap \mathbb {R}^n\) and \(b: {\mathbb {R}}_+ \multimap \mathbb {R}\) by
$$\begin{aligned} \textbf{b}(y) {{\; \mathrel {\mathop {:}}= \;}}{b}_1(y) \times \cdots \times {b}_n(y), \;\;\; b(y) {{\; \mathrel {\mathop {:}}= \;}}\{ \sum _{i \in N} x_i \; | \; \textbf{x} \in \textbf{b}(y) \}. \end{aligned}$$
2. Define the correspondences \(s_i: {\mathbb {R}}_{++} \multimap \mathbb {R} \; (i \in N)\) and \(s: {\mathbb {R}}_{++} \multimap \mathbb {R}\) by
$$\begin{aligned} s_i(y) {{\; \mathrel {\mathop {:}}= \;}}{b}_i(y) / y, \;\; s(y) {{\; \mathrel {\mathop {:}}= \;}}b(y)/y. \;\; \diamond \end{aligned}$$
Note thatFootnote 14
The correspondence \(b_i\) provides global information on the \(t_i\). Denote by \(\textrm{fix}(b)\) the set of fixed points of the correspondence \(b: {\mathbb {R}}_+ \multimap \mathbb {R}\), i.e. the set of \(y \in {\mathbb {R}}_+\) for which \(y \in b(y)\).
Definition 3.3
The aggregative variational inequality \(\textrm{AVI}\) is
- internal backward solvable if \( \textrm{AVI}^{\bullet } \subseteq \cup _{ y \in \textrm{fix}(b) } \textbf{b}(y)\);
- external backward solvable if \( \textrm{AVI}^{\bullet } \supseteq \cup _{ y \in \textrm{fix}(b) } \textbf{b}(y)\);
- backward solvable if it is internal and external backward solvable. \(\diamond \)
Lemma 3.3
\(\textbf{x} \in \textrm{AVI}^{\bullet } \; \Leftrightarrow \; \textbf{x} \in \textbf{b}(x_N) \; \Leftrightarrow \; [\textbf{x} \in \textbf{b}(x_N) \text{ and } x_N \in \textrm{fix}(b)]\). \(\diamond \)
Proof
Write the statement as \(A \Leftrightarrow B \Leftrightarrow C\).
\(A \Rightarrow B\): suppose \(\textbf{x} \in \textrm{AVI}^{\bullet }\). By (9), we have for every i that \(x_i \in {\mathbb {R}}_+, \; x_i t_i(x_i,x_N) = 0\) and \(t_i(x_i,x_N) \le 0\). As \(x_i \in [0,x_N]\), we have \(x_i \in {b}_i(x_N)\).
\(B \Rightarrow C\): suppose \( \textbf{x} \in \textbf{b}(x_N)\). This implies \(x_N = \sum _i x_i \in \sum _i {b}_i(x_N) = b(x_N)\). Thus \(x_N \in \textrm{fix}(b)\).
\(C \Rightarrow A\): suppose \(\textbf{x} \in \textbf{b}(x_N) \text{ and } x_N \in \textrm{fix}(b)\). Then for every i we have \(x_i \in {\mathbb {R}}_+, \; x_i t_i(x_i,x_N) = 0\) and \(t_i(x_i,x_N) \le 0\). By (9), \(\textbf{x}\) is a solution of \(\textrm{AVI}\). \(\square \)
The solution aggregator is defined as the function \(\sigma : \textrm{AVI}^{\bullet } \rightarrow {\mathbb {R}}\) given by
$$\begin{aligned} \sigma (\textbf{x}) {{\; \mathrel {\mathop {:}}= \;}}x_N. \end{aligned}$$
Theorem 3.1
1. \(\sigma ( \textrm{AVI}^{\bullet } ) = \textrm{fix}(b)\).
2. \(\textrm{AVI}\) is internal backward solvable.
3. If b is at most single-valued on \(\textrm{fix}(b)\), then \(\textrm{AVI}\) is backward solvable. \(\diamond \)
Proof
1. ‘\(\subseteq \)’: suppose \(\textbf{x}\) is a solution of AVI. By Lemma 3.3, \(x_i \in {b}_i(x_N)\; (i \in N)\). This implies \(x_N = \sigma (\textbf{x}) = \sum _i x_i \in \sum _i {b}_i(x_N) = b(x_N)\), i.e. \(\sigma (\textbf{x}) \in \textrm{fix}(b)\).
‘\(\supseteq \)’: suppose \(y \in \textrm{fix}(b)\). So \(y \in b(y) = \sum _i {b}_i(y)\). Fix \(x_i \in {b}_i(y) \; (i \in N)\) with \(y = \sum _i x_i\). So \(y = x_N\) and \(\textbf{x} \in \textbf{b}(x_N)\). By Lemma 3.3, \(\textbf{x} \in \textrm{AVI}^{\bullet }\).
2. Suppose \(\textbf{x} \in \textrm{AVI}^{\bullet }\). By Lemma 3.3, \(\textbf{x} \in \textbf{b}(x_N)\) and \(x_N \in \textrm{fix}(b)\). It follows that \(\textbf{x} \in \textbf{b}(x_N) \subseteq \cup _{ y \in \textrm{fix}(b) } \textbf{b}(y)\). Thus, \(\textbf{x} \in \cup _{ y \in \textrm{fix}(b) } \textbf{b}(y)\).
3. By part 2, we still have to prove ‘\(\supseteq \)’. So suppose \(\textbf{x} \in \cup _{ y \in \textrm{fix}(b) } \textbf{b}(y)\). Fix \(y \in \textrm{fix}(b)\) with \(\textbf{x} \in \textbf{b}(y)\). As \(y \in b(y)\) and b is at most single-valued on \(\textrm{fix}(b)\), we have \(b(y) = \{ y \}\). Noting that \(x_N = \sum _l x_l \in \sum _l {b}_l(y) = b(y)\), \( x_N =y\) follows. Thus, \( \textbf{x} \in \textbf{b}(x_N)\). Now apply Lemma 3.3. \(\square \)
The standard Szidarovszky variant of the SS-technique deals with at most single-valued \({b}_i\). In that situation b also is at most single-valued, and Theorem 3.1(3) shows that \(\textrm{AVI}\) is backward solvable. So what is a (weak) sufficient condition for the \({b}_i\) to be at most single-valued? Well, the next lemma provides such a condition.
Lemma 3.4
If Assumption AMSV[i] holds, then for every \(y \in {\mathbb {R}}_{+}\) there exists at most one \(x_i \in [0,y]\) with \(x_i t_i(x_i,y) = 0 \wedge t_i(x_i,y) \le 0\). \(\diamond \)
Proof
Suppose AMSV[i] holds. By contradiction, suppose \(x_i, x'_i \in [0,y]\) with \(x_i < x'_i\) both satisfy these conditions. As \(x'_i > 0\), \(t_i(x'_i,y) = 0\) follows. Because of AMSV[i], \(t_i(x_i,y) \ne 0\) and \(t_i(0,y) > 0\). This implies \(x_i =0\) and \(t_i(0,y) \le 0\), a contradiction. \(\square \)
Furthermore, for \(i\in N\) let \(W_i\) denote the essential domain of the correspondence \({b}_i\), i.e. the set \(\{ y \in {\mathbb {R}}_+ \; | \; {b}_i(y) \ne \emptyset \}\). Now, the essential domain of \(s_i\) is \(W_i^{\star } {{\; \mathrel {\mathop {:}}= \;}}W_i \setminus \{ 0 \}\), that of b is \( W {{\; \mathrel {\mathop {:}}= \;}}\cap _{i \in N} W_i\) and that of s is
$$\begin{aligned} W^{\star } {{\; \mathrel {\mathop {:}}= \;}}W {\setminus } \{ 0 \}. \end{aligned}$$
Note that \( 0 \in W_i \; \Leftrightarrow \; i \not \in N_> \text{ and } \text{ that } 0 \in W \; \Leftrightarrow \; N_> = \emptyset . \)
Let \(\hat{b}_i {{\; \mathrel {\mathop {:}}= \;}}b_i \!\upharpoonright \! W_i\), i.e. the restriction of the correspondence \({b}_i\) to \(W_i\); so \(\hat{b}_i: W_i \multimap \mathbb {R}\). Also define \(\hat{s}_i {{\; \mathrel {\mathop {:}}= \;}}s_i \!\upharpoonright \! W_i^{\star }\). Finally, let \(\hat{b} {{\; \mathrel {\mathop {:}}= \;}}b \!\upharpoonright \! W\) and \(\hat{s} {{\; \mathrel {\mathop {:}}= \;}}s \!\upharpoonright \! W^{\star }\). If Assumption AMSV holds, then by Lemma 3.4 the correspondences \(\hat{b}_i, \; \hat{s}_i, \; \hat{b}\) and \(\hat{s}\) are single-valued and we can and will interpret them as functions. Then in particular \(\hat{\textbf{b}}(y) = ( \hat{b}_1(y), \ldots , \hat{b}_n(y) )\).
Theorem 3.2
Suppose Assumption AMSV holds. Then
-
1.
\( \textrm{AVI}^{\bullet } = \{ \hat{\textbf{b}}(y) \; | \; y \in \textrm{fix}(\hat{b}) \}\).
-
2.
\(\sigma ( \textrm{AVI}^{\bullet } ) = \textrm{fix}(\hat{b})\).
-
3.
\( \textrm{AVI}^{\bullet } {\setminus } \{ \textbf{0} \} = \{ \hat{\textbf{b}}(y) \; | \; y \in W^{\star } \text{ with } y \in \textrm{fix}(\hat{b}) \} = \{ \hat{\textbf{b}}(y) \; | \; y \in W^{\star } \text{ with } \hat{s}(y) =1 \}\).
-
4.
\( \textbf{x}^{\star } \in \textrm{AVI}^{\bullet } \; \Rightarrow \; x^{\star }_i = \hat{b}_i(x_N^{\star }) \; (i \in N)\). \(\diamond \)
Proof
1. Theorem 3.1(3) guarantees that \(\textrm{AVI}\) is backward solvable. As b is at most single-valued, we obtain \(\textrm{AVI}^{\bullet } = \cup _{y \in \textrm{fix}(b)} \textbf{b}(y) = \cup _{y \in \textrm{fix}(\hat{b})} (\hat{b}_1(y), \ldots , \hat{b}_n(y) ) = \{ \hat{\textbf{b}}(y) \; | \; y \in \textrm{fix}(\hat{b}) \}\).
2. By Lemma 3.1(1). 3. By parts 1 and 2.
4. Suppose \( \textbf{x}^{\star } \in \textrm{AVI}^{\bullet }\). By part 1, there exists \(y \in \textrm{fix}(\hat{b})\) such that \( x^{\star }_i = \hat{b}_i(y) \; (i \in N)\). It follows that \(y = \hat{b}(y) = \sum _i \hat{b}_i(y) = \sum _i x^{\star }_i = x^{\star }_N\). Thus, \(x^{\star }_i = \hat{b}_i(x_N^{\star }) \; (i \in N)\). \(\square \)
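Theorem 3.2 reduces the n-dimensional problem of determining \(\textrm{AVI}^{\bullet }\) to the one-dimensional fixed point problem \(\hat{b}(y) = y\). The following Python sketch illustrates this for a hypothetical linear Cournot-style specification \(t_i(x_i,y) = a - c - y - x_i\) with n identical players (the parameters a, c, n and the closed form for \(\hat{b}_i\) are assumptions of this example, not taken from the paper); bisection on \(\hat{b}(y) - y\) recovers the classical aggregate \(y^{\star } = n(a-c)/(n+1)\).

```python
# Hypothetical linear Cournot-style specification (not from the paper):
# t_i(x_i, y) = a - c - y - x_i for each of n identical players.
a, c, n = 10.0, 2.0, 3

def b_hat_i(y):
    # Backward reply: the unique x in [0, y] with x*t(x,y)=0 and t(x,y)<=0.
    x = max(0.0, a - c - y)          # zero of t(., y), clipped at 0
    assert x <= y, "reply only defined where it lies in [0, y]"
    return x

def b_hat(y):
    # Aggregate backward reply b_hat(y) = sum_i b_hat_i(y).
    return n * b_hat_i(y)

# Solve the one-dimensional fixed point equation b_hat(y) = y by bisection;
# b_hat(y) - y is continuous and changes sign on the bracket below.
lo, hi = (a - c) / 2, a - c
for _ in range(200):
    mid = (lo + hi) / 2
    if b_hat(mid) - mid > 0:
        lo = mid
    else:
        hi = mid
y_star = (lo + hi) / 2

# Closed form for this specification: y* = n (a - c) / (n + 1) = 6.
assert abs(y_star - n * (a - c) / (n + 1)) < 1e-9
x_star = [b_hat_i(y_star) for _ in range(n)]   # the solution of AVI itself
```

Theorem 3.2(4) is visible in the last line: each coordinate of the solution is the individual backward reply evaluated at the aggregate \(y^{\star }\).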
Proposition 3.3
If Assumption AMSV holds, then the solution aggregator \(\sigma \) is injective. \(\diamond \)
Proof
By contradiction. So suppose AMSV holds and \(\textbf{x}, \textbf{x}'\) are distinct solutions with \(\sigma (\textbf{x}) = \sigma (\textbf{x}') \; =: \;y\). As \(\textbf{x} \ne \textbf{x}'\), we can fix \(i \in N\) with \(x'_i > x_i\). Note that \(y \ne 0\). By (9), \(t_i(x_i,y) \le 0 = t_i(x'_i,y)\). AMSV implies \(t_i(0,y) > 0\). So \(x_i > 0\) and therefore, by (9), \(t_i(x_i,y) = 0\), which is a contradiction with AMSV. \(\square \)
Proposition 3.4
Suppose \(t_1=\cdots = t_n\). If Assumption AMSV holds, then each solution \(\textbf{x}^{\star }\) of \(\textrm{AVI}\) is symmetric, i.e. \(x^{\star }_1 = \cdots = x^{\star }_n\). \(\diamond \)
Proof
By contradiction. So suppose \(\textbf{x}^{\star }\) is a non-symmetric solution. Fix \(\pi \in S_n\) such thatFootnote 15\(P_{\pi } ( \textbf{x}^{\star } ) \ne \textbf{x}^{\star }\). The assumption \(t_1=\cdots =t_n\) implies that the aggregative variational inequality \(\textrm{AVI}\) is symmetric.Footnote 16 By Lemma A.11, \(P_{\pi } ( \textbf{x}^{\star } )\) is another solution. As \(\sigma (\textbf{x}^{\star }) = \sigma ( P_{\pi } ( \textbf{x}^{\star } ) )\), we have a contradiction with Proposition 3.3, i.e. with the injectivity of \(\sigma \). \(\square \)
3.7 Structure of the Sets \(W_i, W_i^+\) and \(W_i^{++}\)
For the further analysis it is important to obtain more insight into the structure of \(W_i\). If Assumption AMSV holds, then let \(W_i^{+} {{\; \mathrel {\mathop {:}}= \;}}\{ y \in W_i^{\star } \; | \; \hat{b}_i(y) > 0 \}\) and \(W_i^{++} {{\; \mathrel {\mathop {:}}= \;}}\{ y \in W_i^{\star } \; | \; 0< \hat{b}_i(y) < y \}\).
Note that \(W_i^{++} \subseteq W_i^{+} \subseteq W_i^{\star }\).
Lemma 3.5
Suppose Assumptions AMSV[i] and RA0[i] hold, \(y \in W_i^{\star }\) and \(y' > y\). Then \(\hat{b}_i(y) = 0 \; \Rightarrow \; [y' \in W_i^{\star } \wedge \hat{b}_i(y') =0]\). Thus, \(W_i^{\star }\) is a real interval. \(\diamond \)
Proof
Suppose \(\hat{b}_i(y) = 0\). We have \(t_i(0,y) = t_i(\hat{b}_i(y),y) \le 0\). By RA0[i], \(t_i(0,y') \le 0\). So \(\hat{b}_i(y') = 0\) and \(y' \in W_i^{\star }\). \(\square \)
Lemma 3.6
Suppose Assumption LFH’[i] holds. Then
-
1.
\(W_i^{++}\) is open.
-
2.
If Assumption RA1[i] and RA0[i] hold, then \(W_i^{++}\) and \(W_i^+\) are real intervals. \(\diamond \)
Proof
1. Suppose \(y \in W_i^{++}\). So \(t_i(\hat{b}_i(y), y ) = 0\). By LFH’[i], \(t_i(y,y)< 0 < t_i(0,y)\). As \(t_i(0,\cdot ): {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\) and \(\overline{t}_i^{(1)}\) are continuous, there exists \(\delta > 0\) such that \( t_i(y',y')< 0< t_i(0,y') \; (0< y-\delta< y' < y + \delta )\). For every \(y' \in {] {y-\delta },{y+\delta } \, [ }\), the function \(t_i(\cdot ,y'): [0,y'] \rightarrow {\mathbb {R}}\) is continuous, and therefore there exists \(x_i \in {] {0},{y'} \, [ }\) with \(t_i(x_i,y') = 0\); so \(y' \in W_i^{++}\). Thus, \(W_i^{++}\) is open.
2. Suppose RA1[i] and RA0[i] hold. First we prove that \(W_i^{++}\) is an interval. To this end suppose \(y, y' \in W_i^{++}\) with \(y < y'\) and \( y'' \in {] {y},{y'} \, [ }\). We have \(0< \hat{b}_i(y) \le y, \; 0 < \hat{b}_i(y') \le y', \; t_i(\hat{b}_i(y),y) = 0\) and \(t_i(\hat{b}_i(y'),y') =0\). By LFH’[i], the continuous functions \(t_i(\cdot ,y)\) and \(t_i(\cdot ,y')\) have the AMSCFA-property. It follows that \(t_i(y,y) \le 0\) and \( t_i(0,y') > 0\). Now RA0[i] implies \(t_i(0,y'') > 0\). By RA1[i], the continuous function \(\overline{t}_i^{(1)}\) has the AMSCFA-property. It follows that \(t_i(y'',y'') < 0\). Next the continuity of \(t_i(\cdot ,y'')\) implies that there exists an \(x_i \in {] {0},{y''} \, [ }\) with \(t_i(x_i,y'') = 0\) and therefore \(y'' \in W_i^{++}\). Thus, \(W_i^{++}\) is an interval.
Statement concerning \(W_i^+\): suppose \(y, y' \in W_i^{+}\) with \(y < y'\) and \( y'' \in {] {y},{y'} \, [ }\). Now the above proof again applies, and shows that \(y'' \in W_i^{++} \subseteq W_i^+\). \(\square \)
Lemma 3.7
Suppose Assumptions AMSV[i] and EC[i] hold. Then \(\hat{b}_i(y) < \overline{x}_i \; (y \in W_i)\). \(\diamond \)
Proof
This is, as \(\overline{x}_i > 0\), trivial if \(\hat{b}_i(y) = 0\). Now suppose \(\hat{b}_i(y) > 0\). We have \(0 =t_i(\hat{b}_i(y),y) \). As \((\hat{b}_i(y), y) \in \varDelta ^+\), EC[i] implies \(\hat{b}_i(y) < \overline{x}_i\). \(\square \)
If \(\overline{t}_i^{(1)}: {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\) has a unique zero, then we denote it by \(\underline{x}_i\). (12)
(Thus, \(\underline{x}_i > 0\).) Sufficient for \(\underline{x}_i\) to be well-defined is that \(\overline{t}_i^{(1)}\) has a zero and that Assumption RA1[i] holds. If in addition Assumption AMSV[i] holds, then we have \(\hat{b}_i ( \underline{x}_i ) = \underline{x}_i \in W_i^+\).
Note that if i is of type \(I^-\) and Assumption RA1[i] holds, then, by Lemma 3.2(2), \(\underline{x}_i\) is not well-defined.
Lemma 3.8
If \(\underline{x}_i\) is well-defined and Assumption EC[i] holds, then \(\underline{x}_i \le \overline{x}_i\). \(\diamond \)
Proof
By the definitions of \(\underline{x}_i\) and \(\overline{x}_i\). \(\square \)
Lemma 3.9
Suppose i is of type \(I^+\) and Assumption RA1[i] holds. Any of the following three assumptions is sufficient for \(\underline{x}_i\) to be well-defined.
-
(a).
Assumptions LFH’[i] and RA0[i] hold and \(W_i^{\star } \ne \emptyset \).
-
(b).
Assumption EC[i] holds.
-
(c).
Assumption LFH’[i] holds and i is of type \(II^-\). \(\diamond \)
Proof
Having RA1[i], we prove that \(\overline{t}_i^{(1)}\) has a zero. As \(\overline{t}_i^{(1)}\) is continuous, it is sufficient to show that this function assumes a positive and a negative value.
(a). Fix \(y \in W_i^{\star }\). We have \(t_i(\hat{b}_i(y),y) \le 0\). LFH’[i] implies \(\overline{t}_i^{(1)}(y) = t_i(y,y ) \le t_i(\hat{b}_i(y),y) \le 0\). As i is of type \(I^+\), we can fix \(x_i \in {] {0},{y} ] }\) with \(\overline{t}_i^{(1)}(x_i) = t_i(x_i,x_i) > 0\).
(b). As i is of type \(I^+\), \(\overline{t}_i^{(1)}(x_i) > 0\) for \(x_i\) small enough. EC[i] implies \(\overline{t}_i^{(1)}(x_i) < 0 \; (x_i > \overline{x}_i)\).
(c). As i is of type \(I^+\), \(\overline{t}_i^{(1)}(\lambda ) = t_i(\lambda ,\lambda ) > 0\) for \(\lambda > 0\) small enough. As i is of type \(II^-\), \(t_i(0,\lambda ) < 0\) for \(\lambda >0 \) large enough. As, by LFH’[i], \(t_i(\cdot ,\lambda )\) has the AMSCFA-property, it follows that \(\overline{t}_i^{(1)}(\lambda ) < 0\) for \(\lambda > 0\) large enough. \(\square \)
Lemma 3.10
Suppose i is of type \(I^+\), Assumptions LFH’[i] and RA1[i] hold and \(\underline{x}_i\) is well-defined. Then
-
1.
\(W_i^{\star } = {[ { \underline{x}_i },{+\infty } \,[}\).
-
2.
If \(t_i(0,y)> 0 \; ( y > 0)\), then \(W_i^{++} = {] { \underline{x}_i },{+\infty } \, [ }\) and \(W_i^{+} = {[ { \underline{x}_i },{+\infty } \,[}\). \(\diamond \)
Proof
1. ‘\(\subseteq \)’: by contradiction. So suppose \( y \in W_i^{\star }\) and \(y < \underline{x}_i\). As \( y >0\), the AMSCFA-property of \(\overline{t}_i^{(1)}\) (by virtue of RA1[i]) gives \(t_i(y,y) = \overline{t}_i^{(1)}(y) > \overline{t}_i^{(1)}(\underline{x}_i) = 0\). As (by virtue of LFH’[i]) \(t_i(\cdot ,y)\) has the AMSCFA-property, we have \(t_i(x_i,y) > 0\) for all \(0 \le x_i \le y\). Thus, \(y \not \in W_i\), a contradiction.
‘\(\supseteq \)’: suppose \(y \ge \underline{x}_i\). If \(t_i(0,y) \le 0\), then \(y \in W_i^{\star }\). Now suppose \(t_i(0,y) > 0\). RA1[i] implies \(t_i(y,y) \le 0\). As \({t}_i(\cdot ,y)\) is continuous, there exists an \(x_i \in {] {0},{y} ] }\) with \(t_i(x_i,y) = 0\). Thus, \(y \in W_i^{\star }\).
2. First statement ‘\(\subseteq \)’: suppose \(y \in W_i^{++}\). Then \(t_i(\hat{b}_i(y),y) = 0\) and \(0< \hat{b}_i(y) < y\). By LFH’[i], \(t_i(y,y) < 0\). RA1[i] implies \(y > \underline{x}_i\).
First statement ‘\(\supseteq \)’: suppose \(y > \underline{x}_i\). We have \(t_i(0,y) > 0\) and, by RA1[i], \(t_i(y,y) < 0\). The continuity of \(t_i(\cdot ,y)\) implies that there exists \(x_i \in {] {0},{y} \, [ }\) with \(t_i(x_i,y) = 0\). As LFH’[i] holds, \(y \in W_i^{++}\) follows.
Second statement ‘\(\subseteq \)’: suppose \(y \in W_i^{+}\). Then \(t_i(\hat{b}_i(y),y) = 0\) and \(0 < \hat{b}_i(y) \le y\). LFH’[i] implies \(t_i(y,y) \le 0\). Now, RA1[i] implies \(y \ge \underline{x}_i\).
Second statement ‘\(\supseteq \)’: suppose \(y \ge \underline{x}_i\). We have \(t_i(0,y) > 0\) and, by RA1[i], \(t_i(y,y) \le 0\). The continuity of \(t_i(\cdot ,y)\) implies that there exists \(x_i \in {] {0},{y} ] }\) with \(t_i(x_i,y) = 0\). So \(0 < x_i = \hat{b}_i(y) \le y\). Thus, \(y \in W_i^{+}\). \(\square \)
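Lemma 3.10(1) can be checked numerically. For the hypothetical specification \(t_i(x_i,y) = a - c - y - x_i\) (an assumption of this example, not taken from the paper), \(\overline{t}_i^{(1)}(\lambda ) = a - c - 2\lambda \) has the unique zero \(\underline{x}_i = (a-c)/2\); the Python sketch below locates it by bisection and verifies that the backward reply is empty below \(\underline{x}_i\) and non-empty above it.

```python
# Hypothetical specification (not from the paper): t(x, y) = a - c - y - x,
# for which t^(1)(lam) = t(lam, lam) = a - c - 2*lam has the unique zero
# x_lower = (a - c)/2, and Lemma 3.10(1) predicts W* = [x_lower, +inf).
a, c = 10.0, 2.0

def t(x, y):
    return a - c - y - x

# Locate x_lower as the zero of lam -> t(lam, lam) by bisection.
lo, hi = 1e-9, a - c
for _ in range(200):
    mid = (lo + hi) / 2
    if t(mid, mid) > 0:
        lo = mid
    else:
        hi = mid
x_lower = (lo + hi) / 2
assert abs(x_lower - (a - c) / 2) < 1e-9

# For y below x_lower the backward reply is empty: t(., y) > 0 on all of [0, y].
y = 0.9 * x_lower
assert min(t(y * k / 1000, y) for k in range(1001)) > 0

# For y above x_lower there is a reply x = a - c - y in (0, y].
y = 1.1 * x_lower
x = a - c - y
assert 0 < x <= y and abs(t(x, y)) < 1e-12
```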
Lemma 3.11
Suppose Assumptions AMSV[i] and RA1[i] hold and i is of type \(I^-\). Then
-
1.
\(\{ y> 0 \; | \; t_i(0,y) > 0 \} \subseteq W_i^{++}\).
-
2.
\(W_i^{\star } = {\mathbb {R}}_{++}\).
-
3.
\(W_i^{++} = W_i^+\).
-
4.
If Assumption EC[i] holds, then \(\hat{b}_i(y) < \overline{x}_i \; (y > 0)\). \(\diamond \)
Proof
1. Suppose \(y > 0\) with \(t_i(0,y) > 0\). By Lemma 3.2(2), \(t_i(y,y) < 0\). As \(t_i(\cdot ,y)\) is continuous, there exists an \(x_i \in {] {0},{y} \, [ }\) with \(t_i(x_i,y) = 0\). So \(y \in W_i^{++}\).
2. ‘\(\subseteq \)’: trivial. ‘\(\supseteq \)’: suppose \(y > 0\). If \(t_i(0,y) \le 0\), then \(y \in W_i^{\star }\). Now suppose \(t_i(0,y) > 0\). By part 1, \(y \in W_i^{++} \subseteq W_i^{\star }\).
3. ‘\(\subseteq \)’ is trivial. Now suppose \(y \in W_i^+\). The proof is complete if we show that \(\hat{b}_i(y) < y\). Well, if \(\hat{b}_i(y) = y\), then \(\overline{t}_i^{(1)}(y) = t_i(\hat{b}_i(y),y) = 0\) contradicting Lemma 3.2(2).
4. Suppose EC[i] holds. Fix \(y > 0\). The statement is clear if \(\hat{b}_i(y) = 0\). Now suppose \(\hat{b}_i(y) > 0\). We have \(0 =t_i(\hat{b}_i(y),y) \). EC[i] implies that \(\hat{b}_i(y) < \overline{x}_i\). \(\square \)
Lemma 3.12
Suppose Assumptions LFH’, RA1 and RA0 hold. Let \(N' = \{ k \in N \; | \; k \text{ is } \text{ of } \text{ type } I^+ \}\).
-
1.
If \(N' = \emptyset \), then \(W^{\star } = {\mathbb {R}}_{++}\).
-
2.
Suppose \(N' \ne \emptyset \) and that for every \(i \in N'\): Assumption EC[i] holds or i is of type \(II^-\). Then, with \(\underline{x} = \max _{k \in N'} \underline{x}_k\), \(W^{\star } = {[ {\underline{x}},{+ \infty } \,[}\). \(\diamond \)
Proof
By Lemma 3.2(1) every i is of type \(I^+\) or \(I^-\). Remember that \(W^{\star } = \cap _i W_i^{\star }\).
1. Suppose \(N' = \emptyset \). So every i is of type \(I^-\). Now apply Lemma 3.11(2).
2. Lemma 3.9(b,c) guarantees that the \(\underline{x}_k \; (k \in N')\) are well-defined. By Lemma 3.10(1), \(W_k^{\star } = {[ {\underline{x}_k},{+\infty } \,[} \; (k \in N')\) and, by Lemma 3.11(2), \(W_l^{\star } = {\mathbb {R}}_{++} \; (l \in N {\setminus } N')\). Thus, \(W^{\star } = {[ {\underline{x}},{+ \infty } \,[}\). \(\square \)
3.8 Properties of the Functions \(\hat{b}_i\) and \(\hat{s}_i\)
Proposition 3.5
Suppose Assumptions LFH’[i], RA1[i] and RA0[i] hold. Then the function \(\hat{b}_i: W_i^{\star } \rightarrow {\mathbb {R}}\) is continuous. \(\diamond \)
Proof
We may suppose that \(W_i^{\star } \ne \emptyset \). By Lemma 3.5, \(W_i^{\star }\) is a non-empty interval. It is sufficient to prove that \(\hat{b}_i\) is continuous on each non-empty compact interval I with \(I \subseteq W_i^{\star }\). Fix such an interval and consider the function \(\hat{b}_i: I \rightarrow \mathbb {R}\). As \(0 \le \hat{b}_i(y) \le y \; (y \in I)\), \(\hat{b}_i\) is bounded. As \(\hat{b}_i\) is bounded, continuity of \(\hat{b}_i\) is equivalent to the closedness of its graph, i.e. to the closedness of the subset \(\{ (y, \hat{b}_i(y) ) \; | \; y \in I \}\) in \(I \times {\mathbb {R}}\). As \(I \times {\mathbb {R}}\) is closed in \({\mathbb {R}}^2\), it remains to be proved that this graph is closed in \({\mathbb {R}}^2\). In order to do this, take a sequence \(( ( y_m, \hat{b}_i(y_m) ) )\) in \(I \times {\mathbb {R}}\) that converges in \({\mathbb {R}}^2\), with, say, limit \((y_{\star }, {b}_{\star } )\), and prove that \((y_{\star }, {b}_{\star } ) \in \{ (y, \hat{b}_i(y) ) \; | \; y \in I \}\), i.e. that \(y_{\star } \in I\) and \(\hat{b}_i(y_{\star }) = b_{\star }\). We have \(\lim y_m = y_{\star }\) and \(\lim \hat{b}_i(y_m) = {b}_{\star }\). As I is closed, \(y_{\star } \in I\) follows; so \(y_{\star } > 0\). We have \( 0 \le \hat{b}_i(y_m) \le y_m\), \(\hat{b}_i(y_m) t_i(\hat{b}_i(y_m), y_m ) = 0\) and \(t_i(\hat{b}_i(y_m), y_m ) \le 0\). Taking limits and noting that \(t_i: \varDelta ^+ \rightarrow \mathbb {R}\) is continuous, we obtain \(0 \le {b}_{\star } \le y_{\star }\), \({b}_{\star } t_i({b}_{\star }, y_{\star }) =0 \) and \(t_i({b}_{\star }, y_{\star } ) \le 0\). Thus, as desired, \(\hat{b}_i(y_{\star }) = b_{\star }\).
\(\square \)
Proposition 3.6
-
1.
If Assumptions AMSV and RA[i] hold, then \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) is injective.
-
2.
If Assumptions LFH’[i], RA1[i] and RA0[i] hold, then \(\hat{s}_i\) is on the interval \(W_i^+\) strictly increasing or strictly decreasing. \(\diamond \)
Proof
1. Suppose AMSV and RA[i] hold. We prove by contradiction that \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) is injective. So suppose \(\hat{s}_i(y) = \hat{s}_i(y') = w\) where \(y, y' \in W_i^{+}\) with \(y \ne y'\). So \(\hat{b}_i(y) = w y > 0\) and \(\hat{b}_i(y') = w y' > 0\). It follows that \(t_i(w y, y) = t_i(w y', y') = 0\), i.e. \(\overline{t}_i^{(w)}(y) = \overline{t}_i^{(w)}(y') =0\). But, by RA[i], the function \(\overline{t}_i^{ (w) }\) has the AMSCFA-property and so has at most one zero, a contradiction.
2. Suppose LFH’[i], RA1[i] and RA0[i] hold. Lemma 3.6(2) guarantees that \(W_i^+\) is an interval. Remember that \(W_i^+\) is a subset of \(W_i^{\star }\). Proposition 3.5 implies that \(\hat{s}_i\) is continuous on \(W_i^+\). It now follows with part 1 that \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) is strictly decreasing or strictly increasing. \(\square \)
Lemma 3.13
Suppose Assumptions LFH’[i] and RA[i] hold and: i is of type \(I^+\) or of type \(II^-\) or Assumption EC[i] holds. Then \(\hat{s}_i\) is strictly decreasing on the interval \(W_i^+\). \(\diamond \)
Proof
By Lemma 3.1, RA0[i] holds and by Lemma 3.6(2), \(W_i^+\) is a real interval. We may assume that \(W_i^+\) is not empty. Now Lemma 3.5 implies that \(W_i^{\star }\) is an interval without upper bound. By Proposition 3.6(2), \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) is strictly decreasing or strictly increasing. By contradiction we prove that \(\hat{s}_i\) is strictly decreasing on \(W_i^+\); so suppose \(\hat{s}_i\) is strictly increasing on \(W_i^+\). By Proposition 3.5, \(\hat{s}_i\) is continuous on \(W_i^{\star }\).
Case where i is of type \(I^+\): by Lemma 3.9(a), \(\underline{x}_i\) in (12) is well-defined. We have \(\underline{x}_i \in W_i^+\) and \(\hat{s}_i(\underline{x}_i) = 1\). Since \(\hat{s}_i\) is strictly increasing on \(W_i^+\) and \(\hat{s}_i \le 1\), we have \(y \not \in W_i^+\) for every \(y > \underline{x}_i\). Fix such a y. Then \(\hat{s}_i(y) = 0\). The continuity of \(\hat{s}_i\) implies that there exists \(y' \in {] {\underline{x}_i},{y} \, [ }\) with \(\hat{s}_i(y') = 1/137\). But then \(y' \in W_i^+\), a contradiction.
Case where i is of type \(II^-\): fix \(y' \in W_i^+\). So \(\hat{s}_i(y') > 0\). As i is of type \(II^-\), \(\hat{s}_i(y) = 0\) for y large enough. Let \(y''\) with \(y'' > y'\) be such a y. The continuity of \(\hat{s}_i\) implies that there exists \(\tilde{y} \in {] {y'},{y''} \, [ }\) with \(\hat{s}_i(\tilde{y}) = \hat{s}_i(y')/137\). But then \(\tilde{y} \in W_i^+\) and \(\hat{s}_i(\tilde{y}) < \hat{s}_i(y')\), a contradiction.
Case where EC[i] holds: fix \(y' \in W_i^+\). So \(\hat{s}_i(y') > 0\). By Lemma 3.7, \(\hat{s}_i(y) \le \overline{x}_i /y \; (y \in W_i^{\star })\). This implies \(\lim _{y \rightarrow + \infty } \hat{s}_i(y) = 0\). The continuity of \(\hat{s}_i\) implies that there exists \(\tilde{y} > y'\) with \(\hat{s}_i(\tilde{y}) = \hat{s}_i(y')/137\). But then \(\tilde{y} \in W_i^+\) and \(\hat{s}_i(\tilde{y}) < \hat{s}_i(y')\), a contradiction. \(\square \)
Lemma 3.14
Suppose Assumptions LFH’ and RA0 hold and every \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) is strictly decreasing. Then \(\hat{s}\) is strictly decreasing on the subset of its domain \(W^{\star }\) where it is positive. \(\diamond \)
Proof
We may suppose that the subset of \(W^{\star }\) where \(\hat{s}\) is positive contains at least two elements. Let \(y_a, y_b\) with \(y_a < y_b\) be such. So \(\hat{s}(y_a) > 0\) and \(\hat{s}(y_b) > 0\). Note that \(y_a, y_b \in W_i^{\star } \; (i \in N)\) and that \(y \in W_i^{\star } {\setminus } W_i^+ \; \Rightarrow \; \hat{s}_i(y) = 0\).
First we prove \(\hat{s}_i(y_a) - \hat{s}_i(y_b) \ge 0 \; (i \in N)\). We consider four cases.
Case where \(y_a, y_b \in W_i^+\): \(\hat{s}_i(y_a) - \hat{s}_i(y_b) > 0\), by assumption.
Case where \(y_a \in W_i^+, y_b \not \in W_i^+\): \(\hat{s}_i(y_a) - \hat{s}_i(y_b) = \hat{s}_i(y_a) - 0 = \hat{s}_i(y_a) > 0\).
Case where \(y_a \not \in W_i^+, y_b \not \in W_i^+\): \(\hat{s}_i(y_a) - \hat{s}_i(y_b) = 0 - 0 = 0\).
Case where \(y_a \not \in W_i^+, y_b \in W_i^+\): this case is impossible by Lemma 3.5.
Next fix j with \(\hat{s}_j(y_a) > 0\). If also \(\hat{s}_j(y_b) > 0\), then \(y_a, y_b \in W_j^+\) and by the above, \(\hat{s}_j(y_a) - \hat{s}_j(y_b) > 0\). If \(\hat{s}_j(y_b) = 0\), then also \(\hat{s}_j(y_a) - \hat{s}_j(y_b) = \hat{s}_j(y_a)> 0\). As desired, we obtain that \(\hat{s}(y_a) - \hat{s}(y_b) = \sum _{i \in N } (\hat{s}_i(y_a) - \hat{s}_i(y_b) ) > 0\). \(\square \)
Lemma 3.15
Suppose Assumptions AMSV, RA1 and RA0 hold. If every \(i \in N\) is of type \(I^-\), then \(W^{\star } = {\mathbb {R}}_{++}\) and for every \(y' > 0\) with \(\hat{s}(y') >0\) it holds that \(\hat{s}(y) > 0 \; (0 < y \le y')\). \(\diamond \)
Proof
By Lemma 3.11(2), \(W^{\star } = {\mathbb {R}}_{++}\). Fix \(0< y \le y'\) with \(\hat{s}(y') > 0\). Then \( \hat{b}_i(y') > 0\) for at least one i. For such an i, Lemma 3.5 implies \(\hat{b}_i(y) > 0\). So \(\hat{b}(y) = \sum _{l \in N} \hat{b}_l(y) > 0\). Thus, \(\hat{s}(y) > 0\). \(\square \)
Lemma 3.16
Suppose Assumptions LFH’, RA1, EC hold and for every \(i \in N\): i is of type \(I^+\) and \(t_i(0,y)> 0 \; ( y > 0)\). Let \(\underline{x} = \max _{i \in N} \underline{x}_i\). Then \(W^{\star } = {[ {\underline{x}},{+\infty } \,[}\), \( \hat{s} (\underline{x}) > 1\) and for every \(\textbf{x}^{\star } \in \textrm{AVI}^{\bullet } {\setminus } \{ \textbf{0} \}\), it holds that \(x_N^{\star } >\underline{x}\). \(\diamond \)
Proof
By Lemma 3.9(b), the \(\underline{x}_i\) are well-defined. Fix \(k_{\star }\) such that \(\underline{x} = \underline{x}_{k_{\star }}\). By Lemma 3.10(1,2), \(W^{\star } = {[ {\underline{x}},{+\infty } \,[}\) and \(W_i^+ = {[ {\underline{x}_i},{+\infty } \,[} \; (i \in N)\). Noting that \(\underline{x} \in W_i^+ \; (i \in N)\) and \(n \ge 2\), we obtain \( \hat{s} (\underline{x}) = \hat{s}_{k_{\star }}(\underline{x}_{k_{\star }}) + \sum _{k \ne k_{\star } } \hat{s}_k (\underline{x}) = 1 + \sum _{k \ne k_{\star } } \hat{s}_k (\underline{x}) > 1\). If \(\textbf{x}^{\star } \in \textrm{AVI}^{\bullet } {\setminus } \{ \textbf{0} \}\), then, by Theorem 3.2(2), \(\hat{b}( x_N^{\star } ) = x_N^{\star } \in W^{\star }\). So \(\hat{s}(x_N^{\star }) = 1\) and thus \(x_N^{\star } > \underline{x}\). \(\square \)
3.9 Semi-uniqueness, Existence and Uniqueness
The proof of the following proposition follows a reasoning similar to a result in [2] for sum-aggregative games.
Proposition 3.7
Suppose Assumption LFH’ holds and every \(t_i\) is decreasing in its second variable. Then AVI has at most one solution. \(\diamond \)
Proof
By contradiction. So suppose \(\textbf{x}, \textbf{x}' \in \textrm{AVI}^{\bullet }\) with \(\textbf{x} \ne \textbf{x}'\). We may suppose \(x_N \le x'_N\). Note that \(x'_N > 0\). As \(\textbf{x} \ne \textbf{x}'\), we can fix i with \(x_i < x'_i\). (9) implies \(t_i(x'_i,x'_N ) = 0 \ge t_i(x_i,x_N)\). By LFH’[i], the function \( t_i(\cdot ,x'_N )\) has the AMSCFA-property; so \( t_i(x_i,x'_N ) > 0\) follows. As \(t_i\) is decreasing in its second variable, \(0 < t_i(x_i,x'_N ) \le t_i(x_i,x_N )\) holds, which is a contradiction. \(\square \)
Theorem 3.3
Suppose Assumptions LFH’ and RA hold and for every \(i \in N\): i is of type \(I^+\) or of type \(II^-\) or EC[i] holds. Then AVI has at most one nonzero solution. \(\diamond \)
Proof
By Lemma 3.1, RA0 holds. Lemma 3.13 guarantees that every \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) is strictly decreasing. By Lemma 3.14, \(\hat{s}\) is strictly decreasing on the subset of its domain where it is positive. Theorem 3.2(3) now implies the desired result. \(\square \)
Of course, if we add \(N_> \ne \emptyset \) as assumption to this theorem, then (by Proposition 3.1(1)) AVI has at most one solution and such a solution is nonzero.
Theorem 3.4
Suppose Assumptions LFH’, RA1, RA0 hold and at least one \(i \in N\) is of type \(I^+\). Either of the following two assumptions is sufficient for AVI to have a nonzero solution and for the solution set of AVI to be a non-empty compact subset of \({\mathbb {R}}^n\).
-
(a.)
Assumption EC holds.
-
(b.)
Every \(i \in N\) is of type \(II^-\).
If in addition to (a) or (b) Assumption RA holds, then AVI has a unique nonzero solution. \(\diamond \)
Proof
We prove the first statement about existence; then the second about uniqueness follows from Theorem 3.3.
Let \(N'= \{ k \in N \; | \; k \text{ is } \text{ of } \text{ type } I^+ \}\). For both cases (a) and (b), Lemma 3.12(2) guarantees that \(W^{\star } = {[ {\underline{x}},{+ \infty } \,[}\) with \(\underline{x} = \underline{x}_p\) for some \(p \in N'\). It follows that \( \hat{s}(\underline{x}) = \sum _{i \in N} \hat{s}_i(\underline{x}) \ge \hat{s}_p (\underline{x}_p) = 1\). The solution set of AVI is a non-empty compact subset of \({\mathbb {R}}^n\) if \( \textrm{AVI}^{\bullet } \setminus \{ \textbf{0} \}\) is a non-empty compact subset of \({\mathbb {R}}^n_+\); we shall prove the latter. By Theorem 3.2(3), \(\textrm{AVI}^{\bullet } {\setminus } \{ \textbf{0} \}\) equals \(\hat{\textbf{b}} (Z)\) where Z is the set of zeros of the function \(y \mapsto \hat{b}(y) - y\) on \({[ {\underline{x}},{+ \infty } \,[}\), i.e. the set of fixed points of \(\hat{b}\) in \(W^{\star }\). As, by Proposition 3.5, this function is continuous, Z is a closed subset of \({[ {\underline{x}},{+ \infty } \,[}\), so also a closed subset of \({\mathbb {R}}\). Below we show that Z also is a bounded subset of \({\mathbb {R}}\) and therefore a compact subset of \({\mathbb {R}}\). As Proposition 3.5 also implies that \(\hat{\textbf{b}}: {[ {\underline{x}},{+ \infty } \,[} \rightarrow {\mathbb {R}}^n\) is continuous, it then follows that \( \textrm{AVI}^{\bullet } {\setminus } \{ \textbf{0} \} = \hat{\textbf{b}} ( Z )\) is a compact subset of \({\mathbb {R}}^n\). Finally note that, by Lemma 3.2, each i is of type \(I^+\) or of type \(I^-\).
(a.) Having EC, fix \(\overline{y}\) with \(\overline{y} \ge \sum _{i \in N} \overline{x}_i\). By Lemma 3.8, \(\overline{y} \ge \overline{x}_p \ge \underline{x}_p = \underline{x}\). Thus, \( \overline{y} \in W^{\star }\). With Lemmas 3.7 and 3.11(4), we obtain \( \hat{b}( \overline{y} )= \sum _{i} \hat{b}_i(\overline{y}) \le \sum _{i} \overline{x}_i \le \overline{y}\); thus \(\hat{s}(\overline{y}) \le 1\). By the intermediate value theorem, there exists \(y_{\star } \in [\underline{x}, \overline{y}]\) with \(\hat{s}(y_{\star }) = 1\); so \(y_{\star } \in \textrm{fix}(\hat{b})\). By Theorem 3.2(3), \( \hat{\textbf{b} }( y_{\star } )\) is a nonzero solution of AVI. With Lemmas 3.7 and 3.11(4), we also obtain \(\hat{b}(y) \le \sum _i \overline{x}_i \; (y \ge \underline{x})\). So Z is a bounded subset of \({\mathbb {R}}\).
(b). As every i is of type \(II^-\), we have \(\hat{b}_i(y) = 0 \; (i \in N)\) for y large enough. So \(\hat{s}(y) = 0\) for y large enough. Fix \(\overline{y}\) with \(\overline{y} \ge \underline{x}\) and \(\hat{s}(\overline{y}) = 0\). Consider \(\hat{s}: {[ {\underline{x} },{+\infty } \,[} \rightarrow \mathbb {R}\). Proposition 3.5 implies that \(\hat{s}\) is continuous. By the intermediate value theorem, there exists \(y_{\star } \in [\underline{x}, \overline{y}]\) with \(\hat{s}(y_{\star }) = 1\). Thus, \(y_{\star } \in \textrm{fix}(\hat{b})\). By Theorem 3.2(3), \( \hat{\textbf{b} }( y_{\star } )\) is a nonzero solution of AVI. By the above, \(\hat{b}(y) = 0 \) for y large enough. So Z is a bounded subset of \({\mathbb {R}}\). \(\square \)
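The existence argument of Theorem 3.4 is constructive: it suggests computing the nonzero solution by applying the intermediate value theorem to \(\hat{s}(y) = 1\) on \([\underline{x}, \overline{y}]\). The following Python sketch does so for a hypothetical heterogeneous linear specification \(t_i(x_i,y) = a - c_i - y - x_i\) (the parameters a and \(c_i\) are assumptions of this example, not taken from the paper).

```python
# Hypothetical heterogeneous linear specification (not from the paper):
# t_i(x_i, y) = a - c_i - y - x_i, so b_hat_i(y) = max(0, a - c_i - y).
a = 10.0
costs = [1.0, 2.0, 4.0]

def s_hat(y):
    # Aggregate share function s_hat(y) = sum_i b_hat_i(y) / y.
    return sum(max(0.0, a - ci - y) for ci in costs) / y

# s_hat is continuous and strictly decreasing where positive, with
# s_hat > 1 at the lower bracket point and s_hat <= 1 at the upper one;
# bisection therefore finds the unique y* with s_hat(y*) = 1.
lo, hi = (a - min(costs)) / 2, sum(a - ci for ci in costs)
for _ in range(200):
    mid = (lo + hi) / 2
    if s_hat(mid) > 1:
        lo = mid
    else:
        hi = mid
y_star = (lo + hi) / 2

# When all b_hat_i(y*) > 0 the closed form is y* = sum_i (a - c_i) / (n + 1).
assert abs(y_star - sum(a - ci for ci in costs) / (len(costs) + 1)) < 1e-9
```

For these parameters \(y^{\star } = 23/4\) and all three individual replies are positive at \(y^{\star }\), so the closed form applies.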
4 SS-Technique; with Differentiability Assumptions
4.1 Setting
The setting here is the same as in Sect. 3.1. However, here we additionally always assume Assumption DIFF below.Footnote 17 Partial differentiability is necessary for defining Assumptions LFH, DIR and DIR’ given below.
4.2 Assumptions
Besides Assumptions AMSV, LFH’, RA, RA1, RA0 and EC from Sect. 3.3, we here also consider four new ones:
-
DIFF.
\(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuously partially differentiable.
-
LFH.
For every \((x_i,y)\in \varDelta ^+\): \( t_i(x_i,y)= 0 \; \Rightarrow \; D_1 t_i(x_i,y)<0\).
-
DIR.
For every \((x_i,y) \in \varDelta ^+\) with \(x_i > 0\): \(t_i(x_i,y)=0 \; \Rightarrow \; (x_i D_1 + y D_2)t_i(x_i,y)<0\).
-
DIR’.
For every \(x_i > 0\): \( t_i(x_i,x_i)=0 \; \Rightarrow \; (D_1+ D_2)t_i(x_i,x_i)<0\).
Note that Assumptions LFH, DIR and DIR’ concern local conditions.Footnote 18 Also note that
DIR[i] \(\Rightarrow \) DIR’[i] and that LFH[i] \(\Rightarrow \) LFH’[i] \( \Rightarrow \) AMSV[i].
Lemma 4.1
-
1.
[ DIFF[i] \(\wedge \) DIR’[i] ] \( \Rightarrow \) RA1[i].
-
2.
[ DIFF[i] \(\wedge \) DIR[i] ] \(\Rightarrow \) RA[i]. \(\diamond \)
Proof
1. Suppose Assumptions DIFF[i] and DIR’[i] hold. We prove that \(\overline{t}_i^{(1)}: {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\) has the AMSCFA-property, by showing that \(\overline{t}_i^{(1)}(\lambda ) = 0 \; \Rightarrow \; {( \overline{t}_i^{(1)} )}'(\lambda ) < 0\). Well, by Lemma B.1 in “Appendix B”, we have \( {( \overline{t}_i^{(1)} )}'(\lambda ) = (D_1 + D_2) t_i(\lambda ,\lambda )\). DIR’[i] implies the desired result.
2. Suppose DIFF[i] and DIR[i] hold. Fix \(\mu \in {] {0},{1} ] }\). We have to prove that \(\overline{t}_i^{(\mu )}: {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\) has the AMSCFA-property. This we do by showing \(\overline{t}_i^{(\mu )}(\lambda ) = 0 \; \Rightarrow \; {( \overline{t}_i^{(\mu )} )}' (\lambda ) < 0\). Well, by Lemma B.1, \({( \overline{t}_i^{(\mu )} )}'(\lambda ) = (\mu D_1 + D_2) t_i (\mu \lambda , \lambda )\). If \(\overline{t}_i^{(\mu )}(\lambda ) = 0\), then \(t_i(\mu \lambda , \lambda ) = 0\) and DIR[i] implies \(\mu \lambda D_1 t_i(\mu \lambda , \lambda ) + \lambda D_2 t_i(\mu \lambda , \lambda ) < 0\). So \({( \overline{t}_i^{(\mu )} )}' (\lambda ) < 0\). \(\square \)
Lemma 4.2
Suppose Assumptions DIFF[i], LFH[i] and DIR[i] hold. Then for every \((x_i,y) \in \varDelta ^+\): \(t_i(x_i,y) = 0 \; \Rightarrow \; (D_1 + D_2) t_i(x_i,y) \le 0\). \(\diamond \)
Proof
Suppose \((x_i,y) \in \varDelta ^+\) with \( t_i(x_i,y) =0\). We have the following identity: \( (D_1 + D_2) t_i(x_i,y) = \frac{1}{y} \big ( (x_i D_1 + y D_2) t_i(x_i,y) + (y - x_i) D_1 t_i(x_i,y) \big )\).
If \(x_i > 0\), then LFH[i] and DIR[i] imply the desired result. Now suppose \(x_i = 0\). We have to prove \( t_i(0,y) = 0 \; \Rightarrow \; (D_1 + D_2) t_i(0,y) \le 0\). Well, by LFH[i], \(D_1 t_i(0,y) < 0\). By Lemmas 4.1(2) and 3.1, RA0[i] holds and as \(t_i(0,y) = 0\) it follows that \(t_i(0,y +h) \le 0 \; (h > 0)\). Therefore, \(D_2 t_i(0,y) = \lim _{h \downarrow 0} \frac{ t_i(0,y+h) - t_i(0,y) }{h} = \lim _{h \downarrow 0} \frac{ t_i(0,y+h) }{h} \le 0\). \(\square \)
4.3 Properties of the Functions \(\hat{b}_i\) and \(\hat{s}_i\)
In the next lemma we consider the differentiability of \(\hat{b}_i\); note that by Lemma 3.6(1), \(W_i^{++}\) is open.
Proposition 4.1
Suppose Assumptions DIFF[i] and LFH[i] hold and \(W_i^{++} \ne \emptyset \). Then
-
1.
\(\hat{b}_i\) is continuously differentiable on \(W_i^{++}\) with \( { \hat{b}_i }' = - \frac{D_2 t_i}{D_1 t_i}\) on \(W_i^{++}\).
-
2.
\(\hat{s}_i\) is continuously differentiable on \(W_i^{++}\) with \( { \hat{s}_i }'(y) = - \frac{ \hat{b}_i(y) D_1 t_i( \hat{b}_i(y), y) + y D_2 t_i (\hat{b}_i(y),y) }{ y^2 \cdot D_1 t_i( \hat{b}_i(y), y) }\).
-
3.
If Assumption DIR[i] holds, then \({ \hat{s}_i }'(y) < 0 \; (y \in W_i^{++})\). \(\diamond \)
Proof
1. For every \(y \in W_i^{++}\) we have \(\hat{b}_i(y) > 0\) and therefore, by (9), \( t_i(\hat{b}_i(y),y) = 0\). As DIFF[i] holds, \(t_i\) is continuously differentiable on \( {\textrm{Int}(\varDelta ^+)}\). As by LFH[i], \(D_1 t_i (\hat{b}_i(y), y) < 0 \; ( y \in W_i^{++})\), the classical implicit function theorem implies that \(W_i^{++}\) is open and \(\hat{b}_i\) is continuously differentiable on \(W_i^{++}\). Differentiating the identity \(t_i(\hat{b}_i(y),y) = 0 \; (y \in W_i^{++})\), the second statement follows.
2. By part 1.
3. Suppose DIR[i] holds. As for \(y \in W_i^{++}\) we have \(t_i(\hat{b}_i(y),y) = 0\), the formula in part 2 together with LFH[i] and DIR[i] imply \(\hat{s}'_i(y) < 0 \; (y \in W_i^{++})\). \(\square \)
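The derivative formula in Proposition 4.1(1) can be verified numerically. For the hypothetical smooth specification \(t_i(x_i,y) = 1 - x_i^2 - y\) (an assumption of this example, not taken from the paper), one has \(\hat{b}_i(y) = \sqrt{1-y}\) on \(W_i^{++}\), and the formula \( { \hat{b}_i }' = - D_2 t_i / D_1 t_i\) agrees with a finite-difference derivative, as the Python sketch below shows.

```python
# Hypothetical smooth specification (not from the paper): t(x, y) = 1 - x^2 - y.
# On W^{++} the implicit backward reply is b_hat(y) = sqrt(1 - y), with
# D1 t = -2x < 0 at zeros (Assumption LFH) and D2 t = -1, so
# Proposition 4.1(1) predicts b_hat'(y) = -D2 t / D1 t = -1/(2 sqrt(1 - y)).
import math

def b_hat(y):
    return math.sqrt(1.0 - y)

y = 0.8                       # here 0 < b_hat(y) < y, i.e. y lies in W^{++}
x = b_hat(y)
assert abs(1.0 - x * x - y) < 1e-12       # t(b_hat(y), y) = 0

formula = -(-1.0) / (-2.0 * x)            # -D2 t / D1 t evaluated at (b_hat(y), y)

h = 1e-6                                  # central finite difference in y
numeric = (b_hat(y + h) - b_hat(y - h)) / (2 * h)
assert abs(numeric - formula) < 1e-6
```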
Lemma 4.3
Suppose Assumptions DIFF, LFH and DIR hold. Then \(\hat{s}: W^{\star } \rightarrow {\mathbb {R}}\) and every \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) are strictly decreasing where positive. \(\diamond \)
Proof
By Lemma 4.1(2), RA holds; so with Lemma 3.1 RA0 also holds. By Lemma 3.14, the proof is complete if we show that every \(\hat{s}_i: W_i^+ \rightarrow {\mathbb {R}}\) is strictly decreasing. Well, by Lemma 3.13, this holds if i is of type \(I^+\). As each i is, by Lemma 3.2(1), of type \(I^+\) or \(I^-\), the proof is complete if strict decreasingness holds for i of type \(I^-\). So suppose i is of type \(I^-\). If \(W_i^+ = \emptyset \), we are done. Suppose \(W_i^+ \ne \emptyset \). Proposition 4.1(2) together with RA implies \(\hat{s}'_i(y) < 0 \; (y \in W_i^{++})\). By Lemma 3.11(3), \(W_i^+ = W_i^{++}\). Thus, \(\hat{s}_i: W_i^{+} \rightarrow {\mathbb {R}}\) is strictly decreasing. \(\square \)
4.4 Semi-uniqueness, Existence and Uniqueness
The following theorems provide variants of Theorems 3.3 and 3.4. In this connection, note that, by Lemma 4.1(2), DIFF together with DIR implies RA. In fact, this renders the additional assumptions concerning type \(I^+\), type \(II^-\) and EC in Theorem 3.3 superfluous.
Theorem 4.1
Suppose Assumptions DIFF, LFH and DIR hold. Then AVI has at most one nonzero solution. \(\diamond \)
Proof
By Lemma 4.3, \(\hat{s}\) is strictly decreasing on the subset of its domain where it is positive. Theorem 3.2(3) now implies the desired result. \(\square \)
Theorem 4.2
Suppose Assumptions DIFF, LFH, DIR’ and RA0 hold and at least one \(i \in N\) is of type \(I^+\). Then either of the following two assumptions is sufficient for AVI to have a nonzero solution and for the solution set of AVI to be a non-empty compact subset of \({\mathbb {R}}^n\).
-
(a.)
Assumption EC holds.
-
(b.)
Every \(i \in N\) is of type \(II^-\).
If in addition to (a) or (b) Assumption DIR holds, then AVI has a unique nonzero solution. \(\diamond \)
Proof
First statement (about existence): by Lemma 4.1(1), RA1 holds and so by Lemma 3.1 also RA0 holds. So the first statement in Theorem 4.1 applies and implies the desired result.
Second statement (about uniqueness): by Lemma 4.1(2), RA holds. So the second statement in Theorem 4.1 applies and implies the desired result. \(\square \)
In addition to the previous theorem, which presupposes that at least one \(i \in N\) is of type \(I^+\), the next theorem provides a result that can handle situations where every \(i \in N\) is of type \(I^-\). Remember the definition of \(\tilde{N}\) in (11).
Theorem 4.3
Suppose Assumptions DIFF, LFH, DIR and EC hold and every \(i \in N\) is of type \(I^-\). Then
-
1.
For every \(i \in \tilde{N}\) and sufficiently small \(y > 0\), there exists a unique \(\xi _i(y) \in {] {0},{y} \, [ }\) with \(t_i(\xi _i(y),y) = 0\).
-
2.
For every \(i \in \tilde{N}\) the limit \(\overline{s}_i {{\; \mathrel {\mathop {:}}= \;}}\lim _{y \downarrow 0} \frac{\xi _i(y)}{y}\) exists and \(\overline{s}_i \in {] {0},{1} ] }\).
-
3.
\(\sum _{i \in \tilde{N} } \overline{s}_i > 1 \; \Leftrightarrow \; \textrm{AVI}\) has a unique nonzero solution. \(\diamond \)
Proof
Note that by Lemma 4.1(2), RA holds. Now by Lemma 3.1, RA0 also holds.
1. Suppose \(i \in \tilde{N}\), so \(t_i(0,\tilde{y}_i) > 0\) for some \(\tilde{y}_i > 0\). By RA0, \(t_i(0,y) > 0 \; (0 < y \le \tilde{y}_i)\). So, by Lemma 3.11(1), \({] {0},{\tilde{y}_i} ] } \subseteq W_i^{++}\). Thus, for every \(y \in {] {0},{\tilde{y}_i} ] }\), \(\xi _i(y) = \hat{b}_i(y)\) is as desired.
2. By the proof of part 1, we have \(\hat{b}_i(y) > 0 \; (0 < y \le \tilde{y}_i)\). Lemma 4.3 guarantees that \(\hat{s}_i\) is strictly decreasing on \({] {0},{\tilde{y}_i} \, [ }\). As \(\hat{s}_i \le 1\), the limit \(\overline{s}_i\) exists and \(0 < \overline{s}_i \le 1\).
3. For \(i \in N \setminus \tilde{N}\), we have \(t_i(0,y) \le 0 \; (y > 0)\) and thus \(\hat{b}_i(y) =0 \; (y > 0)\). Therefore \(\hat{s}_i(y) = 0 \; (y > 0)\). Also we already know (by the proof of part 1) that, in parts 1 and 2, \(\xi _i(y) = \hat{b}_i(y)\).
‘\(\Rightarrow \)’: suppose \(\sum _{i \in \tilde{N} }\overline{s}_i > 1\); so \(\tilde{N} \ne \emptyset \). By Theorem 4.1 we still have to prove that \(\textrm{AVI}\) has a nonzero solution. As RA1 holds, Lemma 3.12(2) guarantees that \(W^{\star } = {\mathbb {R}}_{++}\). Consider \(\hat{s}: {\mathbb {R}}_{++} \rightarrow \mathbb {R}\). By part 2, we obtain \(\lim _{y \downarrow 0} \hat{s}(y) = ( \sum _{ i \in \tilde{N}} + \sum _{i \in N {\setminus } \tilde{N}} ) \lim _{y \downarrow 0} \hat{s}_i(y) = \sum _{i \in \tilde{N} } \overline{s}_i > 1\). By virtue of EC, we can fix \(\overline{y}\) with \(\overline{y} > \sum _{k \in N} \overline{x}_k\). So \(\overline{y} \in W^{\star }\). By Lemma 3.7, \(\hat{b}_i(\overline{y}) \le \overline{x}_i \; (i \in N)\). It follows that \(\hat{b}(\overline{y}) = \sum _{k \in N} \hat{b}_k(\overline{y}) \le \overline{y}\) and therefore \(\hat{s}(\overline{y}) \le 1\). Proposition 3.5 implies that \(\hat{s}\) is continuous. By the intermediate value theorem, there exists \(y_{\star }\in W^{\star }\) with \(\hat{s}(y_{\star }) = 1\). Theorem 3.2(3) implies \(\hat{\textbf{b}}(y_{\star }) \in \textrm{AVI}^{\bullet } {\setminus } \{ \textbf{0} \}\).
‘\(\Leftarrow \)’: suppose \(\textrm{AVI}\) has a unique nonzero solution \(\textbf{e}\). By Theorem 3.2(2), \(\hat{s}(e_N) = 1\). By Lemma 3.15, \(\hat{s} > 0 \) on \({] {0},{e_N} ] }\). By Lemma 4.3, \(\hat{s}\) is strictly decreasing on \({] {0},{e_N} ] }\). So \(\sum _{i\in \tilde{N} } \overline{s}_i = \sum _{i \in \tilde{N}} \lim _{y \downarrow 0} \hat{s}_i(y) = \lim _{y \downarrow 0} \sum _{i \in \tilde{N} } \hat{s}_i(y) = \lim _{y \downarrow 0} \sum _{i \in N} \hat{s}_i(y) = \lim _{y \downarrow 0} \hat{s}(y) > \hat{s}(e_N) = 1\). \(\square \)
The existence of the limit in Theorem 4.3(2) is fundamental: it guarantees that in various cases this limit can be computed explicitly, as we shall illustrate in Sect. 5.4. Part 3 then gives a necessary and sufficient condition for \(\textrm{AVI}\) to have a unique nonzero solution while \(\textbf{0}\) is not a solution.
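The Selten–Szidarovszky technique behind these results also suggests a concrete computational scheme: obtain the functions \(\hat{b}_i\) from the zeros of the \(t_i\) and then solve the one-dimensional fixed point problem \(\hat{s}(y) = 1\) by bisection. The following minimal Python sketch illustrates this; the linear specification \(t_i(x,y) = a - y - x - c_i\) (a textbook linear Cournot oligopoly with price intercept a and marginal costs \(c_i\)) is our own illustrative assumption, not one of the article's examples.

```python
# Selten-Szidarovszky computation sketch: solve the 1-dimensional
# fixed point problem s_hat(y) = sum_i b_hat_i(y) / y = 1 by bisection.
# Hypothetical linear marginal reductions t_i(x, y) = a - y - x - c_i
# (a and the c_i are assumed data, not taken from the article).

a = 10.0
costs = [1.0, 1.0, 1.0]  # marginal costs c_i for n = 3 players

def b_hat(i, y):
    # b_hat_i(y): the zero of t_i(., y), clamped at 0; here t_i is
    # linear in x, so the zero is available in closed form.
    return max(0.0, a - y - costs[i])

def excess(y):
    # b_hat(y) - y; a zero y_star of this map gives s_hat(y_star) = 1,
    # hence the nonzero AVI solution (b_hat_1(y_star), ..., b_hat_n(y_star)).
    return sum(b_hat(i, y) for i in range(len(costs))) - y

def solve_aggregate(lo=1e-9, hi=100.0, tol=1e-12):
    # Bisection: excess is continuous, positive near 0, negative for large y.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y_star = solve_aggregate()
x_star = [b_hat(i, y_star) for i in range(len(costs))]
```

For three identical players with \(a = 10\) and \(c_i = 1\) this reproduces the classical aggregate \(y^{\star } = n(a-c)/(n+1) = 6.75\) with individual solutions \(x_i^{\star } = 2.25\).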
4.5 Sufficient and Necessary Conditions
For Cournot oligopolies there are powerful results dealing with sufficient and necessary conditions for equilibrium uniqueness. Concerning this, [7] is a milestone. It concerns a variant of a result in [14]. Contrary to the latter result, it considers the whole equilibrium set and in particular does not exclude degenerate equilibria.Footnote 19 The proof in [7] also is much more elementary than the proof in [14], which deals with Cournot equilibria as solutions of a complementarity problem to which differential topological fixed point index theory is applied. This simpler proof was made possible by using ideas from the Selten–Szidarovszky technique. A shortcoming of the result in [7] is that a strong variant of a Fisher–Hahn condition (see footnote 13) has to hold.Footnote 20 Another is that the price function is not allowed to be everywhere positive (an assumption that is often used). In [29] a generalisation of the result in [7] was provided that resolves these shortcomings; in addition, it can deal with sum-aggregative games. Below we go a step further by generalising so that the results apply to aggregative variational inequalities. In addition we improve them intrinsically (by using the \(\hat{s}_i\) besides the \(\hat{b}_i\)). However, we only do this for the case where every i is of type \(I^+\) and \(t_i(0,y)> 0 \, (y > 0)\).Footnote 21
Theorem 4.4
Suppose Assumptions DIFF, LFH, DIR’, RA0 and EC hold, \(N_> \ne \emptyset \) and for every \(i \in N\): i is of type \(I^+\) and \(t_i(0,y)> 0 \; (y > 0)\). Then
-
1.
For every \(\textbf{x}^{\star } \in \textrm{AVI}^{\bullet }\), it holds that \(x_i^{\star } > 0 \; (i \in N)\) and \( D_1 t_i(x^{\star }_i, x^{\star }_N) < 0\).
-
2.
\(\textrm{AVI}^{\bullet }\) is a non-empty compact subset of \({\mathbb {R}}^n_+\) that contains a nonzero element.
-
3.
\(- \sum _{i \in N} \frac{ x_i^{\star } D_1 t_i(x^{\star }_i, x^{\star }_N) + x^{\star }_N D_2 t_i(x^{\star }_i, x^{\star }_N)}{ D_1 t_i( x^{\star }_i, x^{\star }_N ) } < 0 \; (\textbf{x}^{\star } \in \textrm{AVI}^{\bullet }) \;\; \Rightarrow \;\; \# \textrm{AVI}^{\bullet } = 1\).
-
4.
\( \textrm{AVI}^{\bullet } = \{ \textbf{x}^{\star } \} \; \Rightarrow \; - \sum _{i \in N} \frac{ x_i^{\star } D_1 t_i(x^{\star }_i, x^{\star }_N) + x^{\star }_N D_2 t_i(x^{\star }_i, x^{\star }_N)}{ D_1 t_i( x^{\star }_i, x^{\star }_N ) } \le 0\). \(\diamond \)
Proof
The assumptions imply (by Lemma 4.1(1)) that RA1 holds; so Lemma 3.10 applies. By the latter lemma, it holds for every \(i \in N\) that \(W_i^{\star } = {[ {\underline{x}_i},{+ \infty } \,[}\) and \(W_i^{++} = {] {\underline{x}_i},{+ \infty } \, [ }\). So, with \(\underline{x} = \max _i \underline{x}_i\), the domain of \(\hat{s}\) is \( W^{\star } = {[ {\underline{x}},{+\infty } \,[}\). Note that \( {] {\underline{x}},{+\infty } \, [ } \subseteq W_i^{++} \; (i \in N)\). Proposition 4.1(2) implies that \(\hat{s}\) is differentiable at every \(y \in {] {\underline{x}},{+\infty } \, [ }\) with \( \hat{s}'(y) = - \frac{1}{y^2} \sum _{i \in N} \frac{ \hat{b}_i(y) D_1 t_i( \hat{b}_i(y), y) + y D_2 t_i( \hat{b}_i(y), y)}{ D_1 t_i( \hat{b}_i(y), y) }. \)
1. Suppose \(\textbf{x}^{\star } \in \textrm{AVI}^{\bullet }\) and let \(i \in N\). As \(N_> \ne \emptyset \), we have by Proposition 3.1(1) that \( x_N^{\star } \ne 0\). By Theorem 3.2(4), \(x^{\star }_i = \hat{b}_i(x_N^{\star })\). If \(x^{\star }_i = 0\), then \(t_i(0,x_N^{\star }) = t_i( \hat{b}_i(x_N^{\star }), x_N^{\star }) \le 0\) which thus is impossible. So we have \(x^{\star }_i > 0\) and therefore \(t_i(x^{\star }_i, x_N^{\star }) = t_i ( \hat{b}_i(x_N^{\star }), x_N^{\star } ) = 0 \). Now LFH[i] implies \( D_1 t_i(x^{\star }_i, x^{\star }_N) < 0\).
2. By Theorem 5.1.
3. Suppose \(- \sum _{i \in N} \frac{ x_i^{\star } D_1 t_i(x^{\star }_i, x^{\star }_N) + x^{\star }_N D_2 t_i(x^{\star }_i, x^{\star }_N)}{ D_1 t_i( x^{\star }_i, x^{\star }_N ) } < 0 \; (\textbf{x}^{\star } \in \textrm{AVI}^{\bullet })\). By part 1, it is sufficient to prove that \(\# \textrm{AVI}^{\bullet } = 1\). By Theorem 3.2(3) and part 1, \( \textrm{AVI}^{\bullet } = \{\hat{\textbf{b}}(y) \; | \; \underline{x} \le y < +\infty \text{ with } \hat{s}(y) = 1 \}\). We prove that there exists at most one \(y \in {[ {\underline{x}},{+\infty } \,[}\) with \(\hat{s}(y) = 1\). As \(\hat{s}\) is (by Proposition 3.5) continuous, this in turn can be done by showing that \(\hat{s} - 1\) has the AMSCFA-property. For this in turn it is sufficient that \( \hat{s}'( y) < 0\) for every \(y \in {[ {\underline{x}},{+\infty } \,[}\) with \(\hat{s}(y) = 1\). So suppose \(y \in {[ {\underline{x}},{+\infty } \,[}\) with \(\hat{s}(y) = 1\). Let \(\textbf{x}^{\star } = \hat{\textbf{b}}(y)\). By Theorem 3.2(1) and part 1, \(\textbf{x}^{\star }\) is a nonzero solution of AVI. This implies \(x^{\star }_N = \sum _i \hat{b}_i(y) = \hat{b}(y) = y\). As \(x^{\star }_i = \hat{b}_i(y) = \hat{b}_i(x^{\star }_N)\), we obtain \(- \sum _{i \in N} \frac{ \hat{b}_i(y) D_1 t_i( \hat{b}_i(y), y) + x^{\star }_N D_2 t_i( \hat{b}_i(y), y)}{ y^2 D_1 t_i( \hat{b}_i(y), y) } < 0\). By Lemma 3.16 we have \(y > \underline{x}\). Now Proposition 4.1(3) implies \({\hat{s}}'( y ) < 0\).
4. Suppose \( \textrm{AVI}^{\bullet } = \{ \textbf{x}^{\star } \}\). By part 1, \(\textbf{x}^{\star } \ne \textbf{0}\). By Theorem 3.2(2), \(\textrm{fix}(\hat{b}) = \{ x^{\star }_N \}\). This implies that \(\hat{s}- 1\) has \( x^{\star }_N\) as unique zero. By Lemma 3.16, \(x_N^{\star } > \underline{x}\). So \(\hat{s}\) is differentiable at \(x_N^{\star }\). We now prove by contradiction that \(\hat{s}'(x_N^{\star }) \le 0\). Well, suppose \(\hat{s}'(x_N^{\star }) > 0\). Let \(g {{\; \mathrel {\mathop {:}}= \;}}\hat{s} - 1\). So \(g(x_N^{\star }) = 0\) and \(g'(x_N^{\star }) = \hat{s}'(x_N^{\star }) > 0\); this implies that there exists \(x' \in {] { \underline{x} },{ x_N^{\star } } \, [ }\) with \(g(x') < 0\). Also, by Lemma 3.16, \(g(\underline{x}) = \hat{s}(\underline{x}) - 1 > 0\). As g is continuous, g has a zero in \({] {\underline{x}},{x_N^{\star }} \, [ }\), which is a contradiction. As by Theorem 3.2(4), \(x^{\star }_i= \hat{b}_i(x_N^{\star }) \; (i \in N)\), we obtain by Proposition 4.1(2), \( - \sum _{i \in N} \frac{ x_i^{\star } D_1 t_i(x^{\star }_i, x^{\star }_N) + x^{\star }_N D_2 t_i(x^{\star }_i, x^{\star }_N)}{ D_1 t_i( x^{\star }_i, x^{\star }_N ) } = {( x^{\star }_N )}^2 \hat{s}'(x_N^{\star }) \le 0\). \(\square \)
5 Variational Inequalities and Nash Equilibria
5.1 Setting
Consider a game in strategic form with player set \(N {{\; \mathrel {\mathop {:}}= \;}}\{ 1, \ldots , n \}\), for player \(i \in N\) a strategy set \(X_i\) and payoff function \(f_i\). So every \(X_i\) is a non-empty set and every \(f_i\) a function \(X_1 \times \cdots \times X_n \rightarrow \mathbb {R}\). We denote the set of strategy profiles \( X_1 \times \cdots \times X_n\) also by \(\textbf{X}\). For \(i \in N\), define \( \textbf{X}_{\hat{\imath }} {{\; \mathrel {\mathop {:}}= \;}}X_1 \times \cdots \times X_{i-1} \times X_{i+1} \times \cdots \times X_n\). Further assume \(n \ge 2\). We denote such a game by \(\varGamma \).
Given \(i \in N\), we sometimes identify \(\textbf{X}\) with \(X_i \times \textbf{X}_{\hat{\imath }}\) and then write \(\textbf{x} \in \textbf{X}\) as \(\textbf{x} = ( x_i; \textbf{x}_{\hat{\imath }} )\). For \(i \in N\) and \(\textbf{z} \in \textbf{X}_{\hat{\imath }}\), the conditional payoff function \(f_i^{(\textbf{z})}: X_i \rightarrow \mathbb {R}\) is defined by \( f_i^{(\textbf{z})}(x_i) {{\; \mathrel {\mathop {:}}= \;}}f_i(x_i;\textbf{z})\) and the best-reply correspondence \(R_i: \textbf{X}_{\hat{\imath }} \multimap X_i\) is defined by \(R_i(\textbf{z}) {{\; \mathrel {\mathop {:}}= \;}}\textrm{argmax}_{x_i \in X_i}\; f_i^{(\textbf{z})}(x_i)\).
Remember: a strategy profile \(\textbf{x} \in \textbf{X}\) is called a Nash equilibrium if \(x_i \in R_i(\textbf{x}_{\hat{\imath }}) \; (i \in N)\).
5.2 Associated Variational Inequality
First suppose that each strategy set \(X_i\) of \(\varGamma \) is a proper real interval and that each payoff function \(f_i\) is partially differentiable with respect to its i-th variable. Now for \(\textbf{x} = (x_i; \textbf{z}) \in \textbf{X}\) one has \( D_i f_i(\textbf{x}) = {( f_i^{(\textbf{z})} )}'(x_i). \quad (13)\)
Definition 5.1
Consider \(\varGamma \). The associated variational inequality VI\([\varGamma ]\) is the variational inequality \(\textrm{VI}(\textbf{X},\textbf{F})\), i.e. \( \textbf{F}(\textbf{x}^{\star } ) \cdot (\textbf{x} - \textbf{x}^{\star }) \ge 0 \; (\textbf{x} \in \textbf{X}),\)
where \(\textbf{F} = (F_1,\ldots ,F_n): \textbf{X} \rightarrow {\mathbb {R}}^n\) is given by \(F_i(\textbf{x}) = - D_i f_i(\textbf{x})\). \(\diamond \)
Proposition 5.1
Consider the associated variational inequality VI\([\varGamma ]\).
-
1.
Suppose \(\textbf{e}\) is a Nash equilibrium. Then \(\textbf{e}\) is a solution of VI\([\varGamma ]\).
-
2.
Suppose \(\textbf{e}\) is a solution of VI\([\varGamma ]\) and \(i \in N\). If the conditional payoff function \(f_i^{( \textbf{e}_{\hat{\imath }} )}\) is pseudo-concave,Footnote 22 then \(e_i \in R_i( \textbf{e}_{\hat{\imath }} )\).
-
3.
Suppose \(\textbf{e}\) is a solution of VI\([\varGamma ]\). If every \(f_i^{(\textbf{e}_{\hat{\imath }})}\) is pseudo-concave, then \(\textbf{e}\) is a Nash equilibrium.
-
4.
Suppose each conditional payoff function is pseudo-concave and let \(\textbf{e} \in \textbf{X}\). Then: \(\textbf{e}\) is a Nash equilibrium if and only if \(\textbf{e}\) is a solution of VI\([\varGamma ]\). \( \diamond \)
Proof
1. As \(\textbf{e}\) is a Nash equilibrium, we have for every i that \(e_i\) is a maximiser of the conditional payoff function \(f_i^{(\textbf{e}_{\hat{\imath }})}: X_i \rightarrow {\mathbb {R}}\). Its differentiability at \(e_i\) together with Fermat’s theorem implies \({( f_i^{(\textbf{e}_{\hat{\imath }})} )}'(e_i) (x_i - e_i ) \le 0 \; (x_i \in X_i)\), i.e. that \(D_i f_i(\textbf{e}) \cdot (x_i - e_i ) \le 0 \; (x_i \in X_i)\). As \( D_i f_i(\textbf{e}) = - F_i(\textbf{e})\), this becomes \(F_i(\textbf{e}) \cdot (x_i - e_i ) \ge 0 \; (x_i \in X_i)\). Summing over \(i \in N\) gives \(\textbf{F}(\textbf{e}) \cdot (\textbf{x} - \textbf{e}) \ge 0 \; (\textbf{x} \in \textbf{X})\).
2. We prove that \(e_i\) is a maximiser of \(f_i^{(\textbf{e}_{\hat{\imath }})}\). We have \(\textbf{F}(\textbf{e}) \cdot (\textbf{x} - \textbf{e}) \ge 0 \; (\textbf{x} \in \textbf{X})\). By taking an \(\textbf{x} \in \textbf{X}\) with \(x_j = e_j\) if \(j \ne i\), we see that \(F_i(\textbf{e}) \cdot (x_i - e_i ) \ge 0 \; (x_i \in X_i)\), i.e. that \(D_i f_i(\textbf{e}) \cdot (x_i - e_i ) \le 0 \; (x_i \in X_i)\). Thus, \( {( f_i^{(\textbf{e}_{\hat{\imath }})} )}'(e_i) \cdot (x_i - e_i ) \le 0 \; (x_i \in X_i)\). As \(f_i^{ (\textbf{e}_{\hat{\imath }} )}\) is pseudo-concave, it follows that \(e_i\) is a maximiser of this function.
3. By part 2. 4. By parts 1 and 3. \(\square \)
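The equivalence in Proposition 5.1 can be checked numerically for concrete games. The sketch below uses a hypothetical two-player game with payoffs \(f_i(\textbf{x}) = x_i(10 - x_1 - x_2) - x_i\) (our own illustrative choice; the conditional payoff functions here are concave, hence pseudo-concave) and verifies on a grid that the profile \(\textbf{e} = (3,3)\) satisfies both the variational inequality and the Nash property.

```python
# Numerical check of Proposition 5.1 for a hypothetical 2-player game
# f_i(x) = x_i * (10 - x_1 - x_2) - x_i on X = R_+^2 (our own example).
# Candidate profile e = (3, 3); F_i(x) = -D_i f_i(x).

def payoff(i, x):
    return x[i] * (10.0 - x[0] - x[1]) - x[i]

def F(x):
    # F_i(x) = -D_i f_i(x) = -(10 - 2 x_i - x_j - 1)
    return [-(10.0 - 2.0 * x[i] - x[1 - i] - 1.0) for i in range(2)]

e = [3.0, 3.0]
Fe = F(e)

# VI condition on a grid of deviations: F(e) . (x - e) >= 0.
grid = [k * 0.5 for k in range(0, 21)]          # x_i in [0, 10]
vi_ok = all(Fe[0] * (x1 - e[0]) + Fe[1] * (x2 - e[1]) >= -1e-12
            for x1 in grid for x2 in grid)

# Nash condition: e_i maximises the conditional payoff f_i against e_j.
nash_ok = all(payoff(i, [x, e[1]] if i == 0 else [e[0], x])
              <= payoff(i, e) + 1e-12
              for i in range(2) for x in grid)
```

Here \(\textbf{F}(\textbf{e}) = \textbf{0}\), so \(\textbf{e}\) is an interior VI solution, and the grid confirms that neither player can profitably deviate.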
Next let us consider a more subtle situation dealing with games that we simply refer to as ‘almost smooth’. This type of game allows for a possible discontinuity at the origin, which is useful for various specific games, such as the one in Sect. 5.4.
Definition 5.2
\(\varGamma \) is called almost smooth if for every \(i \in N\):
-
a.
\(X_i = {\mathbb {R}}_+\);
-
b.
\(f_i\) is partially differentiable with respect to its i-th variable at every \(\textbf{x} \ne \textbf{0}\);
-
c.
the partial derivative \(D_i f_i(\textbf{0})\) exists as an element of \( {\mathbb {R}}\cup \{ + \infty \}\). \(\diamond \)
Note that for an almost smooth \(\varGamma \) and every \(i \in N\): each conditional payoff function \(f_i^{(\textbf{z})}\) with \(\textbf{z} \ne \textbf{0}\) is differentiable, the conditional payoff function \(f_i^{(\textbf{0})}\) is differentiable on \({\mathbb {R}}_{++}\) and its derivative at 0 exists as an element of \({\mathbb {R}}\cup \{ + \infty \}\). Also note that the payoff functions \(f_i\) are not assumed to be continuous. Finally note that for \(\textbf{x} = (x_i; \textbf{z}) \in \textbf{X}\), formula (13) holds.
Definition 5.3
Consider an almost smooth game \(\varGamma \). The associated variational inequality VI’\([\varGamma ]\) is the variational inequality \(\textrm{VI}(\textbf{X},\textbf{F})\), i.e. \( F(\textbf{x}^{\star } ) \cdot (\textbf{x} - \textbf{x}^{\star }) \ge 0 \; (\textbf{x} \in \textbf{X})\) where \(\textbf{F} = (F_1,\ldots ,F_n): \textbf{X} \rightarrow {\mathbb {R}}^n\) is given by \( F_i(\textbf{x}) = {\left\{ \begin{array}{l} - D_i f_i (\textbf{x}) \quad \text{ if } \textbf{x} \ne \textbf{0}, \\ - D_i f_i (\textbf{0}) \quad \text{ if } \textbf{x} = \textbf{0} \text{ and } D_i f_i (\textbf{0}) \ne + \infty , \\ -137 \quad \text{ if } \textbf{x} = \textbf{0} \text{ and } D_i f_i (\textbf{0}) = + \infty . \;\; \diamond \end{array} \right. } \)
Note that for an almost smooth \(\varGamma \) where \(D_i f_i(\textbf{0}) \ne + \infty \; (i \in N)\), the associated variational inequality VI’\([\varGamma ]\) is the same as the associated variational inequality VI\([\varGamma ]\) and then Proposition 5.1 holds. For VI’\([\varGamma ]\) the following variant of Proposition 5.1 holds.
Proposition 5.2
Suppose \(\varGamma \) is almost smooth.
-
1.
Suppose \(\textbf{e}\) is a nonzero Nash equilibrium. Then \(\textbf{e}\) is a solution of VI’\([\varGamma ]\).
-
2.
Suppose \(\textbf{e}\) is a nonzero solution of VI’\([\varGamma ]\) and \(i \in N\). If the conditional payoff function \(f_i^{(\textbf{e}_{\hat{\imath }})}\) is pseudo-concave, then \(e_i \in R_i(\textbf{e}_{\hat{\imath }})\).
-
3.
Suppose \(\textbf{e}\) is a nonzero solution of VI’\([\varGamma ]\). If every \(f_i^{(\textbf{e}_{\hat{\imath }})}\) is pseudo-concave, then \(\textbf{e}\) is a Nash equilibrium.
-
4.
Suppose each conditional payoff function is pseudo-concave and let \(\textbf{e}\) be a nonzero strategy profile. Then: \(\textbf{e}\) is a Nash equilibrium if and only if \(\textbf{e}\) is a solution of VI’\([\varGamma ]\). \( \diamond \)
Proof
The proof is the same as that of Proposition 5.1, noting that \( D_i f_i(\textbf{e}) = - F_i(\textbf{e})\) as \(\textbf{e} \ne \textbf{0}\). \(\square \)
Verifying pseudo-concavity of the conditional payoff functions in applications may not be so easy. For a broad class of sum-aggregative games we shall derive practical results (i.e. Proposition 5.5) in terms of marginal reductions (see Definition 5.5) guaranteeing pseudo-concavity.
5.3 Sum-aggregative Games
Definition 5.4
Consider a game \(\varGamma \) in the case when each strategy set is a subset of \({\mathbb {R}}\). For \(i \in N\), let \(Z_i {{\; \mathrel {\mathop {:}}= \;}}\sum _{j \ne i} X_j\) (Minkowski sum). \(\varGamma \) is sum-aggregative if there exist functions \( \tilde{f}_i^{(z)}: X_i \rightarrow {\mathbb {R}}\; (z \in Z_i)\), referred to as reduced conditional payoff functions, such that \( f_i^{( \textbf{z} )} = \tilde{f}^{( \sum _j z_j )}_i\). \(\diamond \)
Note that reduced conditional payoff functions, if well-defined, are uniquely determined.
Remember the definition of \(\varDelta \) and \(\varDelta ^+\) in (6).
Definition 5.5
Suppose \(\varGamma \) is almost smooth and sum-aggregative. The marginal reductions of \(\varGamma \) are defined as the functions \(t_i: \varDelta \rightarrow {\mathbb {R}}\; (i \in N)\) given by \( t_i(x_i,y) {{\; \mathrel {\mathop {:}}= \;}}{\left\{ \begin{array}{l} {( \tilde{f}_i^{(y-x_i)} )}'(x_i) \quad \text{ if } {( \tilde{f}_i^{(y-x_i)} )}'(x_i) \ne + \infty , \\ 137 \quad \text{ if } {( \tilde{f}_i^{(y-x_i)} )}'(x_i) = + \infty . \end{array} \right. } \; \diamond \)
Proposition 5.3
Consider an almost smooth sum-aggregative game \(\varGamma \) together with its marginal reductions \(t_i\).
-
1.
\(t_i(x_i,x_N) = {\left\{ \begin{array}{l} D_i f_i(\textbf{x}) \quad \text{ if } \textbf{x} \ne \textbf{0}, \\ D_i f_i(\textbf{x}) \quad \text{ if } \textbf{x} =\textbf{0} \text{ and } D_i f_i(\textbf{x}) \ne + \infty , \\ 137 \quad \text{ if } \textbf{x} =\textbf{0} \text{ and } D_i f_i(\textbf{x}) = + \infty . \end{array} \right. }\)
-
2.
For all \((x_i;\textbf{z}) \in \textbf{X}\) with \(x_i + \sum _j z_j \ne 0\): \( {( f_i^{(\textbf{z})} )}'(x_i) = t_i(x_i,x_i +\sum _j z_j)\).
-
3.
If \( {( f_i^{(\textbf{0})} )}'(0) \ne + \infty \), then \( {( f_i^{(\textbf{0})} )}'(0) = t_i(0,0)\). \(\diamond \)
Proof
1. For \(\textbf{x} \ne \textbf{0}\), by Definition 5.5 and (13), \(t_i(x_i,x_N) = {( \tilde{f}_i^{(x_N -x_i)})}'(x_i) = {( f_i^{ (\textbf{x}_{\hat{\imath }}) } )}'(x_i) = D_i f_i(\textbf{x})\). Now suppose \(\textbf{x} = \textbf{0}\) and \(D_i f_i(\textbf{x}) \ne + \infty \). Then, as \(+ \infty \ne D_i f_i(\textbf{x}) = {( f_i^{ (\textbf{0})} )}'(0) = {( \tilde{f}_i^{(0)} )}'(0)\), we obtain \(t_i(x_i,x_N) = t_i(0,0) = {( \tilde{f}_i^{(0)} )}'(0) = D_i f_i(\textbf{0})\). Finally, suppose \(\textbf{x} = \textbf{0}\) and \(D_i f_i(\textbf{x}) = + \infty \). Then, as \(+ \infty = D_i f_i(\textbf{x}) = {( f_i^{ (\textbf{0})} )}'(0) = {( \tilde{f}_i^{(0)} )}'(0)\), we obtain \(t_i(x_i,x_N) = t_i(0,0) = 137\).
2. By part 1, \( t_i(x_i,x_i +\sum _j z_j) = D_i f_i(x_i;\textbf{z}) = {( f_i^{(\textbf{z})} )}'(x_i)\).
3. Suppose \( {( f_i^{(\textbf{0})} )}'(0) \ne + \infty \). By Definition 5.5, \(t_i(0,0) = {( \tilde{f}_i^{(0)} )}'(0) = {( f_i^{(\textbf{0})} )}'(0) \). \(\square \)
Lemma 5.1
Consider an almost smooth sum-aggregative game \(\varGamma \) together with a marginal reduction \(t_i\). Suppose \(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuous and continuously partially differentiable.
-
1.
Each conditional payoff function \(f_i^{(\textbf{z})} \; (\textbf{z} \ne \textbf{0})\) is twice differentiable, \(f_i^{(\textbf{0})}\) is twice differentiable on \({\mathbb {R}}_{++}\) and the formula \({( f_i^{(\textbf{z})} )}''(x_i) = (D_1+D_2) t_i(x_i,x_i+\sum _l z_l)\) holds.
-
2.
Sufficient for the conditional payoff functions \(f_i^{(\textbf{z})} \; (\textbf{z} \ne \textbf{0})\) to be strictly pseudo-concave is that for every \(0 \le x_i \le y\), \( t_i(x_i,y) =0 \; \Rightarrow \; (D_1+D_2) t_i(x_i,y) < 0\).
-
3.
Sufficient for the conditional payoff function \( f_i^{(\textbf{0})}\) to be strictly pseudo-concave on \({\mathbb {R}}_{++}\) is that for every \(x_i> 0\), \( t_i(x_i,x_i) =0 \; \Rightarrow \; (D_1+D_2) t_i(x_i,x_i) < 0\). \(\diamond \)
Proof
1. First and third statement: let \(a {{\; \mathrel {\mathop {:}}= \;}}\sum _l z_l\). By Proposition 5.3(2), \( {( f_i^{(\textbf{z})} )}'(x_i) = t_i(x_i,x_i + a)\). Thus, \( {( f_i^{(\textbf{z})} )}''(x_i)\) is nothing else than the derivative of the function \({\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) defined by \(\lambda \mapsto t_i(\lambda , \lambda + a)\) at \(\lambda =x_i\). Note that \(a > 0\). As \(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuously partially differentiable, it follows that \(t_i\) is continuously differentiable on \({\textrm{Int}(\varDelta ^+)}\). If \(x_i \ne 0\), then \((x_i,x_i+a) \in {\textrm{Int}(\varDelta ^+)}\) and therefore the chain rule can be applied, implying \( {( f_i^{(\textbf{z})} )}''(x_i) = (D_1+D_2) t_i(x_i,x_i+a)\). If \(x_i=0\), then \( {( f_i^{(\textbf{z})} )}''(0) = \lim _{h \downarrow 0} \frac{t_i(h,h+a) - t_i(0,a) }{h}\). Applying Lemma B.2 in “Appendix B” gives \( {( f_i^{(\textbf{z})} )}''(0) = (D_1+D_2) t_i(0,a)\).
Second and third statement: by Proposition 5.3(2), \( {( f_i^{(\textbf{0})} )}'(x_i) = t_i(x_i,x_i)\). Applying Lemma B.1 in “Appendix B” gives \( {( f_i^{(\textbf{0})} )}''(x_i) = (D_1+D_2) t_i(x_i,x_i)\).
2. In order to prove the strict pseudo-concavity of \(f_i^{(\textbf{z})}\), we show (having part 1 and footnote 22) that for every \(x_i \ge 0\) the implication \( {( f_i^{(\textbf{z})} )}'(x_i) = 0 \; \Rightarrow \; {( f_i^{(\textbf{z})} )}''(x_i) < 0\) holds. Well, with part 1 and \(y = x_i+a\), this becomes \( t_i(x_i,y) =0 \; \Rightarrow \; (D_1+D_2) t_i(x_i,y) < 0\).
3. Consider the function \(f_i^{(\textbf{0})}: {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\). In order to prove the strict pseudo-concavity (having part 1 and footnote 22), we show that for every \(x_i > 0\) the implication \( {( f_i^{(\textbf{0})} )}'(x_i) = 0 \; \Rightarrow \; {( f_i^{(\textbf{0})} )}''(x_i) < 0\) holds. Well, with part 1 this becomes \( t_i(x_i,x_i) =0 \; \Rightarrow \; (D_1+D_2) t_i(x_i,x_i) < 0 \). \(\square \)
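The second-derivative formula in Lemma 5.1(1) can be sanity-checked by finite differences. In the sketch below the marginal reduction \(t(x,y) = 1/y - x/y^2 - (1+2x)\), corresponding to a hypothetical cost function \(c(x) = x + x^2\) in a Cournot oligopoly with price function \(p(y) = 1/y\), is our own illustrative assumption.

```python
# Finite-difference check of (f^{(z)})''(x) = (D1 + D2) t(x, x + a)
# for the hypothetical marginal reduction
#   t(x, y) = 1/y - x/y**2 - (1 + 2*x)     (cost c(x) = x + x**2).

def t(x, y):
    return 1.0 / y - x / y**2 - (1.0 + 2.0 * x)

def d1t(x, y):                       # D1 t
    return -1.0 / y**2 - 2.0

def d2t(x, y):                       # D2 t
    return -1.0 / y**2 + 2.0 * x / y**3

a = 0.7            # a = sum of the opponents' strategies (a > 0, assumed)
x = 0.4
h = 1e-6

# (f^{(z)})'(x) = t(x, x + a) by Proposition 5.3(2); differentiate the
# diagonal map lambda -> t(lambda, lambda + a) by a central difference.
fd = (t(x + h, x + h + a) - t(x - h, x - h + a)) / (2.0 * h)
exact = d1t(x, x + a) + d2t(x, x + a)
```

The two values agree to high accuracy, in line with the chain-rule argument in the proof.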
Having the above, now consider for an almost smooth sum-aggregative game \(\varGamma \) its associated variational inequality VI’\([\varGamma ]\) (see Definition 5.3). With the \(t_i\) the marginal reductions of \(\varGamma \), Proposition 5.3 implies \(F_i(\textbf{x}) = - t_i(x_i,x_N)\). Thus, VI’\([\varGamma ]\) is nothing else than the aggregative variational inequality given by (7).
Proposition 5.4
Consider an almost smooth sum-aggregative game \(\varGamma \) together with its marginal reductions \(t_i\). Then \(N_> \ne \emptyset \; \Rightarrow \; \textbf{0}\) is not a Nash equilibrium. \(\diamond \)
Proof
Suppose \(N_> \ne \emptyset \). Fix \(i \in N_>\). By Fermat’s theorem, if \(\textbf{0}\) is an equilibrium, then \( {( f_i^{(\textbf{0} )} )}'(0) \le 0\). So, if \( {( f_i^{(\textbf{0} )} )}'(0) = + \infty \), then \(\textbf{0}\) is not an equilibrium. Next suppose \( {( f_i^{(\textbf{0} )} )}'(0) \ne + \infty \). Then, by Proposition 5.3(3), \( {( f_i^{(\textbf{0} )} )}'(0) = t_i(0,0) > 0\) and thus \(\textbf{0}\) is not an equilibrium. \(\square \)
Proposition 5.5
Consider an almost smooth sum-aggregative game \(\varGamma \) together with its marginal reductions \(t_i: \varDelta \rightarrow {\mathbb {R}}\). Suppose that for every \(i \in N\)
-
(a).
\(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuous and continuously partially differentiable;
-
(b).
for every \((x_i,y)\in \varDelta ^+\): \( t_i(x_i,y)= 0 \; \Rightarrow \; D_1 t_i(x_i,y) < 0\);
-
(c).
for every \((x_i,y) \in \varDelta ^+\) with \(x_i > 0\): \(t_i(x_i,y)=0 \Rightarrow (x_i D_1 + y D_2)t_i(x_i,y)<0\).
Then:
-
1.
The conditional payoff functions \(f_i^{(\textbf{z})} \; (\textbf{z} \ne \textbf{0})\) are strictly pseudo-concave.
-
2.
The conditional payoff function \( f_i^{(\textbf{0})}\) is strictly pseudo-concave on \({\mathbb {R}}_{++}\). \(\diamond \)
Proof
We apply Lemma 5.1. So in part 1 we have to prove that for every \(0 \le x_i \le y\): \( t_i(x_i,y) =0 \; \Rightarrow \; (D_1+D_2) t_i(x_i,y) < 0\). And in part 2 that for every \(x_i> 0\): \( t_i(x_i,x_i) =0 \; \Rightarrow \; (D_1+D_2) t_i(x_i,x_i) < 0\). Well, these implications are guaranteed by Lemma 4.2. \(\square \)
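For intuition, hypotheses (b) and (c) of Proposition 5.5 can be spot-checked numerically along the zeros of a concrete marginal reduction. The sketch below again uses our own hypothetical specification \(t(x,y) = 1/y - x/y^2 - (1+2x)\); at each zero the quantity \(x D_1 t(x,y) + y D_2 t(x,y)\) simplifies to \(-1 - 4x < 0\), which the code confirms.

```python
# Spot-check of hypotheses (b) and (c) of Proposition 5.5 at zeros of the
# hypothetical marginal reduction t(x, y) = 1/y - x/y**2 - (1 + 2*x).

def t(x, y):
    return 1.0 / y - x / y**2 - (1.0 + 2.0 * x)

def d1t(x, y):                       # D1 t
    return -1.0 / y**2 - 2.0

def d2t(x, y):                       # D2 t
    return -1.0 / y**2 + 2.0 * x / y**3

ok_b, ok_c = True, True
for k in range(1, 50):
    y = 0.02 * k                          # sample aggregates y in ]0, 1[
    x = (y - y**2) / (1.0 + 2.0 * y**2)   # closed-form zero of t(., y)
    assert abs(t(x, y)) < 1e-9            # indeed a zero, and x > 0 here
    ok_b = ok_b and d1t(x, y) < 0                      # hypothesis (b)
    ok_c = ok_c and x * d1t(x, y) + y * d2t(x, y) < 0  # hypothesis (c)
```

Of course such sampling is only an illustration; the article's hypotheses must be verified analytically, as is done for the Cournot application in Sect. 5.4.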
Part 2 of the next theorem provides a full result that applies to many concrete sum-aggregative games in the literature.
Theorem 5.1
Consider for the marginal reductions \(t_i: \varDelta \rightarrow {\mathbb {R}}\) of an almost smooth sum-aggregative game \(\varGamma \) the following six assumptions that are supposed to hold for every \(i \in N\).
-
(a).
\(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuous and continuously partially differentiable.
-
(b).
for every \((x_i,y)\in \varDelta ^+\): \( t_i(x_i,y)= 0 \; \Rightarrow \; D_1 t_i(x_i,y) < 0\).
-
(c).
for every \((x_i,y) \in \varDelta ^+\) with \(x_i > 0\): \(t_i(x_i,y)=0 \Rightarrow (x_i D_1 + y D_2)t_i(x_i,y)<0\).
-
(d).
there exists \(\overline{x}_i > 0\) such that \(t_i(x_i,y) < 0\) for every \((x_i,y) \in {\mathbb {R}}^2_+\) with \(\overline{x}_i \le x_i \le y\).
-
(e).
\(t_i(0,y) > 0\) for some \(y > 0\) \(\Rightarrow \) \(t_i(0,0) > 0\).
-
(f).
\(t_i(0,y) < 0\) for \(y > 0\) large enough.
Then:
-
1.
Suppose Assumptions (a), (b) and (c) hold. Then for \(\textbf{e} \ne \textbf{0}\): \(\textbf{e}\) is a Nash equilibrium \(\Leftrightarrow \) \(\textbf{e}\) is a solution of VI’\([\varGamma ]\).
-
2.
Suppose Assumptions (a), (b), (c), (e) hold and for at least one \(i \in N \) that \(t_i(\lambda ,\lambda ) > 0\) for \(\lambda \) small enough. If Assumption (d) or (f) holds, then \(\varGamma \) has a unique nonzero Nash equilibrium. \(\diamond \)
Proof
First note that (a), (b), (c), (d) respectively concern DIFF, LFH, DIR, EC, that (e) says \(\tilde{N} \subseteq N_>\) and that (f) means that i is of type \(II^-\). Also remember DIR \(\Rightarrow \) DIR’, LFH \(\Rightarrow \) LFH’, [DIFF \(\wedge \) DIR] \(\Rightarrow \) RA and [LFH’ \(\wedge \) RA] \(\Rightarrow \) RA0 (see Lemmas 4.1(2) and 3.1).
1. First suppose \(\textbf{e}\) is a Nash equilibrium. By Proposition 5.2(1), \(\textbf{e}\) is a solution of VI’\([\varGamma ]\). Next suppose \(\textbf{e}\) is a solution of VI’\([\varGamma ]\). As \(\textbf{e} \ne \textbf{0}\), there are two cases.
Case where \(\# \{ i \in N \; | \; e_i \ne 0 \} \ge 2\): now \(\textbf{e}_{\hat{\imath }} \ne \textbf{0} \; (i \in N)\). By Proposition 5.5(1), every conditional payoff function \(f_i^{(\textbf{e}_{\hat{\imath }})}\) is pseudo-concave. So Proposition 5.2(3) guarantees that \(\textbf{e}\) is a Nash equilibrium.
Case where \(\# \{ i \in N \; | \; e_i \ne 0 \} =1\): let k be such that \(e_i = 0 \; (i \ne k)\) and \(e_k > 0\). By Proposition 3.1(2), \(k \in \tilde{N}\); so \(k \in N_>\). By Proposition 5.5(1), every \(f_i^{(\textbf{e}_{\hat{\imath }})} \; (i \ne k)\) is pseudo-concave. So, by Proposition 5.2(2), \(e_i \in R_i( \textbf{e}_{\hat{\imath }} ) \; (i \ne k)\). We now prove that also, \(e_k \in R_k( \textbf{e}_{\hat{k}})\), i.e. that \(e_k\) is a maximiser of \(f_k^{( \textbf{0} )}\), and then the proof is complete. Well, as \(\textbf{e}\) is a solution of VI’\([\varGamma ]\), we have by (9) and Proposition 5.3(2) that \(0 = t_k(e_k, e_N ) = t_k(e_k,e_k) = {( f_k^{( \textbf{0} )} )}'(e_k)\). As, by Proposition 5.5(2), \(f_k^{( \textbf{0} )}\) is pseudo-concave on \({\mathbb {R}}_{++}\), \(e_k\) is a maximiser of the function \(f_k^{( \textbf{0} )}: {\mathbb {R}}_{++} \rightarrow \mathbb {R}\). By contradiction we now prove that \(e_k\) also is a maximiser of \(f_k^{( \textbf{0} )}: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\). So suppose it is not. Then \( f_k^{( \textbf{0} )}(0) > f_k^{( \textbf{0} )}(e_k)\) and 0 is a maximiser of \( f_k^{( \textbf{0} )}\). This implies \( {( f_k^{(\textbf{0})} )}'(0) \le 0\). Thus, by Proposition 5.3(3), \(t_k(0,0) = {( f_k^{(\textbf{0})} )}'(0) \le 0\), which is a contradiction with \(k \in N_>\).
2. We prove that VI’\([\varGamma ]\) has a unique nonzero solution; then we are done by part 1. Well, Theorem 4.2 applies and proves the desired result. \(\square \)
5.4 Application to Cournot Equilibria
In this subsection we illustrate the power of our general theory by giving a short proof of an important result in [23]. As therein each firm is of type \(I^-\), Theorem 5.1(2) does not apply; we have to rely on Theorem 4.3.
So consider, as in Sect. 2.6, a Cournot oligopoly game \(\varGamma \) with at least two firms with price function p and cost functions \(c_i\). Suppose that \(p(y)= 1/y \; (y > 0)\) and that the \(c_i\) are twice continuously differentiable with \(c'_i > 0\) and \(c''_i > 0\). With formula (5) we see that \(\varGamma \) is an almost smooth sum-aggregative game. Consider the associated aggregative variational inequality VI’\([\varGamma ]\). As \({( f_i^{(\textbf{0})} )}'(0) = \lim _{h \downarrow 0} (1 -c_i(h) + c_i(0) ) /h = + \infty \), we have for the marginal reductions \( t_i(x_i,y) = \frac{1}{y} - \frac{x_i}{y^2} - c'_i(x_i) \;\; ((x_i,y) \in \varDelta ^+) \) and \(t_i(0,0) = 137\).
One quickly verifies that Assumptions DIFF, LFH, DIR and EC hold. Now let us apply Theorem 4.3. Here we have \(\tilde{N} = N\) and, as \(t_i(\lambda ,\lambda ) = - c'_i(\lambda ) < 0\), each player is of type \(I^-\). Solving \(0= t_i(\xi _i(y),y) = - \frac{\xi _i(y)}{y^2} + \frac{1}{y} - c'_i(\xi _i(y))\) gives \( \frac{\xi _i(y)}{y} = 1 - y \, c'_i(\xi _i(y)). \quad (14)\)
Theorem 4.3(2) guarantees that \(\overline{s}_i = \lim _{y \downarrow 0} \xi _i(y)/ y\) exists. Taking this limit in (14) gives \(\overline{s}_i = 1 \; (i \in N)\). Thus, with Theorem 4.3(3) it follows that VI’\([\varGamma ]\) has a unique nonzero solution, say \(\textbf{e}\). Now Theorem 5.1(1) implies that \(\textbf{e}\) is the unique nonzero Cournot equilibrium. Further, as \(N_> \ne \emptyset \), \(\textbf{0}\) is by Proposition 5.4 not a Cournot equilibrium. Thus, the game has a unique Cournot equilibrium and this equilibrium is nonzero.
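The computation of \(\overline{s}_i\) can also be illustrated numerically. The sketch below uses the hypothetical cost function \(c_i(x) = x + x^2\) (our own choice; it satisfies \(c'_i > 0\) and \(c''_i > 0\) as required), solves \(t_i(\xi , y) = -\xi /y^2 + 1/y - c'_i(\xi ) = 0\) by bisection, and confirms that the ratio \(\xi _i(y)/y\) approaches \(\overline{s}_i = 1\) as \(y \downarrow 0\).

```python
# Cournot oligopoly with p(y) = 1/y and hypothetical costs c_i(x) = x + x**2,
# so c_i'(x) = 1 + 2x.  Solve t_i(xi, y) = -xi/y**2 + 1/y - c_i'(xi) = 0.

def t(xi, y):
    return -xi / y**2 + 1.0 / y - (1.0 + 2.0 * xi)

def xi_of(y, tol=1e-14):
    # t(., y) is strictly decreasing in xi with t(0, y) > 0 > t(y, y)
    # for 0 < y < 1, so bisection on [0, y] finds the unique zero.
    lo, hi = 0.0, y
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if t(mid, y) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The ratio xi(y)/y should tend to s_bar = 1 as y tends to 0.
ratios = [xi_of(y) / y for y in (1e-1, 1e-2, 1e-3, 1e-4)]
```

For this specification the zero has the closed form \(\xi _i(y) = (y - y^2)/(1 + 2y^2)\), so \(\xi _i(y)/y = (1-y)/(1+2y^2)\), which indeed increases to 1 as \(y \downarrow 0\); the bisection values reproduce this.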
6 Conclusions
Finite-dimensional variational inequalities over product sets with an aggregative structure are dealt with. New results concerning existence and especially concerning semi-uniqueness, uniqueness and computation of solutions are obtained for the case of \({\mathbb {R}}^n_+\). This is achieved by generalising the Selten–Szidarovszky technique and by exploiting the At Most Single Crossing From Above property. This technique transforms the original n-dimensional problem into a 1-dimensional fixed point problem. We allow for a possible discontinuity at the origin, as this is important for various applications. An application to Nash equilibria of sum-aggregative games that does not need explicit pseudo-concavity assumptions for the conditional payoff functions follows in a natural way. The mathematics used is relatively elementary (although technical) when compared to standard approaches. We corrected various errors in the literature that arose from applying the standard approach to Cournot oligopolies and illustrate the power of our results with such games. In order to make the article more accessible to a broader audience, a nearly self-contained presentation of the very basic theory of variational inequalities is also included in “Appendix A”.
Notes
Also see [24].
This is equivalent to the statement that \(- \textbf{F}(\textbf{x}^{\star })\) belongs to the normal cone of X at \(\textbf{x}^{\star }\).
See (10) and Sect. 4.5.
For correspondences we use the symbol \(\multimap \).
When dealing with this assumption, we automatically fix some \(\overline{x}_i\).
The name of the acronyms \(\overline{\textrm{CONT}}\) and \(\overline{\textrm{DIFF}}\) is clear. And \(\overline{\textrm{EC}}\) stands for ‘effective compactness’.
‘Par abus de notation’ we just write \(\textbf{T}\) instead of the restrictions of \(\textbf{T}\) to B and \(\textbf{T}\) to \(\textbf{K}\).
Do not worry about ‘137’ in this article!
In order to avoid any confusion, see “Appendix C”.
Equilibrium semi-uniqueness in [19] is based on Proposition 2.3(1) by referring to a false statement in [16] (see footnote 26), while ours is based on Proposition 2.3(2) and so relies on P-matrices instead of on positive definite matrices. A further article on this topic, [14], also refers to this false statement. Equilibrium existence in [19] refers to a result in [13] that in our opinion does not apply here. Furthermore, in [19] the relation between solutions of the nonlinear complementarity problem and the set of Nash equilibria is not addressed.
AMSV stands for ‘at most single-valued’ (see Lemma 3.4). LFH’ stands for ‘local Fisher–Hahn’ (see [29] for Fisher–Hahn conditions). In Sect. 4.1 we introduce the stronger LFH assumption. RA stands for ‘\(t_i\) radial direction’, RA1 for ‘\(t_i\) radial direction with \(\mu =1\)’ and RA0 for ‘\(t_i\) radial direction with \(\mu =0\)’. EC (again) stands for ‘effective compactness’.
The sum here is the Minkowski sum.
See (15) for \(P_{\pi }\).
If need be, see “Appendix A” for this notion.
It does not make sense to say that \(t_i\) is partially differentiable on \(\varDelta \). Indeed, \(D_1 t_i(0,0)\) does not make sense.
The name of the acronym DIFF is clear. DIR stands for ‘directional (derivative)’ and DIR’ for a special case of the DIR assumption. Also see footnote 13.
See [29] for details.
We think that handling the other cases should be possible, but will entail various additional quite technical boundary and differentiability issues.
We recall the definition of pseudo-concavity for a differentiable function \(h: I \rightarrow {\mathbb {R}}\) where I is a proper real interval. h is (strictly) pseudo-concave if for all \(x, y \in I\) with \(x \ne y\): \( h'(x) (y - x) \; (<) \le 0 \; \Rightarrow \; h(y) \; (<) \le h(x)\). We note that for a strictly pseudo-concave h, its derivative \( h' \) has the AMSCFA-property (see Definition 3.1). Also important for us is Theorem 3.1 in [8], which states for a twice differentiable h: if for all \(x \in I\) the implication \(h'(x) = 0 \; \Rightarrow \; h'' (x) < 0\) holds, then h is strictly pseudo-concave. (Note that here I may be closed and h may not be twice continuously differentiable.)
The exceptions concern the use of Brouwer’s fixed point theorem and the Gale–Nikaido theorem.
Denoting by \(\textbf{e}_i\) the \(i{\textrm{th}}\) basis vector.
Lemma A.5 also holds, with the same proof if S a non-empty open convex subset of \({\mathbb {R}}^n\) with \(S \subseteq {\mathbb {R}}^n_+\).
In [16] it is stated that for a row diagonally dominant matrix \(\textbf{M}\) with positive diagonal entries, the matrix \(\frac{ \textbf{M} + {}^t \textbf{M} }{2}\) is positive definite. This is false as \(\textbf{M} = \left( \begin{array}{ccc} 1 & 1-a & 0 \\ 1-a & 1 & 0 \\ 1-a & 0 & 1 \end{array} \right) \) with \(0< a < 1/10\) and \(\textbf{x} = (-2, 2, 1)\) shows. This false result is also cited in the articles [19] and [14] which are relevant to our topics.
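This counterexample can be checked numerically. The following sketch (ours, taking \(a = 0.05\)) verifies both the row diagonal dominance and the failure of positive definiteness of the symmetrised matrix.

```python
# Sketch (ours): verify that M above is row diagonally dominant with
# positive diagonal entries, yet x * ((M + M^t)/2) * x^t = x * M * x^t < 0.

a = 0.05  # any a with 0 < a < 1/10 works
M = [[1.0, 1.0 - a, 0.0],
     [1.0 - a, 1.0, 0.0],
     [1.0 - a, 0.0, 1.0]]
x = [-2.0, 2.0, 1.0]

# row diagonal dominance with positive diagonal entries
row_dominant = all(
    M[i][i] > sum(abs(M[i][j]) for j in range(3) if j != i) for i in range(3)
)

# quadratic form; it equals 10*a - 1, which is negative for a < 1/10
q = sum(x[i] * M[i][j] * x[j] for i in range(3) for j in range(3))
```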
References
Bischi, G., Chiarella, C., Kopel, M.O., Szidarovszky, F.: Nonlinear Oligopolies: Stability and Bifurcations. Springer, Heidelberg (2010)
Corchón, L.C.: Theories of Imperfectly Competitive Markets. Lecture Notes in Economics and Mathematical Systems, vol. 442, 2nd edn. Springer, Berlin (1996)
Cornes, R., Hartley, R.: Well-Behaved Aggregative Games. Economic Discussion Paper May 24, School of Social Sciences. The University of Manchester, Manchester (2011)
Cornes, R., Sato, T.: Existence and uniqueness of Nash equilibrium in aggregative games: an expository treatment. In: von Mouche, P.H.M., Quartieri, F. (eds.) Equilibrium Theory for Cournot Oligopolies and Related Games: Essays in Honour of Koji Okuguchi, pp. 47–61. Springer, Cham (2016)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (2011)
Gale, D., Nikaido, H.: The Jacobian matrix and global univalence of mappings. Math. Ann. 159, 81–93 (1965)
Gaudet, G., Salant, S.W.: Uniqueness of Cournot equilibrium: new results from old methods. Rev. Econ. Stud. 58(2), 399–404 (1991)
Ginchev, I., Ivanov, V.I.: Second-order characterizations of convex and pseudoconvex functions. J. Appl. Anal. 9(2), 261–273 (2003)
Hartman, P., Stampacchia, G.: On some nonlinear elliptic differential functional equations. Acta Math. 115, 271–310 (1966)
Hirai, S., Szidarovszky, F.: Existence and uniqueness of equilibrium in asymmetric contests with endogenous prizes. Int. Game Theory Rev. 15(1), 1350,005 (2013)
Itaya, J.I., von Mouche, P.H.M.: Equilibrium uniqueness in aggregative games: very practical conditions. Optim. Lett. 16, 2033–2058 (2022)
Kanzow, C., Schwartz, A.: Spieltheorie. Birkhauser, Basel (2018)
Karamardian, S.: The nonlinear complementarity problem with applications, part 2. J. Optim. Theory Appl. 4, 167–181 (1969)
Kolstad, C.D., Mathiesen, L.: Necessary and sufficient conditions for uniqueness of a Cournot equilibrium. Rev. Econ. Stud. 54(4), 681–690 (1987)
Matei, A., Sofonea, M.: Variational Inequalities with Applications. Advances in Mechanics and Mathematics, Springer, Berlin (2009)
Namatame, A., Tse, E.: Adaptive expectations and dynamic adjustments in non-cooperative games with incomplete information. J. Optim. Theory Appl. 34, 243–261 (1981)
Noor, M.A., Noor, K.I., Mohsen, B., Rassias, M.T., Raigorodskii, A.: General preinvex functions and variational-like inequalities. In: Daras, N.J., Rassias, T.M. (eds.) Approximation and Computation in Science and Engineering, pp. 643–666. Springer, Berlin (2022)
Noor, M.A., Noor, K.I., Rassias, M.T.: New trends in general variational inequalities. Acta Appl. Math. 170, 981–1064 (2020)
Okuguchi, K.: The Cournot oligopoly and competitive equilibria as solutions to non-linear complementarity problems. Econ. Lett. 12, 127–133 (1983)
Okuguchi, K., Szidarovszky, F.: The Theory of Oligopoly with Multi-product Firms, 2nd edn. Springer, Berlin (2012)
Selten, R.: Preispolitik der Mehrproduktunternehmung in der Statischen Theorie. Springer, Berlin (1970)
Szidarovszky, F.: On the oligopol game. Technical Report 1970-1. Karl Marx University of Economics, Budapest (1970)
Szidarovszky, F., Okuguchi, K.: On the existence and uniqueness of pure Nash equilibrium in rent-seeking games. Games Econ. Behav. 18, 135–140 (1997)
Szidarovszky, F., Yakowitz, S.: A new proof of the existence and uniqueness of the Cournot equilibrium. Int. Econ. Rev. 18, 787–789 (1977)
Vives, X.: Oligopoly Pricing: Old Ideas and New Tools. MIT Press, Cambridge (2001)
von Mouche, P.H.M.: The Selten–Szidarovszky technique: the transformation part. In: Petrosyan, L.A., Mazalov, V.V. (eds.) Recent Advances in Game Theory and Applications, pp. 147–164. Birkhäuser, Cham (2016)
von Mouche, P.H.M., Quartieri, F.: Equilibrium Theory for Cournot Oligopolies and Related Games: Essays in Honour of Koji Okuguchi. Springer, Cham (2016)
von Mouche, P.H.M., Quartieri, F.: Cournot equilibrium uniqueness via demi-concavity. Optimization 67(4), 441–455 (2017)
von Mouche, P.H.M., Yamazaki, T.: Sufficient and necessary conditions for equilibrium uniqueness in aggregative games. J. Nonlinear Convex Anal. 16(2), 353–364 (2015)
Communicated by Jafar Zafarani.
Appendices
Appendix A: Variational Inequalities
Results in this section concern the general variational inequality \(\textrm{VI}(X,\textbf{F})\) in (1): find \(\textbf{x}^{\star } \in X\) such that \(\textbf{F}(\textbf{x}^{\star }) \cdot ( \textbf{x} - \textbf{x}^{\star }) \ge 0\) for all \(\textbf{x} \in X\),
where X is a non-empty subset of \({\mathbb {R}}^n\) and \(\textbf{F}= (F_1,\ldots ,F_n): X \rightarrow {\mathbb {R}}^n\). These results can essentially be found in the literature; in particular see [12]. However, below we present a nearlyFootnote 23 self-contained treatment which may be especially useful for readers who are not (so) familiar with variational inequalities.
Lemma A.1
Suppose \(\mathbf {\overline{F}}: X \rightarrow {\mathbb {R}}^n\). Consider the variational inequality \(\textrm{VI}(X,\mathbf {\overline{F}})\) and let \(\textbf{u} \in X\). If \(\mathbf {\overline{F}}(\textbf{x}) = \textbf{F}(\textbf{x})\) for every \(\textbf{x} \ne \textbf{u}\), then for every \(\textbf{x}^{\star } \in X \setminus \{ \textbf{u} \}\): \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\mathbf {\overline{F}}) \;\; \Leftrightarrow \;\; \) \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\). \(\diamond \)
Proof
As \(\mathbf {\overline{F}}(\textbf{x}^{\star }) = \textbf{F}(\textbf{x}^{\star })\). \(\square \)
The next two lemmas are very fundamental. Lemma A.2 relates \(\textrm{VI}(X,\textbf{F})\) to a nonlinear complementarity problem and Lemma A.3 relates \(\textrm{VI}(X,\textbf{F})\) to what one may call a mixed nonlinear complementarity problem.
Lemma A.2
Let \(X = {\mathbb {R}}^n_+\). For \(\textbf{x}^{\star } \in {\mathbb {R}}^n_+\) the following four statements are equivalent.
(a) \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}({\mathbb {R}}^n_+,\textbf{F})\).
(b) \(\textbf{x}^{\star }\) is a solution of: \( \textbf{x} \ge \textbf{0} \;\; \wedge \;\; \textbf{F}(\textbf{x}) \cdot \textbf{x} = 0 \;\; \wedge \;\; \textbf{F}(\textbf{x}) \ge \textbf{0}\).
(c) For every \(i \in N\): \( x_i^{\star } \ge 0 \;\; \wedge \;\; x_i^{\star } F_i(\textbf{x}^{\star }) = 0 \;\; \wedge \;\; F_i(\textbf{x}^{\star }) \ge 0\).
(d) For every \(i \in N\) exactly one of the following holds: \( [x_i^{\star } = 0 \; \wedge \; F_i(\textbf{x}^{\star }) \ge 0]\), \([x_i^{\star } > 0 \; \wedge \; F_i(\textbf{x}^{\star }) = 0]\). \(\diamond \)
Proof
‘\((a) \Rightarrow (b)\)’: suppose \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}({\mathbb {R}}^n_+,\textbf{F})\). Then \(\textbf{x}^{\star } \ge \textbf{0}\) and \(\textbf{F}(\textbf{x}^{\star }) \cdot ( \textbf{x} - \textbf{x}^{\star }) \ge 0 \; (\textbf{x} \ge \textbf{0})\). In particular for \(\textbf{x} = \textbf{x}^{\star } + \textbf{e}_i\)Footnote 24 we have \(F_i(\textbf{x}^{\star }) \ge 0\). So \(\textbf{F}(\textbf{x}^{\star }) \ge \textbf{0}\). This implies \(\textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x}^{\star } \ge 0\). If there were an i with \( F_i(\textbf{x}^{\star }) x_i^{\star } > 0\), then for \(\textbf{x} = \textbf{x}^{\star } - x_i^{\star } \textbf{e}_i \ge \textbf{0}\) we would find the contradiction \(0 < F_i(\textbf{x}^{\star }) x_i^{\star } = - \textbf{F}(\textbf{x}^{\star } ) \cdot ( \textbf{x}- \textbf{x}^{\star }) \le 0\). Thus \(\textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x}^{\star } = 0\) follows.
‘\((b) \Rightarrow (a)\)’: suppose \( \textbf{x}^{\star } \ge \textbf{0} \;\; \wedge \;\; \textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x}^{\star } = 0 \;\; \wedge \;\; \textbf{F}(\textbf{x}^{\star }) \ge \textbf{0}\). Then for every \(\textbf{x} \ge \textbf{0}\), we obtain, as desired, that \(\textbf{F}(\textbf{x}^{\star }) \cdot ( \textbf{x} - \textbf{x}^{\star }) = \textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x} - \textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x}^{\star } = \textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x} \ge 0\).
‘\((b) \Rightarrow (c)\)’: suppose \(\textbf{x}^{\star } \ge \textbf{0} \;\; \wedge \;\; \textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x}^{\star } = 0 \;\; \wedge \;\; \textbf{F}(\textbf{x}^{\star }) \ge \textbf{0}\). Concerning (c) we show that \(x_i^{\star } F_i(\textbf{x}^{\star }) = 0 \; (i \in N)\). Well, suppose \(x_j^{\star } F_j(\textbf{x}^{\star }) \ne 0\) for some j. Then \(x_j^{\star } > 0\) and \(F_j(\textbf{x}^{\star }) > 0\) and, as all terms \(x_i^{\star } F_i(\textbf{x}^{\star })\) are non-negative, the contradiction \(\textbf{F}(\textbf{x}^{\star }) \cdot \textbf{x}^{\star } > 0\) would follow.
‘\((c) \Rightarrow (b)\)’: clear. ‘\((c) \Leftrightarrow (d)\)’: clear. \(\square \)
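Condition (d) of Lemma A.2 lends itself to a direct mechanical check. The sketch below (ours; the function names and the affine example are our choices) tests a candidate point against it.

```python
# Sketch (ours) of Lemma A.2(d): x* >= 0 solves VI(R^n_+, F) iff for each i
# either x*_i = 0 and F_i(x*) >= 0, or x*_i > 0 and F_i(x*) = 0.

def solves_ncp(x_star, F, tol=1e-9):
    for xi, fi in zip(x_star, F(x_star)):
        if xi < -tol:
            return False            # x* must be nonnegative
        if xi <= tol:
            if fi < -tol:
                return False        # boundary coordinate needs F_i >= 0
        elif abs(fi) > tol:
            return False            # positive coordinate needs F_i = 0
    return True

# separable affine example: F(x) = (x_1 - 1, x_2 + 1); the solution is (1, 0)
F = lambda x: [x[0] - 1.0, x[1] + 1.0]
```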
Lemma A.3
Let \(X = [0,m_1] \times \cdots \times [0,m_n]\). For \(\textbf{x}^{\star } \in X\) the following two statements are equivalent.
(a) \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\).
(b) For every \(i \in N\) exactly one of the following holds:
\([x_i^{\star } = 0 \; \wedge \; F_i(\textbf{x}^{\star }) \ge 0], \;\;\;\;\; [0< x_i^{\star } < m_i \; \wedge \; F_i(\textbf{x}^{\star }) = 0], \;\;\;\;\; [x_i^{\star } = m_i \; \wedge \; F_i(\textbf{x}^{\star }) \le 0]. \; \; \diamond \)
Proof
‘\((a) \Rightarrow (b)\)’: suppose \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\).
If \({x}^{\star }_i = 0\), then for \( \textbf{x} = \textbf{x}^{\star } + \epsilon \textbf{e}_i\) with \(\epsilon > 0\) small enough, \(\textbf{F}(\textbf{x}^{\star }) \cdot ( \textbf{x} - \textbf{x}^{\star }) = F_i(\textbf{x}^{\star }) \epsilon \ge 0\). So \(F_i(\textbf{x}^{\star }) \ge 0\).
If \(0< {x}^{\star }_i < m_i\), then for \( \textbf{x} = \textbf{x}^{\star } + \epsilon \textbf{e}_i\) with \(\epsilon > 0\) small enough, again \(F_i(\textbf{x}^{\star }) \ge 0\) follows, and for \( \textbf{x} = \textbf{x}^{\star } - \epsilon \textbf{e}_i\), \(- F_i(\textbf{x}^{\star }) \ge 0\) follows. So \(F_i(\textbf{x}^{\star }) = 0\).
If \({x}^{\star }_i = m_i\), then for \( \textbf{x} = \textbf{x}^{\star } - \epsilon \textbf{e}_i\) with \(\epsilon > 0\) small enough, \(F_i(\textbf{x}^{\star }) \le 0\) follows.
‘\((b) \Rightarrow (a)\)’: suppose (b) holds; fix \(\textbf{x} \in X\). We prove that for every \(i \in N\), \(F_i(\textbf{x}^{\star }) ( x_i - x^{\star }_i ) \ge 0\); then (a) follows.
If \({x}^{\star }_i = 0\), then \(F_i(\textbf{x}^{\star }) \ge 0\) and therefore \(F_i(\textbf{x}^{\star }) ( x_i - x^{\star }_i ) = F_i(\textbf{x}^{\star }) x_i \ge 0\).
If \(0< {x}^{\star }_i < m_i\), then \(F_i(\textbf{x}^{\star }) = 0\) and therefore \(F_i(\textbf{x}^{\star }) ( x_i - x^{\star }_i ) = 0\).
If \({x}^{\star }_i = m_i\), then \(F_i(\textbf{x}^{\star }) \le 0\) and therefore \(F_i(\textbf{x}^{\star }) ( x_i - m_i) \ge 0\). \(\square \)
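Condition (b) of Lemma A.3 likewise gives a coordinate-by-coordinate test. The following sketch (ours; the separable affine example is our choice) classifies each coordinate.

```python
# Sketch (ours) of Lemma A.3(b) for X = [0,m_1] x ... x [0,m_n]: per
# coordinate, either the lower bound is active with F_i >= 0, the upper
# bound is active with F_i <= 0, or the coordinate is interior with F_i = 0.

def solves_box_vi(x_star, F, m, tol=1e-9):
    for xi, fi, mi in zip(x_star, F(x_star), m):
        if xi <= tol:
            if fi < -tol:
                return False   # lower bound active: need F_i >= 0
        elif xi >= mi - tol:
            if fi > tol:
                return False   # upper bound active: need F_i <= 0
        elif abs(fi) > tol:
            return False       # interior: need F_i = 0
    return True

# example: F(x) = (x_1 - 2, x_2 - 0.5) on [0,1] x [0,1]; solution (1, 0.5)
F = lambda x: [x[0] - 2.0, x[1] - 0.5]
m = [1.0, 1.0]
```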
For \(S \subseteq X\), \(\textbf{F}\) is said to be strictly monotone on S if for all \(\textbf{x}, \textbf{x}' \in S\) with \(\textbf{x} \ne \textbf{x}'\)
\( (\textbf{x} - \textbf{x}') \cdot ( \textbf{F}(\textbf{x}) - \textbf{F}(\textbf{x}') ) > 0. \)
And \(\textbf{F}\) is said to be a P-function on S if for all \(\textbf{x}, \textbf{x}' \in S\) with \(\textbf{x} \ne \textbf{x}'\) there exists an index k such that
\( x_k \ne x'_k \; \wedge \; (x_k - x'_k) ( F_k(\textbf{x}) - F_k(\textbf{x}') ) > 0. \)
Of course, if \(\textbf{F}\) is strictly monotone on S, then it is a P-function on S.
Lemma A.4
Let \(X = {\mathbb {R}}^n_+\). Suppose \(S \subseteq X\).
1. If \(\textbf{F}\) is a P-function on S, then \(\textrm{VI}(X,\textbf{F})\) has at most one solution in S.
2. If \(\textbf{F}\) is strictly monotone on S, then \(\textrm{VI}(X,\textbf{F})\) has at most one solution in S. \(\diamond \)
Proof
1. Suppose \(\textbf{F}\) is a P-function on S and \(\textbf{x}, \textbf{x}' \in S\) are solutions with \(\textbf{x} \ne \textbf{x}'\). By Lemma A.2(a,c), for every i
\( x_i \ge 0 \; \wedge \; x_i F_i(\textbf{x}) = 0 \; \wedge \; F_i(\textbf{x}) \ge 0 \quad \text{ and } \quad x'_i \ge 0 \; \wedge \; x'_i F_i(\textbf{x}') = 0 \; \wedge \; F_i(\textbf{x}') \ge 0. \)
Since \(\textbf{F}\) is a P-function on S, there is an index k with \(0 < (x_k - x'_k) ( F_k(\textbf{x}) - F_k(\textbf{x}') ) = - x_k F_k(\textbf{x}') - x'_k F_k(\textbf{x}) \le 0\), a contradiction. Thus \(\textbf{x} = \textbf{x}'\).
2. By part 1. \(\square \)
In the following two lemmas we deal with the situation where S is a proper rectangle in \({\mathbb {R}}^n_+\), i.e. where \(S = S_1 \times \cdots \times S_n\) with each \(S_i\) a proper real interval with \(S_i \subseteq {\mathbb {R}}_+\).Footnote 25
Lemma A.5
Let \(X = {\mathbb {R}}^n_+\). Suppose S is a proper rectangle in \({\mathbb {R}}^n_+\) and every \(F_i: S \rightarrow {\mathbb {R}}\) is continuously differentiable. If for every \(\textbf{x} \in S\) the Jacobi matrix \(\textbf{J}(\textbf{x})\) of \(\textbf{F}\) at \(\textbf{x}\) is positive quasi-definite, then \(\textbf{F}\) is strictly monotone on S. \(\diamond \)
Proof
Suppose for every \(\textbf{x} \in S\) the matrix \(\textbf{J}(\textbf{x})\) is positive quasi-definite. Fix \(\textbf{x}, \breve{\textbf{x}} \in S\) with \(\textbf{x} \ne \breve{\textbf{x}}\). We have to prove that \( (\textbf{x} - \breve{\textbf{x}}) \cdot ( \textbf{F}( \textbf{x} ) - \textbf{F}( \breve{\textbf{x}}) ) > 0\). Well, let \(\textbf{y}: [0,1] \rightarrow {\mathbb {R}}^n\) be defined by \( \textbf{y}(\lambda ) {{\; \mathrel {\mathop {:}}= \;}}\lambda \textbf{x} + (1 -\lambda ) \breve{\textbf{x}}\) and let \(\textbf{H} {{\; \mathrel {\mathop {:}}= \;}}\textbf{F} \circ \textbf{y}: [0,1] \rightarrow {\mathbb {R}}^n\). Note that \(\textbf{H}\) is continuously differentiable with \( \textbf{H}' (\lambda ) = \textbf{J}(\textbf{y}(\lambda ) ) \star ( \textbf{x} - \breve{\textbf{x}} ). \) We obtain
\( (\textbf{x} - \breve{\textbf{x}}) \cdot ( \textbf{F}( \textbf{x} ) - \textbf{F}( \breve{\textbf{x}}) ) = (\textbf{x} - \breve{\textbf{x}}) \cdot ( \textbf{H}(1) - \textbf{H}(0) ) = \int _0^1 (\textbf{x} - \breve{\textbf{x}}) \cdot \textbf{H}' (\lambda ) \, \textrm{d}\lambda = \int _0^1 (\textbf{x} - \breve{\textbf{x}}) \star \textbf{J}(\textbf{y}(\lambda )) \star {}^t ( \textbf{x} - \breve{\textbf{x}} ) \, \textrm{d}\lambda > 0. \)
\(\square \)
Lemma A.6
Let \(X = {\mathbb {R}}^n_+\). Suppose S is a proper rectangle in \({\mathbb {R}}^n_+\), every \(F_i: S \rightarrow {\mathbb {R}}\) is continuously differentiable and for all \(\textbf{x} \in S\), the Jacobi matrix \(\textbf{J}(\textbf{x})\) of \(\textbf{F}\) at \(\textbf{x}\) is a P-matrix. Then \(\textbf{F}\) is a P-function on S. \(\diamond \)
Proof
This is a quite technical and deep result, due to Gale and Nikaido, which essentially can be found in [6]. Also see [5, Proposition 3.5.9]. \(\square \)
Lemma A.7
Suppose \(\textbf{F}: X \rightarrow {\mathbb {R}}^n\) is continuous. Then \( \textrm{VI}^{\bullet }(X,\textbf{F})\) is a closed subset of \({\mathbb {R}}^n\). \(\diamond \)
Proof
Let \((\textbf{x}_m)\) be a sequence in \(\textrm{VI}^{\bullet }(X,\textbf{F})\) which is convergent with limit \(\textbf{x}_{\star }\). So we have for every m that \(\textbf{F}(\textbf{x}_m) \cdot (\textbf{x} - \textbf{x}_m ) \ge 0 \; (\textbf{x} \in X)\). As \(\textbf{F}\) is continuous, we obtain, by taking limits, \(\textbf{F}(\textbf{x}_{\star }) \cdot (\textbf{x} - \textbf{x}_{\star }) \ge 0 \; (\textbf{x} \in X)\). Thus \(\textbf{x}_{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\), which completes the proof. \(\square \)
Lemma A.8
Suppose X is convex and closed. Denote by \(\mathcal {P}_X: {\mathbb {R}}^n \rightarrow X\) the (now well-defined) metric projection of \({\mathbb {R}}^n\) on X, i.e. \(\mathcal {P}_X(\textbf{y})\) denotes the unique \(\textbf{z} \in X\) with \({{\parallel \textbf{y}- \textbf{z} \parallel }} \le {{\parallel \textbf{y} - \textbf{x} \parallel }} \; (\textbf{x} \in X)\). Define \(H: X \rightarrow X\) by
\( H(\textbf{x}) {{\; \mathrel {\mathop {:}}= \;}}\mathcal {P}_X ( \textbf{x} - \textbf{F}(\textbf{x}) ). \)
Let \(\textbf{x}^{\star } \in X\). Then: \( \textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\) \(\Leftrightarrow \) \(\textbf{x}^{\star } = H(\textbf{x}^{\star })\), i.e. \(\textbf{x}^{\star }\) is a fixed point of H. \(\diamond \)
Proof
\(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\) \(\Leftrightarrow \) \(\textbf{F}(\textbf{x}^{\star }) \cdot ( \textbf{x} - \textbf{x}^{\star }) \ge 0 \; (\textbf{x} \in X)\) \(\Leftrightarrow \) \( ( ( \textbf{x}^{\star }- \textbf{F}(\textbf{x}^{\star }) ) - \textbf{x}^{\star } ) \cdot ( \textbf{x} - \textbf{x}^{\star }) \le 0 \; (\textbf{x} \in X)\) \(\Leftrightarrow \) (by the variational characterisation of the metric projection) \( \textbf{x}^{\star } = \mathcal {P}_X ( \textbf{x}^{\star } - \textbf{F}(\textbf{x}^{\star }) ) \Leftrightarrow \textbf{x}^{\star } = H(\textbf{x}^{\star })\). \(\square \)
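For \(X = {\mathbb {R}}^n_+\) the metric projection is componentwise truncation at 0, so Lemma A.8 suggests iterating \(\textbf{x} \mapsto \mathcal {P}_X(\textbf{x} - \textbf{F}(\textbf{x}))\). The sketch below (ours; the damping step and the affine example are our choices, and plain iteration of H need not converge in general) illustrates this. For any step size \(\tau > 0\), the fixed points of \(\textbf{x} \mapsto \mathcal {P}_X(\textbf{x} - \tau \textbf{F}(\textbf{x}))\) coincide with those of H.

```python
# Sketch (ours): damped projection iteration for VI(R^n_+, F), where
# P_X(y) = max(y, 0) componentwise and H(x) = P_X(x - step*F(x)).

def H(x, F, step=0.5):
    return [max(xi - step * fi, 0.0) for xi, fi in zip(x, F(x))]

def projection_iteration(x0, F, iters=200):
    x = list(x0)
    for _ in range(iters):
        x = H(x, F)
    return x

# strictly monotone affine example: F(x) = (2 x_1 - 1, x_2 + 1)
F = lambda x: [2.0 * x[0] - 1.0, x[1] + 1.0]
x_star = projection_iteration([1.0, 1.0], F)  # converges to (0.5, 0.0)
```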
Lemma A.9
Suppose X is convex and compact and \(\textbf{F}: X \rightarrow {\mathbb {R}}^n\) is continuous. Then \( \textrm{VI}^{\bullet }(X,\textbf{F})\) is a non-empty compact subset of \({\mathbb {R}}^n\). \(\diamond \)
Proof
As X is compact, X is bounded and therefore also \(\textrm{VI}^{\bullet }(X,\textbf{F})\) is bounded. As \(\textrm{VI}^{\bullet }(X,\textbf{F})\) is by Lemma A.7 closed, this set is compact. So we still have to prove that a solution exists.
By Lemma A.8, \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\) if and only if \(\textbf{x}^{\star }\) is a fixed point of the function H defined there. As \(\textbf{F}\) and \(\mathcal {P}_X\) are continuous, also \(H: X \rightarrow X\) is continuous. As X is non-empty, convex and compact, Brouwer’s fixed point theorem guarantees the existence of a fixed point of H. \(\square \)
Lemma A.10
Suppose X is convex and \(\textbf{F}: X \rightarrow {\mathbb {R}}^n\) is continuous. For \(r > 0\), let \(X_r = \{ \textbf{x} \in {\mathbb {R}}^n \; | \; {{\parallel \textbf{x} \parallel }} \le r \} \cap X\). Then for \(\textbf{x}^{\star } \in X\) the following two statements are equivalent.
(a) \( \textbf{x}^{\star } \) is a solution of \( \textrm{VI}(X,\textbf{F})\).
(b) There exists \(r > {{\parallel \textbf{x}^{\star } \parallel }}\) such that \( \textbf{x}^{\star } \) is a solution of \(\textrm{VI}(X_r,\textbf{F})\). \(\diamond \)
Proof
‘\((a) \Rightarrow (b)\)’: suppose \( \textbf{x}^{\star } \) is a solution of \( \textrm{VI}(X,\textbf{F})\). Take \(r > {{\parallel \textbf{x}^{\star } \parallel }}\) arbitrary. As \(\textbf{x}^{\star } \in X_r \subseteq X\), \(\textbf{x}^{\star }\) also is a solution of \(\textrm{VI}(X_r,\textbf{F})\).
‘\((b) \Rightarrow (a)\)’: suppose \(r > {{\parallel \textbf{x}^{\star } \parallel }}\) is such that \( \textbf{x}^{\star } \) is a solution of \(\textrm{VI}(X_r,\textbf{F})\). Let \(\textbf{x} \in X\). For \(\lambda > 0\) small enough, we have, using that X is convex, \(\textbf{y} {{\; \mathrel {\mathop {:}}= \;}}\textbf{x}^{\star } + \lambda ( \textbf{x} - \textbf{x}^{\star } ) \in X_r\). As \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X_r,\textbf{F})\), we obtain \(\lambda \textbf{F}(\textbf{x}^{\star } )\cdot (\textbf{x} - \textbf{x}^{\star } ) = \textbf{F}(\textbf{x}^{\star }) \cdot (\textbf{y} - \textbf{x}^{\star } ) \ge 0\) and therefore, as desired, \(\textbf{F}(\textbf{x}^{\star } ) \cdot (\textbf{x} - \textbf{x}^{\star } ) \ge 0\). \(\square \)
Notation: for \(\pi \in S_n\), i.e. a permutation of N, define \(P_{\pi }: X \rightarrow {\mathbb {R}}^n\) by
\( P_{\pi }(\textbf{x}) {{\; \mathrel {\mathop {:}}= \;}}( x_{\pi (1)}, \ldots , x_{\pi (n)} ). \)
We call the general variational inequality \(\textrm{VI}(X,\textbf{F})\) symmetric if for every \(\pi \in S_n\): \(P_{\pi }(X) = X\) and for every \(\textbf{x} \in X\)
\( \textbf{F}( P_{\pi }(\textbf{x}) ) = P_{\pi }( \textbf{F}(\textbf{x}) ). \)
Lemma A.11
Suppose \(\textrm{VI}(X,\textbf{F})\) is symmetric. If \(\textbf{x}^{\star }\) is a solution of \(\textrm{VI}(X,\textbf{F})\) and \(\pi \in S_n\), then \(P_{\pi }(\textbf{x}^{\star })\) is also a solution. \(\diamond \)
Proof
As \(\textbf{x}^{\star }\) is a solution, we have \(\textbf{F}( \textbf{x}^{\star } ) \cdot (\textbf{x} - \textbf{x}^{\star } ) \ge 0 \; (\textbf{x} \in X)\). And from this, as desired, for every \(\textbf{x} \in X\), writing \(\textbf{x} = P_{\pi }(\textbf{z})\) with \(\textbf{z} \in X\),
\( \textbf{F}( P_{\pi }(\textbf{x}^{\star }) ) \cdot ( \textbf{x} - P_{\pi }(\textbf{x}^{\star }) ) = P_{\pi }( \textbf{F}(\textbf{x}^{\star }) ) \cdot ( P_{\pi }(\textbf{z}) - P_{\pi }(\textbf{x}^{\star }) ) = \textbf{F}( \textbf{x}^{\star } ) \cdot ( \textbf{z} - \textbf{x}^{\star } ) \ge 0. \)
\(\square \)
Appendix B: Smoothness Issues
In this appendix we consider the aggregative variational inequality AVI (7) as dealt with in Sect. 4.1. In the next lemma we consider the function \(t_i^{(\mu )}\) defined in (8).
Lemma B.1
If Assumption DIFF holds, then for every \(0 < \mu \le 1\), the function \(\overline{t}_i^{(\mu )}: {\mathbb {R}}_{++} \rightarrow {\mathbb {R}}\) is continuously differentiable with \( {( \overline{t}_i^{(\mu )} )}' (\lambda ) = (\mu D_1 + D_2) t_i(\mu \lambda ,\lambda )\). \(\diamond \)
Proof
For \(0< \mu < 1\), differentiability of \(\overline{t}_i^{(\mu )}\) and the formula for the derivative follow from the chain rule. The formula in turn shows that \(\overline{t}_i^{(\mu )}\) is even continuously differentiable. So we still have to prove that \(\overline{t}_i^{(1)}\) is continuously differentiable and that the formula \( {( \overline{t}_i^{(1)} )}' (\lambda ) = (D_1 + D_2) t_i(\lambda ,\lambda )\) holds. Note that here we cannot apply the (standard) chain rule as \((\lambda , \lambda )\) does not belong to the interior of \(\varDelta ^+\). The proof is complete if we show that
\( \lim _{h \rightarrow 0} \frac{ t_i(\lambda +h,\lambda +h) - t_i(\lambda ,\lambda ) }{h} = (D_1 + D_2) t_i(\lambda ,\lambda ). \)
Well, for \(h > 0\) we have \( \frac{ t_i(\lambda +h,\lambda +h) - t_i(\lambda ,\lambda ) }{h} = \frac{ t_i(\lambda +h,\lambda +h) - t_i(\lambda ,\lambda +h) }{h} + \frac{ t_i(\lambda ,\lambda +h) - t_i(\lambda ,\lambda ) }{h}\). By the first mean value theorem, there exist \(\epsilon _1(h), \epsilon _2(h) \in {] {0},{1} \, [ }\) such that this becomes \(D_1 t_i(\lambda + \epsilon _1(h) h, \lambda + h) + D_2 t_i(\lambda , \lambda + \epsilon _2(h) h)\).
As \(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuously partially differentiable, the desired result follows by taking \(\lim _{h \downarrow 0}\). And for \(-\lambda< h < 0\) we have \( \frac{ t_i(\lambda +h,\lambda +h) - t_i(\lambda ,\lambda ) }{h} = \frac{ t_i(\lambda +h,\lambda +h) - t_i(\lambda +h,\lambda ) }{h} + \frac{ t_i(\lambda +h,\lambda ) - t_i(\lambda ,\lambda ) }{h}\). By the first mean value theorem, there exist \(\epsilon _1(h), \epsilon _2(h) \in {] {0},{1} \, [ }\) such that this becomes \(D_1 t_i(\lambda + h, \lambda + \epsilon _1(h) h) + D_2 t_i(\lambda + \epsilon _2(h) h, \lambda )\). As \(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuously partially differentiable, the desired result follows by taking \(\lim _{h \uparrow 0}\). \(\square \)
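The derivative formula of Lemma B.1 can be sanity-checked numerically. In the sketch below, the particular \(t_i\) (matching the Cournot example with zero marginal cost) and all names are our choices.

```python
# Sketch (ours): check (t_bar^(mu))'(lam) = (mu*D1 + D2) t(mu*lam, lam)
# by a central difference, for t(x, y) = -x/y**2 + 1/y.

def t(x, y):
    return -x / y**2 + 1.0 / y

def D1t(x, y):
    return -1.0 / y**2                    # partial derivative in x

def D2t(x, y):
    return 2.0 * x / y**3 - 1.0 / y**2    # partial derivative in y

mu, lam, h = 0.7, 1.3, 1e-6
t_bar = lambda l: t(mu * l, l)            # here t_bar(l) = (1 - mu)/l
numeric = (t_bar(lam + h) - t_bar(lam - h)) / (2.0 * h)
formula = mu * D1t(mu * lam, lam) + D2t(mu * lam, lam)
```

For this \(t_i\) the formula collapses to \((\mu - 1)/\lambda ^2\), which the central difference reproduces.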
Lemma B.2
Suppose Assumption DIFF holds. Then for \(a > 0\): \( \lim _{h \downarrow 0} \frac{t_i(h,a+h) - t_i(0,a) }{h} = (D_1 +D_2) t_i(0,a)\). \(\diamond \)
Proof
For \(h > 0\) small enough we have \(\frac{t_i(h,a+h) - t_i(0,a) }{h} = \frac{t_i(h,a) - t_i(0,a) }{h} + \frac{t_i(h,a+h) - t_i(h,a) }{h}\). By the first mean value theorem, there exist \(\epsilon _1(h), \epsilon _2(h) \in {] {0},{1} \, [ }\) such that this becomes \( D_1 t_i( \epsilon _1(h) h, a) + D_2 t_i(h, a+ \epsilon _2(h) h)\). As \(t_i: \varDelta ^+ \rightarrow {\mathbb {R}}\) is continuously partially differentiable, the desired result follows by taking \(\lim _{h \downarrow 0}\). \(\square \)
Appendix C: Various Types of Matrices
In this appendix we recall some useful well-known results about matrices.
Consider an \(n \times n\) matrix \(\textbf{M}\) with real coefficients. Denote by \(\star \) the matrix product and consider elements of \({\mathbb {R}}^n\) as row vectors. Let \({}^t \textbf{M}\) be the transpose of \(\textbf{M}\). \(\textbf{M}\) is called
– positive quasi-definite if \(\textbf{x} \star \textbf{M} \star {}^t \textbf{x} > 0\) for every \(\textbf{x} \in {\mathbb {R}}^n {\setminus } \{ \textbf{0} \}\),
– positive definite if \(\textbf{M}\) is positive quasi-definite and symmetric,
– row diagonally dominant, if for all i: \(| M_{ii} | > \sum _{j \ne i} |M_{ij}|\),
– column diagonally dominant, if for all i: \(| M_{ii} | > \sum _{j \ne i} |M_{ji}|\),
– a P-matrix if all of its principal minors are positive.
Some results:
– \(\textbf{x} \star \textbf{M} \star {}^t \textbf{x} = \textbf{x} \star \frac{\textbf{M}+ {}^t \textbf{M} }{2} \star {}^t \textbf{x}\),
– \(\textbf{M}\) is positive quasi-definite if and only if \(\frac{\textbf{M}+ {}^t \textbf{M} }{2}\) is positive definite,
– if \(\textbf{M}\) is row diagonally dominant and column diagonally dominant with positive diagonal entries, then \(\textbf{M}\) is positive quasi-definite,
– each positive definite matrix is a P-matrix,
– if \(\textbf{M}\) is row diagonally dominant with positive diagonal entries or column diagonally dominant with positive diagonal entries, then it is a P-matrix.Footnote 26
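These matrix notions translate directly into code. The following sketch (ours; the cofactor-expansion determinant is only meant for small n) checks the last listed result on an example.

```python
# Sketch (ours): a row diagonally dominant matrix with positive diagonal
# entries is a P-matrix (all principal minors positive).

def det(M):
    # cofactor expansion along the first row; fine for the small n used here
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def is_P_matrix(M):
    n = len(M)
    for mask in range(1, 1 << n):               # every non-empty index set
        idx = [i for i in range(n) if mask >> i & 1]
        if det([[M[i][j] for j in idx] for i in idx]) <= 0:
            return False
    return True

def is_row_dominant_pos(M):
    n = len(M)
    return all(M[i][i] > 0 and
               M[i][i] > sum(abs(M[i][j]) for j in range(n) if j != i)
               for i in range(n))

# row diagonally dominant with positive diagonal, hence a P-matrix
M = [[2.0, 1.0, 0.5],
     [0.0, 3.0, 1.0],
     [1.0, 1.0, 4.0]]
```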
von Mouche, P.H.M., Szidarovszky, F. Aggregative Variational Inequalities. J Optim Theory Appl 196, 1056–1092 (2023). https://doi.org/10.1007/s10957-023-02164-w
Keywords
- Variational inequality
- Nonlinear complementarity problem
- Selten–Szidarovszky technique
- Pseudo-concavity
- Sum-aggregative game