
Ordinal potentials in smooth games

Research Article · Published in Economic Theory

Abstract

In the class of smooth non-cooperative games, exact potential games and weighted potential games are known to admit a convenient characterization in terms of cross-derivatives (Monderer and Shapley in Games Econ Behav 14:124–143, 1996a). However, no analogous characterization is known for ordinal potential games. The present paper derives necessary conditions for a smooth game to admit an ordinal potential. First, any ordinal potential game must exhibit pairwise strategic complements or substitutes at any interior equilibrium. Second, in games with more than two players, a condition is obtained on the (modified) Jacobian at any interior equilibrium. Taken together, these conditions are shown to correspond to a local analogue of the Monderer–Shapley condition for weighted potential games. We identify two classes of economic games for which our necessary conditions are also sufficient.

[Figures 1–3 omitted.]


Notes

  1. Both exact and ordinal concepts have been considered in the literature. For a function P on the space of strategy profiles to be an exact potential (a weighted potential), the difference in a player’s payoff resulting from a unilateral change of her strategy must precisely (up to a positive factor) equal the corresponding difference in P. For a function to be an ordinal potential (a generalized ordinal potential), any weak or strict gain (any strict gain) in a player’s payoff resulting from a unilateral change of her strategy must be reflected by a corresponding gain in P.

  2. Applications of potential methods are vast and include, for example, the analysis of oligopolistic markets (Slade 1994), learning processes (Monderer and Shapley 1996b; Fudenberg and Levine 1998; Young 2004), population dynamics (Sandholm 2001, 2009; Cheung 2014), the robustness of equilibria (Frankel et al. 2003; Morris and Ui 2005; Okada and Tercieux 2012), the decomposition of games (Candogan et al. 2011), imitation strategies (Duersch et al. 2012), dynamics (Candogan et al. 2013a, b), equilibrium existence (Voorneveld 1997; Kukushkin 1994, 2011), solution concepts (Peleg et al. 1996; Tercieux and Voorneveld 2010), games with monotone best-response selections (Huang 2002; Dubey et al. 2006; Jensen 2010), supermodular and zero-sum games (Brânzei et al. 2003), and even mechanism design (Jehiel et al. 2008).

  3. Since any ordinal potential game is, in particular, a generalized ordinal potential game, we also obtain necessary conditions for the existence of an ordinal potential.

  4. A pantograph is a mechanical drawing instrument that allows one to copy a plan at a different scale.

  5. To caution the reader, we stress that, despite a similarity in terminology, the present analysis is not directly related to the use of local potentials in the analysis of informational robustness (Morris 1999; Frankel et al. 2003; Morris and Ui 2005; Okada and Tercieux 2012).

  6. The relevant elements of the theory of semipositive matrices will be reviewed below.

  7. For a rigorous statement of this important result, we refer the reader to Voorneveld and Norde (1997).

  8. Monderer and Shapley (1996a, p. 135) wrote: “Unlike (weighted) potential games, ordinal potential games are not easily characterized. We do not know of any useful characterization, analogous to the one given in (4.1), for differentiable ordinal potential games.” Since then, the problem has apparently remained unaddressed. See, e.g., the recent surveys by Mallozzi (2013), González-Sánchez and Hernández-Lerma (2016), or Lã et al. (2016).

  9. Thus, we restrict attention to one-dimensional strategy spaces. The extension to multi-dimensional strategy spaces is discussed in the conclusion.

  10. This point is worth mentioning because, as pointed out by Voorneveld (1997), a continuous ordinal potential game need not, in general, admit a continuous ordinal potential. See also Peleg et al. (1996).

  11. Similar local necessary conditions may be obtained from the consideration of boundary equilibria, as will be explained in Sect. 6.

  12. See, e.g., Corchón (1994).

  13. As usual, provided that \(\partial ^{2}u_{i}/\partial x_{i}^{2}<0\) holds globally, player i’s marginal condition defines a function \(\beta _{i}=\beta _{i}(x_{-i})\) that maps any vector \(x_{-i}\in X_{-i}\) to a unique \(\beta _{i}(x_{-i})\in X_{i}\) such that \(\partial u_{i}(\beta _{i}(x_{-i}),x_{-i})/\partial x_{i}=0\) holds in the interior. We will refer to \(\beta _{i}\) as player i’s best-response function. In particular, for any other player \(j\ne i\), we may refer to

    $$\begin{aligned} \sigma _{ij}(x_{N})=\frac{\partial \beta _{i}(x_{-i})}{\partial x_{j}} =-\dfrac{\partial ^{2}u_{i}(x_{N})/\partial x_{j}\partial x_{i}}{\partial ^{2}u_{i}(x_{N})/\partial x_{i}^{2}} \end{aligned}$$
    (4)

    as the slope of player i’s best-response function with respect to player j (in the interior). Thus, \(\sigma _{ij}(x_{N})\ge 0\) (\(\le 0\)) if and only if \(\varGamma \) exhibits strategic complements (strategic substitutes) between i and j at \(x_{N}\).
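The slope in (4) can be spot-checked numerically. The sketch below uses a hypothetical quadratic payoff (an illustrative choice, not from the paper) for which the best-response slope is known in closed form:

```python
# Hypothetical quadratic payoff (illustrative, not from the paper):
# u_i(x_i, x_j) = -x_i^2/2 + c*x_i*x_j, so beta_i(x_j) = c*x_j and the
# slope sigma_ij in equation (4) should equal c everywhere.
c = 0.5
u = lambda xi, xj: -0.5 * xi**2 + c * xi * xj

def cross(f, xi, xj, h=1e-4):
    """Central-difference estimate of d^2 f / (dx_j dx_i)."""
    return (f(xi + h, xj + h) - f(xi + h, xj - h)
            - f(xi - h, xj + h) + f(xi - h, xj - h)) / (4 * h * h)

def own(f, xi, xj, h=1e-4):
    """Central-difference estimate of d^2 f / dx_i^2."""
    return (f(xi + h, xj) - 2 * f(xi, xj) + f(xi - h, xj)) / (h * h)

def sigma(f, xi, xj):
    """Best-response slope as in equation (4)."""
    return -cross(f, xi, xj) / own(f, xi, xj)

print(round(sigma(u, 0.3, 0.7), 6))  # 0.5, i.e. strategic complements
```

Since the payoff is quadratic, the finite-difference estimates are exact up to rounding, and the computed slope is positive, matching the sign convention for strategic complements stated above.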

  14. There is a minor technical subtlety here insofar as the payoff difference approaches zero as \(\varepsilon \) goes to zero. However, as shown in the Appendix with the help of a careful limit argument, the payoff difference approaches zero from above because the corresponding cross-derivative is positive. This turns out to be sufficient to settle the trade-off for a sufficiently small but still positive \(\varepsilon \).

  15. In contrast to the previous section, we will allow for edges that are not necessarily of equal length.

  16. As will be explained in Sect. 6, no additional conditions can be obtained by considering strategy profiles at which fewer than two first-order conditions hold. Similarly, allowing for more complicated local paths does not tighten our conditions. For a discussion of this point, see also the working paper version (Ewerhart 2017).

  17. For a recent application, see Nocke and Schutz (2018).

  18. Making use of player j’s first-order condition

    $$\begin{aligned} x_{j}^{\ast}\frac{\partial \phi (x_{N}^{\ast})}{\partial x_{j}}+\phi (x_{N} ^{\ast})=0, \end{aligned}$$
    (14)

    we see that

    $$\begin{aligned} w_{i}(x_{N}^{\ast})\cdot \frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}=\frac{1}{x_{i}^{\ast}}\cdot \left\{ x_{i}^{\ast} \frac{\partial ^{2}\phi (x_{N}^{\ast})}{\partial x_{j}\partial x_{i}} +\frac{\partial \phi (x_{N}^{\ast})}{\partial x_{j}}\right\} =\frac{\partial ^{2}\phi (x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}-\frac{\phi (x_{N}^{\ast})}{x_{j}^{\ast}x_{i}^{\ast}} \end{aligned}$$
    (15)

    is symmetric with respect to i and j. Thus, (10) indeed holds for the suggested local weights.
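The symmetry in (15) can be verified numerically for a concrete specification. The sketch below assumes, as the first-order condition (14) suggests, payoffs of the form \(u_{i}=x_{i}\,\phi (x_{N})\); the linear inverse demand and the weights \(w_{i}=1/x_{i}^{\ast}\) are illustrative choices read off from (15), not taken verbatim from the paper:

```python
# Spot check of the symmetry in (15), assuming payoffs u_i = x_i * phi(x_N)
# with a linear phi (illustrative specification, not from the paper).
A = 3.0
phi = lambda x1, x2: A - x1 - x2
xs = A / 3.0  # symmetric interior point where (14) holds for both players

def mixed(u, h=1e-4):
    """Central-difference estimate of d^2 u / (dx_1 dx_2) at (xs, xs)."""
    return (u(xs + h, xs + h) - u(xs + h, xs - h)
            - u(xs - h, xs + h) + u(xs - h, xs - h)) / (4 * h * h)

u1 = lambda x1, x2: x1 * phi(x1, x2)
u2 = lambda x1, x2: x2 * phi(x1, x2)

w = 1.0 / xs  # hypothetical local weight w_i = 1/x_i* suggested by (15)
lhs = w * mixed(u1)  # w_1 * d2u_1 / dx_2 dx_1
rhs = w * mixed(u2)  # w_2 * d2u_2 / dx_1 dx_2
print(round(lhs, 6), round(rhs, 6))  # equal, as required for (10)
```

Both weighted cross-derivatives coincide (here both equal \(\partial ^{2}\phi /\partial x_{j}\partial x_{i}-\phi /(x_{i}^{\ast}x_{j}^{\ast})=-1\)), in line with equation (15).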

  19. As usual, the derivative in (21) is understood to be one-sided if \(x_{j}^{\ast}={\underline{x}}_{j}\) or \(x_{j}^{\ast}={\overline{x}}_{j}\).

  20. Further generalizations are left for future work.

  21. Additional illustrations can be found in the working paper (Ewerhart 2017).

  22. This example likewise illustrates the local nature of our conditions. Indeed, for \(x_{i}\ne x_{j}\), the product of the cross-derivatives is negative in the lottery contest.

  23. If \(\theta _{ij}=0\) for some \(i\ne j\), then the necessary and sufficient conditions (31, 32) identified below must be complemented by additional conditions analogous to (32) for any number \(m\in \{4,\ldots ,n\}\) of players. Moreover, the functional form of the potential may differ somewhat from (33). In a nutshell, one first notes that, for a generalized ordinal potential to exist in the Bertrand game, it is necessary that, for any \(i,j\in N\) with \(i\ne j\), it holds that \(\theta _{ij}=0\) if and only if \(\theta _{ji}=0\). Then, in any connected component of the directed weighted graph defined by the bilateral price externalities \(\{\theta _{ij}\}\), one chooses a spanning tree and defines a potential weight for a firm (or node) analogous to (35) by taking the product over all absolute price externalities represented as weights in the tree directed toward the firm. The details are omitted.

  24. This is the case, for instance, if \(\max _{j\ne i}\left| \theta _{ij}\right| \) is sufficiently small.

  25. As usual, sgn(.) denotes the sign function, with sgn\((z)=1\) if \(z>0\), sgn\((z)=0\) if \(z=0\), and sgn\((z)=-\,1\) if \(z<0\).

  26. In addition, the example shows that our necessary conditions may indeed be indicative of sufficient conditions.

  27. A multi-dimensional variant of the Monderer–Shapley condition for exact potential games can be found in Deb (2008, 2009).

  28. See Neyman (1997), Ui (2008), and Hofbauer and Sandholm (2009).

  29. Smooth symmetric games admit at least one symmetric equilibrium under standard assumptions (Moulin 1986, p. 115). However, there are also large classes of economically relevant symmetric games that admit only asymmetric pure-strategy Nash equilibria (cf. Amir et al. 2010).

  30. The author is presently exploring the validity of these conjectures.


Acknowledgements

The idea for this work originated during an inspiring conversation with Nikolai Kukushkin. Two anonymous referees provided valuable feedback on an earlier version. Material contained herein has been presented at the Department of Business of the University of Zurich and at the National University of Singapore. For useful discussions, I am grateful to Rabah Amir, Georgy Egorov, Josef Hofbauer, Felix Kübler, Stephen Morris, Georg Nöldeke, Marek Pycia, Karl Schmedders, and Satoru Takahashi.

Corresponding author

Correspondence to Christian Ewerhart.


Appendix: Proofs


Proof of Proposition 1

By contradiction. Suppose that, at some profile \(x_{N}^{\ast}\in X_{N}\), and for some players i and j with \(i\ne j\), we have

$$\begin{aligned}&x_{i}^{\ast}\in {\mathring{X}} _{i}, \quad x_{j}^{\ast}\in {\mathring{X}} _{j}, \quad \frac{\partial u_{i}(x_{N}^{\ast})}{\partial x_{i}} =\frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}=0,\quad {\hbox {and}} \end{aligned}$$
(36)
$$\begin{aligned}&\frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}} \cdot \frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j}} <0. \end{aligned}$$
(37)

By renaming players, if necessary, we may assume w.l.o.g. that

$$\begin{aligned} \frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}>0>\frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j} }, \end{aligned}$$
(38)

which corresponds to the case shown in Fig. 1. It is claimed now that, for any sufficiently small \(\varepsilon >0\), the payoff difference corresponding to the upper side of the square satisfies

$$\begin{aligned} \varDelta _{i}^{+}(\varepsilon )\equiv u_{i}\left( x_{i}^{\ast}+\varepsilon ,x_{j}^{\ast}+\varepsilon ,x_{-i,j}^{\ast}\right) -u_{i} \left( x_{i}^{\ast}-\varepsilon ,x_{j}^{\ast}+\varepsilon ,x_{-i,j}^{\ast}\right) >0. \end{aligned}$$
(39)

To prove this, we determine the second-order Taylor approximation of \(\varDelta _{i}^{+}(\varepsilon )\) at \(\varepsilon =0\). Writing \(f(\varepsilon )\) for \(\varDelta _{i}^{+}(\varepsilon )\), our differentiability assumptions combined with Taylor’s theorem imply that there is a remainder term r(.) with \(\lim _{\varepsilon \rightarrow 0}r(\varepsilon )=0\) such that for any sufficiently small \(\varepsilon >0\),

$$\begin{aligned} f(\varepsilon )=f(0)+f^{\prime }(0)\varepsilon +\frac{1}{2}f^{\prime \prime }(0)\varepsilon ^{2}+r(\varepsilon )\varepsilon ^{2}. \end{aligned}$$
(40)

Clearly, \(f(0)=0\). As for the first derivative \(f^{\prime }(0)\), one obtains

$$\begin{aligned} \frac{\partial \varDelta _{i}^{+}(\varepsilon )}{\partial \varepsilon }&=\left\{ \frac{\partial u_{i}(x_{i}^{\ast}+\varepsilon ,x_{j}^{\ast}+\varepsilon ,x_{-i,j}^{\ast})}{\partial x_{i}}+\frac{\partial u_{i}(x_{i}^{\ast}+\varepsilon ,x_{j}^{\ast}+\varepsilon ,x_{-i,j}^{\ast})}{\partial x_{j} }\right\} \nonumber \\&\quad -\left\{ -\frac{\partial u_{i}(x_{i}^{\ast}-\varepsilon ,x_{j}^{\ast}+\varepsilon ,x_{-i,j}^{\ast})}{\partial x_{i}}+\frac{\partial u_{i} (x_{i}^{\ast}-\varepsilon ,x_{j}^{\ast}+\varepsilon ,x_{-i,j}^{\ast})}{\partial x_{j}}\right\} . \end{aligned}$$
(41)

Evaluating at \(\varepsilon =0\), and subsequently exploiting the necessary first-order condition for player i at the interior equilibrium \(x_{N}^{\ast} \), we find

$$\begin{aligned} f^{\prime }(0)=\frac{\partial \varDelta _{i}^{+}(0)}{\partial \varepsilon } =2\cdot \frac{\partial u_{i}(x_{N}^{\ast})}{\partial x_{i}}=0. \end{aligned}$$
(42)

Next, consider the second derivative of \(\varDelta _{i}^{+}(\varepsilon )\) at \(\varepsilon =0\), i.e.,

$$\begin{aligned} \frac{\partial ^{2}\varDelta _{i}^{+}(0)}{\partial \varepsilon ^{2}}&=\left\{ \dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{i}^{2}}+\dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}+\dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j}}+\dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}^{2}}\right\} \end{aligned}$$
(43)
$$\begin{aligned}&\qquad -\left\{ \dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{i}^{2}} -\dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}} -\dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j}} +\dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}^{2}}\right\} \nonumber \\&\quad =2\cdot \dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}+2\cdot \dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j}}. \end{aligned}$$
(44)

Invoking Schwarz’s theorem regarding the equality of cross-derivatives for twice continuously differentiable functions, and subsequently using (38), one obtains

$$\begin{aligned} f^{\prime \prime }(0)=\frac{\partial ^{2}\varDelta _{i}^{+}(0)}{\partial \varepsilon ^{2}}=4\cdot \dfrac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}>0. \end{aligned}$$
(45)

In sum, we have shown that \(f^{\prime }(0)=f(0)=0\) and \(f^{\prime \prime }(0)>0\). Thus, using (40), it follows that \(\varDelta _{i}^{+}(\varepsilon )>0\) for any sufficiently small \(\varepsilon >0\). Analogous arguments can be used to deal with the other three sides of the square. Specifically, one defines

$$\begin{aligned} \varDelta _{j}^{+}(\varepsilon )&=u_{j}(x_{j}^{\ast}-\varepsilon ,x_{i}^{\ast}+\varepsilon ,x_{-i,j}^{\ast})-u_{j}(x_{j}^{\ast}+\varepsilon ,x_{i}^{\ast}+\varepsilon ,x_{-i,j}^{\ast}), \end{aligned}$$
(46)
$$\begin{aligned} \varDelta _{i}^{-}(\varepsilon )&=u_{i}(x_{i}^{\ast}-\varepsilon ,x_{j}^{\ast}-\varepsilon ,x_{-i,j}^{\ast})-u_{i}(x_{i}^{\ast}+\varepsilon ,x_{j}^{\ast}-\varepsilon ,x_{-i,j}^{\ast}), \end{aligned}$$
(47)
$$\begin{aligned} \varDelta _{j}^{-}(\varepsilon )&=u_{j}(x_{j}^{\ast}+\varepsilon ,x_{i}^{\ast}-\varepsilon ,x_{-i,j}^{\ast})-u_{j}(x_{j}^{\ast}-\varepsilon ,x_{i}^{\ast}-\varepsilon ,x_{-i,j}^{\ast}), \end{aligned}$$
(48)

and now readily verifies that

$$\begin{aligned} \frac{\partial \varDelta _{j}^{+}(0)}{\partial \varepsilon }&=(-\,2)\cdot \frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}=0, \end{aligned}$$
(49)
$$\begin{aligned} \frac{\partial \varDelta _{i}^{-}(0)}{\partial \varepsilon }&=(-\,2)\cdot \frac{\partial u_{i}(x_{N}^{\ast})}{\partial x_{i}}=0, \end{aligned}$$
(50)
$$\begin{aligned} \frac{\partial \varDelta _{j}^{-}(0)}{\partial \varepsilon }&=2\cdot \frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}=0, \end{aligned}$$
(51)

and that

$$\begin{aligned} \frac{\partial ^{2}\varDelta _{j}^{+}\left( 0\right) }{\partial \varepsilon ^{2}}&=\left( -\,4\right) \cdot \dfrac{\partial ^{2}u_{j}\left( x_{N}^{\ast}\right) }{\partial x_{i}\partial x_{j}}>0, \end{aligned}$$
(52)
$$\begin{aligned} \frac{\partial ^{2}\varDelta _{i}^{-}\left( 0\right) }{\partial \varepsilon ^{2}}&=4\cdot \dfrac{\partial ^{2}u_{i}\left( x_{N}^{\ast}\right) }{\partial x_{j}\partial x_{i} }>0, \end{aligned}$$
(53)
$$\begin{aligned} \frac{\partial ^{2}\varDelta _{j}^{-}\left( 0\right) }{\partial \varepsilon ^{2}}&=\left( -\,4\right) \cdot \dfrac{\partial ^{2}u_{j}\left( x_{N}^{\ast}\right) }{\partial x_{i}\partial x_{j}}>0. \end{aligned}$$
(54)

It follows that \(\varDelta _{i}^{+}(\varepsilon )>0\), \(\varDelta _{j}^{+} (\varepsilon )>0\), \(\varDelta _{i}^{-}(\varepsilon )>0\), and \(\varDelta _{j} ^{-}(\varepsilon )>0\) all hold for \(\varepsilon >0\) small enough. But then, the finite sequence (6) is a strict improvement cycle, which is incompatible with the existence of a generalized ordinal potential by Lemma 1. □
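The improvement cycle constructed in this proof can be checked numerically on a minimal example. The two-player payoffs below are illustrative (not from the paper); they satisfy (36)–(38) at the interior point \((0,0)\), with cross-derivatives \(+1\) and \(-1\):

```python
# Illustrative two-player game (not from the paper) with an interior point
# x* = (0, 0) where both first-order conditions hold and the cross-derivatives
# have opposite signs: d2u1/dx2dx1 = +1 > 0 > d2u2/dx1dx2 = -1.
u1 = lambda x1, x2: -0.5 * x1**2 + x1 * x2
u2 = lambda x1, x2: -0.5 * x2**2 - x2 * x1

eps = 1e-2
# The four payoff differences (39) and (46)-(48) along the sides of the square:
d_i_plus  = u1( eps,  eps) - u1(-eps,  eps)   # player 1, upper side
d_j_plus  = u2( eps, -eps) - u2( eps,  eps)   # player 2, right side
d_i_minus = u1(-eps, -eps) - u1( eps, -eps)   # player 1, lower side
d_j_minus = u2(-eps,  eps) - u2(-eps, -eps)   # player 2, left side

# All four strict gains yield a strict improvement cycle, which is
# incompatible with a generalized ordinal potential (Lemma 1).
print(d_i_plus, d_j_plus, d_i_minus, d_j_minus)  # all positive (here 2*eps^2)
```

In this example each payoff difference equals \(2\varepsilon ^{2}>0\), exactly as predicted by the second-order Taylor expansion (40).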

Proof of Proposition 2

Suppose that the modified Jacobian \(J=J(x_{N})\) is semipositive. Then, by definition, there exists a vector \(\lambda _{N}=(\lambda _{1},\ldots ,\lambda _{n})^{T}\in {\mathbb{R}} ^{n}\) with \(\lambda _{N}>0\) such that \(J\lambda _{N}>0\). Consider the finite sequence

$$\begin{aligned}&\cdots \rightarrow x_{N}^{(1,+)}(\varepsilon )=(x_{1}^{\ast}+\lambda _{1}\varepsilon ,x_{2}^{\ast}-\lambda _{2}\varepsilon ,x_{3}^{\ast}-\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast}-\lambda _{n-1}\varepsilon ,x_{n}^{\ast}-\lambda _{n}\varepsilon )\nonumber \\&\quad \rightarrow x_{N}^{(2,+)}(\varepsilon )=(x_{1}^{\ast}+\lambda _{1} \varepsilon ,x_{2}^{\ast}+\lambda _{2}\varepsilon ,x_{3}^{\ast}-\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast}-\lambda _{n-1}\varepsilon ,x_{n}^{\ast}-\lambda _{n}\varepsilon )\rightarrow \nonumber \\&\quad \vdots \nonumber \\&\quad \rightarrow x_{N}^{(n,+)}(\varepsilon )=(x_{1}^{\ast}+\lambda _{1} \varepsilon ,x_{2}^{\ast}+\lambda _{2}\varepsilon ,x_{3}^{\ast}+\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast}+\lambda _{n-1}\varepsilon ,x_{n}^{\ast}+\lambda _{n}\varepsilon )\nonumber \\&\quad \rightarrow x_{N}^{(1,-)}(\varepsilon )=(x_{1}^{\ast}-\lambda _{1} \varepsilon ,x_{2}^{\ast}+\lambda _{2}\varepsilon ,x_{3}^{\ast}+\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast}+\lambda _{n-1}\varepsilon ,x_{n}^{\ast}+\lambda _{n}\varepsilon )\nonumber \\&\quad \rightarrow x_{N}^{(2,-)}(\varepsilon )=(x_{1}^{\ast}-\lambda _{1} \varepsilon ,x_{2}^{\ast}-\lambda _{2}\varepsilon ,x_{3}^{\ast}+\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast}+\lambda _{n-1}\varepsilon ,x_{n}^{\ast}+\lambda _{n}\varepsilon )\rightarrow \nonumber \\&\quad \vdots \nonumber \\&\quad \rightarrow x_{N}^{(n,-)}(\varepsilon )=(x_{1}^{\ast}-\lambda _{1} \varepsilon ,x_{2}^{\ast}-\lambda _{2}\varepsilon ,x_{3}^{\ast}-\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast}-\lambda _{n-1}\varepsilon ,x_{n}^{\ast}-\lambda _{n}\varepsilon )\rightarrow \cdots , \end{aligned}$$
(55)

where \(\varepsilon >0\) is a small constant as before. Figure 2 illustrates this path for \(n=3\), where the rectangular-shaped box has sides of respective length \(\varepsilon _{i}=\lambda _{i}\varepsilon \) for \(i=1,2,3\). It is claimed that, for any \(\varepsilon >0\) sufficiently small, the following four conditions hold:

  1. 1.

    player 1’s payoff at \(x_{N}^{(1,+)}(\varepsilon )\) is strictly higher than at \(x_{N}^{(n,-)}(\varepsilon )\);

  2. 2.

    for \(i=2,\ldots ,n\), player i’s payoff at \(x_{N}^{(i,+)} (\varepsilon )\) is strictly higher than at \(x_{N}^{(i-1,+)}(\varepsilon )\);

  3. 3.

    player 1’s payoff at \(x_{N}^{(1,-)}(\varepsilon )\) is strictly higher than at \(x_{N}^{(n,+)}(\varepsilon )\);

  4. 4.

    for \(i=2,\ldots ,n\), player i’s payoff at \(x_{N}^{(i,-)} (\varepsilon )\) is strictly higher than at \(x_{N}^{(i-1,-)}(\varepsilon )\).

To establish claim (1), we proceed as in the proof of Proposition 1, and consider the first two derivatives of the payoff difference

$$\begin{aligned} \varDelta ^{(1,+)}(\varepsilon )&=u_{1}(x_{N}^{(1,+)}(\varepsilon ))-u_{1} (x_{N}^{(n,-)}(\varepsilon )) \end{aligned}$$
(56)
$$\begin{aligned}&=u_{1}(x_{1}^{\ast}+\lambda _{1}\varepsilon ,x_{2}^{\ast}-\lambda _{2}\varepsilon ,x_{3}^{\ast}-\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast} -\lambda _{n-1}\varepsilon ,x_{n}^{\ast}-\lambda _{n}\varepsilon )\nonumber \\&\quad -\,u_{1}(x_{1}^{\ast}-\lambda _{1}\varepsilon ,x_{2}^{\ast}-\lambda _{2}\varepsilon ,x_{3}^{\ast}-\lambda _{3}\varepsilon ,\ldots ,x_{n-1}^{\ast} -\lambda _{n-1}\varepsilon ,x_{n}^{\ast}-\lambda _{n}\varepsilon ) \end{aligned}$$
(57)

at \(\varepsilon =0\). The first derivative of \(\varDelta ^{(1,+)}(\varepsilon )\) at \(\varepsilon =0\) is given by

$$\begin{aligned} \frac{\partial \varDelta ^{(1,+)}(0)}{\partial \varepsilon }&=\left( \lambda _{1}\frac{\partial u_{1}(x_{N}^{\ast})}{\partial x_{1}}-\lambda _{2} \frac{\partial u_{1}(x_{N}^{\ast})}{\partial x_{2}}-\cdots -\lambda _{n} \frac{\partial u_{1}(x_{N}^{\ast})}{\partial x_{n}}\right) \nonumber \\&\quad -\left( -\lambda _{1}\frac{\partial u_{1}(x_{N}^{\ast})}{\partial x_{1} }-\lambda _{2}\frac{\partial u_{1}(x_{N}^{\ast})}{\partial x_{2}} -\cdots -\lambda _{n}\frac{\partial u_{1}(x_{N}^{\ast})}{\partial x_{n}}\right) \end{aligned}$$
(58)
$$\begin{aligned}&=2\lambda _{1}\frac{\partial u_{1}(x_{N}^{\ast})}{\partial x_{1}}. \end{aligned}$$
(59)

Hence, from player 1’s first-order condition,

$$\begin{aligned} \frac{\partial \varDelta ^{(1,+)}(0)}{\partial \varepsilon }=0. \end{aligned}$$
(60)

Next, one considers the second derivative of \(\varDelta ^{(1,+)}(\varepsilon )\) at \(\varepsilon =0\), i.e.,

$$\begin{aligned} \frac{\partial ^{2}\varDelta ^{(1,+)}(0)}{\partial \varepsilon ^{2}}&=\left\{ (\lambda _{1})^{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{1}^{2} }-\lambda _{2}\lambda _{1}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{2}\partial x_{1}}-\cdots -\lambda _{n}\lambda _{1}\dfrac{\partial ^{2}u_{1} (x_{N}^{\ast})}{\partial x_{n}\partial x_{1}}\right. \nonumber \\&\qquad -\lambda _{1}\lambda _{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{1}\partial x_{2}}+(\lambda _{2})^{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast} )}{\partial x_{2}^{2}}+\cdots +\lambda _{n}\lambda _{2}\dfrac{\partial ^{2} u_{1}(x_{N}^{\ast})}{\partial x_{n}\partial x_{2}}\nonumber \\&\qquad \vdots \nonumber \\&\qquad \left. -\lambda _{1}\lambda _{n}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast} )}{\partial x_{1}\partial x_{n}}+\lambda _{2}\lambda _{n}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{2}\partial x_{n}}+\cdots +(\lambda _{n} )^{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{n}^{2}}\right\} \nonumber \\&\qquad -\left\{ (\lambda _{1})^{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{1}^{2}}+\lambda _{2}\lambda _{1}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast} )}{\partial x_{2}\partial x_{1}}+\cdots +\lambda _{n}\lambda _{1}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{n}\partial x_{1}}\right. \nonumber \\&\qquad +\lambda _{1}\lambda _{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{1}\partial x_{2}}+(\lambda _{2})^{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast} )}{\partial x_{2}^{2}}+\cdots +\lambda _{n}\lambda _{2}\dfrac{\partial ^{2} u_{1}(x_{N}^{\ast})}{\partial x_{n}\partial x_{2}}\nonumber \\&\qquad \vdots \nonumber \\&\qquad +\left. \lambda _{1}\lambda _{n}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast} )}{\partial x_{1}\partial x_{n}}+\lambda _{2}\lambda _{n}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{2}\partial x_{n}}+\cdots +(\lambda _{n} )^{2}\dfrac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{n}^{2}}\right\} . \end{aligned}$$
(61)

Collecting terms, one obtains

$$\begin{aligned} \frac{\partial ^{2}\varDelta ^{(1,+)}(0)}{\partial \varepsilon ^{2}}=2\lambda _{1}\left\{ -\lambda _{2}\frac{\partial ^{2}u_{1}(x_{N}^{\ast})}{\partial x_{2}\partial x_{1}}-\lambda _{3}\frac{\partial ^{2}u_{1}(x_{N}^{\ast} )}{\partial x_{3}\partial x_{1}}-\cdots -\lambda _{n}\frac{\partial ^{2}u_{1} (x_{N}^{\ast})}{\partial x_{n}\partial x_{1}}\right\} . \end{aligned}$$
(62)

Thus, using \(\lambda _{1}>0\), and noting that the expression in the curly brackets corresponds to the first component of the vector \(J\lambda _{N}>0\), one arrives at

$$\begin{aligned} \frac{\partial ^{2}\varDelta ^{(1,+)}(0)}{\partial \varepsilon ^{2}}>0. \end{aligned}$$
(63)

It follows that \(\varDelta ^{(1,+)}(\varepsilon )>0\) for any \(\varepsilon >0\) sufficiently small, which proves claim (1). To verify claims (2) through (4), define payoff differences

$$\begin{aligned} \varDelta ^{(i,+)}(\varepsilon )&=u_{i}(x_{N}^{(i,+)}(\varepsilon ))-u_{i} (x_{N}^{(i-1,+)}(\varepsilon ))\quad (i=2,\ldots ,n), \end{aligned}$$
(64)
$$\begin{aligned} \varDelta ^{(1,-)}(\varepsilon )&=u_{1}(x_{N}^{(1,-)}(\varepsilon ))-u_{1} (x_{N}^{(n,+)}(\varepsilon )), \end{aligned}$$
(65)
$$\begin{aligned} \varDelta ^{(i,-)}(\varepsilon )&=u_{i}(x_{N}^{(i,-)}(\varepsilon ))-u_{i} (x_{N}^{(i-1,+)}(\varepsilon ))\quad (i=2,\ldots ,n). \end{aligned}$$
(66)

Using players’ necessary first-order conditions, it is straightforward to validate that

$$\begin{aligned} \frac{\partial \varDelta ^{(i,+)}(0)}{\partial \varepsilon }&=0\quad (i=2,\ldots ,n), \end{aligned}$$
(67)
$$\begin{aligned} \frac{\partial \varDelta ^{(1,-)}(0)}{\partial \varepsilon }&=0, \end{aligned}$$
(68)
$$\begin{aligned} \frac{\partial \varDelta ^{(i,-)}(0)}{\partial \varepsilon }&=0\quad (i=2,\ldots ,n). \end{aligned}$$
(69)

Moreover, for \(i=2,\ldots ,n\), calculations analogous to (61, 62) yield

$$\begin{aligned}&\frac{\partial ^{2}\varDelta ^{(i,+)}(0)}{\partial \varepsilon ^{2}} =2\lambda _{i}\left\{ \lambda _{1}\frac{\partial ^{2}u_{i}(x_{N}^{\ast} )}{\partial x_{1}\partial x_{i}}+\cdots +\lambda _{i-1}\frac{\partial ^{2} u_{i}(x_{N}^{\ast})}{\partial x_{i-1}\partial x_{i}}\right. \end{aligned}$$
(70)
$$\begin{aligned}&\quad \quad \left. -\lambda _{i+1}\frac{\partial ^{2}u_{i} (x_{N}^{\ast})}{\partial x_{i+1}\partial x_{i}}-\cdots -\lambda _{n}\frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{n}\partial x_{i}}\right\} \nonumber \\&\quad >0, \end{aligned}$$
(71)

where the expression in the curly brackets corresponds to the i-th component of the vector \(J\lambda _{N}>0\). Finally, since replacing \(\varepsilon \) with \(-\varepsilon \) leaves second derivatives at \(\varepsilon =0\) unchanged, it follows that

$$\begin{aligned} \frac{\partial ^{2}\varDelta ^{(i,-)}(0)}{\partial \varepsilon ^{2}}=\frac{\partial ^{2}\varDelta ^{(i,+)}(0)}{\partial \varepsilon ^{2}} \quad (i=1,\ldots ,n). \end{aligned}$$
(72)

In sum, this proves claims (2) through (4). Thus, there exists a strict improvement cycle in the generalized ordinal potential game \(\varGamma \). Since this is impossible, the proposition follows. □
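Semipositivity of a modified Jacobian can be certified numerically. A minimal sketch (my own, not from the paper): if \(J\) is invertible and \(\lambda =J^{-1}\mathbf{1}\) is strictly positive, then \(J\lambda =\mathbf{1}>0\), so \(J\) is semipositive; singular and borderline cases are not covered here and fall under condition 2 of Lemma 2 below.

```python
import numpy as np

def semipositivity_certificate(J):
    """Return lam > 0 with J @ lam = 1 (a certificate that J is
    semipositive), or None if this particular certificate fails.
    Sufficient test only: it misses semipositive matrices whose
    certificates are not of the form J^{-1} @ ones."""
    try:
        lam = np.linalg.solve(np.asarray(J, float), np.ones(len(J)))
    except np.linalg.LinAlgError:
        return None
    return lam if np.all(lam > 0) else None

print(semipositivity_certificate(np.eye(3)))   # [1. 1. 1.]: semipositive
print(semipositivity_certificate(-np.eye(3)))  # None: J @ lam < 0 for lam > 0
```

For the identity matrix the certificate \(\lambda =\mathbf{1}\) is found immediately, while for \(-I\) no positive \(\lambda \) with \(J\lambda >0\) exists.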

The three lemmas below will be used in the proof of Theorem 1. Following the literature, we will call a square matrix \(A\in {\mathbb{R}} ^{n\times n}\) inverse nonnegative if the matrix inverse \(A^{-1}\) exists and if, in addition, all entries of \(A^{-1}\) are nonnegative. The following lemma provides a useful recursive characterization of semipositivity.

Lemma 2

(Johnson et al. 1994). A square matrix \(A\in {\mathbb{R}} ^{n\times n}\) is semipositive if and only if at least one of the following two conditions holds:

  1. 1.

    A is inverse nonnegative;

  2. 2.

    there exists \(m\in \{1,\ldots ,n-1\}\) and a submatrix \({\widehat{A}}\in {\mathbb{R}} ^{n\times m}\) obtained from A via deletion of \(n-m\) columns, such that all \(m\times m\) submatrices of \({\widehat{A}}\) are semipositive.
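Condition 1 of the lemma is straightforward to test numerically; a minimal sketch (illustrative, not from the paper):

```python
import numpy as np

def is_inverse_nonnegative(A, tol=1e-12):
    """Condition 1 of Lemma 2: A is invertible and A^{-1} has no
    negative entries (up to a small numerical tolerance)."""
    try:
        A_inv = np.linalg.inv(np.asarray(A, float))
    except np.linalg.LinAlgError:
        return False  # singular matrices are not inverse nonnegative
    return bool((A_inv >= -tol).all())

# A diagonally dominant matrix with nonpositive off-diagonal entries
# (an M-matrix) is a standard example of an inverse-nonnegative matrix.
A = [[2.0, -1.0],
     [-1.0, 2.0]]
print(is_inverse_nonnegative(A))            # True: A^{-1} = [[2,1],[1,2]]/3
print(is_inverse_nonnegative([[0.0, 1.0],
                              [0.0, 1.0]])) # False: singular
```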

Using the lemma above, we may derive the following implication of Proposition 2.

Lemma 3

Suppose that the smooth n-player game \(\varGamma \) admits a generalized ordinal potential. Then, at any profile \(x_{N}^{\ast}\), and for any set \(\{i,j,k\}\subseteq N\) of pairwise different players,

$$\begin{aligned}&\left\{ x_{i}^{\ast}\in {\mathring{X}} _{i},\; x_{j}^{\ast}\in {\mathring{X}} _{j},\; x_{k}^{\ast}\in {\mathring{X}} _{k},\quad {\text {and}}\quad \frac{\partial u_{i}(x_{N}^{\ast} )}{\partial x_{i}}=\frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}} =\frac{\partial u_{k}(x_{N}^{\ast})}{\partial x_{k}}=0\right\} \nonumber \\&\quad \Rightarrow \frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}\cdot \frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{k}\partial x_{j}}\cdot \frac{\partial ^{2}u_{k}(x_{N}^{\ast})}{\partial x_{i}\partial x_{k}}=\frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j} }\cdot \frac{\partial ^{2}u_{k}(x_{N}^{\ast})}{\partial x_{j}\partial x_{k} }\cdot \frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{k}\partial x_{i} }. \end{aligned}$$
(73)

Proof

Fix some profile \(x_{N}^{\ast}\) and a set \(\{i,j,k\}\subseteq N\) of pairwise different players such that

$$\begin{aligned}&x_{i}^{\ast} \in {\mathring{X}} _{i},\quad x_{j}^{\ast}\in {\mathring{X}} _{j},\quad x_{k}^{\ast}\in {\mathring{X}} _{k}, \quad {\text {and}} \end{aligned}$$
(74)
$$\begin{aligned}&\frac{\partial u_{i}(x_{N}^{\ast})}{\partial x_{i}} =\frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}=\frac{\partial u_{k}(x_{N}^{\ast} )}{\partial x_{k}}=0. \end{aligned}$$
(75)

Using the notation

$$\begin{aligned} \chi _{ij}=\frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i} },\quad \chi _{jk}=\frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{k}\partial x_{j}},\ldots \end{aligned}$$
(76)

for cross-derivatives, it needs to be shown that

$$\begin{aligned} \chi _{ij}\cdot \chi _{jk}\cdot \chi _{ki}=\chi _{ji}\cdot \chi _{kj}\cdot \chi _{ik}. \end{aligned}$$
(77)

Suppose first that \(\varGamma \) exhibits weak strategic complements at \(x_{N}^{\ast}\), i.e.,

$$\begin{aligned} \chi _{ij}\ge 0, \quad \chi _{ji}\ge 0, \quad \chi _{ik}\ge 0,\quad \chi _{ki}\ge 0,\quad \chi _{jk}\ge 0,\quad {\text {and}}\quad \chi _{kj}\ge 0. \end{aligned}$$
(78)

Consider now a small circular path along the edges of a small three-dimensional rectangular-shaped box around \(x_{N}^{\ast}\). Along the path, players i, j, and k move in this order, with i and k initially increasing their strategies, while j initially decreases her strategy. Since this corresponds to flipping around player j’s strategy space, all cross-derivatives involving player j change sign, so that the corresponding modified Jacobian reads

$$\begin{aligned} J_{3}=\left( \begin{array}{ccc} 0 &{}\quad \chi _{ij} &{}\quad -\chi _{ik}\\ -\chi _{ji} &{}\quad 0 &{}\quad \chi _{jk}\\ \chi _{ki} &{}\quad -\chi _{kj} &{}\quad 0 \end{array} \right) . \end{aligned}$$
(79)

By Proposition 2, \(J_{3}\) cannot be inverse nonnegative. To prove (77), it suffices to show that the determinant of \(J_{3}\),

$$\begin{aligned} \left| J_{3}\right| =\chi _{ij}\chi _{jk}\chi _{ki}-\chi _{ji}\chi _{kj}\chi _{ik}, \end{aligned}$$
(80)

vanishes. To derive a contradiction, suppose first that \(\left| J_{3}\right| >0\). Then, from weak strategic complements at \(x_{N}^{\ast}\), all the entries of the matrix inverse of \(J_{3}\),

$$\begin{aligned} (J_{3})^{-1}=\frac{1}{\left| J_{3}\right| } \left( \begin{array}{ccc} \chi _{jk}\chi _{kj} &{}\quad \chi _{ik}\chi _{kj} &{}\quad \chi _{ij}\chi _{jk}\\ \chi _{ki}\chi _{jk} &{}\quad \chi _{ik}\chi _{ki} &{}\quad \chi _{ji}\chi _{ik}\\ \chi _{ji}\chi _{kj} &{}\quad \chi _{ij}\chi _{ki} &{}\quad \chi _{ij}\chi _{ji} \end{array} \right) , \end{aligned}$$
(81)

are nonnegative, in contradiction to the fact that \(J_{3}\) is not inverse nonnegative. Hence, \(\left| J_{3}\right| \le 0\). Suppose next that \(\left| J_{3}\right| <0\). Then, by running through the above-considered path in the opposite direction (i.e., by exchanging the roles of players i and k), Proposition 2 implies that

$$\begin{aligned} {\widehat{J}}_{3}=\left( \begin{array}{ccc} 0 &{}\quad \chi _{kj} &{}\quad -\chi _{ki}\\ -\chi _{jk} &{}\quad 0 &{}\quad \chi _{ji}\\ \chi _{ik} &{}\quad -\chi _{ij} &{}\quad 0 \end{array} \right) \end{aligned}$$
(82)

is not inverse nonnegative. Clearly, the determinant of \({\widehat{J}}_{3}\) is

$$\begin{aligned} \left| {\widehat{J}}_{3}\right| =\chi _{kj}\chi _{ji}\chi _{ik}-\chi _{jk}\chi _{ij}\chi _{ki}=-\left| J_{3}\right| >0. \end{aligned}$$
(83)

Hence, using an expression for the matrix inverse analogous to (81), all entries of \(({\widehat{J}}_{3})^{-1}\) are seen to be nonnegative, in contradiction to the fact that \({\widehat{J}}_{3}\) is not inverse nonnegative. It follows that \(\left| J_{3}\right| =0\), which proves the claim in the case where \(\varGamma \) exhibits weak strategic complements at \(x_{N}^{\ast}\). The case of weak strategic substitutes, where

$$\begin{aligned} \chi _{ij}\le 0,\quad \chi _{ji}\le 0, \quad \chi _{ik}\le 0,\quad \chi _{ki}\le 0,\quad \chi _{jk}\le 0,\quad {\text {and}}\quad \chi _{kj}\le 0, \end{aligned}$$
(84)

is entirely analogous, and therefore omitted. We are now in a position to address the general case. From Proposition 1, we know that \(\varGamma \) exhibits pairwise weak strategic complements or substitutes at \(x_{N}^{\ast}\). Hence, up to a renaming of the players, there are only two remaining cases:

  1.

    Weak strategic complements at \(x_{N}^{\ast}\) between player i and each of players j and k, as well as weak strategic substitutes at \(x_{N}^{\ast}\) between players j and k;

  2.

    Weak strategic substitutes at \(x_{N}^{\ast}\) between player i and each of players j and k, as well as weak strategic complements at \(x_{N}^{\ast}\) between players j and k.

In either case, by flipping around the strategy space of player i, the game may be transformed into a game that exhibits either weak strategic substitutes at \(x_{N}^{\ast}\) or weak strategic complements at \(x_{N}^{\ast}\). Since the operation of flipping around individual strategy spaces does not affect the validity of Eq. (77), we find that the conclusion indeed holds in the general case. □
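The sign structure used in this argument is easy to verify numerically. The following sketch (illustrative, with randomly drawn positive values standing in for the cross-derivatives \(\chi \)) checks the determinant formula (80) and confirms that, whenever \(\left| J_{3}\right| >0\), every entry of the inverse in (81) is indeed nonnegative:

```python
# Sanity check of Eqs. (79)-(81): for nonnegative chi with |J3| > 0,
# every entry of (J3)^{-1} is nonnegative. All chi values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    c = rng.uniform(0.1, 2.0, size=(3, 3))  # c[a, b] plays the role of chi_{ab}
    (cij, cik), (cji, cjk), (cki, ckj) = (c[0, 1], c[0, 2]), (c[1, 0], c[1, 2]), (c[2, 0], c[2, 1])
    J3 = np.array([[0.0,  cij, -cik],
                   [-cji, 0.0,  cjk],
                   [cki, -ckj,  0.0]])
    det = np.linalg.det(J3)
    # Eq. (80): |J3| = chi_ij chi_jk chi_ki - chi_ji chi_kj chi_ik
    assert abs(det - (cij*cjk*cki - cji*ckj*cik)) < 1e-9
    if det > 1e-3:  # well-conditioned case: inverse entries must be nonnegative
        assert (np.linalg.inv(J3) >= -1e-10).all()  # cf. Eq. (81)
```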

Lemma 4

Suppose that the smooth n-player game \(\varGamma \) admits a generalized ordinal potential. Then, at any interior strategy profile \(x_{N}^{\ast}\) at which all first-order conditions hold, and for any set of pairwise distinct players \(\{i_{1},\ldots ,i_{m}\}\subseteq N\) with \(m\ge 3\), using the notation introduced in (76), it holds that

$$\begin{aligned} \chi _{i_{1}i_{2}}\cdot \chi _{i_{2}i_{3}}\cdot \ldots \cdot \chi _{i_{m-1}i_{m}} \cdot \chi _{i_{m}i_{1}}=\chi _{i_{2}i_{1}}\cdot \chi _{i_{3}i_{2}}\cdot \ldots \cdot \chi _{i_{m}i_{m-1}}\cdot \chi _{i_{1}i_{m}}, \end{aligned}$$
(85)

provided that \(\chi _{i_{1}i_{3}}\cdot \chi _{i_{3}i_{1}}\ne 0,\ldots ,\chi _{i_{1}i_{m-1}}\cdot \chi _{i_{m-1}i_{1}}\ne 0\).

Proof

The proof proceeds by induction. The case \(m=3\) follows directly from Lemma 3. Suppose that \(m\ge 4\), that the claim has been shown for every \(m^{\prime }\in \{3,4,\ldots ,m-1\}\), and let \(\{i_{1},i_{2},\ldots ,i_{m}\}\) be an arbitrary set of pairwise distinct players. Then, a consideration of the two subsets \(\{i_{1},i_{2},\ldots ,i_{m-1}\}\) and \(\{i_{1},i_{m-1},i_{m}\}\) = \(\{i_{m-1},i_{m},i_{1}\}\) shows that

$$\begin{aligned}&\chi _{i_{1}i_{2}}\cdot \ldots \cdot \chi _{i_{m-2}i_{m-1}}\cdot \chi _{i_{m-1}i_{1}} =\chi _{i_{2}i_{1}}\cdot \ldots \cdot \chi _{i_{m-1}i_{m-2}}\cdot \chi _{i_{1} i_{m-1}}, \quad {\text {and}} \end{aligned}$$
(86)
$$\begin{aligned}&\chi _{i_{m-1}i_{m}}\cdot \chi _{i_{m}i_{1}}\cdot \chi _{i_{1}i_{m-1}} =\chi _{i_{m}i_{m-1}}\cdot \chi _{i_{1}i_{m}}\cdot \chi _{i_{m-1}i_{1}}. \end{aligned}$$
(87)

Multiplying the two equations yields

$$\begin{aligned}&\left( \chi _{i_{1}i_{2}}\cdot \ldots \cdot \chi _{i_{m-2}i_{m-1}}\cdot \chi _{i_{m-1}i_{1}}\right) \cdot \left( \chi _{i_{m-1}i_{m}}\cdot \chi _{i_{m}i_{1}}\cdot \chi _{i_{1}i_{m-1}}\right) \nonumber \\&\quad =\left( \chi _{i_{2}i_{1}}\cdot \ldots \cdot \chi _{i_{m-1}i_{m-2}}\cdot \chi _{i_{1}i_{m-1}}\right) \cdot \left( \chi _{i_{m}i_{m-1}}\cdot \chi _{i_{1}i_{m}}\cdot \chi _{i_{m-1}i_{1}}\right) . \end{aligned}$$
(88)

By assumption, \(\chi _{i_{1}i_{m-1}}\cdot \chi _{i_{m-1}i_{1}}\ne 0\). Hence, eliminating these common nonzero factors, (88) implies

$$\begin{aligned} \chi _{i_{1}i_{2}}\cdot \ldots \cdot \chi _{i_{m-1}i_{m}}\cdot \chi _{i_{m}i_{1}} =\chi _{i_{2}i_{1}}\cdot \ldots \cdot \chi _{i_{m}i_{m-1}}\cdot \chi _{i_{1}i_{m} }, \end{aligned}$$
(89)

as claimed. This concludes the induction step, and therefore proves the lemma. □
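As a sanity check on the cycle condition (85), note that it holds automatically whenever the cross-derivatives are generated by positive weights, \(\chi _{ij}w_{i}=\chi _{ji}w_{j}\): traversing any closed cycle, the weights cancel. A minimal numerical sketch for \(m=5\), with all values hypothetical:

```python
# Verify Eq. (85) for a 5-cycle when chi comes from a weighted structure:
# chi[i][j]*w[i] == chi[j][i]*w[j] for symmetric a and positive weights w.
import numpy as np

rng = np.random.default_rng(1)
n = 5
w = rng.uniform(0.5, 2.0, size=n)          # positive weights (hypothetical)
a = rng.uniform(-1.0, 1.0, size=(n, n))
a = a + a.T                                 # symmetric cross-terms
chi = a / w[:, None]                        # chi[i, j] * w[i] = a[i, j] = chi[j, i] * w[j]

cycle = [0, 1, 2, 3, 4]
fwd = np.prod([chi[cycle[k], cycle[(k + 1) % n]] for k in range(n)])
bwd = np.prod([chi[cycle[(k + 1) % n], cycle[k]] for k in range(n)])
assert abs(fwd - bwd) < 1e-9 * max(1.0, abs(fwd))   # Eq. (85) for this cycle
```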

Proof of Theorem 1

Let \(x_{N}^{\ast}\) be an interior strategy profile such that all first-order conditions hold at \(x_{N}^{\ast}\) and such that \(\chi _{ij}\ne 0\) for all \(i\ne j\). We need to find constants \(w_{1}>0,\ldots ,w_{n}>0\) such that

$$\begin{aligned} \chi _{ij}w_{i}=\chi _{ji}w_{j}\quad (i,j\in N,j\ne i). \end{aligned}$$
(90)

It is claimed that

$$\begin{aligned} w_{i}=\left( \left| \chi _{12}\right| \cdot \ldots \cdot \left| \chi _{i-1i}\right| \right) \cdot \left( \left| \chi _{i+1i}\right| \cdot \ldots \cdot \left| \chi _{nn-1}\right| \right) \quad (i\in N) \end{aligned}$$
(91)

does the job, with the usual convention that any empty product equals one. Clearly, it suffices to check (90) for \(i<j\), because if \(i>j\), one may just exchange the two sides of Eq. (90). Suppose first that \(n=2\). Then, (91) implies \(w_{1}=\left| \chi _{21}\right| \) and \(w_{2}=\left| \chi _{12}\right| \), and the claim follows directly from Proposition 1. Suppose next that \(n\ge 3\). Splitting the product in the second bracket of (91), one obtains

$$\begin{aligned} \chi _{ij}w_{i}&=\left( \left| \chi _{12}\right| \cdot \ldots \cdot \left| \chi _{i-1i}\right| \right) \nonumber \\&\quad \cdot {\text {sgn}}(\chi _{ij})\cdot \left( \left| \chi _{i+1i}\right| \cdot \ldots \cdot \left| \chi _{jj-1}\right| \cdot \left| \chi _{ij}\right| \right) \nonumber \\&\quad \cdot \left( \left| \chi _{j+1j}\right| \cdot \ldots \cdot \left| \chi _{nn-1}\right| \right) . \end{aligned}$$
(92)

From Proposition 1,

$$\begin{aligned} {\text {sgn}}(\chi _{ij})={\text {sgn}}(\chi _{ji}). \end{aligned}$$
(93)

Moreover, from Lemma 4,

$$\begin{aligned} \chi _{i+1i}\cdot \ldots \cdot \chi _{jj-1}\cdot \chi _{ij}=\chi _{ii+1}\cdot \ldots \cdot \chi _{j-1j}\cdot \chi _{ji}. \end{aligned}$$
(94)

Hence, using (93) and (94) in relationship (92) delivers

$$\begin{aligned} \chi _{ij}w_{i}&=\left( \left| \chi _{12}\right| \cdot \ldots \cdot \left| \chi _{i-1i}\right| \right) \nonumber \\&\quad \cdot {\text {sgn}}(\chi _{ji})\cdot \left( \left| \chi _{ii+1}\right| \cdot \ldots \cdot \left| \chi _{j-1j}\right| \cdot \left| \chi _{ji}\right| \right) \nonumber \\&\quad \cdot \left( \left| \chi _{j+1j}\right| \cdot \ldots \cdot \left| \chi _{nn-1}\right| \right) \end{aligned}$$
(95)
$$\begin{aligned}&=\chi _{ji}\cdot \left( \left| \chi _{12}\right| \cdot \ldots \cdot \left| \chi _{j-1j}\right| \right) \cdot \left( \left| \chi _{j+1j}\right| \cdot \ldots \cdot \left| \chi _{nn-1}\right| \right) \end{aligned}$$
(96)
$$\begin{aligned}&=\chi _{ji}w_{j}. \end{aligned}$$
(97)

This proves the claim and, hence, the theorem. □
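The weight formula (91) can be exercised numerically. The sketch below (all values illustrative) builds cross-derivatives from hidden positive weights, so that the hypotheses of Theorem 1 hold by construction, evaluates (91) with zero-based indices, and verifies (90) for every ordered pair:

```python
# Check that the weights of Eq. (91) satisfy Eq. (90) when the chi matrix
# is generated from hidden positive weights v (so the cycle conditions hold).
import numpy as np

rng = np.random.default_rng(2)
n = 4
v = rng.uniform(0.5, 2.0, size=n)           # hidden weights (hypothetical)
a = rng.uniform(0.2, 1.5, size=(n, n))
a = a + a.T                                  # symmetric, strictly positive
chi = a / v[:, None]                         # chi[i, j] * v[i] == chi[j, i] * v[j]

def w(i):
    """Eq. (91), zero-based: (|chi_12|...|chi_{i-1,i}|)(|chi_{i+1,i}|...|chi_{n,n-1}|)."""
    left = np.prod([abs(chi[k, k + 1]) for k in range(i)])
    right = np.prod([abs(chi[k + 1, k]) for k in range(i, n - 1)])
    return left * right                      # empty products equal one

for i in range(n):
    for j in range(n):
        if i != j:                           # Eq. (90): chi_ij w_i = chi_ji w_j
            lhs, rhs = chi[i, j] * w(i), chi[j, i] * w(j)
            assert abs(lhs - rhs) < 1e-9 * (1 + abs(lhs))
```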

Proof of Proposition 3

To construct an ordinal potential locally at \(x_{N}^{\ast}\), start by letting

$$\begin{aligned} S_{j}\equiv {\text {sgn}}\left( \frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}\right) \in \{1,-1\}\quad (j\in N\backslash \{i\}), \end{aligned}$$
(98)

where the sign function sgn(.) is defined in Sect. 7. Next, choose a constant \(b_{j}\), for each \(j\in N\backslash \{i\}\), such that \(\left| \partial u_{i}(x_{N}^{\ast})/\partial x_{j}\right| <b_{j}\). Then, since \(\varGamma \) is smooth, there is a small rectangular neighborhood \(U_{N}(x_{N}^{\ast})\) of \(x_{N}^{\ast}\) such that

$$\begin{aligned} U_{N}(x_{N}^{\ast})\subseteq \left\{ x_{N}\in X_{N}\,{\text {s.t. sgn}}\left( \frac{\partial u_{j}(x_{N})}{\partial x_{j}}\right) =S_{j}\quad {\text {and}}\quad \left| \frac{\partial u_{i}(x_{N})}{\partial x_{j}}\right| <b _{j}\quad {\text {for all}} \quad j\ne i\right\} . \end{aligned}$$
(99)

We claim that

$$\begin{aligned} P(x_{N})=u_{i}(x_{N})+\sum _{j\in N\backslash \{i\}}S_{j}b _{j} x_{j} \end{aligned}$$
(100)

is an ordinal potential for \(\varGamma \) when joint strategy choices are restricted to \(U_{N}(x_{N}^{\ast})\). To see this, note first that

$$\begin{aligned} P(x_{i},x_{-i})-P({\widehat{x}}_{i},x_{-i})=u_{i}(x_{i},x_{-i})-u_{i} ({\widehat{x}}_{i},x_{-i}), \end{aligned}$$
(101)

for any \(x_{i}\in X_{i}\), \({\widehat{x}}_{i}\in X_{i}\), and \(x_{-i}\in X_{-i}\). Next, let \(j\in N\backslash \{i\}\). Take any \(x_{j}\in X_{j}\), \({\widehat{x}} _{j}\in X_{j}\), and \(x_{-j}\in X_{-j}\) such that \((x_{j},x_{-j})\in U_{N}(x_{N}^{\ast})\) and \(({\widehat{x}}_{j},x_{-j})\in U_{N}(x_{N}^{\ast})\). Then, from (100),

$$\begin{aligned} P(x_{j},x_{-j})-P({\widehat{x}}_{j},x_{-j})=u_{i}(x_{j},x_{-j})-u_{i} ({\widehat{x}}_{j},x_{-j})+b_{j}S_{j}(x_{j}-{\widehat{x}}_{j}). \end{aligned}$$
(102)

Invoking the mean value theorem, there exists a strict convex combination \(\widetilde{x}_{j}\) of \({\widehat{x}}_{j}\) and \(x_{j}\) such that

$$\begin{aligned} u_{i}(x_{j},x_{-j})-u_{i}({\widehat{x}}_{j},x_{-j})=\frac{\partial u_{i}(\widetilde{x}_{j},x_{-j})}{\partial x_{j}}(x_{j}-{\widehat{x}} _{j}). \end{aligned}$$
(103)

Hence,

$$\begin{aligned} P(x_{j},x_{-j})-P({\widehat{x}}_{j},x_{-j})=\left\{ \frac{\partial u_{i}(\widetilde{x}_{j},x_{-j})}{\partial x_{j}}+b_{j}S_{j}\right\} (x_{j}-{\widehat{x}}_{j}). \end{aligned}$$
(104)

To check the definition of an ordinal potential, suppose that \(u_{j} (x_{j},x_{-j})-u_{j}({\widehat{x}}_{j},x_{-j})>0\). Then, using (99), \(S_{j}(x_{j}-{\widehat{x}}_{j})>0\). Moreover, since \(U_{N}(x_{N}^{\ast})\) is convex, \(\left| \partial u_{i}(\widetilde{x}_{j},x_{-j})/\partial x_{j}\right| <b_{j}\). It follows that \(P(x_{j},x_{-j})-P({\widehat{x}} _{j},x_{-j})>0\). Since these steps may be reversed, P is indeed an ordinal potential for \(\varGamma \) in the neighborhood \(U_{N}(x_{N}^{\ast})\). This proves the proposition. □
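A minimal numerical instance of the construction (98)–(100), with hypothetical payoffs: two players, \(i=1\), \(x_{N}^{\ast}=(0,0)\), and player 2's first-order condition failing at \(x_{N}^{\ast}\), so that \(S_{2}\) and \(b_{2}\) are well defined:

```python
# Sketch of Eqs. (98)-(100): payoffs are hypothetical and chosen so that
# du2/dx2 = 1 - x1 > 0 near (0, 0), whence S_2 = +1, and |du1/dx2| = |x1| < b2
# on the neighborhood |x1|, |x2| < 0.1 for b2 = 0.2.
import random

u1 = lambda x1, x2: -x1**2 + x1 * x2
u2 = lambda x1, x2: x2 - x1 * x2
S2, b2 = 1, 0.2
P = lambda x1, x2: u1(x1, x2) + S2 * b2 * x2     # Eq. (100)

def sgn(t):
    return (t > 0) - (t < 0)

random.seed(0)
for _ in range(1000):
    x1, x2, y1, y2 = (random.uniform(-0.1, 0.1) for _ in range(4))
    # Player 1's deviations: P changes exactly like u1, as in Eq. (101).
    assert abs((P(x1, x2) - P(y1, x2)) - (u1(x1, x2) - u1(y1, x2))) < 1e-12
    # Player 2's deviations: the changes in u2 and P have the same sign.
    assert sgn(u2(x1, x2) - u2(x1, y2)) == sgn(P(x1, x2) - P(x1, y2))
```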

Proof of Proposition 4

We adapt the proof of Proposition 1. To derive a contradiction, suppose that

$$\begin{aligned}&x^{\ast}_{i}=\underline{x}_{i},x^{\ast}_{j} \in \mathring{X}_{j},\quad \frac{\partial u_{i} (x_{N}^{\ast})}{\partial x_{i}}=\frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}=0, \end{aligned}$$
(105)
$$\begin{aligned}&\frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}} \cdot \frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j} }<0,\quad {\text {and}} \end{aligned}$$
(106)
$$\begin{aligned}&\frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j}} \cdot \frac{\partial ^{3}u_{j}(x_{N}^{\ast})}{\partial x_{j}^{3}}<0. \end{aligned}$$
(107)

There are two cases. Suppose first that, as illustrated in Fig. 3,

$$\begin{aligned} \frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}>0>\frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j} }. \end{aligned}$$
(108)

Then, condition (107) implies \(\partial ^{3}u_{j}(x_{N}^{\ast} )/\partial x_{j}^{3}>0\). Given constants \(\lambda _{i}>0\) and \(\lambda _{j} >0\), let

$$\begin{aligned} {\widehat{\Delta }}_{i}^{+}\left( \varepsilon \right)&=u_{i}\left( x_{i}^{\ast}+\lambda _{i}\varepsilon ,x_{j}^{\ast}+\lambda _{j}\varepsilon ,x_{-i,j}^{\ast} \right) -u_{i}\left( x_{i}^{\ast},x_{j}^{\ast}+\lambda _{j}\varepsilon ,x_{-i,j}^{\ast}\right) , \end{aligned}$$
(109)
$$\begin{aligned} {\widehat{\Delta }}_{j}^{+}\left( \varepsilon \right)&=u_{j}\left( x_{j}^{\ast}-\lambda _{j}\varepsilon ,x_{i}^{\ast}+\lambda _{i}\varepsilon ,x_{-i,j}^{\ast} \right) -u_{j}\left( x_{j}^{\ast}+\lambda _{j}\varepsilon ,x_{i}^{\ast}+\lambda _{i}\varepsilon ,x_{-i,j}^{\ast}\right) , \end{aligned}$$
(110)
$$\begin{aligned} {\widehat{\Delta }}_{i}^{-}\left( \varepsilon \right)&=u_{i}\left( x_{i}^{\ast},x_{j}^{\ast}-\lambda _{j}\varepsilon ,x_{-i,j}^{\ast}\right) -u_{i}\left( x_{i}^{\ast}+\lambda _{i}\varepsilon ,x_{j}^{\ast}-\lambda _{j}\varepsilon ,x_{-i,j}^{\ast}\right) , \end{aligned}$$
(111)
$$\begin{aligned} {\widehat{\Delta }}_{j}^{-}\left( \varepsilon \right)&=u_{j}\left( x_{j}^{\ast}+\lambda _{j}\varepsilon ,x_{i}^{\ast},x_{-i,j}^{\ast}\right) -u_{j}\left( x_{j}^{\ast}-\lambda _{j}\varepsilon ,x_{i}^{\ast},x_{-i,j}^{\ast}\right) . \end{aligned}$$
(112)

Straightforward calculations deliver

$$\begin{aligned} \frac{\partial {\widehat{\Delta }}_{i}^{+}(0)}{\partial \varepsilon }&=\lambda _{i}\frac{\partial u_{i}(x_{N}^{\ast})}{\partial x_{i}}=0, \end{aligned}$$
(113)
$$\begin{aligned} \frac{\partial ^{2}{\widehat{\Delta }}_{i}^{+}(0)}{\partial \varepsilon ^{2}}&=\lambda _{i}\left( \lambda _{i}\frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{i}^{2}}+2\lambda _{j}\frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}\right) , \end{aligned}$$
(114)

and

$$\begin{aligned} \frac{\partial {\widehat{\Delta }}_{j}^{+}(0)}{\partial \varepsilon }&=-\,2\lambda _{j}\frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}=0, \end{aligned}$$
(115)
$$\begin{aligned} \frac{\partial ^{2}{\widehat{\Delta }}_{j}^{+}(0)}{\partial \varepsilon ^{2}}&=-\,4\lambda _{j}\lambda _{i}\frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j}}>0. \end{aligned}$$
(116)

Similarly,

$$\begin{aligned} \frac{\partial {\widehat{\Delta }}_{i}^{-}(0)}{\partial \varepsilon }&=-\,\lambda _{i}\frac{\partial u_{i}(x_{N}^{\ast})}{\partial x_{i}}=0, \end{aligned}$$
(117)
$$\begin{aligned} \frac{\partial ^{2}{\widehat{\Delta }}_{i}^{-}(0)}{\partial \varepsilon ^{2}}&=\lambda _{i}\left( -\lambda _{i}\frac{\partial ^{2}u_{i}(x_{N}^{\ast} )}{\partial x_{i}^{2}}+2\lambda _{j}\frac{\partial ^{2}u_{i}(x_{N}^{\ast} )}{\partial x_{j}\partial x_{i}}\right) , \end{aligned}$$
(118)

and

$$\begin{aligned} \frac{\partial {\widehat{\Delta }}_{j}^{-}(0)}{\partial \varepsilon }&=2\lambda _{j}\frac{\partial u_{j}(x_{N}^{\ast})}{\partial x_{j}}=0,\end{aligned}$$
(119)
$$\begin{aligned} \frac{\partial ^{2}{\widehat{\Delta }}_{j}^{-}(0)}{\partial \varepsilon ^{2}}&=0, \end{aligned}$$
(120)
$$\begin{aligned} \frac{\partial ^{3}{\widehat{\Delta }}_{j}^{-}(0)}{\partial \varepsilon ^{3}}&=2\lambda _{j}^{3}\frac{\partial ^{3}u_{j}(x_{N}^{\ast})}{\partial x_{j}^{3} }>0. \end{aligned}$$
(121)

Choose now \(\lambda _{i}=1\), and \(\lambda _{j}>0\) sufficiently large so that the bracketed terms in (114) and (118) are positive. Then, second-order and third-order Taylor approximations at \(\varepsilon =0\) show that \({\widehat{\Delta }}_{i}^{+}(\varepsilon )>0\), \({\widehat{\Delta }}_{j} ^{+}(\varepsilon )>0\), \({\widehat{\Delta }}_{i}^{-}(\varepsilon )>0\), and \({\widehat{\Delta }}_{j}^{-}(\varepsilon )>0\) all hold for \(\varepsilon >0\) small enough, which yields the desired contradiction. In the second case,

$$\begin{aligned} \frac{\partial ^{2}u_{i}(x_{N}^{\ast})}{\partial x_{j}\partial x_{i}}<0<\frac{\partial ^{2}u_{j}(x_{N}^{\ast})}{\partial x_{i}\partial x_{j}}, \end{aligned}$$
(122)

and consequently \(\partial ^{3}u_{j}(x_{N}^{\ast})/\partial x_{j}^{3}<0\). It is now straightforward to check that, again for \(\lambda _{i}=1\) and \(\lambda _{j}>0\) sufficiently large, inequalities \({\widehat{\Delta }}_{i}^{+} (\varepsilon )<0\), \({\widehat{\Delta }}_{j}^{+}(\varepsilon )<0\), \(\widehat{\Delta }_{i}^{-}(\varepsilon )<0\), and \({\widehat{\Delta }}_{j}^{-}(\varepsilon )<0\) all hold for \(\varepsilon >0\) small enough. The remainder of the argument stays as before. This completes the proof. □
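A concrete instance of case (108), with hypothetical payoffs, illustrates the contradiction: each of the four moves along the closed path strictly increases the mover's payoff, so no generalized ordinal potential can increase along all of them and return to its starting value.

```python
# Hypothetical payoffs satisfying (105)-(108) at x* = (0, 0):
#   u_i(x_i, x_j) = x_i*x_j - x_i**2,  u_j(x_j, x_i) = -x_i*x_j + x_j**3,
# so that d2ui/dxj dxi = 1 > 0 > -1 = d2uj/dxi dxj and d3uj/dxj^3 = 6 > 0.
# With lambda_i = 1 and lambda_j = 2, the differences (109)-(112) are positive.
ui = lambda xi, xj: xi * xj - xi**2
uj = lambda xj, xi: -xi * xj + xj**3
li, lj = 1.0, 2.0

for eps in (1e-2, 1e-3, 1e-4):
    d_i_plus  = ui(li * eps, lj * eps) - ui(0.0, lj * eps)      # Eq. (109)
    d_j_plus  = uj(-lj * eps, li * eps) - uj(lj * eps, li * eps)  # Eq. (110)
    d_i_minus = ui(0.0, -lj * eps) - ui(li * eps, -lj * eps)    # Eq. (111)
    d_j_minus = uj(lj * eps, 0.0) - uj(-lj * eps, 0.0)          # Eq. (112)
    assert min(d_i_plus, d_j_plus, d_i_minus, d_j_minus) > 0
```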

Proof of Proposition 5

It will be shown that P is a weighted potential. Since any weighted potential is, in particular, a generalized ordinal potential, this is sufficient to prove the proposition. So consider some player \(i\in N\), prices \(p_{i}^{\prime }\ge 0\) and \(p_{i} ^{\prime \prime }\ge 0\), as well as a price vector \(p_{-i}\in {\mathbb{R}} _{+}^{n-1}\). It is claimed that

$$\begin{aligned} P(p_{i}^{\prime },p_{-i})-P(p_{i}^{\prime \prime },p_{-i})=\frac{{\widehat{w}}_{i} }{1+2s_{i}c_{i}}\cdot \left( u_{i}(p_{i}^{\prime },p_{-i})-u_{i}(p_{i} ^{\prime \prime },p_{-i})\right) . \end{aligned}$$
(123)

To see this, note first that

$$\begin{aligned} u_{i}(p_{i},p_{-i})&=\left( p_{i}-c_{i}\left( Q_{i}-s_{i}p_{i} +\sum _{j\ne i}\theta _{ij}p_{j}\right) \right) \left( Q_{i}-s_{i}p_{i} +\sum _{j\ne i}\theta _{ij}p_{j}\right) \end{aligned}$$
(124)
$$\begin{aligned}&=(1+2s_{i}c_{i})Q_{i}p_{i}-(1+s_{i}c_{i})s_{i}p_{i}^{2}+(1+2s_{i}c_{i} )\sum _{j\ne i}\theta _{ij}p_{i}p_{j}\nonumber \\&\quad +\left\{ {\text {terms constant in}} \,p_{i}\right\} \end{aligned}$$
(125)
$$\begin{aligned}&=(1+2s_{i}c_{i})p_{i}\left\{ Q_{i}\left( 1-\frac{p_{i}}{2p_{i}^{0} }\right) +\sum _{j\ne i}\theta _{ij}p_{j}\right\} \nonumber \\&\quad +\left\{ {\text {terms constant in}} \,p_{i}\right\} . \end{aligned}$$
(126)

Therefore,

$$\begin{aligned} u_{i}(p_{i}^{\prime },p_{-i})-u_{i}(p_{i}^{\prime \prime },p_{-i})&=(1+2s_{i}c_{i})Q_{i}\left\{ p_{i}^{\prime }\left( 1-\frac{p_{i}^{\prime } }{2p_{i}^{0}}\right) -p_{i}^{\prime \prime }\left( 1-\frac{p_{i}^{\prime \prime }}{2p_{i}^{0}}\right) \right\} \nonumber \\&\quad +(1+2s_{i}c_{i})\sum _{j\ne i}\theta _{ij}\left( p_{i}^{\prime }-p_{i}^{\prime \prime }\right) p_{j}. \end{aligned}$$
(127)

On the other hand,

$$\begin{aligned} P(p_{i}^{\prime },p_{-i})-P(p_{i}^{\prime \prime },p_{-i})&={\widehat{w}} _{i}Q_{i}\left\{ p_{i}^{\prime }\left( 1-\frac{p_{i}^{\prime }}{2p_{i}^{0}} \right) -p_{i}^{\prime \prime }\left( 1-\frac{p_{i}^{\prime \prime }}{2p_{i}^{0}}\right) \right\} \nonumber \\&\quad +{\widehat{w}}_{i}\sum _{j=1}^{i-1}\theta _{ij}\left( p_{i}^{\prime }-p_{i}^{\prime \prime }\right) p_{j}+\sum _{j=i+1}^{n}{\widehat{w}}_{j} \theta _{ji}p_{j}(p_{i}^{\prime }-p_{i}^{\prime \prime }). \end{aligned}$$
(128)

We will show below that

$$\begin{aligned} {\widehat{w}}_{j}\theta _{ji}={\widehat{w}}_{i}\theta _{ij}\quad (j=i+1,\ldots ,n). \end{aligned}$$
(129)

Using this in (128), and recalling (127), we get

$$\begin{aligned} P(p_{i}^{\prime },p_{-i})-P(p_{i}^{\prime \prime },p_{-i})&={\widehat{w}} _{i}Q_{i}\left\{ p_{i}^{\prime }\left( 1-\frac{p_{i}^{\prime }}{2p_{i}^{0}} \right) -p_{i}^{\prime \prime }\left( 1-\frac{p_{i}^{\prime \prime }}{2p_{i}^{0}}\right) \right\} \nonumber \\&\quad +{\widehat{w}}_{i}\sum _{j\ne i}\theta _{ij}\left( p_{i} ^{\prime }-p_{i}^{\prime \prime }\right) p_{j}\end{aligned}$$
(130)
$$\begin{aligned}&=\frac{{\widehat{w}}_{i}}{1+2s_{i}c_{i}}\left( u_{i}(p_{i}^{\prime } ,p_{-i})-u_{i}(p_{i}^{\prime \prime },p_{-i})\right) , \end{aligned}$$
(131)

as claimed. So it remains to be checked that (129) holds true, or equivalently, that

$$\begin{aligned}&\left| \theta _{12}\right| \cdot \ldots \cdot \left| \theta _{j-1j}\right| \cdot \left| \theta _{j+1j}\right| \cdot \ldots \cdot \left| \theta _{nn-1}\right| \cdot \theta _{ji}\nonumber \\&\quad =\left| \theta _{12}\right| \cdot \ldots \cdot \left| \theta _{i-1i}\right| \cdot \left| \theta _{i+1i}\right| \cdot \ldots \cdot \left| \theta _{nn-1}\right| \cdot \theta _{ij}\quad (j=i+1,\ldots ,n). \end{aligned}$$
(132)

But this was shown in the proof of Theorem 1. Since \({\widehat{w}}_{i}>0\) for \(i\in N\), we conclude that P is indeed a weighted potential for the differentiated Bertrand game. The proposition follows. □
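Equation (123) can be replicated numerically. The sketch below (illustrative parameters, \(n=2\)) reads the form of the potential off of (128), sets \({\widehat{w}}_{1}=\left| \theta _{21}\right| \) and \({\widehat{w}}_{2}=\left| \theta _{12}\right| \) in line with (91), and pins down \(p_{i}^{0}\) by equating (125) and (126):

```python
# Numerical check of Eq. (123) for n = 2 with illustrative parameters.
import random

random.seed(3)
Q = [10.0, 8.0]; s = [1.0, 1.5]; c = [0.2, 0.3]
th12, th21 = 0.4, 0.6                       # same sign, as required for n = 2
p0 = [(1 + 2*s[i]*c[i]) * Q[i] / (2*s[i]*(1 + s[i]*c[i])) for i in range(2)]
w = [abs(th21), abs(th12)]                  # weights as in Eq. (91)

def D(i, p):                                # demand of firm i
    return Q[i] - s[i]*p[i] + (th21*p[0] if i == 1 else th12*p[1])

def u(i, p):                                # Eq. (124): (p_i - c_i D_i) D_i
    return (p[i] - c[i]*D(i, p)) * D(i, p)

def P(p):                                   # structure read off from Eq. (128)
    base = sum(w[i]*Q[i]*p[i]*(1 - p[i]/(2*p0[i])) for i in range(2))
    return base + w[1]*th21*p[0]*p[1]

for _ in range(200):
    i = random.randrange(2)
    p = [random.uniform(0, 5), random.uniform(0, 5)]
    q = list(p); q[i] = random.uniform(0, 5)  # unilateral deviation by firm i
    lhs = P(p) - P(q)
    rhs = w[i] / (1 + 2*s[i]*c[i]) * (u(i, p) - u(i, q))   # Eq. (123)
    assert abs(lhs - rhs) < 1e-9 * (1 + abs(lhs))
```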

Ewerhart, C. Ordinal potentials in smooth games. Econ Theory 70, 1069–1100 (2020). https://doi.org/10.1007/s00199-020-01257-1
