An impossibility theorem in game dynamics

Significance

The Nash equilibrium is the fundamental solution concept of game theory, playing a central role in fields beyond economics. Nash's existence proof is famously nonconstructive, and the dynamics employed in the proof fail to converge to a Nash equilibrium. We show that this failure is a special case of a much more sweeping impossibility result for any game dynamics satisfying a minimal set of desiderata. Thus, the solution concept of Nash equilibria, while universally applicable, is also a fundamentally incomplete description of the possible long-term agent behavior.

The Nash equilibrium, a combination of choices by the players of a game from which no self-interested player would deviate, is the predominant solution concept in game theory. Even though every game has a Nash equilibrium, it is not known whether there are deterministic behaviors of the players who play a game repeatedly that are guaranteed to converge to a Nash equilibrium of the game from all starting points. If one assumes that the players' behavior is a discrete-time or continuous-time rule whereby the current mixed strategy profile is mapped to the next, this question becomes a problem in the theory of dynamical systems. We apply this theory, and in particular Conley index theory, to prove a general impossibility result: There exist games for which all game dynamics fail to converge to Nash equilibria from all starting points. The games which help prove this impossibility result are degenerate, but we conjecture that the same result holds, under computational complexity assumptions, for nondegenerate games. We also prove a stronger impossibility result for the solution concept of approximate Nash equilibria: For a set of games of positive measure, no game dynamics can converge to the set of approximate Nash equilibria for a sufficiently small yet substantial approximation bound. Our results establish that, although the notions of Nash equilibrium and its computation-inspired approximations are universally applicable in all games, they are fundamentally incomplete as predictors of long-term player behavior.

The Nash equilibrium, defined and shown universal by John F. Nash in 1950 (1), is paramount in game theory, routinely considered the default solution concept, the "meaning of the game." Over the years, and especially in the past three decades during which game theory has come under intense computational scrutiny, the Nash equilibrium has been noted to suffer from a number of disadvantages of a computational nature. There are no known efficient algorithms for computing the Nash equilibrium of a game, and in fact, the problem has been shown to be intractable (2-4). In addition, there are typically many Nash equilibria in a game, and the selection problem leads to conceptual complications and further intractability; see, for example, ref. 5.
A common defense of the Nash equilibrium in the face of such shortcomings is the informal argument that "the players will eventually get there." However, no learning behavior has been shown to converge to equilibrium in all games (6-13). In fact, refs. 8 and 9 show that for wide classes of games, numerous learning dynamics fail to converge to a fixed point of any kind and instead exhibit chaotic dynamics. The celebrated work of Hart and Mas-Colell (14) shows that even in games with a single Nash equilibrium, learning dynamics which are uncoupled (players only know their own payoff function) fail to converge to the Nash equilibrium. However, the uncoupledness assumption is strong and crucial for this result, and the problem remains open for general dynamics.
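As a concrete illustration of such nonconvergence (a numerical sketch of ours, not an experiment from the paper), the continuous-time replicator dynamics in Matching Pennies cycle around the game's unique Nash equilibrium (1/2, 1/2) rather than converging to it; a forward-Euler simulation keeps a bounded distance from the equilibrium along the entire trajectory:

```python
# Replicator dynamics for Matching Pennies, where p and q are the
# probabilities that the row and the column player play Heads. The unique
# Nash equilibrium is (1/2, 1/2); orbits cycle around it instead of converging.
def step(p, q, dt=0.01):
    dp = p * (1 - p) * (4 * q - 2)   # row player's replicator equation
    dq = q * (1 - q) * (2 - 4 * p)   # column player's replicator equation
    return p + dt * dp, q + dt * dq

def simulate(p=0.9, q=0.5, steps=10000):
    # return the minimum distance to the equilibrium along the trajectory
    dists = []
    for _ in range(steps):
        p, q = step(p, q)
        dists.append(((p - 0.5) ** 2 + (q - 0.5) ** 2) ** 0.5)
    return min(dists)
```

Even the closest approach of the trajectory to the equilibrium stays bounded away from zero, consistent with the cited nonconvergence results.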
Given a game, the deterministic way players move from one mixed strategy profile to the next can be formalized in terms of dynamical systems as a continuous function Φ assigning to each point x in the strategy space and each time t ≥ 0 another point Φ(t, x): the point where the players will be after time t; naturally, this function must satisfy Φ(t', Φ(t, x)) = Φ(t + t', x). If t ∈ R, this setup is called continuous-time dynamics, while if t ∈ Z, it is referred to as discrete-time dynamics. For Φ to qualify as game dynamics for a game g, it must satisfy Nash stationarity: The Nash equilibria are a subset of the fixed points of Φ. This is a reasonable assumption, since game dynamics should encode the strategic stability of Nash equilibria, i.e., no rational player will ever move away from a Nash equilibrium.
Having defined what we mean by "game dynamics," we turn to defining "convergence." In the theory of dynamical systems, a standard tool for establishing convergence is the Lyapunov function, which provides an intuitive optimization point of view in the analysis of long-term behavior. Informally, these are real-valued functions that are always decreasing except when the dynamics have reached a final set of configurations. The standard

example of such dynamics is gradient flows, where the dynamics move in the direction of steepest descent, and the final configurations are the critical points of the function. The surprising power of Lyapunov arguments lies in explaining the behavior of dynamics that are not gradient-like. Specifically, an arbitrary dynamical system can be described by a decomposition of its domain into the chain recurrent components, on which the dynamics stay within the component, and outside of which the dynamics flow from one component to another (15), guided by a Lyapunov function: a function that is constant along orbits within the chain recurrent components and strictly decreasing outside of them.
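This gradient-flow picture can be sketched numerically (a minimal example of ours; the objective f(x, y) = x² + 2y² is an arbitrary choice): f itself serves as the Lyapunov function, strictly decreasing along the discrete-time trajectory until the critical point is reached.

```python
def f(x, y):
    # the objective doubles as the Lyapunov function of its own gradient flow
    return x * x + 2 * y * y

def grad_step(x, y, eta=0.1):
    # move in the direction of steepest descent of f
    return x - eta * 2 * x, y - eta * 4 * y

x, y = 1.0, -1.0
values = [f(x, y)]
for _ in range(200):
    x, y = grad_step(x, y)
    values.append(f(x, y))
```

Along the trajectory, `values` is strictly decreasing and converges to the value at the unique critical point, the origin.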
In view of this mathematical framework, we are interested in the following question: Are there game dynamics which converge to Nash equilibria for all games and from all starting points? Our main result is an impossibility theorem.

Main Result (informal statement): There are games in which any game dynamics will fail to converge to the set of Nash equilibria.
That is to say, we exhibit games in which any game dynamics must fail to converge to the set of Nash equilibria, or even approximate Nash equilibria, from certain starting points. This suggests that the Nash equilibrium concept is plagued by a form of incompleteness: It is incapable of capturing the long-term behaviors of the players in all games.
This result complements the existing literature, in which the nonexistence of Nash-convergent dynamics is shown from stronger assumptions (14, 16). Also note that our impossibility result is complementary to the known intractability of the Nash equilibrium (2-4) and cannot be derived from it. Indeed, it is well known that there are dynamical systems whose next-step function can be computed efficiently and which converge to points of extremely high computational complexity (PSPACE-complete); see, for example, ref. 17; therefore, computational complexity in and by itself does not preclude convergent dynamics.
Two objections can be raised to the impossibility result: Degenerate games (as we use in our example) are known to have measure zero, so are there dynamics that work for almost all games? (As we shall see soon, the answer is "yes, but.") Second, in view of the known intractability of the Nash equilibrium, exact equilibria may be asking too much; are there dynamics that converge to an arbitrarily good approximation of the Nash equilibrium? We show how our result can be extended so that it addresses these questions.
For the first question, it turns out that, in some sense, degeneracy is required for the impossibility result: We give an algorithm which, given any nondegenerate game, specifies somewhat trivial game dynamics which are Nash convergent. The downside of this positive result is that the algorithm requires exponential time unless P = PPAD, a complexity collapse that is considered unlikely; see, for example, ref. 2. In fact, we conjecture that such intractability is inherent. In other words, we suspect that, in nondegenerate games, it is complexity theory, and not topology, that provides the proof of the impossibility result. Proving this conjecture would require the development of a complexity-theoretic treatment of dynamics, which seems to us a very attractive research direction.
Second, we exhibit a family of games with nonzero measure (in particular, perturbations of the game used for our main result) for which any game dynamics will fail to converge (in the above sense) to the set of ε-approximate Nash equilibria, for some fixed ε > 0. Note that, because of the nonzero measure claim, this result appears to be stronger than our main result; however, this is not so, because it relies on the stronger assumption that all approximate equilibria are stationary.
Our impossibility result also applies to player behaviors with memory, that is, behaviors in which the players maintain and update quantities which affect their choices. If memory is modeled by expanding the space to include memory components, then impossibility can be recovered under assumptions, though this requires a more detailed treatment. An interesting example is recent work on optimistic dynamics, where agents keep track of the last two periods of play (18-20). Exploring the limits of the applicability of the impossibility result to such dynamics is an intriguing direction for future work.
Finally, we note that our impossibility results do not apply to stochastic dynamics, e.g., discrete-time dynamics in which Φ(x) is a distribution over possible next points. In fact, it is easy to see that even trivial such schemes, e.g., "from any point x, jump to a uniformly random point in the domain," succeed in eventually reaching an arbitrarily good approximation of a Nash equilibrium, albeit in exponential time.
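A minimal sketch (ours, with assumed helper names) of such a trivial stochastic scheme for Matching Pennies: sample uniformly random mixed profiles until one is an ε-approximate Nash equilibrium, i.e., until the maximum regret of the two players drops to at most ε.

```python
import random

# Matching Pennies: A is the row player's payoff matrix, B = -A the column player's.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]

def regret(p, q):
    # maximum gain either player could obtain by a unilateral deviation
    u1 = [A[i][0] * q + A[i][1] * (1 - q) for i in (0, 1)]
    u2 = [B[0][j] * p + B[1][j] * (1 - p) for j in (0, 1)]
    pay1 = p * u1[0] + (1 - p) * u1[1]
    pay2 = q * u2[0] + (1 - q) * u2[1]
    return max(max(u1) - pay1, max(u2) - pay2)

def random_jump(eps, seed=0, max_steps=100000):
    # jump to uniformly random profiles until an eps-approximate Nash is hit
    rng = random.Random(seed)
    for n in range(max_steps):
        p, q = rng.random(), rng.random()
        if regret(p, q) <= eps:
            return p, q, n
    return None
```

The scheme eventually succeeds with probability one, but the expected number of jumps grows as the target set shrinks, matching the "exponential time" caveat.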

Game Dynamics and Nash Convergence
Before turning to our notion of convergence, we shall discuss what it means for a dynamical system to be compatible with a game g. Let us denote the Nash equilibria of g as NE(g), or simply NE when the context is clear. For nondegenerate games, there is an odd number of isolated Nash equilibria. In general, however, the Nash equilibria of a game are not isolated, and instead form closed, connected components, of which there are finitely many (21). That is, NE is the disjoint union NE = ∪_{1≤i≤n} NE_i, where each NE_i is a component. Recall that a Nash equilibrium is by definition a point from which no player will want to deviate. Hence, for dynamics Φ to be compatible with game g, we only require that Φ satisfy Nash stationarity: NE(g) ⊆ Fix(Φ), where by Fix(Φ) we denote the fixed points of Φ, the set {x ∈ X : Φ(t, x) = x for all t ≥ 0}. That is, the dynamics is compatible with a game if it does not move away from any of the Nash equilibria of the game. Namely, game dynamics encode the strategic stability of all Nash equilibria, i.e., that self-interested agents would not deviate from them. This is a very natural definition, and game dynamics have been studied extensively under this assumption or variants; see, for instance, refs. 21-27; these studies often result in a characterization of Nash components in terms of fixed-point index theory.

Now, given such a dynamical system, what is an appropriate notion of convergence? This is a deep question within topological dynamical systems theory, and a classical tool used to characterize convergence is that of Lyapunov functions. In fact, Lyapunov function arguments are the archetypal strategy for proving convergence to Nash equilibria in games, cf. refs. 28-34. Furthermore, the topological theory of dynamical systems implies that any dynamical system has an appropriate Lyapunov function, as the existence of a complete Lyapunov function is guaranteed by Conley's fundamental theorem of dynamical systems (15): This theorem states that any
dynamical system can be decomposed into the so-called chain recurrent set, the set to which the dynamics converges and on which the Lyapunov function is constant, and a gradient-like part, on which a Lyapunov function strictly decreases along trajectories; see refs. 15 and 35. This leads us to our definition:

Definition 1: Given game dynamics Φ for a game g, we say that Φ is Nash convergent if there exists a continuous function V : X → [0, 1] such that i) V(Φ(t, x)) < V(x) for all x ∉ NE and all t > 0, and ii) V is constant along trajectories within each Nash component NE_i. See Fig. 1 for an illustration. That is to say, V is a continuous function on X which is strictly decreasing on trajectories outside NE and constant on trajectories within any Nash component; cf. ref. 15.
Now that we know what convergence means, the statement of our main result becomes clearer: There is a game g which does not admit any Nash convergent game dynamics. The techniques we use to prove this result come from Conley index theory (15) and require minimal technical assumptions on the space and dynamics. These techniques have not been previously applied to the setting of game dynamics, and they provide a powerful framework for analyzing the global dynamics of games, the implications of which will be explored in future work. Moreover, up to a basic knowledge of algebraic topology, our paper is self-contained, as all the necessary results are provided in the Notes.

Dynamical Systems Theory
To prove our result, we make use of the relationship between Lyapunov functions, attractors, and Conley indices. To this end, we introduce these mathematical concepts and their connections to the extent that they are needed in the proof. We will consider a compact metric space X (for game dynamics, X is the product of the players' strategy simplices) and let T+ = {t ∈ T : t ≥ 0}, where T is either R or Z. A dynamical system is a continuous map Φ : T+ × X → X such that i) Φ(0, x) = x for all x ∈ X and ii) Φ(t + t', x) = Φ(t', Φ(t, x)) for all t, t' ∈ T+ and x ∈ X. When T = R, i.e., for continuous time, Φ is called a semiflow; if T = Z, i.e., discrete time, Φ is often called a map or discrete-time dynamical system.
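The two axioms can be checked mechanically on a toy example (ours; the map f(x) = x/2 is an arbitrary choice of discrete-time system):

```python
def phi(t, x):
    # discrete-time dynamical system generated by iterating f(x) = x / 2
    for _ in range(t):
        x = x / 2
    return x
```

Axiom i says time 0 does nothing, and axiom ii is the semigroup property: flowing for t and then t' steps equals flowing for t + t' steps.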
A basic notion in dynamical systems is that of an attractor, which intuitively is a set to which trajectories converge asymptotically. For a subset Y ⊂ X, the asymptotic behavior of Y under the dynamics is captured by the omega-limit set, formally defined as ω(Y) = ∩_{t>0} cl(Φ([t, ∞), Y)), where cl denotes topological closure. A set A ⊂ X is an attractor if there exists a neighborhood U of A such that ω(U) = A. An attracting block U for A is a closed neighborhood of A such that ω(U) = A and Φ(t, U) ⊂ int U for all t > 0. Attracting blocks always exist (36). Finally, the maximal attractor is ω(X).
Conley Index Theory for Attractors. Conley index theory deals with the more general concept of an isolated invariant set; however, for our purposes, we need only introduce the Conley index in the limited setting of attractors. The Conley index takes a different form depending upon whether the time is discrete or continuous, and we introduce each, as well as prove, separately, both corresponding theorems. We defer the formal exposition of Conley index theory to the Notes, providing for now an intuition for the continuous-time case.

Fig. 2. A disk X with semiflow Φ, attractor A (stable periodic orbit), and an unstable fixed point in the interior (Left). Here, ω(X) = X, and the Conley index of ω(X) is the homology of a point. Attracting block U for A which is homotopy equivalent to a circle (Right). The Conley index of A is the homology of a circle.
Let Φ : T+ × X → X be a semiflow, A an attractor for Φ, and U an attracting block for A. The Conley index of A is defined as CH_•(A) := H_•(U), where H_• denotes singular homology with integer coefficients.
As shown in Proposition 1 of the Notes, the Conley index is independent of the particular choice of U and thus provides an algebraic-topological invariant for A. As X is an attracting block for the maximal attractor ω(X), its Conley index is defined as CH_•(ω(X)) := H_•(X). Fig. 2 provides elementary Conley index computations for a semiflow.
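Numerically, the Fig. 2 situation can be sketched as follows (our toy model, assuming the polar flow dr/dt = r(1 − r), dθ/dt = 1 on a disk): the unit circle is an attractor, the origin an unstable fixed point, and an annulus such as 0.5 ≤ r ≤ 1.5 serves as an attracting block, since the flow immediately carries its boundary into its interior.

```python
def flow_r(r, t, dt=1e-3):
    # Euler integration of the radial equation dr/dt = r (1 - r); the angular
    # equation dtheta/dt = 1 does not affect the radius and is omitted.
    for _ in range(int(t / dt)):
        r += dt * r * (1 - r)
    return r
```

Trajectories starting inside or outside the unit circle (but off the origin) approach radius 1, while the boundary radii of the annulus move strictly inward, as an attracting block requires.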

Impossibility in Game Dynamics
In this section, we prove our main result, the impossibility theorem, splitting the proof into the cases of continuous time (Theorem 1) and discrete time (Theorem 2).
Theorem 1. There exists a game g that does not admit continuous-time Nash convergent game dynamics.
Proof: Define g to be the bimatrix game of Eq. 1, as in refs. 21 and 25. By way of contradiction, suppose that there exist game dynamics Φ which are Nash convergent. For g given in Eq. 1, the set of Nash equilibria forms a topological circle, i.e., S¹ (21). Thus, there is a single component of Nash equilibria, which is homeomorphic to S¹. As Φ is Nash convergent, there exists a Lyapunov function V : X → [0, 1] such that for any x ∉ NE and any t > 0, V(x) > V(Φ(t, x)) > 0. Without loss of generality, we may assume that V^{-1}(0) = NE. For any ε ∈ (0, 1], setting U_ε = V^{-1}([0, ε]), we have that NE = ω(U_ε). Thus, NE is an attractor for Φ. In fact, as X = V^{-1}([0, 1]) and NE = ω(X), NE is the maximal attractor. It follows from the continuity of V that U_ε is a closed attracting block for NE for any ε ∈ (0, 1]. In particular, we can first calculate the Conley index of NE via the attracting block X, whose homology is that of a point, so that CH_1(NE) = H_1(X) = 0. Now, the inclusion ι : NE → U_ε induces the homomorphism on homology ι_• : H_•(NE) → H_•(U_ε). Intuitively, we can choose ε small enough so that U_ε is a tight enough neighborhood of NE that the circle of Nash equilibria which generates the one-dimensional hole in H_1(NE) still exists as a one-dimensional hole in U_ε, as in Fig. 3. More formally, if w is the generator of H_1(NE) = Z, then for ε > 0 sufficiently small, U_ε is a small neighborhood of NE and the image ι_1(w) cannot be the boundary of any 2-chain in U_ε. Thus, the map ι_1 : H_1(NE) → H_1(U_ε) is injective. As U_ε is an attracting block, we can also compute the Conley index of NE as CH_•(NE) = H_•(U_ε). However, as the map ι_1 is injective, we have that CH_1(NE) = H_1(U_ε) ≠ 0. However, by Proposition 1, the Conley index CH_• is an invariant of NE and does not depend on the choice of attracting block. Therefore, this is a contradiction, and such a Φ cannot exist.
We now consider the discrete-time case, which is proved in the same fashion, using the Conley index for maps, the background for which is provided in the Notes.

Theorem 2. There exists a game g that does not admit discrete-time Nash convergent game dynamics.
Proof: Consider again the game of Eq. 1. Assuming that there are Nash convergent discrete-time game dynamics Φ for g, there exists a Lyapunov function V : X → [0, 1], and without loss of generality, we may assume V^{-1}(0) = NE. Once again, it follows that NE is an attractor, in fact the maximal attractor of Φ, and for any ε ∈ (0, 1], U_ε = V^{-1}([0, ε]) is an attracting block for NE. We will reach the desired contradiction by comparing the Conley indices computed via X and via U_ε for sufficiently small ε. First, as X is an attracting block for NE, we can use it to compute the Conley index for NE. Note that H_•(X) is computed in Eq. 2, and the induced map f_X : H_•(X) → H_•(X) is the identity map (taking the generator of the connected component to itself). The Conley index of NE is given by (L_•(X), χ_X), where the automorphism χ_X is the identity map. In particular, L_1(X) = H_1(X) = 0. Again, the inclusion ι : NE → U_ε induces a map on homology ι_• : H_•(NE) → H_•(U_ε), and when ε is sufficiently small, the map ι_1 : H_1(NE) → H_1(U_ε) is injective. Thus, H_1(U_ε) ≠ 0. By Nash stationarity, NE is a circle of fixed points, and thus the map f(x) := Φ(1, x) induces the identity map f|_NE : H_•(NE) → H_•(NE). In particular, there is a commutative diagram relating f|_NE and f_{U_ε} through ι_•. Letting w be the generator of H_1(NE) = Z, and w' = ι_1(w), the diagram implies f_{U_ε}(w') = w'. It is elementary that this implies that L_1(U_ε) ≠ 0. However, Proposition 2 implies that the Conley index is independent of the attracting block. Thus, this is a contradiction, and there cannot exist such a Φ.
Remark 1: An inspiration for our results is the work of Benaïm et al. (25), wherein they state, without complete proof, that there are games g such that for all Φ within a class of multivalued dynamical systems, it cannot be the case that NE(g) = CR(Φ), where CR(Φ) is the chain recurrent set (see ref. 15) of Φ. The argument sketched in ref. 25 ultimately rests on the development of a fixed-point index for components of Nash equilibria and its comparison to the Euler characteristic of the component. However, the argument is incomplete (indeed, the reader is directed to two papers, one of which is a preprint that seems to have not been published). Instead, for our result, we assume Nash convergence, which is in fact a weaker condition: the chain recurrent set admits a Lyapunov function obeying the two conditions of Definition 1, and thus, if CR(Φ) = NE, Φ is Nash convergent. Thus, as a corollary of Theorem 1, there exists a game which does not admit game dynamics for which the chain recurrent set coincides with the set of Nash equilibria.

Remark 2:
In discussions with Gil Kalai and, independently, Venkat Anantharam, the following dynamics came up: In the two-dimensional disk D = {(x, y) : x² + y² ≤ 1}, consider the continuous dynamics described by the vector field f(x, y) = (0, 1 − x² − y²). Obviously, the only fixed points are the boundary of the disk, and there are no cycles. Now, homeomorphically map this to a topological disk T whose boundary is the hexagonal Nash component of g, and move every other point of the domain toward T, smoothing appropriately the transition to the motion on T. Is this not then a dynamical system that converges to the Nash equilibria of g, contradicting our main result?
The topological theory of dynamical systems provides a negative answer: There can be no Lyapunov function supporting these dynamics, and the chain recurrent set of the dynamics is all of T, instead of its boundary. Indeed, a similar example is contained in ref. 15.
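A quick simulation (ours) confirms the first impression that motivates the question: under f(x, y) = (0, 1 − x² − y²), every interior trajectory drifts straight upward and settles on the boundary circle of fixed points, even though, as just argued, no Lyapunov function certifies this as convergence to the boundary.

```python
def integrate(x, y, t=30.0, dt=1e-3):
    # forward-Euler integration of the vector field f(x, y) = (0, 1 - x^2 - y^2):
    # the x-coordinate is frozen, and y rises until the boundary circle is reached
    for _ in range(int(t / dt)):
        y += dt * (1 - x * x - y * y)
    return x, y
```

Starting anywhere in the interior, the trajectory approaches the point of the unit circle directly above it; the tension is that the chain recurrent set is nevertheless the whole disk.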

Nondegenerate Games and Approximate Equilibria
Nondegenerate Games. Our impossibility result in the previous section is constructed around a degenerate normal-form game with a continuum of equilibria. What if the game is nondegenerate?

Theorem 3. For any nondegenerate game g, there is a dynamical system Φ_g satisfying Nash stationarity which is Nash convergent.
Proof: Since g is nondegenerate, it has an odd number of isolated Nash equilibria (37). Thus, in this case, the Nash equilibria themselves are the maximal components. Fix one such equilibrium and call it y, e.g., y = NE_0. We next define Φ_g in terms of y. We shall define it at a point x implicitly, in terms of the direction of motion and the speed of motion; if this is done, Φ_g(t, x) is easily computed through integration over t. The direction of motion is the unit vector of y − x: The dynamics heads to y. The speed of motion is defined to be c · D_g(x), where c > 0 is a constant, and by D_g(x) we denote the deficit at x: the sum, over all players, of the difference between the best-response utility at x and the actual utility at x. It is clear that D_g(x) ≥ 0, and it becomes zero precisely at any Nash equilibrium; thus, Φ_g has the property of Nash stationarity. Define V(x) = ‖x − y‖² as the Lyapunov function. It is clear that V strictly decreases along trajectories off of NE (such trajectories head to y). Therefore, Φ_g is Nash convergent.
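The construction can be sketched for a concrete nondegenerate game (our illustration, using Matching Pennies, whose unique equilibrium y = (1/2, 1/2) is assumed known): the speed is the deficit D_g(x), the direction is the unit vector toward y, and the squared distance to y serves as the Lyapunov function.

```python
import math

# Matching Pennies (row payoffs A = [[1,-1],[-1,1]], column payoffs B = -A);
# its unique Nash equilibrium y = (1/2, 1/2) is assumed known, as in the proof.
def deficit(p, q):
    """D_g(x): sum over players of best-response utility minus realized utility."""
    u1 = [2 * q - 1, 1 - 2 * q]        # row player's payoffs for Heads, Tails
    u2 = [1 - 2 * p, 2 * p - 1]        # column player's payoffs for Heads, Tails
    pay1 = p * u1[0] + (1 - p) * u1[1]
    pay2 = q * u2[0] + (1 - q) * u2[1]
    return (max(u1) - pay1) + (max(u2) - pay2)

def run(x, y=(0.5, 0.5), dt=0.01, steps=2000):
    """Move x toward y with speed c * D_g(x), c = 1; return endpoint and distances."""
    dists = []
    for _ in range(steps):
        d = math.dist(x, y)
        dists.append(d)
        if d == 0:
            break
        step = min(dt * deficit(*x), d)   # Euler step, clamped so we never overshoot y
        x = (x[0] + step * (y[0] - x[0]) / d, x[1] + step * (y[1] - x[1]) / d)
    return x, dists
```

The distance to y is nonincreasing along the trajectory and the deficit vanishes exactly at the equilibrium, mirroring Nash stationarity and the Lyapunov argument of the proof.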

Remark 3:
It's worth remarking on why Φ_g as constructed above is not Nash convergent for the degenerate game in Eq. 1. In this case, there are points x ∉ NE that belong to orbits converging to the chosen Nash equilibrium y but originating at a Nash equilibrium z ≠ y. However, the Lyapunov function V must have V(z) = V(y), as both z and y belong to the same component. Therefore, V cannot be strictly decreasing along such trajectories. In fact, these dynamics are similar to the example dynamics discussed in the previous remark.

Now note that the algorithm for specifying Φ_g requires finding y, a PPAD-complete (and FIXP-complete for more than two players) problem. We believe that this dependence is inherent:

Conjecture 1. The computational task of finding, from the description of a game g, either a degeneracy of g or an algorithm producing the direction of motion and speed of a dynamical system satisfying Nash stationarity and Nash convergence is PPAD-hard (and FIXP-hard for three or more players).
We believe this is an important open question at the boundary between computational complexity, game theory, and the topology of dynamical systems, whose resolution is likely to require the development of complexity-theoretic techniques pertinent to dynamical systems. We further suspect that, if the sought dynamics are constrained to be utility-increasing, i.e., that given stationary opponents the agents move in the direction of increasing utility whenever possible, the difficulty of finding such Nash convergent dynamics is increased to NP-hard.
Approximate Nash Equilibria. Next, it may seem plausible that the difficulty of designing Nash convergent game dynamics can be overcome when only an ε-approximation is sought, for some ε > 0. Recall that an ε-Nash equilibrium is a mixed strategy profile in which all players' utilities are within an additive ε > 0 of their respective best responses. We show that our impossibility theorem extends to this case as well. Let us denote by NE_ε(g) the set of ε-approximate Nash equilibria of a game g; there are a finite number of components of ε-approximate Nash equilibria, so that once again NE_ε(g) = ∪_{1≤i≤n} NE_{ε,i}. We say that a dynamical system Φ has the property of ε-Nash stationarity with respect to a game g if NE_ε(g) ⊂ Fix(Φ), and that Φ is ε-Nash convergent if there is a Lyapunov function V that is constant on these components and strictly decreasing along trajectories if x ∉ NE_ε. We begin with a lemma.

Proof:
The argument for the first player follows, as the bound holds for any (x, y) ∈ NE_ε(g) and any i; the inequality for the second player is proved similarly.

Theorem 4. There is a game g and an ε > 0 such that g does not admit any dynamics that are both ε-Nash stationary and ε-Nash convergent. In fact, the set of such games has positive measure.
Proof: Consider again the Kohlberg-Mertens game of Eq. 1. Define the function h_g mapping each mixed strategy profile to the largest gain any player can obtain by deviating. Then h_g is a continuous function such that for any ε > 0, we have NE_ε(g) = h_g^{-1}([0, ε]); in particular, NE(g) = h_g^{-1}(0). For sufficiently small ε > 0, the inclusion NE(g) ⊂ NE_ε(g) induces an injection on homology ι_1 : H_1(NE(g)) → H_1(NE_ε(g)). Assuming that there exist ε-Nash convergent, ε-Nash stationary dynamics Φ, the arguments in the proofs of Theorems 1 and 2 can be repeated to show that NE_ε(g) is a maximal attractor for Φ and that the Conley indices for NE_ε(g) computed via X and via a small neighborhood of NE_ε(g) do not agree, which is a contradiction. See Fig. 4 for a projection of NE_ε(g).
We now show that the set of games which admit no 2ε-Nash convergent, 2ε-Nash stationary game dynamics has positive measure. Consider perturbations g' of the game g where each utility value of the normal form of g is perturbed independently, so that there is an 18-dimensional vector of perturbations whose 2-norm is bounded by a sufficiently small constant. The inclusion NE_ε(g) ⊂ NE_{2ε}(g') induces an injection ι_1 : H_1(NE_ε(g)) → H_1(NE_{2ε}(g')), which implies that H_•(NE_{2ε}(g')) ≠ H_•(X). Therefore, a similar argument as above shows that if NE_{2ε}(g') is a maximal attractor, the Conley indices computed via a small neighborhood and via X do not agree, which completes the proof.
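To make the ε-Nash condition used above concrete, the following checker (a sketch of ours; the function name and example game are our assumptions) computes the deviation gap whose sublevel sets h_g^{-1}([0, ε]) define NE_ε(g):

```python
# eps-Nash check for a bimatrix game (A, B): the "gap" is the largest amount
# any player could gain by a unilateral deviation from the profile (x, y).
def eps_nash_gap(A, B, x, y):
    n, m = len(A), len(A[0])
    u1 = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]  # row payoffs
    u2 = [sum(B[i][j] * x[i] for i in range(n)) for j in range(m)]  # column payoffs
    pay1 = sum(x[i] * u1[i] for i in range(n))
    pay2 = sum(y[j] * u2[j] for j in range(m))
    return max(max(u1) - pay1, max(u2) - pay2)

# Matching Pennies as a small example: the mixed profile (1/2, 1/2) has gap 0,
# while "pure Heads vs. uniform" leaves a gain of 1 on the table.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
```

A profile (x, y) is an ε-Nash equilibrium exactly when `eps_nash_gap(A, B, x, y) <= eps`.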

Conclusion
We have argued that the notion of the Nash equilibrium is fundamentally incomplete for describing the global dynamics of games. More precisely, we have shown that there are games in which no dynamics can be Nash convergent, and thus the concept of the Nash equilibrium cannot completely account for the long-term dynamical behavior in games. Moreover, this is true even when one relaxes the focus from Nash equilibria to approximate Nash equilibria.
Ultimately, in view of the present results about the limitations of the Nash equilibrium, the meaning of the game, as well as the meaning of general economic systems, should be sought elsewhere, perhaps with a closer focus on agent dynamics, as echoed in ref. 38. Indeed, detailed studies of several learning dynamics in ref. 10 suggest that the structure of best-reply paths and cycles holds important clues for making progress in this direction. Specifically, the existence of best-reply cycles predicts nonconvergence of six different learning dynamics, supported by experiments with human subjects. See also refs. 39 and 40 for proposed general solution concepts in game theory, alternatives to the Nash equilibrium, that are inspired by game dynamics and the topological theory of dynamical systems. Moving beyond game theory, a natural and exciting direction for future work is the pursuit of such dynamics-inspired solution concepts more generally in the social and even the natural sciences.
Regarding a more robust understanding of game dynamics, Conley theory provides a natural choice. For instance, as player utilities are not derived from first principles, but are presumably rough approximations of reality, it is natural that the appropriate mathematical objects for the analysis of game dynamics should be robust to perturbation; in the words of Conley (15), "...if such rough equations are to be of use it is necessary to study them in rough terms." This suggests the use of further ideas from dynamical systems theory, for instance, the concept of Morse decomposition, which allows for a coarse, multiscale description of global dynamics. Morse decompositions are partially ordered sets of isolated invariant sets which possess a duality theory with lattices of attractors and, in addition, have an associated homological theory using the Conley index (15). We believe that these ideas could provide a robust theory of the global dynamics of games.

Notes: Conley Index Theory
The classical reference for Conley index theory is Conley's original monograph (15). A standard reference for homotopy theory and singular homology theory is ref. 41.

Continuous Time. Let Φ : T+ × X → X be a semiflow, A an attractor for Φ, and U an attracting block for A. The Conley index of the attractor A is defined as CH_•(A) := H_•(U), where H_• denotes singular homology with integer coefficients. Most importantly for this paper, the Conley index is an invariant of A and is independent of the particular choice of U (15, 42). An elementary proof of this is as follows.
Proposition 1. If U and U' are attracting blocks for attractor A, then U and U' are homotopy equivalent. In particular, H_•(U) ≅ H_•(U'), and thus the Conley index is well defined.
Proof: It is elementary that if U and U' are attracting blocks, then U ∩ U' is an attracting block; see also ref. 36. Thus, we may assume without loss of generality that U' ⊂ U. Now, since U and U' are attracting blocks for the same attractor A, there exists some T > 0 such that Φ(T, U) ⊂ U'. It is straightforward to check that the pair of maps Φ(T, ·) : U → U' and the inclusion ι : U' → U form a homotopy equivalence between U and U', as for any t* > 0, the map Φ(t*, ·) is homotopic to the identity map via the homotopy (x, s) ↦ Φ(s · t*, x).
Discrete Time. In the discrete-time case, the Conley index takes a slightly more complex form, as the dynamical system itself no longer provides a homotopy equivalence. Let Φ : T+ × X → X be a discrete-time dynamical system, A an attractor for Φ, and U an attracting block for A. Define f : X → X via f(x) := Φ(1, x); then f(U) ⊂ U, and, with slight abuse of notation, there is an induced map f_U : H_•(U) → H_•(U), i.e., the well-defined homomorphism given by f_U([x]) = [f(x)]. The map f_U induces a surjection on the generalized image, χ_U : L_•(U) → L_•(U). Moreover, as a restriction of f_U, χ_U is an injection. Thus, χ_U is an automorphism. The Conley index of A is defined as the pair (L_•(U), χ_U) and is independent of the choice of attracting block U.
Proposition 2. Let Φ be a discrete-time dynamical system, A an attractor for Φ, and U and U' attracting blocks for A. The Conley index is independent of the choice of U. In other words, there exists an isomorphism between L_•(U') and L_•(U) commuting with the automorphisms χ_{U'} and χ_U.

Proof: We may again assume U' ⊂ U without loss of generality. Moreover, since U and U' are attracting blocks for A, there is some n > 0 such that f^n(U) ⊂ U'. It is straightforward that the inclusion U' ⊂ U induces a map ι* : L_•(U') → L_•(U) and the map f^n induces a map s : L_•(U) → L_•(U') so that the resulting diagram commutes. Now, χ_U and χ_{U'} are automorphisms, and as χ_{U'}^n = s ∘ ι*, we have that ι* is an injection; further, as χ_U^n = ι* ∘ s, ι* is a surjection. Thus, ι* is an isomorphism, and the Conley index is independent of the choice of attracting block.
Data, Materials, and Software Availability.There are no data underlying this work.
game theory | dynamical systems | Nash equilibrium | solution concept

Fig. 1. Rendering of Lyapunov function V for Nash convergent game dynamics Φ. For game dynamics, the components of Nash equilibria are invariant sets. The function V is constant on the components, while points which are not Nash equilibria lie on trajectories between components.

Fig. 3. Rendering of the circle of Nash equilibria (NE lives in four dimensions); NE = V^{-1}(0). The attracting block U_ε for the attractor NE and the Lyapunov function V ensure that the dynamics converge to NE.

Fig. 4. Three-dimensional projection of NE_ε(g) when g is the Kohlberg-Mertens game of Eq. 1, for utility-normalized ε = 0.09. The object depicted is homotopy equivalent to S¹ (a circle).
The construction of the generalized image, introduced in ref. 43, proceeds in two stages and depends on the notions of generalized kernel and generalized image. Define the generalized kernel of f_U as gker(f_U) := ∪_{n>0} ker f_U^n, and let M_•(U) := H_•(U)/gker(f_U). The map f_U induces an injective map on M_•(U). Now consider the generalized image of f_U: L_•(U) := ∩_{n>0} f_U^n(M_•(U)).