Reasoning About Strategies and Rational Play in Dynamic Games

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8972)

Abstract

We discuss the issues that arise in modeling the notion of common belief of rationality in epistemic models of dynamic games, in particular at the level of interpretation of strategies. A strategy in a dynamic game is defined as a function that associates with every information set a choice at that information set. Implicit in this definition is a set of counterfactual statements concerning what a player would do at information sets that are not reached, or a belief revision policy concerning behavior at information sets that are ruled out by the initial beliefs. We discuss the role of both objective and subjective counterfactuals in attempting to flesh out the interpretation of strategies in epistemic models of dynamic games.

Notes

  1.

    Evolutionary game theory has been applied not only to the analysis of animal and insect behavior but also to studying the “most successful strategies” for tumor and cancer cells (see, for example, [32]).

  2.

    On the other hand, in a situation of incomplete information at least one player lacks knowledge of some of the aspects of the game, such as the preferences of her opponents, or the actions available to them, or the possible outcomes, etc.

  3.

    Surveys of the literature on the epistemic foundations of game theory can be found in [8, 25, 30, 39, 40].

  4.

    The notion of rationality in dynamic games is also discussed in [41].

  5.

    For a syntactic analysis see [12, 17, 22, 27–29]. See also [37].

  6.

    Thus \(\mathcal {B}_{i}\) can also be viewed as a function from \(\varOmega \) into \( 2^{\varOmega }\) (the power set of \(\varOmega \)). Such functions are called possibility correspondences (or information functions) in the game-theoretic literature.

  7.

    For more details see the survey in [8].

  8.

    In modal logic belief operators are defined as syntactic operators on formulas. Given a (multi-agent) Kripke structure, a model based on it is obtained by associating with every state an assignment of truth value to every atomic formula (equivalently, by associating with every atomic formula the set of states where the formula is true). Given an arbitrary formula \(\phi \), one then stipulates that, at a state \(\omega \), the formula \(B_{i}\phi \) (interpreted as ‘agent i believes that \(\phi \)’) is true if and only if \(\phi \) is true at every state \(\omega ^{\prime } \in \mathcal {B}_{i}(\omega )\) (that is, \(\mathcal {B}_{i}(\omega )\) is a subset of the truth set of \(\phi \)). If event E is the truth set of formula \(\phi \) then the event \(\mathbb {B}_{i}E\) is the truth set of the formula \(B_{i}\phi \).
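
    To make the event-based formulation concrete, the following Python sketch (with a hypothetical three-state possibility correspondence, not one taken from the text) computes the event \(\mathbb {B}_{i}E\) as the set of states \(\omega \) such that \(\mathcal {B}_{i}(\omega )\subseteq E\).

        # A sketch of the belief operator: B_i is represented as a dict from
        # states to the set of states considered possible there (a hypothetical
        # serial, transitive and euclidean relation).
        Omega = {"w1", "w2", "w3"}
        B_i = {"w1": {"w1"}, "w2": {"w1"}, "w3": {"w3"}}

        def believes(B, E):
            """Return the event 'player i believes E', i.e. {w : B(w) is a subset of E}."""
            return {w for w in Omega if B[w] <= E}

        E = {"w1", "w3"}
        print(believes(B_i, E))   # all of w1, w2, w3: at each state B_i(w) is contained in E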

  9.

    That is, \(\omega ^{\prime }\in \mathcal {B}^{*}(\omega )\) if and only if there is a sequence \(\left\langle \omega _{1},...,\omega _{m}\right\rangle \) in \(\varOmega \) and a sequence \(\left\langle j_{1},...,j_{m-1}\right\rangle \) in N such that (1) \(\omega _{1}=\omega \), (2) \(\omega _{m}=\omega ^{\prime }\) and (3) for all \(k=1,...,m-1\), \(\omega _{k+1}\in \mathcal {B} _{j_{k}}(\omega _{k})\).
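
    The chain condition above amounts to reachability in the union of the individual relations. The following Python sketch (with two hypothetical doxastic relations) computes \(\mathcal {B}^{*}(\omega )\) as the set of states reachable from \(\omega \) by such chains.

        # Transitive closure of the union of the B_i's, starting from a given state.
        def common_belief(relations, states, w):
            union = {s: set() for s in states}
            for B in relations:                  # union of the individual relations
                for s, succ in B.items():
                    union[s] |= succ
            reachable, frontier = set(), set(union[w])
            while frontier:                      # add states reachable by longer chains
                s = frontier.pop()
                if s not in reachable:
                    reachable.add(s)
                    frontier |= union[s]
            return reachable

        B_1 = {"a": {"a"}, "b": {"a"}, "c": {"c"}}
        B_2 = {"a": {"b"}, "b": {"b"}, "c": {"c"}}
        print(common_belief([B_1, B_2], {"a", "b", "c"}, "a"))   # {'a', 'b'}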

  10.

    As can be seen from Fig. 1, the common belief relation \(\mathcal {B}^{*}\) is not necessarily euclidean, despite the fact that the \(\mathcal {B}_{i}\)’s are euclidean. In other words, in general, the notion of common belief does not satisfy negative introspection (although it does satisfy positive introspection). It is shown in [24] that negative introspection of common belief holds if and only if no agent has erroneous beliefs about what is commonly believed.

  11.

    This is a local version of knowledge (defined as true belief) which is compatible with the existence of other states where some or all players have erroneous beliefs (see [23], in particular Definition 2 on page 9 and the example of Fig. 2 on page 6). Note that philosophical objections have been raised to defining knowledge as true belief; for a discussion of this issue see, for example, [52].

  12.

    Strategic-form games can also be used to represent situations where players move sequentially, rather than simultaneously. This is because, as discussed later, strategies in such games are defined as complete, contingent plans of action. However, the choice of a strategy in a dynamic game is thought of as being made before the game begins and thus the strategic-form representation of a dynamic game can be viewed as a simultaneous game where all the players choose their strategies simultaneously before the game is played.

  13.

    A preference relation over a set S is a binary relation \(\succsim \) on S which is complete or connected (for all \(s,s^{\prime }\in S\), either \( s\succsim s^{\prime }\) or \(s^{\prime }\succsim s\), or both) and transitive (for all \(s,s^{\prime },s^{\prime \prime }\in S\), if \(s\succsim s^{\prime }\) and \(s^{\prime }\succsim s^{\prime \prime }\) then \(s\succsim s^{\prime \prime }\)). We write \(s\succ s^{\prime }\) as a short-hand for \(s\succsim s^{\prime }\) and \(s^{\prime }\not \succsim s\) and we write \(s\sim s^{\prime }\) as a short-hand for \(s\succsim s^{\prime }\) and \(s^{\prime }\succsim s\). The interpretation of \(s\succsim _{i}s^{\prime }\) is that player i considers s to be at least as good as \(s^{\prime }\), while \(s\succ _{i}s^{\prime }\) means that player i prefers s to \(s^{\prime }\) and \(s\sim _{i}s^{\prime } \) means that she is indifferent between s and \(s^{\prime }\). The interpretation is that there is a set Z of possible outcomes over which every player has a preference relation. An outcome function \( o:S\rightarrow Z\) associates an outcome with every strategy profile, so that the preference relation over Z induces a preference relation over S.
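
    For a finite set S these two conditions, and the derived relations \(\succ \) and \(\sim \), can be checked mechanically; the Python sketch below uses a hypothetical three-element set and a relation given as a set of ordered pairs.

        # Completeness and transitivity of a binary relation R on S, and the
        # derived strict preference and indifference relations (hypothetical data).
        from itertools import product

        S = {"s1", "s2", "s3"}
        R = {("s1", "s1"), ("s2", "s2"), ("s3", "s3"),
             ("s1", "s2"), ("s2", "s3"), ("s1", "s3")}   # s1 at least as good as s2, etc.

        complete = all((x, y) in R or (y, x) in R for x, y in product(S, S))
        transitive = all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)
        strict = {(x, y) for (x, y) in R if (y, x) not in R}    # x is preferred to y
        indifferent = {(x, y) for (x, y) in R if (y, x) in R}   # x and y are indifferent
        print(complete, transitive, strict)   # True, True, and the three strict comparisons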

  14.

    Cardinal utility functions are also called Bernoulli utility functions or von Neumann-Morgenstern utility functions.

  15.

    That is, \(\forall \omega \in \varOmega \), \(\forall \omega ^{\prime }\in \mathcal {B}^{*}(\omega )\), \(\omega ^{\prime }\in \mathcal {B}_{1}(\omega ^{\prime })\) and \(\omega ^{\prime }\in \mathcal {B}_{2}(\omega ^{\prime })\).

  16.

    Thus, if at a state \(\omega \) there is common belief of rationality then, for every player i, \(\sigma _{i}(\omega )\) survives the iterated deletion of strictly dominated strategies. For more details on this result, which originates in [13] and [38], and relevant references, see [8, 22, 30, 40].
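
    For concreteness, here is a Python sketch of the deletion procedure, using strict dominance by pure strategies (in line with the chapter's ordinal, non-probabilistic setting) and a hypothetical 2×2 game; it is meant only as an illustration of the procedure referred to above.

        # Iterated deletion of strictly dominated strategies in a two-player game.
        # u1 and u2 map strategy profiles (s1, s2) to ordinal payoffs (hypothetical data).
        def iterated_deletion(S1, S2, u1, u2):
            S1, S2 = set(S1), set(S2)
            changed = True
            while changed:
                changed = False
                for s in list(S1):       # remove Player 1's strictly dominated strategies
                    if any(all(u1[(t, s2)] > u1[(s, s2)] for s2 in S2)
                           for t in S1 if t != s):
                        S1.discard(s)
                        changed = True
                for s in list(S2):       # remove Player 2's strictly dominated strategies
                    if any(all(u2[(s1, t)] > u2[(s1, s)] for s1 in S1)
                           for t in S2 if t != s):
                        S2.discard(s)
                        changed = True
            return S1, S2

        u1 = {("c", "a"): 2, ("c", "b"): 0, ("d", "a"): 1, ("d", "b"): 3}
        u2 = {("c", "a"): 1, ("c", "b"): 0, ("d", "a"): 1, ("d", "b"): 0}
        print(iterated_deletion({"c", "d"}, {"a", "b"}, u1, u2))   # ({'c'}, {'a'})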

  17.

    For an extensive discussion see [34]. In the game-theoretic literature (see, for example, [16] and [56]) a simpler approach is often used (originally introduced by [48]) where \(f(\omega ,E)\) is always a singleton.

  18.

    When \(\mathcal {E}\) coincides with \(2^{\varOmega } \backslash \varnothing \), Condition 4 implies that, for every \(\omega \in \varOmega \), there exists a complete and transitive “closeness to \(\omega \)” binary relation \(\preceq _{\omega }\) on \(\varOmega \) such that \(f(\omega ,E) = \{ \omega ^{\prime } \in E : \omega ^{\prime } \preceq _{\omega } x, \forall x \in E \} \) (see Theorem 2.2 in [54]) thus justifying the interpretation suggested above: \(\omega _{1} \preceq _{\omega } \omega _{2}\) is interpreted as ‘state \(\omega _{1}\) is closer to \(\omega \) than state \(\omega _{2}\) is’ and \(f(\omega ,E)\) is the set of states in E that are closest to \(\omega \).
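
    One way to realize this representation computationally is to encode each closeness relation \(\preceq _{\omega }\) by a numeric ranking (smaller means closer to \(\omega \)); the Python sketch below, with hypothetical rankings, then computes \(f(\omega ,E)\) as the set of \(\preceq _{\omega }\)-minimal elements of E.

        # f(w, E) as the states in E closest to w, for a total pre-order encoded by a
        # hypothetical numeric ranking; rank 0 is w itself, so that point 3 of
        # Definition 4 (if w is in E then f(w, E) = {w}) is respected.
        closeness = {
            "w1": {"w1": 0, "w2": 1, "w3": 1},
            "w2": {"w1": 2, "w2": 0, "w3": 1},
            "w3": {"w1": 1, "w2": 2, "w3": 0},
        }

        def f(w, E):
            best = min(closeness[w][x] for x in E)
            return {x for x in E if closeness[w][x] == best}

        print(f("w1", {"w2", "w3"}))   # {'w2', 'w3'}: ties are allowed, f need not be a singleton
        print(f("w2", {"w1", "w3"}))   # {'w3'}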

  19.

    As remarked in Footnote 17, both authors use the less general definition of selection function where \(f:\varOmega \times \mathcal {E}\rightarrow \varOmega \), that is, for every state \(\omega \) and event E, there is a unique state closest to \(\omega \) where E is true.

  20.

    For this reason, some authors (see, for example, [40]), instead of using strategies, use the weaker notion of “plan of action” introduced by [44]. A plan of action for a player only contains choices that are not ruled out by his earlier choices. For example, the possible plans of action for Player 1 in the game of Fig. 3 are \(a_{1}, (a_{2},d_{1})\) and \((a_{2},d_{2})\). However, most of the issues raised below apply also to plans of action. The reason for this is that a choice of player i at a later decision history of his may be counterfactual at a state because of the choices of other players (which prevent that history from being reached).

  21.

    This interpretation of strategies has in fact been put forward in the literature for the case of mixed strategies (which we do not consider in this chapter, given our non-probabilistic approach): see, for example, [6] and the references given there in Footnote 7.

  22.

    [45] was the first to propose models of perfect-information games where states are described not in terms of strategies but in terms of terminal histories.

  23.

    Note that, if at state \(\omega \) player i believes that history h will not be reached (\(\forall \omega ^{\prime }\in \mathcal {B}_{i}(\omega ) \), \(\omega ^{\prime }\notin [h]\)) then \(\mathcal {B}_{i}(\omega )\subseteq \lnot [h]\subseteq [h]\rightarrow [ha]\), so that \(\omega \in \mathbb {B}_{i}\left( [h]\rightarrow [ha]\right) \) and therefore (11) is trivially satisfied (even if \(\omega \in [h])\).

  24.

    On the other hand, we have not represented the fact that \(f(\alpha ,\{\alpha ,\delta \}) = \{ \alpha \}\), which follows from point 3 of Definition 4 (since \(\alpha \in \{\alpha ,\delta \}\)) and the fact that \(f(\delta ,\{\alpha ,\delta \}) = \{ \delta \}\), which also follows from point 3 of Definition 4. We have also omitted other values of the selection function f, which are not relevant for the discussion below.

  25.

    Recall that, by Definition 4, since \(\alpha \in [a_{2}]\), \(f(\alpha ,[a_{2}])=\{ \alpha \}\), so that, since \(\alpha \in [a_{2}c_{2}]\) (because \(a_{2}c_{2}\) is a prefix of \(\zeta (\alpha )=a_{2}c_{2}d_{2}\)), \(\alpha \in [a_{2}]\rightrightarrows [a_{2}c_{2}]\). Furthermore, since \(f(\beta ,[a_{2}])=\{ \alpha \}\), \(\beta \in [a_{2}]\rightrightarrows [a_{2}c_{2}]\). There is no other state \(\omega \) where \(f(\omega ,[a_{2}]) \subseteq [a_{2}c_{2}]\). Thus \([a_{2}]\rightrightarrows [a_{2}c_{2}]=\{\alpha ,\beta \}\). The argument for \([a_{2}]\rightrightarrows [a_{2}c_{1}]=\{\gamma ,\delta \}\) is similar.
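
    The computation pattern used in this and the next footnote is easy to mechanize. In the Python sketch below the states, the events and the values of the selection function are all hypothetical (they are not those of the chapter's example); it simply collects the states \(\omega \) with \(f(\omega ,E)\subseteq F\) to obtain \(E\rightrightarrows F\), and forms \(\lnot E\cup F\) for the material conditional \(E\rightarrow F\).

        # Counterfactual event E =>> F versus material conditional E -> F
        # (hypothetical states, events and selection-function values).
        Omega = {"alpha", "beta", "gamma", "delta"}
        E = {"alpha", "gamma", "delta"}
        F = {"alpha"}
        f_given_E = {"alpha": {"alpha"}, "beta": {"alpha"},
                     "gamma": {"gamma"}, "delta": {"delta"}}   # hypothetical values of f(., E)

        def counterfactual(f_E, F):
            return {w for w in Omega if f_E[w] <= F}           # E =>> F

        def material(E, F):
            return (Omega - E) | F                             # E -> F  =  (not E) or F

        print(counterfactual(f_given_E, F))   # {'alpha', 'beta'}
        print(material(E, F))                 # {'alpha', 'beta'}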

  26.

    Since \(\mathcal {B}_{2}(\beta )=\{ \gamma \}\) and \(\gamma \in [a_{2}]\rightrightarrows [a_{2}c_{1}]\), \(\beta \in \mathbb {B}_{2}\left( [a_{2}]\rightrightarrows [a_{2}c_{1}]\right) \). Recall that the material conditional ‘if E is the case then F is the case’ is captured by the event \(\lnot E\cup F\), which we denote by \( E\rightarrow F\). Then \([a_{2}]\rightarrow [a_{2}c_{1}]=\{\beta ,\gamma ,\delta \}\) and \([a_{2}]\rightarrow [a_{2}c_{2}]=\{\alpha ,\beta ,\gamma \}\), so that we also have, trivially, that \(\beta \in \mathbb {B}_{2}\left( [a_{2}]\rightarrow [a_{2}c_{1}]\right) \) and \(\beta \in \mathbb {B}_{2}\left( [a_{2}]\rightarrow [a_{2}c_{2}]\right) \).

  27.

    Recall that a game is said to have complete information if the game itself is common knowledge among the players. On the other hand, in a situation of incomplete information at least one player lacks knowledge of some of the aspects of the game, such as the preferences of her opponents, or the actions available to them, or the possible outcomes, etc.

  28.

    As shown above, at state \(\omega \) Player 1 chooses \(a_{2}\); \(f(\omega ,[a_{1}])\) is the set of states closest to \(\omega \) where Player 1 chooses \(a_{1}\); in these states Player 2’s prior beliefs must be the same as at \(\omega \), otherwise by switching from \(a_{2}\) to \(a_{1}\) Player 1 would cause a change in Player 2’s prior beliefs.

  29.

    See, for example, [2, 7, 9, 14, 19, 28, 33, 45].

  30.

    In [28] there is also an objective counterfactual selection function, but it is used only to encode the structure of the game in the syntax.

  31.

    For example, in a perfect-information game one can take \(\mathcal {E} _{i}=\{[h]:h\in D_{i}\}\), that is, the set of propositions of the form “decision history h of player i is reached” or \(\mathcal {E}_{i}=\{[h]:h\in H\}\), the set of propositions corresponding to all histories (in which case \(\mathcal {E}_{i}= \mathcal {E}_{j}\) for any two players i and j).

  32.

    Note that it follows from Condition 3 and seriality of \(\mathcal {B} _{i}\) that, for every \(\omega \in \varOmega \), \(f_{i}(\omega ,\varOmega )= \mathcal {B}_{i}(\omega )\), so that one could simplify the definition of model by dropping the relations \(\mathcal {B}_{i}\) and recovering the initial beliefs from the set \(f_{i}(\omega ,\varOmega )\). We have chosen not to do so in order to maintain continuity in the exposition.

  33.

    Although widely accepted, this principle of belief revision is not uncontroversial (see [42] and [53]).

  34.

    Equivalently, one can think of \(\mathcal {\rightrightarrows }_{i}\) as a conditional belief operator \(\mathbb {B}_{i}(\cdot |\cdot )\) with the interpretation of \(\mathbb {B}_{i}(F|E)\) as ‘player i believes F given information/supposition E’ (see, for example, [15] who uses the notation \(\mathbb {B}_{i}^{E}(F)\) instead of \(\mathbb {B}_{i}(F|E)\)).

  35.

    The author goes on to say that “The models can be enriched by adding a temporal dimension to represent the dynamics, but doing so requires that the knowledge and belief operators be time indexed...” For a model where the belief operators are indeed time indexed and represent the actual beliefs of the players when actually informed that it is their turn to move, see [20].

  36.

    (15) is implied by (11) whenever player i’s initial beliefs do not rule out h. That is, if \(\omega \in \lnot \mathbb {B}_{i} \lnot [h]\) (equivalently, \(\mathcal {B}_{i}(\omega )\cap [h] \ne \varnothing \)) then, for every \(a\in A(h)\),

    $$\begin{aligned} \text {if } \omega \in [ha] \text { then } \omega \in \left( [h] \rightrightarrows _{i} [ha]\right) .&~~~~(F1) \end{aligned}$$

    In fact, by Condition 3 of Definition 6 (since, by hypothesis, \(\mathcal {B}_{i}(\omega )\cap [h]\ne \varnothing \)),

    $$\begin{aligned} f_{i}(\omega ,[h])=\mathcal {B}_{i}(\omega )\cap [h].&~~~~(F2) \end{aligned}$$

    Let \(a\in A(h)\) be such that \(\omega \in [ha]\). Then, by (11), \(\omega \in \mathbb {B}_{i}([h]\rightarrow [ha])\), that is, \(\mathcal {B}_{i}(\omega )\subseteq \lnot [h]\cup [ha]\) . Thus \(\mathcal {B}_{i}(\omega )\cap [h]\subseteq \left( \lnot [h]\cap [h]\right) \cup \left( [ha]\cap [h]\right) =\varnothing \cup [ha]=[ha]\) (since \([ha]\subseteq [h]\)) and therefore, by (F2), \(f_{i}(\omega ,[h])\subseteq [ha]\), that is, \(\omega \in [h]\rightrightarrows _{i}[ha]\).

  37.

    This is a “local” definition in that it only considers, for every decision history of player i, a change in player i’s choice at that decision history and not also at later decision histories of hers (if any). One could make the definition of rationality more stringent by simultaneously considering changes in the choices at a decision history and subsequent decision histories of the same player (if any).

  38.

    Proof. Suppose that \(\omega \in [ha]\cap \lnot \mathbb {B}_{i} \lnot [h].\) As shown in Footnote 36 (see (F2)),

    $$\begin{aligned} \mathcal {B}_{i}(\omega )\cap [h]=f_{i}(\omega ,[h]).&~~~~(G1) \end{aligned}$$

    Since \([ha]\subseteq [h]\),

    $$\begin{aligned} \mathcal {B}_{i}(\omega )\cap [h] \cap [ha] = \mathcal {B}_{i}(\omega )\cap [ha].&~~~~ (G2) \end{aligned}$$

    As shown in Footnote 36, \(f_{i}(\omega ,[h])\subseteq [ha]\) and, by Condition 1 of Definition 6, \( f_{i}(\omega ,[h])\ne \varnothing \). Thus \(f_{i}(\omega ,[h])\cap [ha]=f_{i}(\omega ,[h])\ne \varnothing .\) Hence, by Condition 4 of Definition 6,

    $$\begin{aligned} f_{i}(\omega ,[h])\cap [ha] =f_{i}(\omega ,[ha]).&~~~~(G3) \end{aligned}$$

    By intersecting both sides of (G1) with [ha] and using (G2) and (G3) we get that \(\mathcal {B}_{i}(\omega )\cap [ha]=f_{i}(\omega ,[ha])\).

  39.

    In fact, common belief of material rationality does not even imply a Nash equilibrium outcome. A Nash equilibrium is a strategy profile satisfying the property that no player can increase her payoff by unilaterally changing her strategy. A Nash equilibrium outcome of a perfect-information game is a terminal history associated with a Nash equilibrium. A backward-induction solution of a perfect-information game can be written as a strategy profile and is always a Nash equilibrium.
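
    As a small illustration of the definition just given (with a hypothetical two-player strategic-form game and ordinal payoffs), the following Python sketch tests whether a profile is a Nash equilibrium by checking that neither player has a profitable unilateral deviation; the example also shows that a game may have several equilibria.

        # Nash-equilibrium test for a two-player strategic-form game (hypothetical data).
        def is_nash(profile, S1, S2, u1, u2):
            s1, s2 = profile
            no_dev_1 = all(u1[(s1, s2)] >= u1[(t, s2)] for t in S1)   # Player 1 cannot gain
            no_dev_2 = all(u2[(s1, s2)] >= u2[(s1, t)] for t in S2)   # Player 2 cannot gain
            return no_dev_1 and no_dev_2

        S1, S2 = {"c", "d"}, {"a", "b"}
        u1 = {("c", "a"): 2, ("c", "b"): 0, ("d", "a"): 1, ("d", "b"): 1}
        u2 = {("c", "a"): 2, ("c", "b"): 1, ("d", "a"): 0, ("d", "b"): 1}
        print(is_nash(("c", "a"), S1, S2, u1, u2))   # True
        print(is_nash(("d", "b"), S1, S2, u1, u2))   # True: equilibria need not be unique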

  40.

    In Fig. 6, for every terminal history, the top number associated with it is Player 1’s utility and the bottom number is Player 2’s utility. In Fig. 7 we have only represented parts of the functions \(f_{1}\) and \( f_{2}\), namely that \(f_{1}(\gamma , \{ \alpha , \beta , \delta \}) = \{ \delta \}\) and \(f_{2}(\beta , \{ \alpha , \beta , \delta \}) = f_{2}(\gamma , \{ \alpha , \beta , \delta \}) = \{ \alpha \}\) (note that \([a_{1}] = \{ \alpha , \beta , \delta \}\)). Similar examples can be found in [15, 28, 43, 50].

  41.

    For an example of epistemic models of dynamic games where strategies do play a role see [41].

  42.

    In general dynamic games, a strategy specifies a choice for every information set of the player.

  43.

    [20] uses a dynamic framework where the set of “possible worlds” is given by state-instant pairs \((\omega ,t)\). Each state \(\omega \) specifies the entire play of the game (that is, a terminal history) and, for every instant t, \((\omega ,t)\) specifies the history that is reached at that instant (in state \(\omega \)). A player is said to be active at \((\omega ,t)\) if the history reached in state \(\omega \) at date t is a decision history of his. At every state-instant pair \((\omega ,t)\) the beliefs of the active player provide an answer to the question “what will happen if I take action a?”, for every available action a. A player is said to be rational at \((\omega ,t)\) if either he is not active there or the action he ends up taking at state \(\omega \) is optimal given his beliefs at \((\omega ,t)\). Backward induction is characterized in terms of the following event: the first mover (at date 0) (i) is rational and has correct beliefs, (ii) believes that the active player at date 1 is rational and has correct beliefs, (iii) believes that the active player at date 1 believes that the active player at date 2 is rational and has correct beliefs, etc.
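
    The characterization just described is stated in terms of time-indexed beliefs, but the backward-induction outcome it selects can be computed by the familiar recursion on the game tree. The Python sketch below is only that generic recursion, applied to a hypothetical two-stage perfect-information game with ordinal utilities (ties broken arbitrarily); it is not an implementation of the framework of [20].

        # Backward induction on a finite perfect-information game tree.
        # A node is either ("terminal", utilities) or ("decision", player, {action: subtree}).
        def backward_induction(node):
            if node[0] == "terminal":
                return node[1], []                       # utilities, empty action path
            _, player, moves = node
            best = None                                  # (utilities, path) of the best action so far
            for action, subtree in moves.items():
                u, path = backward_induction(subtree)
                if best is None or u[player] > best[0][player]:
                    best = (u, [action] + path)
            return best

        game = ("decision", 1, {
            "a1": ("terminal", {1: 2, 2: 2}),
            "a2": ("decision", 2, {"c1": ("terminal", {1: 1, 2: 3}),
                                   "c2": ("terminal", {1: 3, 2: 1})}),
        })
        print(backward_induction(game))   # ({1: 2, 2: 2}, ['a1']): Player 1 ends the game at once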

  44.

    The focus of this chapter has been on the issue of modeling the notion of rationality and “common recognition” of rationality in dynamic games with perfect information. Alternatively one can use the AGM theory of belief revision to provide foundations for refinements of Nash equilibrium in dynamic games. This is done in [19, 21] where a notion of perfect Bayesian equilibrium is proposed for general dynamic games (thus allowing for imperfect information). Perfect Bayesian equilibria constitute a refinement of subgame-perfect equilibria and are a superset of sequential equilibria. The notion of sequential equilibrium was introduced by [36].

References

  1. Alchourrón, C., Gärdenfors, P., Makinson, D.: On the logic of theory change: Partial meet contraction and revision functions. J. Symbolic Logic 50, 510–530 (1985)

  2. Arló-Costa, H., Bicchieri, C.: Knowing and supposing in games of perfect information. Stud. Logica 86, 353–373 (2007)

  3. Aumann, R.: What is game theory trying to accomplish? In: Arrow, K., Honkapohja, S. (eds.) Frontiers in Economics, pp. 28–76. Basil Blackwell, Oxford (1985)

  4. Aumann, R.: Backward induction and common knowledge of rationality. Games Econ. Behav. 8, 6–19 (1995)

  5. Aumann, R.: On the centipede game. Games Econ. Behav. 23, 97–105 (1998)

  6. Aumann, R., Brandenburger, A.: Epistemic conditions for Nash equilibrium. Econometrica 63, 1161–1180 (1995)

  7. Baltag, A., Smets, S., Zvesper, J.: Keep ‘hoping’ for rationality: A solution to the backward induction paradox. Synthese 169, 301–333 (2009)

  8. Battigalli, P., Bonanno, G.: Recent results on belief, knowledge and the epistemic foundations of game theory. Res. Econ. 53, 149–225 (1999)

  9. Battigalli, P., Di-Tillio, A., Samet, D.: Strategies and interactive beliefs in dynamic games. In: Acemoglu, D., Arellano, M., Dekel, E. (eds.) Advances in Economics and Econometrics. Theory and Applications: Tenth World Congress. Cambridge University Press, Cambridge (2013)

  10. Battigalli, P., Siniscalchi, M.: Strong belief and forward induction reasoning. J. Econ. Theor. 106, 356–391 (2002)

  11. Ben-Porath, E.: Nash equilibrium and backwards induction in perfect information games. Rev. Econ. Stud. 64, 23–46 (1997)

  12. van Benthem, J.: Logical Dynamics of Information and Interaction. Cambridge University Press, Cambridge (2011)

  13. Bernheim, D.: Rationalizable strategic behavior. Econometrica 52, 1002–1028 (1984)

  14. Board, O.: Belief revision and rationalizability. In: Gilboa, I. (ed.) Theoretical Aspects of Rationality and Knowledge (TARK VII). Morgan Kaufman, San Francisco (1998)

  15. Board, O.: Dynamic interactive epistemology. Games Econ. Behav. 49, 49–80 (2004)

  16. Board, O.: The equivalence of Bayes and causal rationality in games. Theor. Decis. 61, 1–19 (2006)

  17. Bonanno, G.: A syntactic approach to rationality in games with ordinal payoffs. In: Bonanno, G., van der Hoek, W., Wooldridge, M. (eds.) Logic and the Foundations of Game and Decision Theory (LOFT 7). Texts in Logic and Games, vol. 3, pp. 59–86. Amsterdam University Press, Amsterdam (2008)

  18. Bonanno, G.: Rational choice and AGM belief revision. Artif. Intell. 173, 1194–1203 (2009)

  19. Bonanno, G.: AGM belief revision in dynamic games. In: Apt, K. (ed.) Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge, TARK XIII, pp. 37–45. ACM, New York (2011)

  20. Bonanno, G.: A dynamic epistemic characterization of backward induction without counterfactuals. Games Econ. Behav. 78, 31–45 (2013)

  21. Bonanno, G.: AGM-consistency and perfect Bayesian equilibrium. Part I: definition and properties. Int. J. Game Theor. 42, 567–592 (2013)

  22. Bonanno, G.: Epistemic foundations of game theory. In: van Ditmarsch, H., Halpern, J., van der Hoek, W., Kooi, B. (eds.), Handbook of Logics for Knowledge and Belief, pp. 411–450. College Publications (2015)

  23. Bonanno, G., Nehring, K.: Assessing the truth axiom under incomplete information. Math. Soc. Sci. 36, 3–29 (1998)

  24. Bonanno, G., Nehring, K.: Common belief with the logic of individual belief. Math. Logic Q. 46, 49–52 (2000)

  25. Brandenburger, A.: The power of paradox: some recent developments in interactive epistemology. Int. J. Game Theor. 35, 465–492 (2007)

  26. Camerer, C.: Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press, Princeton (2003)

  27. Clausing, T.: Doxastic conditions for backward induction. Theor. Decis. 54, 315–336 (2003)

  28. Clausing, T.: Belief revision in games of perfect information. Econ. Philos. 20, 89–115 (2004)

  29. de Bruin, B.: Explaining Games: The Epistemic Programme in Game Theory. Springer, The Netherlands (2010)

  30. Dekel, E., Gul, F.: Rationality and knowledge in game theory. In: Kreps, D., Wallis, K. (eds.) Advances in Economics and Econometrics, pp. 87–172. Cambridge University Press, Cambridge (1997)

  31. Feinberg, Y.: Subjective reasoning - dynamic games. Games Econ. Behav. 52, 54–93 (2005)

  32. Gerstung, M., Nakhoul, H., Beerenwinkel, N.: Evolutionary games with affine fitness functions: applications to cancer. Dyn. Games Appl. 1, 370–385 (2011)

  33. Halpern, J.: Hypothetical knowledge and counterfactual reasoning. Int. J. Game Theor. 28, 315–330 (1999)

  34. Halpern, J.: Set-theoretic completeness for epistemic and conditional logic. Ann. Math. Artif. Intell. 26, 1–27 (1999)

  35. Halpern, J.: Substantive rationality and backward induction. Games Econ. Behav. 37, 425–435 (2001)

  36. Kreps, D., Wilson, R.: Sequential equilibrium. Econometrica 50, 863–894 (1982)

  37. Pacuit, E.: Dynamic Models of Rational Deliberation in Games. In: van Benthem, J., Ghosh, S., Verbrugge, R. (eds.) Models of Strategic Reasoning. LNCS, vol. 8972, pp. 3–33. Springer, Heidelberg (2015)

  38. Pearce, D.: Rationalizable strategic behavior and the problem of perfection. Econometrica 52, 1029–1050 (1984)

  39. Perea, A.: Epistemic foundations for backward induction: An overview. In: van Benthem, J., Gabbay, D., Löwe, B. (eds.) Interactive logic. Proceedings of the 7th Augustus de Morgan Workshop. Texts in Logic and Games, vol. 1, pp. 159–193. Amsterdam University Press, Amsterdam (2007)

  40. Perea, A.: Epistemic Game Theory: Reasoning and Choice. Cambridge University Press, Cambridge (2012)

  41. Perea, A.: Finite reasoning procedures for dynamic games. In: van Benthem, J., Ghosh, S., Verbrugge, R. (eds.) Models of Strategic Reasoning. Lecture Notes in Computer Science, vol. 8972, pp. 63–90. Springer, Heidelberg (2015)

  42. Rabinowicz, W.: Stable revision, or is preservation worth preserving? In: Fuhrmann, A., Rott, H. (eds.) Logic, Action and Information: Essays on Logic in Philosophy and Artificial Intelligence, pp. 101–128. de Gruyter, Berlin (1996)

  43. Rabinowicz, W.: Backward induction in games: On an attempt at logical reconstruction. In: Rabinowicz, W. (ed.) Value and Choice: Some Common Themes in Decision Theory and Moral Philosophy, pp. 243–256. Lund Philosophy Reports, Lund (2000)

  44. Rubinstein, A.: Comments on the interpretation of game theory. Econometrica 59, 909–924 (1991)

  45. Samet, D.: Hypothetical knowledge and games with perfect information. Games Econ. Behav. 17, 230–251 (1996)

  46. Shoham, Y., Leyton-Brown, K.: Multiagent Systems: Algorithmic, Game-theoretic, and Logical Foundations. Cambridge University Press, Cambridge (2008)

  47. Maynard Smith, J.: Evolution and the Theory of Games. Cambridge University Press, Cambridge (1982)

  48. Stalnaker, R.: A theory of conditionals. In: Rescher, N. (ed.) Studies in Logical Theory, pp. 98–112. Blackwell, Oxford (1968)

  49. Stalnaker, R.: Knowledge, belief and counterfactual reasoning in games. Econ. Philos. 12, 133–163 (1996)

  50. Stalnaker, R.: Belief revision in games: Forward and backward induction. Math. Soc. Sci. 36, 31–56 (1998)

  51. Stalnaker, R.: Extensive and strategic forms: Games and models for games. Res. Econ. 53, 293–319 (1999)

  52. Stalnaker, R.: On logics of knowledge and belief. Philos. Stud. 128, 169–199 (2006)

  53. Stalnaker, R.: Iterated belief revision. Erkenntnis 70, 189–209 (2009)

  54. Suzumura, K.: Rational Choice, Collective Decisions and Social Welfare. Cambridge University Press, Cambridge (1983)

  55. von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior. Princeton University Press, Princeton (1944)

  56. Zambrano, E.: Counterfactual reasoning and common knowledge of rationality in normal form games. Topics Theor. Econ. 4, Article 8 (2004)

Acknowledgments

I am grateful to Sonja Smets for presenting this chapter at the Workshop on Modeling Strategic Reasoning (Lorentz Center, Leiden, February 2012) and for offering several constructive comments. I am also grateful to two anonymous reviewers and to the participants in the workshop for many useful comments and suggestions.

Author information

Correspondence to Giacomo Bonanno.

A Summary of Notation

The following is a summary of the notation used in this chapter.

  • \(\varOmega \): Set of states.

  • \(\mathcal {B}_{i}\): Player i’s binary “doxastic accessibility” relation on \(\varOmega \). The interpretation of \(\omega \mathcal {B}_{i} \omega ^{\prime }\) is that at state \(\omega \) player i considers state \(\omega ^{\prime }\) possible; see Definition 1.

  • \(\mathcal {B}_{i}(\omega ) =\{\omega ^{\prime }\in \varOmega :\omega \mathcal {B}_{i}\omega ^{\prime }\}\): Belief set of player i at state \(\omega \).

  • \(\mathbb {B}_{i}:2^{\varOmega }\rightarrow 2^{\varOmega }\): Belief operator of player i. If \(E \subseteq \varOmega \) then \(\mathbb {B}_{i}E\) is the set of states where player i believes E, that is, \(\mathbb {B}_{i}E=\{\omega \in \varOmega :\mathcal {B}_{i}(\omega )\subseteq E \}\).

  • \(\mathcal {B}^{*}\): Common belief relation on the set of states \(\varOmega \) (the transitive closure of the union of the \(\mathcal {B}_{i}\)’s).

  • \(\mathbb {B}^{*}:2^{\varOmega }\rightarrow 2^{\varOmega }\): Common belief operator.

  • \(\left\langle N,\left\{ S_{i},\succsim _{i}\right\} _{i\in N}\right\rangle \): Strategic-form game; see Definition 2.

  • \(f:\varOmega \times \mathcal {E}\rightarrow 2^{\varOmega } \): Objective counterfactual selection function. The event \(f(\omega ,E)\) is interpreted as “the set of states closest to \(\omega \) where E is true”; see Definition 4.

  • \(E\rightrightarrows F=\{\omega \in \varOmega :f(\omega ,E)\subseteq F\}\): The set of states where it is true that if E were the case then F would be the case.

  • \(\left\langle A,H,N,\iota ,\left\{ \succsim _{i}\right\} _{i\in N}\right\rangle \): Dynamic game with perfect information; see Definition 5.

  • \(f_{i}:\varOmega \times \mathcal {E}_{i}\rightarrow 2^{\varOmega }\): Subjective counterfactual selection function. The event \(f_{i}(\omega ,E)\) is interpreted as the set of states that player i would consider possible, at state \(\omega \), under the supposition that (or if informed that) E is true; see Definition 6.

  • \(E\rightrightarrows _{i}F=\{\omega \in \varOmega :f_{i}(\omega ,E)\subseteq F\}\): The set of states where, according to player i, if E were the case, then F would be true.
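
A minimal computational reading of the objects listed above (using a hypothetical two-state model, not one taken from the chapter) may help fix ideas; it also checks the observation of Footnote 32 that Condition 3 of Definition 6 forces \(f_{i}(\omega ,\varOmega )=\mathcal {B}_{i}(\omega )\).

    # A hypothetical two-state model: a doxastic relation B_i, a subjective
    # selection function f_i, and the derived operators from the summary above.
    Omega = {"w1", "w2"}
    B_i = {"w1": {"w1"}, "w2": {"w1"}}
    f_i = {
        ("w1", frozenset(Omega)): {"w1"},      # equals B_i(w1), as Footnote 32 notes
        ("w2", frozenset(Omega)): {"w1"},      # equals B_i(w2)
        ("w1", frozenset({"w2"})): {"w2"},
        ("w2", frozenset({"w2"})): {"w2"},
    }

    def belief(E):                             # the event 'player i believes E'
        return {w for w in Omega if B_i[w] <= E}

    def conditional(E, F):                     # the event E =>>_i F
        return {w for w in Omega if f_i[(w, frozenset(E))] <= F}

    assert all(f_i[(w, frozenset(Omega))] == B_i[w] for w in Omega)
    print(belief({"w1"}), conditional({"w2"}, {"w2"}))   # both events equal the whole state space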

Copyright information

© 2015 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Bonanno, G. (2015). Reasoning About Strategies and Rational Play in Dynamic Games. In: van Benthem, J., Ghosh, S., Verbrugge, R. (eds.) Models of Strategic Reasoning. Lecture Notes in Computer Science, vol. 8972. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-48540-8_2

  • DOI: https://doi.org/10.1007/978-3-662-48540-8_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-48539-2

  • Online ISBN: 978-3-662-48540-8
