Static search games played over graphs and general metric spaces

We define a general game which forms a basis for modelling situations of static search and concealment over regions with spatial structure. The game involves two players, the searching player and the concealing player, and is played over a metric space. Each player simultaneously chooses to deploy at a point in the space; the searching player receiving a payoff of 1 if his opponent lies within a predetermined radius r of his position, the concealing player receiving a payoff of 1 otherwise. The concepts of dominance and equivalence of strategies are examined in the context of this game, before focusing on the more specific case of the game played over a graph. Methods are presented to simplify the analysis of such games, both by means of the iterated elimination of dominated strategies and through consideration of automorphisms of the graph. Lower and upper bounds on the value of the game are presented and optimal mixed strategies are calculated for games played over a particular family of graphs. © 2013 The Authors. Published by Elsevier B.V.


Introduction
In this paper, we define a general search and concealment game that takes full account of the spatial structure of the set over which it is played. The game is static in the sense that players do not move, but deploy simultaneously at particular spatial points and receive payoffs based on their relative positions. In this way, the static spatial search game (SSSG) provides a theoretical foundation for the study of the relative strategic value of different positions in a geography. Using the theory of metric spaces, we model situations in which the searching player may simultaneously search multiple locations based on concepts of distance or adjacency relative to the point at which they are deployed.
While the SSSG does build upon previous work, particularly that of Ruckle (1983) and White (1994), its simplicity and generality together with its explicit consideration of spatial structure set it apart from much of the literature (see Section 3 for a detailed review of related work) and lend it the versatility to describe games over a huge variety of different spaces. The primary contributions of this article are therefore to both propose a highly general model of spatial search and concealment situations, which unites several other games presented in the literature (see Section 4.2), and to present new propositions and approaches for the strategic analysis of such scenarios.
While this paper is theoretical in nature, the SSSG provides a framework for the analysis of a diverse range of operational research questions. Aside from explicit search and concealment scenarios, the game may be used to model situations in which some structure or region must be protected against 'attacks' that could arise at any spatial point; for example, the deployment of security personnel to protect cities against terrorist attacks or outbreaks of rioting, security software scanning computer networks to eliminate threats, the defence of shipping lanes against piracy, the protection of a rail network against cable theft or the deployment of stewards at public events to respond to emergency situations.
We provide a brief overview of all necessary game theoretic concepts in Section 2 and a review of the literature on games of search and security in Section 3, before formally defining the SSSG, examining its relationship to other games in the literature and presenting some initial propositions in Section 4. In Section 5, we examine the SSSG on a graph and identify upper and lower bounds on the value of such games before presenting an algorithm in Section 6 which simplifies graph games by means of the iterated elimination of dominated strategies, focusing particularly on the application of the algorithm to games played on trees. Section 7 contains further results, including a way to simplify graph games through consideration of graph automorphisms and an examination of a particular type of strategy for such games, which we describe as an ''equal oddments strategy''. In Section 8, we use the concept of an equal oddments strategy to find analytic solutions for a particular family of graph games, while Section 9 forms a conclusion to the paper, containing a summary of our key results and suggestions of potential avenues for further research. Two proofs, which were too complicated to include in the main text, are presented as appendices.
The following definitions relate to the maximum expected payoff that players can guarantee themselves through careful choice of their mixed strategies:

Definition 2.2. Values of the game
Given a two-player game, the values of the game $u_A$, $u_B$ to Players A and B respectively are defined as:

$u_A = \max_{\sigma_A} \min_{\sigma_B} E[p_A(\sigma_A, \sigma_B)], \qquad u_B = \max_{\sigma_B} \min_{\sigma_A} E[p_B(\sigma_A, \sigma_B)]$

where the maxima and minima are taken over the players' mixed strategies. For a constant-sum two-player game with finite strategy sets, these maxima and minima are attained. Also, provided that $\Sigma_A$ and $\Sigma_B$ are finite, optimal mixed strategies are guaranteed to exist for both players. Both of these facts are consequences of the Minimax Theorem (see Morris, 1994, p. 102).
Given a constant-sum two-player game with finite strategy sets, a solution of the game comprises optimal mixed strategies $\sigma_A$, $\sigma_B$ and values $u_A$, $u_B$ for each player.
The following definition allows for a crude comparison of the efficacy of different strategies.

Definition 2.4. Strategic dominance and equivalence
Consider a two-player game with strategy sets $\Sigma_A, \Sigma_B$ and payoff functions $p_A, p_B$. Given particular pure strategies $x_1, x_2 \in \Sigma_A$ for Player A, we have:

$x_2$ very weakly dominates $x_1$ if and only if: $p_A(x_2, y) \ge p_A(x_1, y), \; \forall y \in \Sigma_B$

$x_2$ weakly dominates $x_1$ if and only if: $p_A(x_2, y) \ge p_A(x_1, y), \; \forall y \in \Sigma_B$ and $\exists y^* \in \Sigma_B$ such that: $p_A(x_2, y^*) > p_A(x_1, y^*)$

$x_2$ strictly dominates $x_1$ if and only if: $p_A(x_2, y) > p_A(x_1, y), \; \forall y \in \Sigma_B$

$x_2$ is equivalent to $x_1$ if and only if: $p_A(x_2, y) = p_A(x_1, y), \; \forall y \in \Sigma_B$

Since the designation of the players as A and B is arbitrary, obtaining corresponding definitions of strategic dominance and equivalence for Player B is simply a matter of relabelling.
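As an illustrative sketch (our own example, not from the paper), these four relations can be checked mechanically from two rows of a payoff matrix. The function below returns every relation that holds, reflecting the fact that weak dominance, strict dominance and equivalence are all special cases of very weak dominance.

```python
# Sketch: which of the Definition 2.4 relations hold between two pure
# strategies of Player A, given their payoff rows (one entry per pure
# strategy y of Player B). Function name and data are illustrative.

def relations(row2, row1):
    """Relations in which row2 stands to row1 under Definition 2.4."""
    rels = set()
    if all(a >= b for a, b in zip(row2, row1)):    # p_A(x2,y) >= p_A(x1,y) for all y
        rels.add("very weak")
        if any(a > b for a, b in zip(row2, row1)):
            rels.add("weak")                       # strict inequality somewhere
        if all(a > b for a, b in zip(row2, row1)):
            rels.add("strict")                     # strict inequality everywhere
        if all(a == b for a, b in zip(row2, row1)):
            rels.add("equivalent")                 # equality everywhere
    return rels
```

For instance, `relations([1, 1], [0, 0])` reports very weak, weak and strict dominance together, while incomparable rows yield the empty set.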
Note that weak dominance, strict dominance and equivalence are all special cases of very weak dominance. Also, strict dominance is a special case of weak dominance.
In this paper, weak dominance is of most relevance. Therefore, for reasons of clarity, the terms ''dominance'' and ''dominated strategies'' will be used to refer to weak dominance unless otherwise stated.
Since a player aims to maximise his or her payoff, we would intuitively expect that they should not play any dominated strategies.
For a general definition of dominance in game theory, see Leyton-Brown and Shoham (2008, pp. 20-23), from which the above definition was adapted.

Games of search and security: a review
Games of search and concealment, in which one player attempts to hide themselves or to conceal some substance in a specified space while another player attempts to locate or capture the player or substance, have been widely studied.

Simple search games
One of the simplest search games is the well-known high-low number guessing game in which one player chooses an integer in a given range, while the other player makes a sequence of guesses to identify it, each time being informed whether the guess was too high or too low (Gal, 1974). Continuous versions of the game have also been studied (Gal, 1978; Baston et al., 1985; Alpern, 1985a).
Another simple search game involves one player attempting to locate an object that the opposing player has hidden at a location chosen from a finite or countably infinite set with no spatial structure (except a possible ordering). Variants of these games include examples where the searching player has some chance of overlooking the object despite searching the correct location (Neuts, 1963; Subelman, 1981) or where the searcher must also avoid the location of a second hidden object (Ruckle, 1990).

Search games with immobile targets
A more complicated class of search games is that in which the searching player is mobile and their target is immobile, with payoffs to each player typically (though not universally) being dependent on the amount of time that elapses before the target is located. Such games have been examined over many different types of graph (Anderson et al., 1990; Alpern, 2008, 2010; Buyang, 1995; Reijnierse & Potters, 1993; Kikuta et al., 1994; Kikuta, 2004; Pavlović, 1995), though in general the space may be a continuous region (Gal, 1979). While the starting position of the searching player is often fixed, games in which the searching player can choose their position have also been studied (Alpern, Baston, & Gal, 2008a; Dagan et al., 2008), as have games with multiple searchers (Alpern & Howard, 2000; Peshkin, 1994).

Accumulation games
Accumulation games are an extension of this concept in which there may be many hidden objects (Kikuta et al., 1997) or in which hidden objects are replaced with some continuous material that the hiding player can distribute across a set of discrete locations (Kikuta et al., 2002; Zoroa, Fernández-Sáez, & Zoroa, 2004) or across a continuous space (Ruckle et al., 2000). The payoffs in these games are typically dependent on the number of objects or the quantity of material that the searching player is able to locate.

Search games with mobile targets
Adding a further layer of complication, there is the class of search game in which both the searching player and the hiding player are mobile, including so-called ''princess and monster'' games. Again, the payoffs in such games are typically dependent on the amount of time that elapses before the hiding player is captured and players are typically 'invisible' to each other, only becoming aware of the location of their opponent at the moment of capture.
Such games have been considered over continuous one-dimensional spaces such as the circle (Alpern, 1974) and the unit interval (Alpern, Fokkink, Lindelauf, & Olsder, 2008b), over continuous graphs or networks (Alpern et al., 1986; Anderson et al., 1992; Alpern, 1985b) and over continuous two-dimensional spaces (Foreman, 1977; Garnaev, 1991; Chkhartishvili et al., 1995). In the latter case, it is necessary to introduce the concept of a detection radius, with a capture occurring if the distance between the players drops below this value. In some cases, the probability of capture is allowed to vary based on the distance between the players (Garnaev, 1992).
Analyses of search games over discrete spaces in which both searcher and hider are mobile have tended to consider spatial structure in only a very limited way. While this structure may determine the freedom of movement of the players, very little work has been done to introduce an analogous concept to the detection radius to such games. Generally, players move sequentially and may only move to locations that are sufficiently close to their current position (e.g. Eagle et al., 1991), though variants have been considered in which either the searching player (Zoroa, Fernández-Sáez, & Zoroa, 2012) or the hiding player (Thomas et al., 1991) has the freedom to move to any location regardless of adjacency or distance.
Further variations on the search game with mobile searcher and hider include games in which the searching player follows a predetermined path and must decide how thoroughly to search each location visited (Hohzaki et al., 2000), games in which the searching player must intercept an opponent attempting to move from a given start point to a given end point (Alpern, 1992) and games with a variegated environment and the possibility that the hiding player will be betrayed by 'citizens' of the space (Owen et al., 2008). Such games have also been used to model predator-prey interactions (Alpern, Fokkink, Timmer, & Casas, 2011a).

Allocation games
Allocation games are a related concept, in which the searching player does not move around the space individually, but rather distributes 'search resources' to locate the mobile hiding player. Such games may include false information (Hohzaki, 2007) and may incorporate spatial structure by allowing the influence of resources to spread across space (''reachability''), an area which has seen ''little research'' (Hohzaki, 2008).
Variations on this idea include situations in which searching resources are deployed sequentially (Dendris, Kirousis, & Thilikos, 1997) or in which both players distribute resources to respectively locate or protect a hidden object (Baston et al., 2000). Cooperative allocation games, in which multiple players combine their searching resources to locate a moving target, have also been considered (Hohzaki, 2009).

Rendez-vous games
Rendez-vous games are a parallel concept to games with mobile searching and hiding players, the difference being that these games are cooperative, with both players wishing to locate the other as soon as possible (see Alpern, 2002, for an overview). Typically, in a rendez-vous game, the structure of the space is known to all, with consideration given to the amount of information available to players regarding their relative positions, and their ability to distinguish between symmetries of the space (whether they have a common understanding of ''North'', for example).
Rendez-vous games have been studied over various continuous one-dimensional spaces, such as the line (Lim et al., 1996) and the circle (Alpern, 2000), over continuous two-dimensional spaces, such as the plane or a general compact metric space (Alpern, 1995), and over discrete spaces, such as lattices (Alpern et al., 2005; Ruckle, 2007) and other graphs (Alpern, Baston, & Essegaier, 1999). Costs may also be introduced for movement and examination of particular locations (Kikuta et al., 2007).
Work has also been done on 'hybrid' games of search and rendez-vous, where, for example, two agents attempt to meet without being located by a third (Alpern et al., 1998) or where the searching player does not know whether the other player is attempting to rendez-vous or to evade capture.

Security games
Security games are used to model situations in which some public resource (e.g. airports, transport infrastructure, power facilities) must be protected from attack with limited defensive resources. A good introduction to the topic is provided by Tambe (2012).
Such situations tend to be modelled as Stackelberg games, where it is assumed that the defensive player first commits to some strategy to protect the vulnerable sites and that this strategy is observed by the attacking player, who then chooses an optimal response (Tambe, 2012, pp. 4-8). Stackelberg-type security games related to the mobile-searcher-immobile-hider games of Section 3.2 have also been proposed to examine optimal patrolling strategies (Alpern, Morton, & Papadaki, 2011b; Basilico, Gatti, & Amigoni, 2012).
A related concept is that of the much-studied Colonel Blotto game, in which two players must simultaneously distribute a fixed quantity of discrete or continuous resources across a number of sites, each site being 'won' by the player who distributed the greater quantity of resources to it, with payoffs determined by the number of sites that each player wins (see Roberson, 2006). The many extensions of the Blotto game have included asymmetric versions (Tofias, Merolla, & Munger, 2007; Hortala-Vallve & Llorente-Saguer, 2012), examples in which resources are allocated to battlefields sequentially rather than simultaneously (Powell, 2009) and examples in which defensive resources are heterogeneous (Cohen, 1966).
Though the deployment sites in such models are often assumed to be wholly separate, with events at one location having no effect on events at other locations, certain security games and Blotto games with strategically interdependent sites have been considered. For example, Shubik and Weber (1981) introduce the concept of a ''characteristic function'' for such games, which allocates values to subsets of the sites, thus allowing interdependencies between them to be captured. Other approaches to modelling such interdependence include an extension of the Colonel Blotto game in which a successful attack on a ''radar site'' ensures the success of attacks on other sites (Grometstein & Shoham, 1989), while Hausken (2010, 2011) discusses a classification of the underlying infrastructures of security games based on the interdependence of their sites (e.g. series, parallel, complex, ...) and Powell (2007) analyses the relative value of defending borders over protecting strategic targets directly.
Though analyses of interdependence in security games and Blotto games may be quite general (that of Shubik & Weber, 1981, for example), interdependence that arises explicitly from the spatial structure of the deployment sites has not been considered in a general setting.

Geometric games
One of the most general and theoretical analyses of search and concealment type situations is Geometric games and their applications (Ruckle, 1983). In this book, the author defines a geometric game as a two-player zero-sum game (with players called ''RED'' and ''BLUE'') played over a given set $S$, where the strategy sets for each player, $\Sigma_{RED}, \Sigma_{BLUE}$, are subsets of the power set $\mathcal{P}(S)$ (the set of all subsets of $S$). Pure strategies for each player are therefore subsets $R, B \subseteq S$. The payoff to each player is a function of $R$ and $B$, typically depending directly on the intersection $R \cap B$.
This concept of a geometric game allows Ruckle to model a wide variety of situations of search, ambush and pursuit, as well as a range of abstract games, taking full consideration of the structure of the space S over which the games are played.

Conclusion: motivations for the SSSG
Much of the literature on search games and related concepts has focussed on analysing specific games, rather than attempting to present general frameworks for such situations and identifying more broadly applicable results. While spatial structure may be considered for games in which players are mobile, the geography of the space over which games are played is often given little or limited consideration, particularly in the literature on security games. The concept of ''reachability'', as described by Hohzaki (2008), in which a searcher or searching resource deployed at a point has influence over a neighbourhood of that point, has received very little attention. Games which concentrate purely on the strategic value of a player's chosen position in a space, rather than on strategies for moving through the space for the purposes of search or rendez-vous, have also seen little research, at least since the work of Ruckle (1983).

The static spatial search game (SSSG)
The definitions and notation relating to metric spaces used in this section are from Sutherland (1975, pp. 19-44).

Definition of the SSSG
The static spatial search game (SSSG) is a two-player game played over a metric space $M = (X, d)$, where $X$ is a set of points $x$ and $d : X \times X \to [0, \infty)$ is the metric or distance, which has the standard properties:

(M1) $d(x, y) = 0 \iff x = y$, $\forall x, y \in X$
(M2) $d(x, y) = d(y, x)$, $\forall x, y \in X$
(M3) $d(x, y) + d(y, z) \ge d(x, z)$, $\forall x, y, z \in X$

The metric $d$ reflects the spatial structure of $X$. In $\mathbb{R}^n$, $d$ may be the Euclidean distance, while in a graph $d$ may be the length of the shortest path connecting two points. However, depending on the interpretation of the game, $d$ could also represent an abstract distance, indicating dissimilarity, difficulty of communication or perceived costs.
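As a sketch of these axioms in the graph setting just mentioned (the 4-cycle used here is an illustrative choice, not an example from the paper), the shortest-path distance with unit edge weights can be computed by breadth-first search and the properties (M1)-(M3) checked by brute force:

```python
# Sketch: shortest-path metric on a small connected graph (the cycle C4),
# computed by BFS over unit-weight edges, with the metric axioms verified.
from collections import deque

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # adjacency lists of C4

def bfs_dist(src):
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

d = {v: bfs_dist(v) for v in adj}
V = list(adj)
assert all((d[x][y] == 0) == (x == y) for x in V for y in V)               # (M1)
assert all(d[x][y] == d[y][x] for x in V for y in V)                       # (M2)
assert all(d[x][y] + d[y][z] >= d[x][z] for x in V for y in V for z in V)  # (M3)
```

The same brute-force check applies to any finite candidate distance, which makes it a convenient sanity test before analysing a game over a proposed space.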
In specific cases, it may be sensible to relax some of these conditions. For example, in a graph that is not connected, we could allow infinite distances between vertices that are not connected by a path (an extended metric). Alternatively, to represent a directed graph, we may wish to ignore the symmetry condition (M2) (a quasi-metric; see Steen & Seebach (1970)). However, for the sake of simplicity, we do not consider such cases at this time.
We define a non-negative real number $r$ called the detection radius and use the notation $B_r[x]$ to designate the closed ball centred on $x$:

$B_r[x] = \{y \in X : d(x, y) \le r\}$

The strategies for Player A (the searching player) and Player B (the concealing player) are specific points of $X$ at which they may choose to deploy. In a single play of the game, each player simultaneously picks a point $x_A$, $x_B$ from their own strategy set $\Sigma_A, \Sigma_B \subseteq X$. For the sake of clarity, we use masculine pronouns to refer to Player A and feminine pronouns to refer to Player B throughout this paper.
We define the payoff functions for Player A and Player B respectively as:

$p_A(x_A, x_B) = \begin{cases} 1 & \text{if } x_B \in B_r[x_A] \\ 0 & \text{otherwise} \end{cases} \qquad p_B(x_A, x_B) = 1 - p_A(x_A, x_B)$

This is a constant-sum game and can be analysed accordingly. In interpreting the game, we imagine that Player B chooses to hide somewhere in $X$, while Player A attempts to locate his opponent. To do this, Player A selects a point of $X$ and searches a neighbourhood of this point. If Player B's hiding place falls within the detection radius of Player A's chosen point, the attempt to hide is unsuccessful and Player B is located. Otherwise, Player B remains undetected.
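A minimal sketch of this payoff rule on a finite space (the three-vertex path and its shortest-path distances below are our own illustrative choices):

```python
# Sketch: SSSG payoffs on the path u-v-w with shortest-path distances and
# r = 1. p_A is 1 exactly when x_B lies in the closed ball B_r[x_A];
# p_B = 1 - p_A, so each cell of the payoff matrix sums to 1 (constant-sum).
dist = {"u": {"u": 0, "v": 1, "w": 2},
        "v": {"u": 1, "v": 0, "w": 1},
        "w": {"u": 2, "v": 1, "w": 0}}

def p_A(x_A, x_B, d, r):
    return 1 if d[x_A][x_B] <= r else 0

def p_B(x_A, x_B, d, r):
    return 1 - p_A(x_A, x_B, d, r)

# Player A's payoff matrix (rows: A's point, columns: B's point).
matrix = [[p_A(a, b, dist, 1) for b in "uvw"] for a in "uvw"]
```

Here the middle row of `matrix` is all ones: deploying at the centre vertex lets Player A find Player B wherever she hides.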
The game is illustrated in Fig. 1.

The SSSG and other games
The SSSG is not strictly a geometric game by Ruckle's definition (1983, p. 2), since it is not zero-sum. However, it could be transformed into a zero-sum game without altering the subsequent analysis, simply by subtracting $\frac{1}{2}$ from all payoffs. The decision that all payoffs should be 0 or 1 has been taken to ensure the clarity of the payoff matrices considered later in this paper.
Given this proviso, certain of Ruckle (1983)'s geometric games can be formulated as particular cases of the SSSG. For example, if transformed to a zero-sum game as described, game AAGV (Ruckle, 1983, p. 86; adapted from Arnold (1962)) is an example of the SSSG. Similarly, White (1994)'s Games of strategy on trees are examples of the SSSG, where $X$ is the set of vertices of a tree, $\Sigma_A = \Sigma_B = X$, $r = 1$ and $d$ is the length of the shortest path between two vertices.
A game that demonstrates the potential complexity that can arise from apparently simple cases of the SSSG is the ''Cookie-Cutter'' game (or the ''Hiding in a Disc'' game), in which Player A chooses a point in a disc of unit radius and Player B simultaneously places a circular 'cookie-cutter' centred at any point of the disc, winning the game if Player A's point lies within the 'cookie-cutter'. Given appropriate payoffs, this game is an example of the SSSG, where $X$ is the closed unit disc, $\Sigma_A = \Sigma_B = X$ and $d(x, y) = |x - y|$.
The particular case of this game with $r = 1/2$ was originally proposed by Gale et al. (1974), for which optimal mixed strategies were presented by Evans (1975). The game was extended to all $r > 0$ by Bordelon (1975), who proposed optimal mixed strategies for all $r > 1/2$, but these results were disputed by Ruckle (1983, p. 108). Ruckle's disproof was disputed in turn by Danskin (1990), who showed that Bordelon (1975)'s results were correct for some values of $r > 1/2$, though false in general. Despite the apparent simplicity of the problem, Danskin (1990) was only able to find optimal mixed strategies for a small range of values of $r$ around $r = 1/2$ and for all $r \ge \sqrt{2}/2$, thus illustrating the hidden complexity of many games of this form.
A particularly simple example of a game that can be represented as an SSSG is ''Matching Pennies'' (see Blackwell & Girshick, 1954, p. 13), in which Players A and B simultaneously call ''Heads'' or ''Tails'', with Player A receiving a payoff of 1 if the calls are the same and $-1$ otherwise, and Player B receiving a payoff of $-1$ if the calls are the same and 1 otherwise. Taking $X$ to be the two-point set {Heads, Tails} with $\Sigma_A = \Sigma_B = X$ and a detection radius smaller than the distance between the two points, this is an SSSG, again with the proviso that the payoffs must be transformed appropriately.
A more complicated example of a game that can be represented as an SSSG is the graph security game of Mavronicolas, Papadopoulou, Philippou, and Spirakis (2008) if the number of attackers is restricted to one. This game is played over an undirected graph $G = (V(G), E(G))$ with one defender and (in general) multiple attackers. Simultaneously, the defender chooses an edge and the attackers each choose a vertex. The defender receives a payoff equal to the number of attackers who choose vertices incident to his chosen edge. Each attacker receives a payoff equal to 0 if their chosen vertex is incident to the defender's edge and 1 otherwise.
Consider the graph $G'$ obtained by inserting a new vertex at the midpoint of each of the edges of $G$. Let the set of new vertices created in this way be denoted $V(G')^*$, while the complete vertex set $V(G')$ includes both the new vertices and the original vertices. With the defender as Player A deploying on the new vertices $V(G')^*$, a single attacker as Player B deploying on the original vertices, $r = 1$ and $d$ the length of the shortest path between two vertices in $G'$, this game is also an example of the SSSG.
The SSSG provides a framework that unites all of these games and allows for a general consideration of the relative strategic values of the different points of a space. It implicitly encompasses the concepts of reachability, interdependence based on spatial structure and the detection radius, as discussed in Section 3.

The SSSG with finite strategy sets
Consider an example of the SSSG in which the strategy sets $\Sigma_A, \Sigma_B$ are finite. One of the simplest possible mixed strategies available to each player in such a case is the mixed strategy that allocates equal probabilities to all points in a player's strategy set. We denote these mixed strategies by $q_A$ and $q_B$ for Players A and B respectively.
The following proposition establishes a sufficient condition for $q_A$, $q_B$ to be optimal mixed strategies:

Proposition 4.1. Consider the SSSG played over a metric space $X$, with finite strategy sets $\Sigma_A, \Sigma_B$ and distance $d$. If there exists a constant $f$ such that:

$|B_r[x_B] \cap \Sigma_A| = f, \quad \forall x_B \in \Sigma_B \quad (1)$
$|B_r[x_A] \cap \Sigma_B| = f, \quad \forall x_A \in \Sigma_A \quad (2)$

then $q_A$ and $q_B$ are optimal mixed strategies for Players A and B respectively and $f|\Sigma_A|^{-1} = f|\Sigma_B|^{-1} = u$ is the value of the game to Player A.
Proof. Suppose that Player A employs the mixed strategy $q_A$, which allocates a uniform probability of $|\Sigma_A|^{-1}$ to all points $x_A \in \Sigma_A$. In a particular play of the game, suppose that Player B deploys at point $x_B \in \Sigma_B$. In this situation, since $|B_r[x_B] \cap \Sigma_A| = f$, the expected payoff to Player A is $f|\Sigma_A|^{-1}$, the probability that Player A's point $x_A$ lies in $B_r[x_B]$. Therefore, for any mixed strategy $\sigma_B$ for Player B, the expected payoff to Player A is $f|\Sigma_A|^{-1}$, and thus:

$u_A \ge f|\Sigma_A|^{-1} \quad (3)$

Now suppose that Player B employs the mixed strategy $q_B$, which allocates a uniform probability of $|\Sigma_B|^{-1}$ to all points $x_B \in \Sigma_B$. In a particular play of the game, suppose that Player A deploys at point $x_A \in \Sigma_A$. Since $|B_r[x_A] \cap \Sigma_B| = f$, the expected payoff to Player B is $1 - f|\Sigma_B|^{-1}$. Therefore, for any mixed strategy $\sigma_A$ for Player A, the expected payoff to Player B is $1 - f|\Sigma_B|^{-1}$, and thus:

$u_B \ge 1 - f|\Sigma_B|^{-1} \quad (4)$

Now, (1) and (2) together with the symmetric property of the distance (M2) imply that $f|\Sigma_A| = f|\Sigma_B|$ and thus that $|\Sigma_A| = |\Sigma_B|$. By (3) and (4), we therefore have:

$u_A \ge f|\Sigma_A|^{-1} \quad \text{and} \quad u_B \ge 1 - f|\Sigma_A|^{-1}$

Since the game is constant-sum, $u_A + u_B \le 1$, so both inequalities hold with equality and $q_A$, $q_B$ are optimal mixed strategies, by Definition 2.3. □
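A hedged numerical illustration of Proposition 4.1 (the 5-cycle is our own choice of example): on the cycle $C_5$ with $\Sigma_A = \Sigma_B = X$ and $r = 1$, every closed ball contains $f = 3$ vertices, so the uniform strategies $q_A$, $q_B$ are optimal and the value to Player A is $3/5$.

```python
# Sketch: checking the hypothesis of Proposition 4.1 on the cycle C_5, r = 1.
n, r = 5, 1
d = lambda x, y: min((x - y) % n, (y - x) % n)   # shortest-path distance on C_n
ball = lambda x: {y for y in range(n) if d(x, y) <= r}

# Every closed ball meets the opponent's strategy set in exactly f = 3 points,
# so both conditions of the proposition hold with f = 3...
assert {len(ball(x)) for x in range(n)} == {3}

# ...and the value of the game to Player A is f|Σ_A|^{-1} = 3/5.
value = 3 / n
```

The same check applies to any vertex-transitive graph, where every ball automatically has the same size.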

Dominance and equivalence in the SSSG
We can now examine strategic dominance and equivalence (see Definition 2.4) in the context of the SSSG using the notation established in Section 4.
Proposition 4.2. Consider the SSSG played over a metric space $X$, with strategy sets $\Sigma_A$, $\Sigma_B$ and distance $d$. For strategies $x_1, x_2 \in \Sigma_A$, $x_1 \ne x_2$, for Player A:

$x_2$ very weakly dominates $x_1$ if and only if: $B_r[x_1] \cap \Sigma_B \subseteq B_r[x_2] \cap \Sigma_B$

$x_2$ weakly dominates $x_1$ if and only if: $B_r[x_1] \cap \Sigma_B \subset B_r[x_2] \cap \Sigma_B$ (strict inclusion)

$x_2$ strictly dominates $x_1$ if and only if: $B_r[x_1] \cap \Sigma_B = \emptyset$ and $B_r[x_2] \cap \Sigma_B = \Sigma_B$

$x_2$ is equivalent to $x_1$ if and only if: $B_r[x_1] \cap \Sigma_B = B_r[x_2] \cap \Sigma_B$

This proposition states that, for Player A, $x_2$ very weakly dominates $x_1$ if and only if, when deployed at $x_2$, Player A can search every potential location of Player B that could be searched from $x_1$. This dominance is weak if there exist potential locations of Player B that can be searched from $x_2$ but that cannot be searched from $x_1$ (the inclusion is strict). Strict dominance only occurs in the trivial case in which no potential locations of Player B can be searched from $x_1$ while every potential location of Player B can be searched from $x_2$.
$x_2$ and $x_1$ are equivalent if and only if precisely the same set of potential locations of Player B can be searched from both points.
Proof. We consider each of the four parts of Definition 2.4 and show that, in the context of the SSSG, they are equivalent to the corresponding statements of Proposition 4.2. Recall that $p_A$ takes values in $\{0, 1\}$.

Very weak dominance
$p_A(x_2, y) \ge p_A(x_1, y), \forall y \in \Sigma_B \quad (*)$
$\iff [y \in B_r[x_1] \Rightarrow y \in B_r[x_2]], \forall y \in \Sigma_B$
$\iff B_r[x_1] \cap \Sigma_B \subseteq B_r[x_2] \cap \Sigma_B \quad (**)$

Weak dominance
Since $(*) \iff (**)$, it suffices to observe that, if $(**)$ is assumed to be true:
$\exists y^* \in \Sigma_B$ such that $p_A(x_2, y^*) > p_A(x_1, y^*) \iff \exists y^* \in \Sigma_B$ such that $y^* \in B_r[x_2]$ and $y^* \notin B_r[x_1] \iff$ the inclusion $(**)$ is strict.

Strict dominance
$p_A(x_2, y) > p_A(x_1, y), \forall y \in \Sigma_B \iff [y \in B_r[x_2]$ and $y \notin B_r[x_1]], \forall y \in \Sigma_B \iff B_r[x_2] \cap \Sigma_B = \Sigma_B$ and $B_r[x_1] \cap \Sigma_B = \emptyset$

Equivalence
$p_A(x_2, y) = p_A(x_1, y), \forall y \in \Sigma_B \iff [y \in B_r[x_1] \iff y \in B_r[x_2]], \forall y \in \Sigma_B \iff B_r[x_1] \cap \Sigma_B = B_r[x_2] \cap \Sigma_B$ □

We now consider dominance and equivalence for Player B:

Proposition 4.3. Consider the SSSG played over a metric space $X$, with strategy sets $\Sigma_A, \Sigma_B$ and distance $d$. For strategies $x_1, x_2 \in \Sigma_B$, $x_1 \ne x_2$, for Player B:

$x_2$ very weakly dominates $x_1$ if and only if: $[x_2 \in B_r[y] \Rightarrow x_1 \in B_r[y]], \forall y \in \Sigma_A$

$x_2$ weakly dominates $x_1$ if and only if: $[x_2 \in B_r[y] \Rightarrow x_1 \in B_r[y]], \forall y \in \Sigma_A$ and $\exists y^* \in \Sigma_A$ such that: $x_1 \in B_r[y^*]$ and $x_2 \notin B_r[y^*]$

$x_2$ strictly dominates $x_1$ if and only if: $x_1 \in B_r[y]$ and $x_2 \notin B_r[y], \forall y \in \Sigma_A$

$x_2$ is equivalent to $x_1$ if and only if: $[x_2 \in B_r[y] \iff x_1 \in B_r[y]], \forall y \in \Sigma_A$

The proof mirrors that of Proposition 4.2, noting that $p_B(y, x) = 1$ exactly when $x \notin B_r[y]$.

The necessary and sufficient conditions for dominance and equivalence for Player B established in Proposition 4.3 can be shown to be equivalent to a simpler set of conditions, clearly analogous to those relating to dominance and equivalence for Player A seen in Proposition 4.2:

Proposition 4.4. Consider the SSSG played over a metric space $X$, with strategy sets $\Sigma_A, \Sigma_B$ and distance $d$. For strategies $x_1, x_2 \in \Sigma_B$, $x_1 \ne x_2$, for Player B:

$x_2$ very weakly dominates $x_1$ if and only if: $B_r[x_2] \cap \Sigma_A \subseteq B_r[x_1] \cap \Sigma_A$

$x_2$ weakly dominates $x_1$ if and only if: $B_r[x_2] \cap \Sigma_A \subset B_r[x_1] \cap \Sigma_A$ (strict inclusion)

$x_2$ strictly dominates $x_1$ if and only if: $B_r[x_2] \cap \Sigma_A = \emptyset$ and $B_r[x_1] \cap \Sigma_A = \Sigma_A$

$x_2$ is equivalent to $x_1$ if and only if: $B_r[x_2] \cap \Sigma_A = B_r[x_1] \cap \Sigma_A$

Proof. We consider each of the four statements of Proposition 4.3 (which has already been proven) and show that they are equivalent to the corresponding statements of Proposition 4.4.

Very weak dominance
$[x_2 \in B_r[y] \Rightarrow x_1 \in B_r[y]], \forall y \in \Sigma_A \iff [y \in B_r[x_2] \Rightarrow y \in B_r[x_1]], \forall y \in \Sigma_A \quad [\text{by (M2)}] \iff B_r[x_2] \cap \Sigma_A \subseteq B_r[x_1] \cap \Sigma_A$

Weak dominance
Since $(*) \iff (**)$, it suffices to observe that:
$\exists z \in \Sigma_A : x_1 \in B_r[z]$ and $x_2 \notin B_r[z] \iff \exists z \in \Sigma_A : z \in B_r[x_1]$ and $z \notin B_r[x_2] \quad [\text{by (M2)}]$

Strict dominance and equivalence
$x_1 \in B_r[y]$ and $x_2 \notin B_r[y], \forall y \in \Sigma_A \iff y \in B_r[x_1]$ and $y \notin B_r[x_2], \forall y \in \Sigma_A \quad [\text{by (M2)}]$, and likewise for equivalence. □

While Proposition 4.4 is apparently simpler than Proposition 4.3, note that every part of its proof depends on the symmetric property of the distance (M2). If this condition were to be relaxed, as discussed in Section 4.1, Proposition 4.4 would not be valid and dominance and equivalence for Player B would have to be analysed on the basis of Proposition 4.3.
Definition 4.1. Pairwise Equivalence
Consider the SSSG played over a metric space $X$, with strategy sets $\Sigma_A, \Sigma_B$ and distance $d$. For Player A or Player B, a subset of their strategy set $\hat\Sigma \subseteq \Sigma_A$ or $\hat\Sigma \subseteq \Sigma_B$ exhibits pairwise equivalence if and only if $x$ is equivalent to $y$, $\forall x, y \in \hat\Sigma$.
We conclude that a subset $\hat\Sigma \subseteq \Sigma_A$ or $\hat\Sigma \subseteq \Sigma_B$ exhibiting pairwise equivalence can be reduced to any singleton $\{\hat x\} \subseteq \hat\Sigma$ without altering the analysis of the game. Since all points in $\hat\Sigma$ are equivalent, a player would neither gain nor lose by playing another point in the set over $\hat x$.
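These ball-comparison conditions are easy to apply computationally. A sketch on an illustrative five-vertex path graph (our own example, not one worked in the paper), checking very weak dominance for Player A via Proposition 4.2:

```python
# Sketch: very weak dominance for Player A on the path 0-1-2-3-4 with
# Σ_A = Σ_B = all vertices and r = 1, tested as the set inclusion
# B_r[x1] ∩ Σ_B ⊆ B_r[x2] ∩ Σ_B from Proposition 4.2.
n, r = 5, 1
d = lambda x, y: abs(x - y)                      # shortest-path distance on a path
ball = lambda x: {y for y in range(n) if d(x, y) <= r}

def very_weakly_dominates(x2, x1):               # first condition of Prop. 4.2
    return ball(x1) <= ball(x2)                  # <= is subset for Python sets

# The inner vertex 1 dominates the end vertex 0, but not vice versa:
assert very_weakly_dominates(1, 0)
assert not very_weakly_dominates(0, 1)
```

Since the inclusion here is strict, the dominance is in fact weak, so vertex 0 could be removed from Player A's strategy set by the procedure of Section 4.5.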
The following proposition states that if $x_2$ very weakly dominates $x_1$ for Player A (and at least one potential location of Player B lies within the detection radius of $x_1$), then the distance between the two points must be no greater than $2r$.
Proposition 4.5. For the SSSG played over a metric space $X$, with strategy sets $\Sigma_A, \Sigma_B$ and distance $d$, if $x_2$ very weakly dominates $x_1$ for Player A and $B_r[x_1] \cap \Sigma_B \ne \emptyset$, then $x_2 \in B_{2r}[x_1] \cap \Sigma_A$.
An analogous result holds for Player B. The proof is similar to that of Proposition 4.5 and is therefore omitted:

Proposition 4.6. For the SSSG played over a metric space $X$, with strategy sets $\Sigma_A, \Sigma_B$ and distance $d$, if $x_2$ very weakly dominates $x_1$ for Player B and $B_r[x_2] \cap \Sigma_A \ne \emptyset$, then $x_2 \in B_{2r}[x_1] \cap \Sigma_B$.
In this case, the condition that $B_r[x_2] \cap \Sigma_A \ne \emptyset$ excludes strategies that very weakly dominate every other strategy: a hiding place that no point of $\Sigma_A$ can search very weakly dominates every other hiding place, regardless of its distance from $x_1$.
Note that both of these propositions depend on the symmetric property of the distance (M2) and the triangle inequality (M3).

Iterated elimination of dominated strategies
The concepts of dominance and equivalence provide us with a method for reducing the SSSG through an iterative process of removing dominated strategies from $\Sigma_A$ and $\Sigma_B$, reducing pairwise equivalent subsets to singletons and reassessing dominance in the new strategy sets. This is known as the iterated elimination of dominated strategies (IEDS) (see, for example, Berwanger, 2007; Börgers, 1992; Dufwenberg et al., 2002). Given any game, the aim of IEDS is to identify a simplified game, whose solutions are also solutions of the complete game. These solutions can then be identified using standard techniques (see, for example, Morris (1994, pp. 99-114)). The application of this method to games played over graphs is discussed in Section 6.
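A minimal sketch of IEDS on a matrix of Player A's payoffs (a generic illustration under our own conventions, not the graph-specific algorithm of Section 6; recall that Player B, who receives $1 - p_A$, prefers low entries):

```python
# Sketch: iterated elimination of weakly dominated strategies (IEDS) on a
# constant-sum game given by Player A's payoff matrix. Rows are A's pure
# strategies (A prefers high entries), columns are B's (B prefers low).

def ieds(matrix):
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))

    def dominated_row():
        for i in rows:
            for j in rows:
                if i != j and all(matrix[j][c] >= matrix[i][c] for c in cols) \
                        and any(matrix[j][c] > matrix[i][c] for c in cols):
                    return i                     # row i weakly dominated by row j
        return None

    def dominated_col():
        for i in cols:
            for j in cols:
                if i != j and all(matrix[r][j] <= matrix[r][i] for r in rows) \
                        and any(matrix[r][j] < matrix[r][i] for r in rows):
                    return i                     # column i weakly dominated by j
        return None

    while True:                                  # remove one strategy at a time
        i = dominated_row()
        if i is not None:
            rows.remove(i)
            continue
        i = dominated_col()
        if i is not None:
            cols.remove(i)
            continue
        return rows, cols

# Payoff matrix of the graph game on the path 0-1-2-3 with r = 1
# (entry is 1 when the two chosen vertices coincide or are adjacent).
P = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 1]]
```

On this example the procedure leaves Player A mixing over the two inner vertices and Player B over the two end vertices; as the surrounding discussion notes, with weak dominance the surviving sets can in general depend on the order of elimination.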
It should be noted that because we are considering weak rather than strict dominance, IEDS may not be suitable for identifying all the solutions of a particular game. The results of this form of IEDS are dependent on the order in which dominated strategies are removed (Leyton-Brown & Shoham, 2008, pp. 20-23) and some solutions may be lost. It is also necessary to observe that, while IEDS has been shown to be valid for games with finitely many possible strategies, for games with infinitely many possible strategies the process may fail (Berwanger, 2007). Indeed, such infinite games may not have solutions (Ruckle, 1983, p. 10).
However, though IEDS is not guaranteed to produce optimal mixed strategies for the SSSG in such cases, given a pair of mixed strategies σ_A, σ_B obtained by this method, it is straightforward to check whether or not they are optimal by verifying that, for some u ∈ [0, 1], the expected payoff to Player A is at least u when σ_A is played against any pure strategy of Player B, and at most u when any pure strategy of Player A is played against σ_B, where u is the value of the game to Player A.
This method is described by Blackwell and Girshick (1954, p. 60) and used extensively by Ruckle (1983) to verify proposed optimal mixed strategies for geometric games.
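For finite games, this verification can be automated directly against the payoff matrix. The sketch below is our own illustration of the check (not code from the paper); it tests that a candidate pair of mixed strategies guarantees a common value u against every opposing pure strategy.

```python
def is_optimal(P, sigma_A, sigma_B, tol=1e-9):
    """Verify a candidate solution of a zero-sum matrix game.

    P[i][j] is the payoff to Player A when A plays row i and B plays
    column j. The pair (sigma_A, sigma_B) is optimal iff there is a
    common value u with
        sum_i sigma_A[i] * P[i][j] >= u  for every column j, and
        sum_j P[i][j] * sigma_B[j] <= u  for every row i.
    Returns (True, u) if the criterion holds, (False, None) otherwise.
    """
    m, n = len(P), len(P[0])
    # Expected payoff of sigma_A against each pure strategy of Player B.
    vs_pure_B = [sum(sigma_A[i] * P[i][j] for i in range(m)) for j in range(n)]
    # Expected payoff conceded by sigma_B to each pure strategy of Player A.
    vs_pure_A = [sum(P[i][j] * sigma_B[j] for j in range(n)) for i in range(m)]
    u = min(vs_pure_B)
    if max(vs_pure_A) <= u + tol:
        return True, u
    return False, None

# Matching-pennies-style game under a 0/1 payoff convention
# (A scores 1 when the choices coincide): uniform play is optimal.
P = [[1, 0],
     [0, 1]]
print(is_optimal(P, [0.5, 0.5], [0.5, 0.5]))  # (True, 0.5)
```

The same check applies unchanged to any payoff matrix produced from a graph game's strategy sets.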

The SSSG on a graph
The definitions and notation relating to graph theory used in this section are adapted from Bondy and Murty (1976).

Definition of the SSSG on a graph
Consider a simple undirected graph G, characterised by the symmetric adjacency matrix M = (a_ij), with a set of κ vertices V(G) = {v_1, ..., v_κ} and a set of edges E(G). We suppose that G is connected, to ensure that the metric space axioms are fulfilled, but this assumption could be relaxed if we allowed for infinite distances between vertices. We also suppose that all edges of G have unit weight.
In this game, the pure strategies for each player are particular vertices. In the following analysis, we use the words ''strategy'' and ''vertex'' interchangeably, depending on the context.
The graph game is illustrated in Fig. 2.

Graph games with r ≠ 1
The restriction to r ¼ 1 is not a significant constraint, since any graph game can be reduced to this case by means of a minor alteration.
For situations with r ≠ 1, we can define a new graph G′ with V(G′) = V(G), in which two distinct vertices are joined by an edge whenever the distance between them in G is at most r, and apply our analysis to G′ with r = 1.
Equivalently, if r ∈ ℕ, we can replace the adjacency matrix M with the adjacency matrix M′ of G′, in which a′_ij = 1 whenever i ≠ j and d_G(v_i, v_j) ≤ r. It therefore suffices to exclusively study graph games with r = 1, since these methods for redefining G ensure that such analysis will be applicable to games for any r ∈ ℕ.
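As an illustrative sketch (our own, assuming the construction just described in which G′ joins distinct vertices at distance at most r in G), G′ can be computed from the adjacency matrix of G by breadth-first search from each vertex:

```python
def graph_power(adj, r):
    """Adjacency matrix of G': two distinct vertices are joined
    whenever their distance in G is at most r. adj is the 0/1
    adjacency matrix of a simple undirected graph."""
    n = len(adj)
    power = [[0] * n for _ in range(n)]
    for s in range(n):
        # Breadth-first search from s, out to distance r.
        dist = {s: 0}
        frontier = [s]
        for d in range(1, r + 1):
            nxt = []
            for u in frontier:
                for v in range(n):
                    if adj[u][v] and v not in dist:
                        dist[v] = d
                        nxt.append(v)
            frontier = nxt
        for v in dist:
            if v != s:
                power[s][v] = 1
    return power

# Path on 4 vertices: taking r = 2 additionally joins vertices two
# steps apart, so the analysis with r = 1 can be applied to G'.
path4 = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]
print(graph_power(path4, 2))
```

With r = 1 the construction returns the original adjacency matrix, as expected.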

Preliminary observations
Analysis of the graph game requires the statement of some preliminary results and definitions. For these results, we use the following notation:

Δ(G) = max_{w∈V(G)} deg(w), the maximum degree of the vertices of G;
δ(G) = min_{w∈V(G)} deg(w), the minimum degree of the vertices of G.
The following proposition states that if Player A can search a globally maximal number of potential positions for Player B from a vertex v, then v is not dominated by any other vertex and is only equivalent to those vertices which have the same closed neighbourhood as v. In a graph in which all vertices have a distinct closed neighbourhood, such as a rectangular grid graph, such a vertex v cannot be very weakly dominated by any other vertex.
Proposition 5.1. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, consider a vertex v ∈ R_A and the subset R_A^(v) = {w ∈ R_A : N[w] = N[v]}. We have that if:

b[v] = Δ(G) + 1, (6)

where b[w] = |N_B[w]| denotes the number of potential locations for Player B lying in the closed neighbourhood of w, then: (i) R_A^(v) exhibits pairwise equivalence for Player A; (ii) v is not very weakly dominated for Player A by any strategy in R_A \ R_A^(v).

Proof. To prove (i), observe that ∀w_1, w_2 ∈ R_A^(v), N[w_1] = N[w_2]. Hence N_B[w_1] = N_B[w_2] and therefore w_1 is equivalent to w_2 for Player A, so R_A^(v) exhibits pairwise equivalence for Player A.

To prove (ii), suppose for a contradiction that v satisfies (6) and is very weakly dominated by ṽ ∈ R_A \ R_A^(v). Observe also that if b[w] = Δ(G) + 1 for some w ∈ V(G), then N_B[w] = N[w], and note that Δ(G) + 1 is an upper bound for b[w]. Now, we have that:

N_B[v] ⊆ N_B[ṽ] (by very weak dominance)
⟹ Δ(G) + 1 = b[v] ≤ b[ṽ] ≤ |N[ṽ]| ≤ Δ(G) + 1 (by (6))
⟹ b[ṽ] = Δ(G) + 1.

Hence N_B[v] = N[v] and N_B[ṽ] = N[ṽ], and since N[v] ⊆ N[ṽ] with |N[v]| = |N[ṽ]| = Δ(G) + 1, we conclude that N[v] = N[ṽ]. This is a contradiction, since ṽ ∈ R_A \ R_A^(v). □

The following proposition is a restatement of Proposition 4.5, reformulated in the context of the graph game. It states that any vertex that very weakly dominates v for Player A can be no more than 2 steps away from v on the graph. Its proof is identical to that of Proposition 4.5 and is thus omitted.

Proposition 5.2. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, if w very weakly dominates v for Player A and N_B[v] ≠ ∅, then w ∈ B_2[v] ∩ R_A.

Proposition 5.3. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, consider a vertex v ∈ R_A. Let S ⊆ R_A be such that v ∈ S and such that R_A \ S contains no vertices that very weakly dominate v for Player A. We have that if b[v] > b[w] for all w ∈ S \ {v}, then v is not very weakly dominated for Player A by any other strategy.

Corollary 1. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, consider a vertex v ∈ R_A with N_B[v] ≠ ∅. If b[v] > b[w] for all w ∈ (B_2[v] ∩ R_A) \ {v}, then v is not very weakly dominated for Player A by any other strategy.

Proof. This follows directly from Propositions 5.2 and 5.3, where the set S from Proposition 5.3 is defined as S = B_2[v] ∩ R_A. □

The corollary states that any vertex from which Player A can search strictly more potential hiding places for Player B than could be searched from any other valid vertex lying no more than two steps away cannot be very weakly dominated by any other vertex. These results allow us to considerably narrow down our search for dominated and equivalent vertices.
Analogous results to Propositions 5.2 and 5.3 and Corollary 1 hold for very weak dominance for Player B. The proofs of these results are similar to those presented above and are thus omitted.
Proposition 5.4. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, if w very weakly dominates v for Player B and N_A[w] ≠ ∅, then w ∈ B_2[v] ∩ R_B.

Proposition 5.5. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, consider a vertex v ∈ R_B. Let S ⊆ R_B be such that v ∈ S and such that R_B \ S contains no vertices that very weakly dominate v for Player B. We have that if a[v] < a[w] for all w ∈ S \ {v}, where a[w] = |N_A[w]| denotes the number of potential locations for Player A lying in the closed neighbourhood of w, then v is not very weakly dominated for Player B by any other strategy.

Corollary 2. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, consider a vertex v ∈ R_B. If a[v] < a[w] for all w ∈ (B_2[v] ∩ R_B) \ {v}, then v is not very weakly dominated for Player B by any vertex w with N_A[w] ≠ ∅.

Bounds on the value of a graph game
A first step in the analysis of a particular graph game is to determine lower and upper bounds on the values of the game (see Definition 2.2).
Recall that for a two-player constant-sum game, it makes sense to restrict discussion to the value to Player A, since this also determines the value to Player B through the condition that the sum of the two values is a fixed constant (see (1)).
Proposition 5.6. For the graph game 𝒢 = (G, R_A, R_B, r), with r = 1, let u represent the value to Player A. u is bounded as follows:

min_{v∈R_B} a[v] / |R_A| ≤ u ≤ max_{w∈R_A} b[w] / |R_B|,

where a[v] = |N[v] ∩ R_A| and b[w] = |N[w] ∩ R_B|. Note that in the case where R_A = R_B = V(G), these inequalities become:

(δ(G) + 1) / |V(G)| ≤ u ≤ (Δ(G) + 1) / |V(G)|. (9)

The proposition derives from a consideration of q_A and q_B, defined in Section 4.3 as the mixed strategies that allocate equal probabilities to all vertices in a player's strategy set. If Player A employs mixed strategy q_A, then Player B can do no better than to deploy at the vertex whose closed neighbourhood contains the fewest possible vertices in R_A. The value of the game to Player A cannot therefore be less than the sum of the probabilities that q_A assigns to these vertices. This reasoning produces the left hand inequality. The right hand inequality follows in a similar fashion from an analysis of q_B as a strategy for Player B. A formal proof follows.

Proof. Suppose that Player A employs the mixed strategy q_A, which allocates a uniform probability of |R_A|^{-1} to all vertices w ∈ R_A. In a particular play of the game, suppose that Player B deploys at vertex v ∈ R_B. In this situation, the expected payoff to Player A is a[v]|R_A|^{-1}. Therefore, for any mixed strategy σ_B for Player B, the expected payoff of q_A against σ_B is at least min_{v∈R_B} a[v]|R_A|^{-1}, and hence u ≥ min_{v∈R_B} a[v]|R_A|^{-1}. The proof of the right hand inequality is similar. □

The following corollary is a consequence of (9).

Corollary 3. Consider the graph game 𝒢 = (G, R_A, R_B, r), where G = (V(G), E(G)) is a regular graph of degree Δ, R_A = R_B = V(G) and r = 1, and let u be the value of 𝒢 to Player A. Then:

u = (Δ + 1) / |V(G)|.

The strategy q that allocates a uniform probability of |V(G)|^{-1} to all vertices is an optimal mixed strategy for both players.
u can also be bounded in a different way.

Proposition 5.7. For the graph game 𝒢 = (G, R_A, R_B, r) with r = 1, let u represent the value to Player A and let:

W = {W′ ⊆ R_A : R_B ⊆ ∪_{w∈W′} N[w]},
Z = {Z′ ⊆ R_B : d_G(z_1, z_2) > 2 for all distinct z_1, z_2 ∈ Z′}.

Then u is bounded as follows:

|W_0|^{-1} ≤ u ≤ |Z_0|^{-1},

where W_0 ∈ W has minimum cardinality and Z_0 ∈ Z has maximum cardinality.

The left hand inequality is derived from consideration of the mixed strategy s_A for Player A that allocates uniform probabilities to a minimal subset of vertices whose closed neighbourhoods cover R_B. The right hand inequality is derived from consideration of the mixed strategy s_B for Player B that allocates uniform probabilities to a maximal subset of vertices with the property that no two vertices are connected by a path of length less than 3.

Proof. First consider the left hand inequality. Consider a subset of vertices W_0 ∈ W of minimum cardinality. Suppose that Player A employs the mixed strategy s_A that allocates uniform probability |W_0|^{-1} to vertices w ∈ W_0 and zero probability to all other vertices. In a particular play of the game, suppose that Player B deploys at vertex v ∈ R_B. Since W_0 ∈ W, we have R_B ⊆ ∪_{w∈W_0} N[w]. Therefore ∃w_0 ∈ W_0 such that v ∈ N[w_0]. Since s_A allocates a probability of |W_0|^{-1} to w_0, the expected payoff to Player A is greater than or equal to |W_0|^{-1}. Therefore, for any mixed strategy σ_B for Player B, the expected payoff of s_A against σ_B is at least |W_0|^{-1}, and thus u ≥ |W_0|^{-1}.

Now consider the right hand inequality. Consider a subset of vertices Z_0 ∈ Z of maximum cardinality. Suppose that Player B employs the mixed strategy s_B that allocates uniform probability |Z_0|^{-1} to vertices v ∈ Z_0 and zero probability to all other vertices. In a particular play of the game, suppose that Player A deploys at vertex w ∈ R_A. Since d_G(z_1, z_2) > 2 for all distinct z_1, z_2 ∈ Z_0, we clearly have that |N[w] ∩ Z_0| ≤ 1. So, in this situation, the expected payoff to Player A is less than or equal to |Z_0|^{-1}. Therefore, for any mixed strategy σ_A for Player A, the expected payoff of σ_A against s_B is at most |Z_0|^{-1}, and thus u ≤ |Z_0|^{-1}. □

Following these results, we label the bounds on u as follows:

LB_1 = min_{v∈R_B} a[v] / |R_A| and UB_1 = max_{w∈R_A} b[w] / |R_B| (from Proposition 5.6);
LB_2 = |W_0|^{-1} and UB_2 = |Z_0|^{-1} (from Proposition 5.7).

For a particular graph game, each of these bounds may or may not be attained. For example, consider the four graphs shown in Fig. 3. In each case, consider the graph game 𝒢 = (G, R_A, R_B, r) with R_A = R_B = V(G) and r = 1. For such small graphs, optimal mixed strategies are easy to calculate (for example, using the method described by Morris (1994, pp. 99-114)). Table 1 summarises the optimal mixed strategies, the true values of u (the value of 𝒢 to Player A) and the values of the bounds for each of the four games.
It should be noted that while LB_1 and UB_1 will generally be easy to calculate, LB_2 and UB_2 may not be, since the minimal and maximal cardinalities of W_0 ∈ W and Z_0 ∈ Z respectively may be difficult to determine.
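On small graphs the second pair of bounds can nevertheless be found by exhaustive search. The following sketch is our own (exponential-time) illustration: it enumerates candidate covering sets W′ and packing sets Z′ directly, and assumes every vertex of R_B can be covered by some closed neighbourhood in R_A.

```python
from itertools import combinations

def cover_packing_bounds(adj, R_A, R_B):
    """Bounds LB2 = 1/|W0| and UB2 = 1/|Z0| by brute force (r = 1):
    W0 is a minimum subset of R_A whose closed neighbourhoods cover
    R_B; Z0 is a maximum subset of R_B whose vertices are pairwise
    more than 2 steps apart (equivalently, have pairwise disjoint
    closed neighbourhoods). Only suitable for small graphs."""
    n = len(adj)
    def closed_nbhd(v):
        return {v} | {w for w in range(n) if adj[v][w]}
    # Minimum covering subset of R_A.
    for k in range(1, len(list(R_A)) + 1):
        if any(set(R_B) <= set().union(*(closed_nbhd(w) for w in W))
               for W in combinations(R_A, k)):
            lb2 = 1 / k
            break
    # Maximum packing subset of R_B.
    for k in range(len(list(R_B)), 0, -1):
        if any(all(closed_nbhd(z1).isdisjoint(closed_nbhd(z2))
                   for z1, z2 in combinations(Z, 2))
               for Z in combinations(R_B, k)):
            ub2 = 1 / k
            break
    return lb2, ub2

# Path on 6 vertices: N[v1] and N[v4] cover everything (|W0| = 2),
# and {v0, v3} is a packing at distance 3 (|Z0| = 2).
path6 = [[1 if abs(i - j) == 1 else 0 for j in range(6)] for i in range(6)]
print(cover_packing_bounds(path6, list(range(6)), list(range(6))))  # (0.5, 0.5)
```

Here the two bounds coincide, so they determine the value of the game exactly.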

An IEDS algorithm for graph games
The results of Section 4.4 and Section 5.3 allow for the creation of an IEDS algorithm (see Section 4.5) for the graph game 𝒢 = (G, R_A, R_B, r). Since the game has finitely many strategies, this approach is always a valid method for finding a solution of the game (though it may not identify all optimal mixed strategies), as discussed in Section 4.5.
The algorithm identifies vertices that may be very weakly dominated for Player A or Player B and checks for dominance and equivalence over a small subset of their surrounding vertices. Very weakly dominated vertices are eliminated and the strategy sets for the players are iteratively reduced, forming sequences (R_{A,K})_{K∈ℕ} and (R_{B,K})_{K∈ℕ} of subsets of R_A and R_B respectively, until there is no dominance or equivalence among the remaining vertices. The aim is to simplify the game as far as possible, such that optimal mixed strategies can be more easily identified.
The explicit identification of vertices that cannot be dominated and the subsequent restriction of the set of vertices that should be examined when searching for dominance of a given vertex are intended to facilitate the creation of an efficient computer programme to apply IEDS in the graph game. An application of the algorithm in a very simple case is presented in Section 6.11.
For the purpose of iteration, our existing notation is extended as follows: R_{A,K} and R_{B,K} denote the players' strategy sets at iteration K, N_{B,K}[v] = N[v] ∩ R_{B,K}, and b_K[v] = |N_{B,K}[v]| (with N_{A,K}[v] and a_K[v] defined analogously).

Step one: transformation of G to G′. If r ≠ 1, we transform G to G′ using the method outlined in Section 5.2. We then set the iteration variable K = 0 and define R_{A,0} = R_A and R_{B,0} = R_B.

Step two: identify vertices that cannot be very weakly dominated for Player A. By Propositions 5.1 and 5.3 and Corollary 1, in a graph in which all vertices have a distinct closed neighbourhood, a vertex v ∈ R_{A,K} for which:

b_K[v] = Δ(G′) + 1, or
N_{B,K}[v] ≠ ∅ and either B_2[v] ∩ R_{A,K} = {v} or b_K[v] > b_K[w] for all w ∈ S′(v), where S′(v) = (B_2[v] ∩ R_{A,K}) \ {v},

cannot be very weakly dominated for Player A by any other vertex. We need not therefore consider such vertices when looking for dominated or equivalent strategies. Formally, we define a reduced set of strategies for Player A:

R⁻_{A,K} = {v ∈ R_{A,K} : v satisfies neither of the above conditions}.

For a graph in which two vertices may have the same closed neighbourhood, Proposition 5.1 is not used, and we instead define R⁻_{A,K} using only the conditions derived from Proposition 5.3 and Corollary 1. Vertices v ∈ R_{A,K} for which N_{B,K}[v] = ∅ are dominated automatically by any vertex w ∈ R_{A,K} for which N_{B,K}[w] ≠ ∅, while Proposition 5.3 states that for each v ∈ R⁻_{A,K} with N_{B,K}[v] ≠ ∅ we need only look for vertices that very weakly dominate v in B_2[v] ∩ R_{A,K}.

Fig. 3. Four simple graphs. Solutions of games played over these graphs and the values of the four bounds LB_1, LB_2, UB_1, UB_2 are summarised in Table 1.
Step three: eliminate dominated and equivalent strategies for Player A. Vertices are checked sequentially according to some pre-determined ordering, and very weakly dominated vertices are removed and are not considered when searching for dominance and equivalence of any remaining vertices. Sequential checking and the immediate removal of identified vertices ensure that pairwise equivalent subsets are not eliminated from R_{A,K} in their entirety, but rather reduced to singletons as required. Let the set of very weakly dominated vertices identified in this way be denoted W_{A,K}. Formally, we define a new strategy set for Player A:

R_{A,K+1} = R_{A,K} \ W_{A,K}.

Note that in the trivial case where N_{B,K}[v] = ∅ for all v ∈ R_{A,K}, all vertices under consideration are equivalent for Player A, so we choose any vertex v ∈ R_{A,K}, define R_{A,K+1} = {v} and W_{A,K} = R_{A,K} \ {v}, and continue to Step Four. This ensures that R_{A,K+1} is not empty.

Step four: identify vertices that cannot be very weakly dominated for Player B. In a similar fashion to Step Two, we use Proposition 5.5 and Corollary 2 to define a reduced set of strategies for Player B, removing those strategies that cannot be very weakly dominated.

Step five: eliminate dominated and equivalent strategies for Player B. This proceeds as in Step Three, with the roles of the players exchanged: the set W_{B,K} of very weakly dominated vertices is identified and removed to give R_{B,K+1} = R_{B,K} \ W_{B,K}.

Step six: check for termination. If no vertices were eliminated at this iteration (that is, W_{A,K} = W_{B,K} = ∅), continue to Step Seven. Otherwise, increase the iteration variable K by one and return to Step Two.
Step seven: find optimal mixed strategies for the simplified game. Construct the payoff matrix for the simplified game defined by the strategy sets R_{A,K}, R_{B,K}. If optimal mixed strategies for this game can be found (for example, using the method described by Morris (1994, pp. 99-114)), they are optimal mixed strategies for the complete graph game (see Morris, 1994, pp. 48-49; Berwanger, 2007, p. 2).
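The reduction loop can be sketched in code. The sketch below is our own simplified illustration, not the paper's implementation: it compares all vertex pairs rather than only those within distance 2, uses one fixed elimination order, and all function names are ours.

```python
def ieds(adj, R_A, R_B):
    """Simplified IEDS for the graph game with r = 1.
    For Player A, w very weakly dominates v when
    N[w] ∩ R_B ⊇ N[v] ∩ R_B (w detects Player B wherever v would);
    for Player B, when N[w] ∩ R_A ⊆ N[v] ∩ R_A (w is detected from
    no more positions than v). Dominated or equivalent vertices are
    removed one at a time, so pairwise equivalent sets shrink to
    singletons rather than disappearing entirely."""
    n = len(adj)
    def closed_nbhd(v):
        return {v} | {w for w in range(n) if adj[v][w]}
    R_A, R_B = list(R_A), list(R_B)
    changed = True
    while changed:
        changed = False
        for player in ("A", "B"):
            R_own = R_A if player == "A" else R_B
            R_opp = set(R_B if player == "A" else R_A)
            for v in list(R_own):          # snapshot: safe removal
                for w in R_own:
                    if w == v:
                        continue
                    Nv = closed_nbhd(v) & R_opp
                    Nw = closed_nbhd(w) & R_opp
                    dominated = Nv <= Nw if player == "A" else Nw <= Nv
                    if dominated:
                        R_own.remove(v)
                        changed = True
                        break
    return R_A, R_B

# Path on 3 vertices: the centre dominates both leaves for Player A,
# while the two leaves are equivalent (and preferable) for Player B.
path3 = [[0,1,0],[1,0,1],[0,1,0]]
print(ieds(path3, [0,1,2], [0,1,2]))  # ([1], [2])
```

Because weak (rather than strict) dominance is used, the surviving strategy sets can depend on the elimination order, exactly as cautioned in Section 4.5.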

Termination and efficiency of the algorithm
Note that this algorithm must terminate for some K ∈ ℕ. The condition for continued iteration (Step Six) is only fulfilled if new pure strategy dominance or equivalence is identified, and this necessarily results in the elimination of strategies from one player's strategy set. Also, graph games have been defined on graphs with a finite set of κ vertices, which implies that R_{A,0} and R_{B,0} are finite sets. Therefore the integer sequence

(|R_{A,K}| + |R_{B,K}|)_{K∈ℕ} (10)

is strictly decreasing and positive, and so the algorithm must terminate for some K = K_0 ∈ ℕ. Though a thorough investigation of the computational complexity of the algorithm is beyond the scope of this paper (and would depend to some extent on its precise implementation), a crude measure of the algorithm's efficiency can be determined by considering the number of pairs of vertices that must be tested for very weak dominance before the algorithm terminates. Given a graph game 𝒢 = (G, R_A, R_B, r) with |V(G)| = κ, an (extremely conservative) upper bound for this number can be determined as follows.

At each of Steps Three and Five, a particular vertex can be tested against no more than κ − 1 other vertices. Therefore, the number of vertex pairs tested for very weak dominance at a single iteration certainly does not exceed 2κ². Also, since (10) is a strictly decreasing positive sequence and |R_{A,0}|, |R_{B,0}| ≤ κ, the number of iterations clearly cannot exceed 2κ. Therefore, for any graph game, the total number of vertex pairs that must be tested for very weak dominance before the algorithm terminates will not exceed a cubic function of the number of vertices κ.

Alternatively, since the algorithm only tests vertex pairs for very weak dominance if the distance between them is less than 3, at each of Steps Three and Five a particular vertex will be tested against no more than Δ(G′)² other vertices (where Δ(G′) is the maximum degree of the vertices of the graph after the completion of Step One). Therefore, the number of vertex pairs tested in a single iteration certainly does not exceed 2Δ(G′)²κ. Again using the fact that the number of iterations cannot exceed 2κ, we may conclude that, for any graph game, the total number of vertex pairs tested before termination will not exceed a quadratic function of Δ(G′)κ. Which of these bounds is more useful will depend on the structure of the graphs under consideration.

The algorithm applied to games on trees
In this section and the next, we demonstrate that the algorithm of Section 6.1 offers a distinct advantage over the method proposed by White (1994) for solving games played on trees (with r ¼ 1). Proposition 6.2 establishes that the algorithm always succeeds in reducing games of this type to cases that are trivially simple to solve, while Section 6.11 presents a simple example which shows that the algorithm can also be applied to games on graphs that are not trees. Also note that White (1994) exclusively considers situations where R A ¼ R B ¼ VðGÞ, while the algorithm presented here is not restricted to such cases.
Recall that a tree is a graph in which there is a unique path between any pair of vertices. In particular, if G is a tree, we may designate a vertex t ∈ V(G) as the ''root'' of G, such that any vertex v ≠ t has precisely one neighbour v_p (the ''parent'' of v) for which d_G(v_p, t) = d_G(v, t) − 1. As in Section 4.3, let q_A and q_B represent the mixed strategies for each player that allocate equal probabilities to all vertices in their strategy set. The following proposition is a restatement of Proposition 4.1 in the context of graph games, with f = 1.

Proposition 6.1. Consider the graph game 𝒢 = (G, R_A, R_B, r) with r = 1. If |N_B[w]| = 1 for every w ∈ R_A and |N_A[v]| = 1 for every v ∈ R_B, then q_A and q_B are optimal mixed strategies for Players A and B respectively and |R_A|^{-1} = |R_B|^{-1} = u is the value of the game to Player A.
The following proposition states that, for graph games on trees with r = 1, either the value of the game to Player A is zero and analysis of the game is trivial, or the algorithm of Section 6.1 reduces the game to the form given in Proposition 6.1, for which q_A and q_B have been shown to be optimal mixed strategies.
Proposition 6.2. Consider the graph game 𝒢 = (G, R_A, R_B, r), where G is a tree and r = 1. When applied to 𝒢, the algorithm of Section 6.1 terminates for some K = K_0 ∈ ℕ, such that:

Either: N_{B,K_0}[v] = ∅ for all v ∈ R_{A,K_0}, and u, the value of the game to Player A, is 0;

Or: |N_{B,K_0}[w]| = 1 for every w ∈ R_{A,K_0} and |N_{A,K_0}[v]| = 1 for every v ∈ R_{B,K_0}, and thus, by Proposition 6.1, the mixed strategies q_{A,K_0} and q_{B,K_0}, which allocate equal probabilities to all vertices in the players' respective strategy sets R_{A,K_0} and R_{B,K_0}, are optimal mixed strategies and the value of the game to Player A is u = |R_{A,K_0}|^{-1} = |R_{B,K_0}|^{-1}.

A proof of the proposition is presented in Appendix B.

An application of the algorithm
A computer programme was written in Python (Python Software Foundation, 2012) using NumPy (Numpy Developers, 2012) to implement the algorithm of Section 6.1.
The single example presented here is clearly very simple and is included purely to demonstrate that the algorithm can be applied to games on graphs other than trees, though the extent to which it is able to simplify such games varies greatly. Note particularly that this example has r ≠ 1 and R_A ≠ R_B.

Consider the graph game 𝒢 = (G, R_A, R_B, r), where the graph G is shown in Fig. 4.

The algorithm reduces the game to the following case:

Remaining strategies for Player A: v_1, v_7, v_8, v_13.
Remaining strategies for Player B: v_0, v_2, v_6, v_12.

Payoff matrix (for Player A):

0 1 1 1
1 0 1 1
1 1 0 1
1 1 1 0

Calculating the optimal mixed strategies from this matrix is simple, for example, using the method described by Morris (1994, pp. 99-114). Alternatively, observe that we may apply Proposition 4.1 to this reduced game, setting f = 3.
An optimal mixed strategy for Player A is to play vertices v 1 ; v 7 ; v 8 ; v 13 , each with probability 0.25; an optimal mixed strategy for Player B is to play vertices v 0 ; v 2 ; v 6 ; v 12 , each with probability 0.25. The value of the game to Player A is 0.75.
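The claimed value of 0.75 can be checked as an instance of the structure exploited by Proposition 4.1: if every row and every column of a 0/1 payoff matrix contains exactly f ones, then uniform strategies are optimal for both players with value f/n. The sketch below is our own; the completed 4 × 4 matrix (one zero per row and column) is an assumption consistent with the quoted rows.

```python
def uniform_value(P):
    """If every row and column of the square 0/1 payoff matrix P
    contains exactly f ones, uniform play is optimal for both
    players and the value is f/n: against a uniform opponent, every
    pure strategy yields exactly f/n, so neither player can improve.
    Returns f/n, or None if the row/column counts are not all equal."""
    n = len(P)
    counts = ({sum(row) for row in P}
              | {sum(P[i][j] for i in range(n)) for j in range(n)})
    if len(counts) != 1:
        return None
    return counts.pop() / n

# Assumed completion of the reduced game's payoff matrix (f = 3).
P = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
print(uniform_value(P))  # 0.75
```

This confirms that playing each remaining strategy with probability 0.25 yields the value 0.75 to Player A.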
7. Further methods for analysing graph games

7.1. Exploiting the automorphisms of a graph game

The definitions and notation relating to group theory used in this section are from Neumann, Stoy, and Thompson (1994).
An automorphism of a graph G = (V(G), E(G)) is a mapping of the graph to itself which preserves adjacencies. The following definition is adapted from Bondy and Murty (1976, pp. 5-7):

Definition 7.1. An automorphism of a simple graph G is a permutation π of V(G) such that, for all u, v ∈ V(G), π(u)π(v) ∈ E(G) if and only if uv ∈ E(G).

Note that this definition is valid for simple graphs. For more general graphs, which may include directed edges, loops, or multiple edges connecting the same vertices, a permutation of the edges also needs to be specified.

We extend the concept of a graph automorphism to that of a graph game automorphism, an automorphism of the graph that also preserves the strategic status of the vertices.

Definition 7.2. Consider a graph game 𝒢 = (G, R_A, R_B, r). A graph game automorphism of 𝒢 is an automorphism π of G for which π(R_A) = R_A and π(R_B) = R_B.

We can also define automorphism groups and vertex orbits in terms of graph game automorphisms.

Proposition 7.1. Consider a graph game 𝒢 = (G, R_A, R_B, r). The set of all graph game automorphisms of 𝒢, with the operation of composition, forms a subgroup Γ(𝒢) (the graph game automorphism group) of Γ(G), the automorphism group of G. The orbits of the vertices of G under any subgroup H ≤ Γ(𝒢) form a partition of V(G).

For Γ(𝒢) to be a subgroup of Γ(G), we require that Γ(𝒢) is closed under composition, contains the identity and contains an inverse for every element. Since the only restriction on Γ(𝒢) is that R_A and R_B are invariant under its elements, these conditions are clearly satisfied. The second part of the proposition is true of any permutation group.
The following proposition describes a relationship between certain optimal mixed strategies of 𝒢 and the graph game automorphism group Γ(𝒢).
Proposition 7.2. Consider a graph game 𝒢 = (G, R_A, R_B, r) with V(G) = {v_1, ..., v_κ} and r = 1. Let O[v] denote the orbit of v under some subgroup H of the graph game automorphism group Γ(𝒢). There exists a pair of optimal mixed strategies (σ_A, σ_B), where σ_A and σ_B respectively allocate probabilities σ_A[v_i] and σ_B[v_i] to vertex v_i, such that ∀i, j ∈ {1, ..., κ}:

v_j ∈ O[v_i] ⟹ σ_A[v_i] = σ_A[v_j] and σ_B[v_i] = σ_B[v_j].

Note that in particular the proposition is true for H = Γ(𝒢). A proof of the proposition is presented in Appendix B.
Proposition 7.2 implies that, when looking for optimal mixed strategies for a graph game G, it suffices to consider those strategies that allocate equal probability to all vertices lying in the same orbit under graph game automorphisms. For graphs with high numbers of symmetries, this can significantly simplify the analysis of the game.
Furthermore, the fact that the proposition is valid for any subgroup H ≤ Γ(𝒢) ensures that even if not all graph game automorphisms of 𝒢 are known, to find a pair of optimal mixed strategies it is sufficient to restrict consideration to those strategies that allocate equal probability to all vertices lying in the same orbit under the subgroup generated by those graph game automorphisms that can be identified. Note that, though stated and proved in a different setting, this proposition is strongly related to Theorem 2.1 of Zoroa et al. (1993, pp. 526-528), which relates to games played over sets that admit certain transformations. However, Zoroa et al. require that the transformations considered be commutative, which is not necessarily true of the automorphisms of a graph.

7.2. Example: using automorphisms to find solutions of 𝒢

Consider the graph game 𝒢 = (G_{5,5}, R_A, R_B, r), where G_{5,5} is the 5 × 5 rectangular grid graph depicted in Fig. 5, R_A = R_B = V(G_{5,5}) and r = 1. Each player has 25 possible pure strategies: V(G_{5,5}) = {v_1, ..., v_25}.

Note that since both players may deploy at any vertex, Γ(𝒢) = Γ(G_{5,5}), so we consider all automorphisms of G_{5,5}. These automorphisms are rotations through multiples of π/2, reflection about a vertical axis and compositions of these transformations. The vertices can be partitioned into six orbits, which we denote O^(i), i ∈ {1, ..., 6}. The vertices belonging to each orbit are indicated in Fig. 5. By Proposition 7.2, there exist optimal mixed strategies σ_A, σ_B for Players A and B which allocate the same probability to all vertices lying in the same orbit. Let x_A^(i) and x_B^(i) be the respective probabilities that σ_A and σ_B allocate to each vertex of O^(i).

This observation allows us to take a different perspective on the problem. For each i, let σ^(i) denote the mixed strategy that allocates probability |O^(i)|^{-1} to each vertex of O^(i), and suppose that we treat the σ^(i) as if they were pure strategies. Optimal mixed strategies of this six-strategy game correspond to optimal mixed strategies of 𝒢. To find them, we analyse the 6 × 6 matrix of expected payoffs to Player A for each of these six strategies against one another. The optimal mixed strategies s_A and s_B can be computed from this payoff matrix using standard methods (see, for example, Morris (1994, pp. 99-114)). This fully determines the optimal mixed strategies σ_A and σ_B for 𝒢.

These optimal mixed strategies are illustrated in Fig. 6. Note that by identifying the six orbits of V(G_{5,5}) under graph game automorphisms of 𝒢, rather than solving a game with a 25 × 25 payoff matrix, we needed only to solve a game with a 6 × 6 payoff matrix. A similar method could be applied to any graph game 𝒢 = (G, R_A, R_B, r) with r = 1, for which non-trivial graph game automorphisms can be found.
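The orbit computation for the grid can be sketched as follows. This is our own illustration: vertices are indexed by grid coordinates rather than the paper's labels, and the eight symmetries of the square are applied explicitly.

```python
def grid_orbits(n):
    """Orbits of the vertices of the n x n grid graph under its
    automorphisms: rotations through multiples of pi/2 and
    reflections (the dihedral group of the square)."""
    def images(p):
        r, c = p
        m = n - 1
        # The four rotations of (r, c), ...
        base = [(r, c), (c, m - r), (m - r, m - c), (m - c, r)]
        # ... plus their reflections about the main diagonal.
        return base + [(y, x) for (x, y) in base]
    seen, orbits = set(), []
    for r in range(n):
        for c in range(n):
            if (r, c) not in seen:
                orbit = sorted(set(images((r, c))))
                seen.update(orbit)
                orbits.append(orbit)
    return orbits

# The 5x5 grid has six orbits, matching the analysis of G_{5,5}:
# the centre (1 vertex), four orbits of size 4 (corners, edge
# centres, interior corners, interior edge-centres) and one of size 8.
orbits = grid_orbits(5)
print(len(orbits))                       # 6
print(sorted(len(o) for o in orbits))    # [1, 4, 4, 4, 4, 8]
```

Reducing the 25 pure strategies to these six orbit classes reproduces the 6 × 6 game described above.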

Equal oddments strategies
The example of the graph game 𝒢 played over the 5 × 5 rectangular grid G_{5,5} exhibits two curious and related properties.

Firstly, and most obviously, the optimal mixed strategies that were calculated for each player are identical: σ_A = σ_B = σ. This means that the optimal mixed strategy for Player A allocates the same probabilities to the vertices of G_{5,5} as does the optimal mixed strategy for Player B (though these optimal mixed strategies need not be unique). Given the interpretation of the game, this is a surprising result. Player B is attempting to hide from Player A, so we might have expected that her best mixed strategy would involve avoiding vertices at which Player A was more likely to deploy. Note that the method of Section 7.1 does not produce identical optimal mixed strategies for all graphs, nor for all grid graphs. For example, when applied to the 6 × 6 grid graph G_{6,6}, the method produces distinct optimal mixed strategies σ_A, σ_B (see Fig. 7) that appear to be more consistent with our intuition, in that vertices to which σ_A allocates fairly high probability seem to be allocated fairly low probability by σ_B, and vice versa.

Secondly, in Fig. 6, observe that for all vertices v ∈ V(G_{5,5}), the sum of the probability oddments allocated by σ to the vertices in the closed neighbourhood N[v] is equal to 22. Formally, we make the following definition.

Definition 7.3. Equal oddments strategy. Given a graph game 𝒢 = (G, R_A, R_B, r) with r = 1, a mixed strategy σ is an equal oddments strategy of 𝒢 if, for all v ∈ V(G),

Σ_{w∈N[v]} σ[w] = u,

for some u ∈ ℝ. We call u the neighbourhood sum of σ.
The following proposition links those situations in which Players A and B have identical optimal mixed strategies with the existence of equal oddments strategies.
Proposition 7.3. Given a graph game 𝒢 = (G, R_A, R_B, r), with r = 1 and R_A = R_B = V(G), the following statements are equivalent: (i) σ is an equal oddments strategy of 𝒢 with neighbourhood sum u ∈ ℝ; (ii) σ is an optimal mixed strategy of 𝒢 for both players and u ∈ ℝ is the value of the game to Player A.

Proof. Suppose that σ is an equal oddments strategy of 𝒢 with neighbourhood sum u ∈ ℝ. Observe that for all pure strategies v ∈ V(G) for either player, the expected payoff to Player A when σ is played against v is Σ_{w∈N[v]} σ[w] = u. Therefore, by (5), σ is an optimal mixed strategy of 𝒢 for both players and u is the value of the game to Player A.

To prove the opposite implication, suppose that σ is an optimal mixed strategy of 𝒢 for both players and that u is the value of the game to Player A. Let V(G) = {v_1, ..., v_κ} and σ = (σ[v_1], ..., σ[v_κ]). Also let:

u⁻ = min_{v∈V(G)} Σ_{w∈N[v]} σ[w], u⁺ = max_{v∈V(G)} Σ_{w∈N[v]} σ[w],

and let v⁻, v⁺ be vertices for which this minimum and maximum are respectively attained.

Assume (to derive a contradiction) that σ is not an equal oddments strategy. Then:

u⁻ < u⁺. (13)

Since σ is an optimal mixed strategy for Player A, its expected payoff against any pure strategy v of Player B, namely Σ_{w∈N[v]} σ[w], is at least u; hence u⁻ ≥ u. Since σ is also an optimal mixed strategy for Player B, the expected payoff to Player A of any pure strategy w played against σ, namely Σ_{v∈N[w]} σ[v], is at most u; hence u⁺ ≤ u. Therefore u⁺ ≤ u ≤ u⁻, so the above inequalities must be equalities. Specifically, u⁻ = u⁺. This contradicts (13). Therefore σ is an equal oddments strategy. □

Proposition 7.3 means that, given a graph game 𝒢 = (G, V(G), V(G), 1), if we can find a distribution of positive real numbers across the vertices such that the sum of these numbers in any closed neighbourhood is equal to a constant, and we scale these numbers to produce a valid probability distribution across the vertices, then this distribution defines a mixed strategy that is optimal for both players. This offers an alternative approach to proving Corollary 3 on optimal mixed strategies for games played over regular graphs, since the mixed strategy q, as defined in the corollary, is clearly an equal oddments strategy.
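One way to search for an equal oddments strategy numerically is to solve the linear system (A + I)y = 1 and rescale, where A is the adjacency matrix. The sketch below is our own illustration, not the paper's procedure; it assumes the system is nonsingular, so it can miss equal oddments strategies on graphs where (A + I) is singular.

```python
import numpy as np

def equal_oddments_strategy(adj):
    """Attempt to find an equal oddments strategy (Proposition 7.3):
    a probability vector x with (A + I) x = u * 1 for some constant
    u. We solve (A + I) y = 1 and normalise; the attempt fails (and
    None is returned) if the system is singular or y is not strictly
    positive. On success, u is the value of the game to Player A."""
    n = len(adj)
    M = np.array(adj, dtype=float) + np.eye(n)
    try:
        y = np.linalg.solve(M, np.ones(n))
    except np.linalg.LinAlgError:
        return None
    if np.any(y <= 0):
        return None
    x = y / y.sum()    # probability vector
    u = 1.0 / y.sum()  # common neighbourhood sum
    return x, u

# 4-cycle: uniform play is an equal oddments strategy with value 3/4,
# in agreement with the corollary on regular graphs.
cycle4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
x, u = equal_oddments_strategy(cycle4)
print(x, u)
```

For regular graphs this recovers the uniform strategy; for graphs such as G_{5,5} it yields the non-uniform distribution whose oddments appear in Fig. 6.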

Poly-level graphs
The concepts discussed in Section 7 suggest a potential approach for finding general expressions for the optimal mixed strategies of games played over certain families of graphs. To demonstrate this approach, we consider games played over a specific family of graphs, which we describe as poly-level graphs. The vertices in such graphs are arranged in levels and each vertex exhibits a local structural similarity in the way that it is connected to other vertices in its own level and to those in the levels above and below.
Definition 8.1. Poly-level graph. A graph G = (V(G), E(G)) is called a poly-level graph if and only if the vertices can be partitioned into h subsets L_1, L_2, ..., L_h ⊆ V(G), called levels, with |L_i| = c_i, such that each vertex in L_i is adjacent to precisely:

j other vertices in L_i (the intradegree);
k vertices in L_{i+1} (the superdegree), for i ≠ h;
l vertices in L_{i-1} (the subdegree), for i ≠ 1;
0 vertices in any other level,

where j is a non-negative integer and h, k, l, c_1, ..., c_h are positive integers. Fig. 8 provides a visual representation of a general poly-level graph, while Figs. 9-13 show specific examples of such graphs. In each example, vertices in L_1 are coloured black, with higher levels being indicated by progressively lighter shades.
The appearance of a poly-level graph may be highly symmetric (Figs. 9 and 10) or quite irregular (Fig. 11). Also, two poly-level graphs with the same parameters may be topologically quite different (Fig. 12), while the parameters used to describe a particular poly-level graph are generally not unique (Fig. 13).
Note that it suffices to specify the number of vertices c_1 in L_1 to determine the number of vertices c_i in any level L_i, as established in the following simple proposition.

Proposition 8.1. Given a poly-level graph G, with levels L_1, ..., L_h, superdegree k, subdegree l and |L_i| = c_i, we have:

c_i = c_1 (k l^{-1})^{i-1}, ∀i ∈ {1, ..., h}.

Proof. The number of edges linking vertices in L_i to vertices in L_{i+1} can be expressed in two forms, which must be equal: c_i k = c_{i+1} l. Thus, the c_i form a geometric progression with common ratio k l^{-1} and the proposition follows immediately. □

The following proposition establishes two intuitively obvious constraints on the parameters of a poly-level graph.

Proposition 8.2. Given a poly-level graph G, with levels L_1, ..., L_h, intradegree j, superdegree k, subdegree l and |L_i| = c_i, we have that ∀i ∈ {1, ..., h}:

(a) c_1 (k l^{-1})^{i-1} ∈ ℤ; (b) c_1 (k l^{-1})^{i-1} > j.

These constraints arise immediately from the fact that the number of vertices c_i in any level L_i (replaced in Proposition 8.2 by the expression from Proposition 8.1) must (a) be an integer and (b) exceed the intradegree j.
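The geometric progression of Proposition 8.1, together with the integrality constraint of Proposition 8.2(a), can be checked with exact rational arithmetic. This sketch is our own; the additional constraint c_i > j from Proposition 8.2(b) could be added analogously.

```python
from fractions import Fraction

def level_sizes(c1, k, l, h):
    """Level sizes of a poly-level graph via Proposition 8.1:
    c_i = c_1 * (k/l)^(i-1), a geometric progression with common
    ratio k/l. Returns [c_1, ..., c_h], or None if some c_i is not
    an integer (violating Proposition 8.2(a))."""
    ratio = Fraction(k, l)
    sizes = [c1 * ratio ** i for i in range(h)]
    if any(c.denominator != 1 for c in sizes):
        return None
    return [int(c) for c in sizes]

# Three levels with c_1 = 4, superdegree 2, subdegree 4: each level
# is half the size of the one below it.
print(level_sizes(4, 2, 4, 3))  # [4, 2, 1]
# c_1 = 3 with superdegree 1, subdegree 2 fails the integrality
# check: the second level would need 3/2 vertices.
print(level_sizes(3, 1, 2, 3))  # None
```

Exact fractions avoid the floating-point rounding that would otherwise make the integrality test unreliable for large h.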

Equal oddments solutions on poly-level graphs
Given a poly-level graph G, from Section 7.3, we know that if there exists a probability distribution across the vertices of the graph such that the sum of the probabilities in any closed neighbourhood is equal to some constant u 2 ð0; 1, then this probability distribution defines an optimal mixed strategy for both players for the game G ¼ ðG; R A ; R B ; rÞ, with r ¼ 1; R A ¼ R B ¼ VðGÞ, and the value of the game to Player A is u.
Suppose that such a distribution exists, and suppose that this distribution allocates an equal probability x_i to each vertex in level L_i. Through consideration of the structure of the graph, we can write the following equations, which must hold for all i ∈ {2, 3, ..., h − 1}:

(j + 1) x_i + k x_{i+1} + l x_{i−1} = u (14)

together with the corresponding boundary equations for i = 1 and i = h; equivalently, (14) holds for all i ∈ {1, ..., h} if we set x_0 = x_{h+1} = 0. We also have the following constraints, to ensure that the x_i define a valid probability distribution:

Σ_{i=1}^{h} c_i x_i = 1 (15)

x_i ≥ 0, ∀i ∈ {1, 2, ..., h} (16)

Note that it is not necessary to explicitly include the constraint x_i ≤ 1, since this is implied by (15) and (16).
We now perform the change of variables:

w_i = x_i u^{−1} (j + k + l + 1), ∀i ∈ {1, ..., h} (17)

and introduce additional unknowns w_i, ∀i ∈ Z.
The question of finding a function x_i of i that satisfies (14)-(16) can be reformulated as follows: Given a non-negative integer j and positive integers h, k, l, c_1 satisfying the conditions of Proposition 8.2, find a function w_i of i, such that, ∀i ∈ Z:

k w_{i+1} + (j + 1) w_i + l w_{i−1} = j + k + l + 1 (18)

Subject to the boundary conditions:

w_0 = w_{h+1} = 0 (19)

With the constraint:

w_i ≥ 0, ∀i ∈ {1, ..., h} (20)

If suitable w_i can be found, an equal oddments solution is then given by:

u = (j + k + l + 1) [Σ_{i=1}^{h} c_1 (k l^{−1})^{i−1} w_i]^{−1} (21)

x_i = w_i u (j + k + l + 1)^{−1}, ∀i ∈ {1, ..., h} (22)

Applying the change of variables (17): (18) and (19) are derived from (14), (20) is derived from (16), (21) is derived from (15) and Proposition 8.1, while (22) is immediate.
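Rather than solving the difference equation in closed form, the system (14)-(15) for the level probabilities x_i and the value u can also be solved directly as a small linear system. The sketch below does this with exact rational arithmetic; the dense-solver approach and all names are our own, and it assumes (as the text does) a solution that is constant on each level:

```python
from fractions import Fraction

def equal_oddments(h, j, k, l, c1):
    """Solve (14)-(15) for the level probabilities x_1..x_h and the value u.

    For each level i: (j+1)*x_i + k*x_{i+1} + l*x_{i-1} = u (taking
    x_0 = x_{h+1} = 0), together with sum_i c_i x_i = 1, where
    c_i = c1*(k/l)**(i-1) by Proposition 8.1. Returns (x, u) as exact
    Fractions, or None if constraint (16) (non-negativity) fails.
    """
    c = [c1 * Fraction(k, l) ** i for i in range(h)]   # level sizes c_1..c_h
    n = h + 1                                          # unknowns x_1..x_h and u
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    for i in range(h):                                 # closed-neighbourhood rows
        A[i][i] = Fraction(j + 1)
        if i + 1 < h:
            A[i][i + 1] = Fraction(k)
        if i - 1 >= 0:
            A[i][i - 1] = Fraction(l)
        A[i][h] = Fraction(-1)                         # the -u term
    A[h][:h] = c                                       # normalisation row (15)
    b[h] = Fraction(1)
    # Gauss-Jordan elimination with partial pivoting, in exact arithmetic.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
                b[r] -= f * b[col]
    sol = [b[i] / A[i][i] for i in range(n)]
    x, u = sol[:h], sol[h]
    if any(xi < 0 for xi in x):
        return None
    return x, u
```

As a sanity check, a single-level poly-level graph with intradegree j has x_1 = 1/c_1 and u = (j + 1)/c_1, which the solver reproduces exactly.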

Exact solutions
To find a solution to the problem, let:

w_i = w_i^{PS} + w_i^{GS}

where w_i^{PS} is any particular solution of (18), and w_i^{GS} is the most general solution of the homogeneous difference equation:

k w_{i+1} + (j + 1) w_i + l w_{i−1} = 0 (23)

An obvious candidate for the particular solution is:

w_i^{PS} = 1, ∀i ∈ Z

8.4. Case 1: (j + 1)² > 4kl

Look for a solution of the form:

w_i^{GS} = A_+ λ_+^i + A_− λ_−^i

where A_+, A_− are arbitrary constants and λ_+, λ_− are unknown constants to be determined. Substituting into (23), we find:

λ_± = [−(j + 1) ± √((j + 1)² − 4kl)] / (2k)

Applying the boundary conditions to w_i = 1 + A_+ λ_+^i + A_− λ_−^i gives:

1 + A_+ + A_− = 0
1 + A_+ λ_+^{h+1} + A_− λ_−^{h+1} = 0

which can be solved to give:

A_+ = (λ_−^{h+1} − 1) / (λ_+^{h+1} − λ_−^{h+1}), A_− = −(λ_+^{h+1} − 1) / (λ_+^{h+1} − λ_−^{h+1})

Note that in this case, λ_+ ≠ λ_−, and therefore A_+ and A_− both exist.
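As an illustration of Case 1, the following sketch evaluates the closed form w_i = 1 + A_+ λ_+^i + A_− λ_−^i numerically, so that the difference equation (18) and the boundary conditions (19) can be confirmed directly. The function name is ours and floating-point arithmetic is used for simplicity:

```python
import math

def case1_w(h, j, k, l):
    """Case (j+1)**2 > 4kl: return [w_0, w_1, ..., w_{h+1}], where
    w_i = 1 + A_plus*lam_plus**i + A_minus*lam_minus**i, lam_± are the
    roots of k*lam**2 + (j+1)*lam + l = 0, and A_± are fixed by the
    boundary conditions w_0 = w_{h+1} = 0."""
    disc = (j + 1) ** 2 - 4 * k * l
    assert disc > 0, "Case 1 requires (j+1)**2 > 4kl"
    lp = (-(j + 1) + math.sqrt(disc)) / (2 * k)
    lm = (-(j + 1) - math.sqrt(disc)) / (2 * k)
    # From 1 + A_p + A_m = 0 and 1 + A_p*lp**(h+1) + A_m*lm**(h+1) = 0:
    Ap = (lm ** (h + 1) - 1) / (lp ** (h + 1) - lm ** (h + 1))
    Am = -1 - Ap
    return [1 + Ap * lp ** i + Am * lm ** i for i in range(h + 2)]
```

For example, with h = 3, j = 3, k = l = 1 the sketch yields w = (0, 9/7, 6/7, 9/7, 0), and one can verify that k w_{i+1} + (j + 1) w_i + l w_{i−1} = j + k + l + 1 = 6 at every interior index.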
8.5. Case 2: (j + 1)² = 4kl

Look for a solution of the form:

w_i^{GS} = (A + Bi) λ^i

where A, B are arbitrary constants and λ is an unknown constant to be determined. Substituting into (23), we find:

λ = −(j + 1) / (2k)

Applying the boundary conditions to w_i = 1 + (A + Bi) λ^i gives:

A = −1, B = (1 − λ^{−(h+1)}) / (h + 1)

Again, A and B exist for all values of the parameters.
8.6. Case 3: (j + 1)² < 4kl

Look for a solution of the form:

w_i^{GS} = λ^i (P cos iθ + Q sin iθ)

where P, Q are arbitrary constants and λ, θ are unknown constants to be determined. Substituting into (23), we find:

λ² = l k^{−1}, 2λ cos θ = −(j + 1) k^{−1}

So:

λ = √(l/k), θ = arccos[−(j + 1) / (2√(kl))] (25)

Provided that θ(h + 1) is not an integer multiple of π, applying the boundary conditions to w_i = 1 + λ^i (P cos iθ + Q sin iθ) gives:

P = −1, Q = [cos((h + 1)θ) − λ^{−(h+1)}] / sin((h + 1)θ)

If θ(h + 1) is an integer multiple of π then no values of P and Q can be found to satisfy the boundary conditions (19) and no solution w_i exists, except in the particular case where θ(h + 1) is an even multiple of π and k = l. In the latter instance, the boundary conditions are satisfied for all values of Q, with P = −1, identifying an infinite family of possible solutions.

Summary
To summarise, given any poly-level graph G with suitable parameters h, j, k, l, c_1, we have found w_i satisfying (18) and (19) in all cases except where (j + 1)² < 4kl and θ(h + 1) is an integer multiple of π (barring the special case where θ(h + 1) is an even multiple of π and k = l), where θ is defined as in (25).
Note that the non-existence of such a w_i in the specific cases mentioned does not necessarily imply that no equal oddments strategy exists for the corresponding graph game, simply that any such strategy cannot be expressed as the solution to a difference equation of the form discussed.
Where such a w_i does exist, it remains to check whether constraint (20) is satisfied. If so, then there exists an equal oddments solution to the graph game G = (G, R_A, R_B, r), with r = 1 and with u and the x_i given by (21) and (22), which allocates a probability of x_i to each vertex in level L_i. By Proposition 7.3, this equal oddments solution is an optimal mixed strategy of G for both players. Table 2 summarises the equal oddments strategies for the graphs shown in Figs. 9-11.
In this section, we have demonstrated that the concepts outlined in Section 7 may be used to find general expressions for optimal mixed strategies of games played over a particular family of graphs. It may therefore be possible to use or adapt this approach to seek optimal mixed strategies for games played over other families of graphs, thus suggesting a potentially valuable focus for future work.

Conclusions and further work
In this paper, we have defined a search and concealment game, the SSSG, which differs from the games considered in the bulk of the literature both in its generality and in its explicit consideration of the strategic interdependence of positions based on spatial structure. We believe that formulating the game in terms of metric spaces may allow new tools and results from this area of mathematics to be applied to search and concealment problems, facilitating the identification of optimal mixed strategies in a variety of different cases.
We have examined the way in which the game theoretic concepts of dominance and equivalence of strategies are manifested in the context of the SSSG and have presented various methods for analysing the SSSG played over a graph, including:

- The reduction of graph games with detection radius r ≠ 1 to games with r = 1.
- The formulation of lower and upper bounds on the value of a graph game.
- An algorithm that applies the concept of IEDS in the explicit context of graph games and which has been demonstrated to reduce games played on trees to cases in which optimal mixed strategies can be immediately determined.
- A method for simplifying the analysis of graph games using automorphisms of the graph.
- The introduction of the concept of an ''equal oddments strategy'', and the demonstration that such mixed strategies are optimal for both players.
- The presentation of explicit optimal mixed strategies for a particular family of graph games: those played over ''poly-level graphs''.
In terms of future work, of immediate interest would be the extension of the results and methods of Sections 5 and 7, which have been formulated explicitly for the SSSG played over a graph, to the SSSG over a general metric space. We believe that analogues of many of these results could be formulated through the careful application of topology and measure theory. This would be an important first step in increasing the applicability and relevance of the SSSG.
With regard to the algorithm of Section 6, it would be useful to have explicit results on its computational efficiency and on how the structure of a graph affects the extent to which a corresponding graph game can be simplified using IEDS. It may be possible, for example, to identify other families of graph game (aside from those played on trees) for which the method is particularly effective.
It would also be valuable to find analytic expressions for the optimal mixed strategies of a wider range of graph games, perhaps extending the ideas used to analyse games played over poly-level graphs to games played over rectangular lattices. For highly irregular graphs, algorithmic methods to find equal oddments strategies could be sought. Where equal oddments strategies do not exist, but where equal oddments 'distributions' that fail to satisfy (16) can nonetheless be found, an investigation into the connection between the true optimal mixed strategies and these invalid equal oddments distributions may be instructive.
Thinking more broadly, there are many ways in which the game could be extended. For example, by increasing the number of searching and concealing players, by allowing players to deploy at multiple points, by investigating situations in which payoffs are dependent on location, by relaxing the metric space axioms to consider directed graphs and other spaces, by investigating the game over weighted graphs or by allowing the players to move around the space.
For the SSSG as described, a particular area of interest is the analysis of games in which X is a region of R², where the searching player can search a disc of radius r. An understanding of such cases would make the game more applicable in real operational research scenarios, such as the deployment of patrol ships to locate pirates over areas of the ocean or searching archaeological sites for features of historical interest. If optimal mixed strategies cannot be obtained directly, one approach to such continuous situations would be to examine a discretised version of the game, superimposing a rectangular grid over the region in question and analysing a corresponding graph game over this grid.
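As a rough sketch of this discretisation idea (all names and the cell-centre representation are our own, not from the text), one might superimpose a grid over the region and record, for each cell centre, the set of centres lying within the detection radius r; the resulting covering sets define the payoff structure of the corresponding graph game:

```python
import itertools
import math

def grid_game_neighbourhoods(width, height, cell, r):
    """Superimpose a rectangular grid of spacing `cell` over a
    width x height region and, for detection radius r, map each cell
    centre (a searcher position) to the list of cell centres it covers,
    i.e. those within Euclidean distance r."""
    nx, ny = int(width / cell), int(height / cell)
    centres = [((i + 0.5) * cell, (j + 0.5) * cell)
               for i, j in itertools.product(range(nx), range(ny))]
    return {a: [b for b in centres if math.dist(a, b) <= r]
            for a in centres}
```

On a 2 x 2 region with unit cells and r = 1, each centre covers itself and its two orthogonal neighbours but not the diagonal one (distance √2); raising r to 1.5 covers all four cells from any position.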
Finally, it would be valuable to investigate the initial ways in which the SSSG could be profitably applied to real world scenarios. For example, following the riots in England in the summer of 2011, the game could be used to examine possible deployment strategies for the police. Manchester city centre, where a number of incidents occurred over a relatively compact area (Rogers, Sedghi, & Evans, 2011), could be represented as a graph to which the algorithm of Section 6 would be applied to determine the optimal location for a police unit (the searching player) to protect retail centres from a band of rioters (the concealing player). After the deployment of a first police unit, vertices within the detection radius could be removed and new optimal mixed strategies could be calculated on the rest of the graph to determine the placement of a second unit. It may be of interest to test the strategies obtained from such a process by simulation of an outbreak of rioting and to compare the results with the actual events of summer 2011.

Table 2. The equal oddments solutions to the poly-level graphs shown in Figs. 9-11. x_i is the probability that the equal oddments strategy (an optimal mixed strategy for both players) allocates to each vertex in level L_i. Columns: Figure; poly-level graph parameters h, j, k, l, c_1; equal oddments solution x_1, x_2, x_3, x_4; game values u_A, u_B. [Table entries not preserved in this extraction.]

Appendix

Otherwise, at least one child of s_2 lies in R_{B,K'}. Call this child s'' and note that:

[...]

Observe that neither s'' nor any of its children lies in R_{A,K'}, since this would imply that s'' ∈ S_1 ⊆ S, which is a contradiction by (26) and (28). Therefore:

N_{A,K'}[s''] = {s'} ⊆ N_{A,K'}[s']

which means that s'' very weakly dominates s' for Player B, by Proposition 4.4.
So if s' ∈ S_1, in both possible scenarios, we have identified very weak dominance in one of the strategy sets R_{A,K'} or R_{B,K'}. In the case where s' ∈ S_2, very weak dominance can also be identified by following very similar logic (the complete proof is omitted). As previously discussed, this is a contradiction and thus our initial assumption was false. Therefore:

a_{K'}[v] ≤ 1, ∀v ∈ R_{B,K'}
b_{K'}[w] ≤ 1, ∀w ∈ R_{A,K'}

To complete the proof, note that:

- Any vertex w ∈ R_{A,K'} for Player A such that b_{K'}[w] = 0 is clearly very weakly dominated by any other vertex for Player A.
- Any vertex v ∈ R_{B,K'} for Player B such that a_{K'}[v] = 0 clearly very weakly dominates any other vertex for Player B.
where r_A and r_B respectively allocate probabilities r_A[v_i] and r_B[v_i] to vertex v_i, such that ∀i, j ∈ {1, ..., j}:

[...]

We first show that if s_A is an optimal mixed strategy for Player A, then s_{A,φ} is also an optimal mixed strategy for Player A, for all φ ∈ H. However, since φ ∈ H ≤ Γ(G), R_A and R_B are invariant under φ (see Definition 7.2) and thus:

[...]

Also, because φ is an automorphism, we have:

V(G') = V(G), E(G') = E(G)

Therefore G' = G and so s_{A,φ} and s_{B,φ} are optimal mixed strategies of G for Players A and B respectively, as required. Now, let:

[...]

These mixed strategies are optimal mixed strategies for G, because any weighted average of optimal mixed strategies is itself an optimal mixed strategy.

It remains to prove that r_A and r_B satisfy Property (11). Observe that, ∀i ∈ {1, ..., j}:

[...]

where H[v_i, v_j] is defined as in Corollary 4. Using the corollary, (31) can be rewritten as follows, ∀v ∈ V(G):

[...]

This demonstrates that r_A and r_B satisfy Property (11) and thus proves Proposition 7.2. □