TUG-OF-WAR GAMES AND THE INFINITY LAPLACIAN WITH SPATIAL DEPENDENCE

In this paper we look for PDEs that arise as limits of values of Tug-of-War games when the possible movements of the game are taken in a family of sets that are not necessarily Euclidean balls. In this way we find existence of viscosity solutions to the Dirichlet problem for an equation of the form −⟨D²v · J_x(Dv); J_x(Dv)⟩(x) = 0, that is, an infinity Laplacian with spatial dependence. Here J_x(Dv(x)) is a vector that depends on the spatial location and the gradient of the solution.


Introduction
Our main goal in this work is to look for PDEs that may arise as continuous values of Tug-of-War games when the sets of possible movements are not restricted to be Euclidean balls. In this way we obtain what we can call a natural way of defining an infinity Laplacian with spatial dependence.
First, let us recall that the infinity Laplacian is the nonlinear degenerate elliptic operator, usually denoted by ∆∞, given by

∆∞v = ⟨D²v Dv; Dv⟩ = Σ_{i,j=1}^{N} (∂v/∂x_i)(∂v/∂x_j)(∂²v/∂x_i∂x_j).

Note that this expression can be read as the second derivative of v in the direction of its gradient. The infinity Laplacian arises from taking the limit as p → ∞ in the p-Laplacian operator in the viscosity sense, see [2] and [7]. In fact, let us present a formal derivation. First, expand (formally) the p-Laplacian,

∆_p v = div(|Dv|^{p−2}Dv) = |Dv|^{p−2}∆v + (p − 2)|Dv|^{p−4}∆∞v,

and next, using this formal expansion, divide −∆_p v = 0 by (p − 2)|Dv|^{p−4} and pass to the limit as p → ∞ to obtain −∆∞v = 0. Note that this calculation can be made rigorous in the viscosity sense.
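The formal expansion of the p-Laplacian above can be checked numerically for a concrete smooth function (a sanity check of our own; the function v(x, y) = x³ + 2xy², the exponent p and the base point are arbitrary choices, not taken from the paper):

```python
import numpy as np

# v(x, y) = x^3 + 2 x y^2 : analytic gradient, Laplacian and Hessian
grad_v = lambda x, y: np.array([3 * x**2 + 2 * y**2, 4 * x * y])
lap_v = lambda x, y: 10 * x                         # v_xx + v_yy = 6x + 4x
hess_v = lambda x, y: np.array([[6 * x, 4 * y], [4 * y, 4 * x]])

p = 5.0
x0, y0 = 0.7, 0.3

def flux(x, y):
    """|Dv|^(p-2) Dv, the vector field inside the divergence."""
    g = grad_v(x, y)
    return np.linalg.norm(g) ** (p - 2) * g

# left-hand side: div(|Dv|^(p-2) Dv) by central finite differences
h = 1e-5
lhs = ((flux(x0 + h, y0)[0] - flux(x0 - h, y0)[0])
       + (flux(x0, y0 + h)[1] - flux(x0, y0 - h)[1])) / (2 * h)

# right-hand side: |Dv|^(p-2) Lap(v) + (p-2)|Dv|^(p-4) InfLap(v)
g = grad_v(x0, y0)
inf_lap = g @ hess_v(x0, y0) @ g                    # <D^2 v Dv, Dv>
rhs = (np.linalg.norm(g) ** (p - 2) * lap_v(x0, y0)
       + (p - 2) * np.linalg.norm(g) ** (p - 4) * inf_lap)
```

The two quantities agree up to finite-difference error, which illustrates that the expansion is an exact identity wherever Dv ≠ 0.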
The infinity Laplacian operator appears naturally when one considers absolutely minimizing Lipschitz extensions (AMLE) of a Lipschitz function F defined on the boundary; see the survey [2]. It turns out (see [2]) that the unique AMLE of F (defined on ∂Ω) to Ω is the unique solution to

(1.1) −∆∞v = 0 in Ω, v = F on ∂Ω.

A fundamental result of Jensen [16] (see also [1] and [3]) establishes that this Dirichlet problem for ∆∞ has existence and uniqueness of solutions in the viscosity sense. Solutions to −∆∞v = 0 (called infinity harmonic functions) are also used in several applications, for instance, in optimal transportation and image processing. Also the eigenvalue problem related to the ∞-Laplacian has been exhaustively studied, see [9], [17], [18].
The Tug-of-War game related to the infinity Laplacian studied in [28] can be briefly described as follows (see Section 2 for details): a Tug-of-War game is a two-person, zero-sum game, that is, two players are in contest and the total earnings of one are the losses of the other. The rules of the game are the following: consider a bounded domain Ω ⊂ R^N, and take a strip around the boundary Γ ⊂ R^N \ Ω. Let F : Γ → R be a Lipschitz continuous function (the final payoff function). At an initial time, a token is placed at a point x_0 ∈ Ω. Then, a (fair) coin is tossed and the winner of the toss is allowed to move the game position to any x_1 ∈ B_ε(x_0). At each turn, the coin is tossed again, and the winner chooses a new game state x_k ∈ B_ε(x_{k−1}). Once the token has reached some x_τ ∈ Γ, the game ends and the first player earns F(x_τ) (while the second player earns −F(x_τ)). This game has an expected value u_ε(x_0) (called the value of the game) that verifies the Dynamic Programming Principle (DPP),

(1.2) u_ε(x) = ½ sup_{y∈B_ε(x)} u_ε(y) + ½ inf_{y∈B_ε(x)} u_ε(y), x ∈ Ω;

here it is understood that u_ε(x) = F(x) for x ∈ Γ. This formula can be intuitively explained from the fact that the first player tries to maximize the expected outcome (and has probability 1/2 of selecting the next state of the game) while the second tries to minimize the expected outcome (and also has probability 1/2 of choosing the next position).
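The DPP (1.2) can be solved numerically by fixed-point iteration. The following one-dimensional toy computation (our own illustration; the interval, grid and payoff are arbitrary choices) iterates the DPP on (0, 1) with payoff 0 on the left boundary strip and 1 on the right one; in dimension one, infinity harmonic functions are linear, so the value at the midpoint should be close to 1/2:

```python
import numpy as np

def tug_of_war_value(F_left, F_right, eps=0.05, h=0.01, iters=5000):
    """Iterate the DPP u(x) = 1/2 sup_{B_eps(x)} u + 1/2 inf_{B_eps(x)} u
    on a grid over (0, 1), with payoffs F_left / F_right on boundary strips."""
    xs = np.arange(-eps, 1.0 + eps + h / 2, h)       # domain plus boundary strips
    u = np.zeros_like(xs)
    boundary = (xs < 1e-9) | (xs > 1.0 - 1e-9)
    u[xs < 1e-9] = F_left
    u[xs > 1.0 - 1e-9] = F_right
    r = int(round(eps / h))                          # radius of the eps-ball in grid points
    interior = np.where(~boundary)[0]
    for _ in range(iters):
        new = u.copy()
        for i in interior:
            window = u[max(0, i - r): i + r + 1]
            new[i] = 0.5 * window.max() + 0.5 * window.min()
        diff = np.max(np.abs(new - u))
        u = new
        if diff < 1e-12:
            break
    return xs, u

xs, u = tug_of_war_value(F_left=0.0, F_right=1.0)
mid = u[np.argmin(np.abs(xs - 0.5))]                 # continuous value at 1/2 is 1/2
```

The iteration is monotone starting from zero in the interior, which mirrors the fact that unfinished games are penalized for the maximizing player.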
As ε → 0 we have that u_ε ⇒ v uniformly, and this limit v (called the continuous value of the game) turns out to be the unique solution to (1.1). The fact that the limit is a solution to the equation can be intuitively explained as follows: for a smooth function φ with non-zero gradient, the maximum in B_ε(x) is attained at a point on the boundary of the ball ∂B_ε(x) that lies close to the direction of the gradient, that is, the location of the maximum is close to x + εDφ(x)/|Dφ(x)|. Analogously, the minimum is close to x − εDφ(x)/|Dφ(x)|, and hence the DPP, equation (1.2), for the smooth function φ reads as

φ(x) ≈ ½ φ(x + εDφ(x)/|Dφ(x)|) + ½ φ(x − εDφ(x)/|Dφ(x)|) + o(ε²),

that is a discretization of the second derivative in the direction of the gradient. This formal calculation can be fully justified when one works in the viscosity sense, see [8], [13], [14] and [28].
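This asymptotic expansion can be verified numerically (a sanity check of our own; the quadratic test function and the base point are arbitrary choices): for smooth φ with Dφ ≠ 0, the DPP residual ½ max_{B_ε} φ + ½ min_{B_ε} φ − φ(x) should behave like (ε²/2)∆∞φ(x)/|Dφ(x)|².

```python
import numpy as np

# quadratic test function phi(z) = 1/2 z.H z + b.z, so D^2 phi = H and
# third derivatives vanish (which makes the expansion very accurate)
H = np.array([[2.0, 1.0], [1.0, 6.0]])
b = np.array([2.0, 1.0])

def phi(z):
    return 0.5 * z @ H @ z + b @ z

p = np.array([0.3, 0.2])
grad = H @ p + b                       # D phi(p), nonzero here
inf_lap = grad @ H @ grad              # <D^2 phi Dphi, Dphi> at p

eps = 1e-2
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
ring = p + eps * np.stack([np.cos(theta), np.sin(theta)], axis=1)
vals = 0.5 * np.einsum('ij,jk,ik->i', ring, H, ring) + ring @ b
residual = 0.5 * vals.max() + 0.5 * vals.min() - phi(p)    # DPP residual
predicted = 0.5 * eps**2 * inf_lap / (grad @ grad)         # (eps^2/2) InfLap/|Dphi|^2
```

For small ε the extrema over the ball sit on its boundary, so sampling the circle suffices.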
Our main concern in this paper is to answer the following question: What are the PDEs that can be obtained as continuous values of Tug-of-War games when we replace the ball B_ε(x) with a more general family of sets A_ε(x)?
To answer this question we have to assume certain conditions on the family of sets A_ε(x) and the way that they behave as ε → 0 (see Section 2 for details). In our case the DPP reads as

u_ε(x) = ½ sup_{y∈A_ε(x)} u_ε(y) + ½ inf_{y∈A_ε(x)} u_ε(y).

Following our previous discussion for the case of balls, we can guess that the limit PDE as ε → 0 will depend on the point at which a smooth function φ with non-zero gradient attains its maximum (and its minimum) in A_ε(x). Our conditions on the sets A_ε(x) are such that there is a preferred direction where the maxima and the minima of a smooth function φ with non-zero gradient are closely located when ε → 0. This preferred direction depends on the spatial location and on the gradient of φ at that point. We call this direction J_x(Dφ(x)).
Our main result reads as follows: Under adequate conditions on the family of possible movements A_ε(x) (see Section 2.1), and assuming that the set Ω has boundary with strictly positive curvature, the values of the Tug-of-War game described above with the ball B_ε(x) replaced by A_ε(x) converge uniformly (along subsequences) to some continuous limit v that is a viscosity solution to

(1.3) −⟨D²v J_x(Dv); J_x(Dv)⟩(x) = 0 in Ω, v = F on ∂Ω.

Here, as we have mentioned, J_x(Dv) depends on x (this dependence comes from the dependence of the sets A_ε(x) on x) and on Dv(x). Note that in this limit equation we also have a second derivative of v, but now the direction is not given by Dv(x) but by the vector J_x(Dv(x)) that depends also on the spatial location x. In this sense we have found a natural way of introducing spatial dependence in the infinity Laplacian.
As a first example of possible sets A(x) we mention balls in ℓ^q with 1 < q < +∞, that is,

(1.4) A(x) = {y ∈ R^N : Σ_i |y_i − x_i|^q ≤ 1}.

In this case, given a direction v, the resulting J_x(v) does not depend on x and reads as

J(v)_i = −sign(v_i)|v_i|^{1/(q−1)} / ( Σ_j |v_j|^{q/(q−1)} )^{1/q}.

Then, the limit PDE that appears in (1.3) is −⟨D²v J(Dv); J(Dv)⟩ = 0 with this explicit J. This equation also appears as a limit of p-Laplacian type operators when p → ∞, see [6].
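The closed form for J in the ℓ^q case follows from a Lagrange-multiplier computation; as a sanity check (our own, in our notation), it can be compared against a brute-force search over a dense sample of the ℓ^q unit sphere in R²:

```python
import numpy as np

def J_lq(v, q):
    """Closed-form minimizer of z -> <v, z> over the unit l^q ball, 1 < q < oo:
    z_i = -sign(v_i)|v_i|^{1/(q-1)} / (sum_j |v_j|^{q/(q-1)})^{1/q}."""
    z = -np.sign(v) * np.abs(v) ** (1.0 / (q - 1.0))
    return z / (np.sum(np.abs(z) ** q) ** (1.0 / q))

rng = np.random.default_rng(0)
q = 3.0
v = rng.normal(size=2)
z_star = J_lq(v, q)

# dense sample of the l^q unit sphere, obtained by radially projecting the circle
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
pts /= (np.abs(pts) ** q).sum(axis=1, keepdims=True) ** (1.0 / q)
best = pts[np.argmin(pts @ v)]       # brute-force minimizer of <v, z>
```

Note also that J(−v) = −J(v), which is the symmetry used throughout the paper (the maximizer of the linear function sits at −J(v)).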
To obtain an equation with dependence on x we can just consider q = q(x) in the previous example, that is, we play in balls of ℓ^q with a different q at each point (we have to assume here that q(x) is continuous and bounded away from one and infinity to fulfill our hypotheses on the sets A_ε(x)).
As another example, we can also consider the sets A_ε(x) to be ellipses, as long as the eccentricity does not degenerate.
We can also consider balls in ℓ^q with q < 1, but in this case the sets A(x) given by (1.4) do not fulfill our hypotheses. Hence we treat this case separately in the last section of this paper. In this case we obtain as the limit PDE an equation in which the second derivative is taken in the coordinate directions that realize max_i |∂v/∂x_i|. This equation also appears as the limit of the values of the game when we play in lattices, that is, when we take A_ε(x) = {x ± εe_i, i = 1, ..., N}. This case also has to be treated separately, see Section 5.
This equation can also be obtained as a limit of p-Laplacian type equations as p → ∞ and is known as the pseudo infinity Laplacian in the literature, see [5], [15] and [30].
Let us point out that for the general case of J x depending on x in (1.3) it seems difficult to obtain existence of solutions as limits of p−Laplacian type problems. Hence, our results also provide a new existence result for (1.3).
The paper is organized as follows: In Section 2 we introduce the precise set of conditions that we assume on the family of possible movements, describe in some detail the Tug-of-War game, and prove some of its properties (among them, a comparison principle for values of the game); in Section 3 we show that, taking a subsequence if necessary, the values of the game converge uniformly to a continuous limit; in Section 4 we prove that a uniform limit of the values of the game is a viscosity solution of the limit PDE; and finally, in Section 5 we analyze briefly the case of possible movements in balls of ℓ^q with q < 1 or in a lattice.

Preliminaries
First, let us introduce the properties that we will assume for the family of sets {A(x)} that encode the possible movements of the game. From now on we will denote by J_x(v) the point where the linear function z ↦ ⟨v; z⟩ attains its minimum in the set A(x) − x. We require that:
(1) (Boundedness and symmetry) The sets A(x) are compact, symmetric with respect to x, and there exists L > 0 such that (2.1) diam(A(x)) ≤ L for every x ∈ Ω.
(2) (Unique minimizer on the boundary) For every direction v ∈ S^{N−1} the minimum of z ↦ ⟨v; z⟩ in A(x) − x is attained at a unique point J_x(v) ∈ ∂(A(x) − x) and, conversely, (2.2) every Z ∈ ∂(A(x) − x) is of the form Z = J_x(v) for some v ∈ S^{N−1}.
(3) (Inner continuity with respect to x) Given x_0 ∈ Ω, if x_n → x_0 as n → ∞, then (2.3) every z ∈ A(x_0) − x_0 is the limit of a sequence z_n ∈ A(x_n) − x_n.
(4) (Continuity with respect to x) Given x_0 ∈ Ω, if x_n → x_0 as n → ∞, then (2.4) every limit of a sequence z_n ∈ A(x_n) − x_n belongs to A(x_0) − x_0.
Note that, by the symmetry in (1), the maximum of the linear function z ↦ ⟨v; z⟩ in A(x) − x is attained at −J_x(v); therefore, the direction given by the minimum of the linear function is collinear with the direction of the maximum. We also assume that, given K > 0, there exist 1 > α > 0 and C > 0 independent of ε such that, if J_x(v) = z, then

(2.5) every y ∈ A(x) − x with ⟨v; y⟩ ≤ ⟨v; z⟩ + Kε satisfies |y − z| ≤ Cε^α,

for ε small enough. Note that this condition holds (with α = 1/2) if the sets A(x) − x have boundaries with uniformly positive curvature. Now, the set of possible movements at each location x ∈ Ω for ε small is given by the family {A_ε(x)} defined as A_ε(x) = x + ε(A(x) − x).
Remark 1. The last assumption (condition (2.5)) is used only in the proof of uniform convergence of the values of the game u_ε, see Section 3, while the rest of the assumptions are also needed in the proof of the fact that a uniform limit is a viscosity solution to the limit problem, see Section 4.
Remark 2. The positions of the game are not assumed to be reversible, that is, we can have y ∈ A_ε(x) but x ∉ A_ε(y).
As we have mentioned in the introduction, examples of sets A(x) that fulfill our conditions are the balls in ℓ^q given by (1.4). As another example, one can also consider a family of ellipses depending on x as A(x), as long as they do not degenerate (the eccentricity needs to be bounded away from zero). A non-smooth example can also be given in R².
Now, let us consider J : S^{N−1} → R^N and try to find conditions under which there is a set A (satisfying our previous assumptions) such that J(v) has the same direction as the point where the linear function ⟨v; z⟩ attains its minimum (or maximum) in the set A.
Let v(θ) = v(θ_1, ..., θ_{N−1}) ∈ S^{N−1} be a local parametrization of the sphere and consider J(v(θ)). We look for a surface (that is going to be the boundary of A) of the form a(θ)J(v(θ)). Since, for any θ, the minimum of z ↦ ⟨z; v(θ)⟩ over this surface has to be attained at the point a(θ)J(v(θ)), we need to impose the first-order condition ⟨∂_{θ_i}[a(θ)J(v(θ))]; v(θ)⟩ = 0 for every i; that is, using that ⟨∂_{θ_i}v(θ); v(θ)⟩ = 0 (since v(θ) ∈ S^{N−1}), we obtain ∂_{θ_i} log a(θ) = Θ_i(θ) for a vector field Θ computable from J and its derivatives. Hence, if we assume that Θ is conservative, we have a function b such that Db(θ) = Θ(θ), and then we can set a(θ) = e^{b(θ)} to obtain (2.7). Note that this function a(θ) can be computed in terms of J and its derivatives.
Therefore, we have obtained the following result.

Lemma 3. Given J : S^{N−1} → R^N as above (with the associated field Θ conservative), there exists a set A such that, for every v ∈ S^{N−1}, J(v) is collinear with the point where the linear function z ↦ ⟨v; z⟩ attains its minimum (or maximum) in the set A (and this set A verifies our previous conditions).
Remark that this result (look at the calculations made before) imposes restrictions on the functions J : S N −1 → R N that can appear in the limit equation (1.3).
As an example, let us consider the linear case in which J is given by J(v) = Mv. When M is assumed to be symmetric and positive definite, it has (in R²) two eigenvectors ξ_1, ξ_2, with associated eigenvalues λ_1, λ_2 > 0. In this case J is associated to a set A that verifies our conditions. We can take A to be the ellipse with principal directions the eigenvectors, given by A = {s_1ξ_1 + s_2ξ_2 : s_1² + (λ_1/λ_2)s_2² ≤ 1}. In fact, an easy calculation (in coordinates in the basis (ξ_1, ξ_2)) shows that the minimum of z ↦ ⟨v; z⟩ over A is attained at a point of the form −αMv for some α > 0. Note that the case M negative definite is analogous (λ_1, λ_2 < 0 in this case), and that M semidefinite (with some vanishing eigenvalue) gives a degenerate set that does not verify our conditions.

2.2. Description of the game. We follow [28]. Let Ω ⊂ R^N be a bounded smooth domain. For a fixed γ > 0, consider a strip around the boundary, Γ ⊂ R^N \ Ω, given by Γ = {x ∈ R^N \ Ω : dist(x, ∂Ω) ≤ γ}. Let F : Γ → R be a Lipschitz continuous function. A Tug-of-War is a two-person, zero-sum game, that is, two players are in contest and the total earnings of one are the losses of the other. Hence, one of them, say Player I, plays trying to maximize his expected outcome, while the other, say Player II, is trying to minimize Player I's outcome (or, since the game is zero-sum, to maximize his own outcome).
At an initial time a token is placed at a point x_0 ∈ Ω and we fix ε > 0 (with εL < γ, L being the bound for diam(A(x)) from hypothesis (2.1)). Then, a (fair) coin is tossed and the winner of the toss is allowed to move the game position to any x_1 ∈ A_ε(x_0). At each turn, the coin is tossed again, and the winner chooses a new game state x_k ∈ A_ε(x_{k−1}). Once the token has reached some x_τ ∈ Γ, the game ends and Player I earns F(x_τ) (while Player II earns −F(x_τ)). This is the reason why we will refer to F as the final payoff function. In more general models, one also considers a running payoff g(x) defined in Ω, which represents the reward (respectively, the cost) at each intermediate state x, and gives rise to nonhomogeneous problems. We will assume here that g ≡ 0. This procedure yields a sequence of game states x_0, x_1, x_2, ..., where every x_k except x_0 is a random variable, depending on the coin tosses and the strategies adopted by the players.
Note that the relevant values of F are those taken in the set

(2.8) Γ_ε = {x ∈ R^N \ Ω : dist(x, ∂Ω) ≤ εL},

since those are the points at which the game could end. As the diameters of the sets A_ε are uniformly bounded by εL, we have that Γ_ε ⊂ Γ for ε small. Now we want to give a precise definition of the value of the game. To this end we have to introduce some notation and put the game into its normal or strategic form (see [29] and [25]). The initial state x_0 ∈ Ω is known to both players (public knowledge). Each player i chooses an action a_0^i ∈ A_ε(x_0) which is announced to the other player; this defines an action profile a_0 = {a_0^1, a_0^2}. Then, the new state x_1 ∈ A_ε(x_0) is selected according to a probability distribution p(·|x_0, a_0) in Ω which, in our case, is given by the fair coin toss. At stage k, knowing the history h_k = (x_0, a_0, x_1, a_1, ..., a_{k−1}, x_k) (the sequence of states and actions up to that stage), each player i chooses an action a_k^i. If the game ends at time j < k, we set x_m = x_j and a_m = x_j for j ≤ m ≤ k. The current state x_k and the profile a_k = {a_k^1, a_k^2} determine the distribution p(·|x_k, a_k) (again given by the fair coin toss) of the new state x_{k+1}.
Denote by H_k the set of histories up to stage k, and by H_∞ = ∪_{k≥1} H_k the set of all histories. Notice that H_k, as a product space, has a measurable structure. The complete history space H_∞ is the set of plays defined as infinite sequences (x_0, a_0, ..., a_{k−1}, x_k, ...) endowed with the product topology. Then, the final payoff for Player I, i.e.
F, induces a Borel-measurable function on H_∞. A pure strategy S_i = {S_i^k}_k for Player i is a sequence of mappings from histories to actions, such that S_i^k is a Borel-measurable mapping that maps histories ending with x_k to elements of A_ε(x_k) (roughly speaking, at every stage the strategy gives the next movement for the player, provided he wins the coin toss, as a function of the current state and the past history). The initial state x_0 and a profile of strategies {S_I, S_II} define (by Kolmogorov's extension theorem) a unique probability P^{x_0}_{S_I,S_II} on the space of plays H_∞. We denote by E^{x_0}_{S_I,S_II} the corresponding expectation.
Then, if S_I and S_II denote the strategies adopted by Player I and Player II respectively, we define the expected payoff for Player I as

V_{x_0,I}(S_I, S_II) = E^{x_0}_{S_I,S_II}[F(x_τ)] if the game terminates a.s., and V_{x_0,I}(S_I, S_II) = −∞ otherwise.

Analogously, we define the expected payoff for Player II as

V_{x_0,II}(S_I, S_II) = E^{x_0}_{S_I,S_II}[F(x_τ)] if the game terminates a.s., and V_{x_0,II}(S_I, S_II) = +∞ otherwise.
Finally, we can define the ε-value of the game for Player I as

u_I^ε(x_0) = sup_{S_I} inf_{S_II} V_{x_0,I}(S_I, S_II),

while the ε-value of the game for Player II is defined as

u_II^ε(x_0) = inf_{S_II} sup_{S_I} V_{x_0,II}(S_I, S_II).

In some sense, u_I^ε(x_0) and u_II^ε(x_0) are the best outcomes that each player can guarantee when the ε-game starts at x_0. Notice that, as in [28], we penalize severely the games that never end.
In [28], see also [20], it is shown that, under very general hypotheses that are fulfilled in the present setting, u_I^ε = u_II^ε := u_ε. The function u_ε is called the value of the ε-Tug-of-War game.

2.3. Properties of the value of the game. Now let us state the Dynamic Programming Principle (DPP) applied to our game.

Lemma 4 (DPP). The value function for Player I satisfies

(2.9) u_I^ε(x) = ½ sup_{y∈A_ε(x)} u_I^ε(y) + ½ inf_{y∈A_ε(x)} u_I^ε(y), x ∈ Ω; u_I^ε(x) = F(x), x ∈ Γ.

The value function for Player II, u_II^ε, satisfies the same equation.
Formulas similar to (2.9) can be found in Chapter 7 of [20]. A detailed proof adapted to our case can also be found in [22].
Let us explain intuitively why the DPP holds by considering the expectation of the payoff at x. If Player I wins the fair coin toss (probability 1/2), she tries to move to a point maximizing the expectation, and if Player II wins, he tries to move to a point minimizing the expectation. The expectation at x can be obtained by summing up these two alternatives.
By adapting the martingale methods used in [28], we can show a comparison principle. Note that in the next results (Theorems 5, 6 and 7) we do not need the full set of conditions on the sets A(x) stated in Section 2.1.

Theorem 5. Let v_ε be a function that satisfies the DPP (2.9) with v_ε = F in Γ. Then u_I^ε ≤ v_ε; that is, u_I^ε is the smallest function satisfying the DPP with the given boundary values (and, analogously, u_II^ε is the largest).
Proof. The strategy of the proof is as follows: we show that by choosing a strategy according to the minimal values of v , Player II can make the process a supermartingale. The optional stopping theorem then implies that the expectation of the process under this strategy is bounded by v . Moreover, this process provides an upper bound for u I .
Fix η > 0. Player I follows any strategy and Player II follows a strategy S_II^0 such that at x_{k−1} ∈ Ω he chooses to step to a point that almost minimizes v_ε, that is, to a point x_k such that

v_ε(x_k) ≤ inf_{y∈A_ε(x_{k−1})} v_ε(y) + η2^{−k}.

We start from the point x_0. It follows that

E^{x_0}_{S_I,S_II^0}[v_ε(x_k) + η2^{−k} | x_0, ..., x_{k−1}] ≤ ½( inf_{y∈A_ε(x_{k−1})} v_ε(y) + η2^{−k} ) + ½ sup_{y∈A_ε(x_{k−1})} v_ε(y) + η2^{−k} ≤ v_ε(x_{k−1}) + η2^{−(k−1)},

where we have estimated the strategy of Player I by sup and used the fact that v_ε verifies the DPP. Thus M_k = v_ε(x_k) + η2^{−k} is a supermartingale, and

u_I^ε(x_0) ≤ lim inf_{k→∞} E^{x_0}_{S_I,S_II^0}[v_ε(x_{τ∧k}) + η2^{−(τ∧k)}] ≤ M_0 = v_ε(x_0) + η,

where τ ∧ k = min(τ, k), and we used Fatou's lemma as well as the optional stopping theorem for M_k. Since η was arbitrary, this proves the claim.
Similarly, we can prove that u_II^ε is the largest function that satisfies the DPP. To see this fact, Player II follows any strategy and Player I always chooses to step to a point where v_ε is almost maximized. This implies that v_ε(x_k) − η2^{−k} is a submartingale. The rest of the proof runs as before.
Next we show that the game has a value. This together with the previous comparison principle proves the uniqueness of functions that verify the DPP with given values in Γ.

Theorem 6.
Let Ω ⊂ R^N be a bounded open set, and F a given datum in Γ. Then u_I^ε = u_II^ε, that is, the game has a value.
Proof. Clearly, u_I^ε ≤ u_II^ε always holds, so we are left with the task of showing that u_II^ε ≤ u_I^ε. To see this we use the same method as in the proof of the previous theorem: Player II follows a strategy S_II^0 such that at x_{k−1} ∈ Ω he always chooses to step to a point that almost minimizes u_I^ε, that is, to a point x_k such that

u_I^ε(x_k) ≤ inf_{y∈A_ε(x_{k−1})} u_I^ε(y) + η2^{−k},

for a fixed η > 0. We start from the point x_0. It follows from the choice of strategies and the dynamic programming principle for u_I^ε that M_k = u_I^ε(x_k) + η2^{−k} is a supermartingale.
We get by Fatou's lemma and the optional stopping theorem that

u_II^ε(x_0) ≤ sup_{S_I} E^{x_0}_{S_I,S_II^0}[F(x_τ)] ≤ lim inf_{k→∞} E^{x_0}_{S_I,S_II^0}[u_I^ε(x_{τ∧k}) + η2^{−(τ∧k)}] ≤ u_I^ε(x_0) + η.

Similarly to the previous theorem, we also used the fact that the game ends almost surely. Since η > 0 is arbitrary, this completes the proof.
Theorems 5 and 6 imply that with a fixed boundary data there exists a unique solution to the DPP.

Uniform convergence
Our main goal in this section is to prove that, extracting a subsequence if necessary, we have uniform convergence of u_ε to a continuous limit v as ε → 0.
First, let us prove uniform convergence of the values of the game when there is a solution to the limit problem that is C³ in a slightly bigger domain Ω′ ⊃ Ω with Dv ≠ 0 in Ω′. To this end we need a lemma that says that v verifies the DPP except for an error term that can be controlled in terms of ε. Here we use condition (2.5) on the sets A(x).
Lemma 10. Let v be a C³ function in Ω′ ⊃ Ω with Dv ≠ 0 in Ω′. Then

½ max_{A_ε(x)} v + ½ min_{A_ε(x)} v − v(x) = (ε²/2)⟨D²v(x)J_x(Dv(x)); J_x(Dv(x))⟩ + O(ε^{2+α}), x ∈ Ω.

Here the error term depends on bounds for v in C³ and on the family A(x) through the hypothesis (2.5).
Proof. First, we obtain a lower bound for the error. Take x_M^ε a point where v attains its maximum in A_ε(x), and let x̃_M^ε be its symmetric point with respect to x, that is,

(3.2) x̃_M^ε = 2x − x_M^ε,

which belongs to A_ε(x) by symmetry. Now, a simple Taylor expansion gives (recall that we assume that v ∈ C³)

v(x_M^ε) = v(x) + ⟨Dv(x); x_M^ε − x⟩ + ½⟨D²v(x)(x_M^ε − x); x_M^ε − x⟩ + O(ε³),
v(x̃_M^ε) = v(x) − ⟨Dv(x); x_M^ε − x⟩ + ½⟨D²v(x)(x_M^ε − x); x_M^ε − x⟩ + O(ε³).

Adding the two previous expansions and using (3.2) we get

½ max_{A_ε(x)} v + ½ min_{A_ε(x)} v − v(x) ≤ ½( v(x_M^ε) + v(x̃_M^ε) ) − v(x) = ½⟨D²v(x)(x_M^ε − x); x_M^ε − x⟩ + O(ε³).

Hence, we obtain the desired bound once we prove the claim that x_M^ε − x = −εJ_x(Dv(x)) + O(ε^{1+α}). Assuming the claim, we obtain

½ max_{A_ε(x)} v + ½ min_{A_ε(x)} v − v(x) ≤ (ε²/2)⟨D²v(x)J_x(Dv(x)); J_x(Dv(x))⟩ + O(ε^{2+α}).

Therefore, we need to prove the claim. Note that J_x(Dv(x)) is the point at which z ↦ ⟨Dv(x); z⟩ attains its minimum in A(x) − x, and hence the maximum of this function is located at −J_x(Dv(x)).
As v is C³, there exists C_1 such that |v(x + εz) − v(x) − ε⟨Dv(x); z⟩| ≤ C_1ε² for every z ∈ A(x) − x. Now, we observe that, as Dv ≠ 0 in Ω′, there exists a constant c_2 such that |Dv| ≥ c_2 in Ω. Hence, taking K = 2C_1, we have that (x_M^ε − x)/ε maximizes z ↦ ⟨Dv(x); z⟩ in A(x) − x up to an error Kε, and the claim follows using condition (2.5).
The proof of the upper bound for the error is analogous.

Theorem 11. Let v be a solution to the limit problem that is C³ in a domain Ω′ ⊃ Ω with Dv ≠ 0 in Ω′. Then the values of the game u_ε converge to v uniformly in Ω.

Proof. The proof uses some ideas from the proof of Theorem 2.4 in [29], see also [23]. From our previous lemma, and since v solves the limit equation, we have

(3.6) v(x) = ½ max_{A_ε(x)} v + ½ min_{A_ε(x)} v + O(ε^{2+α}),

with a uniform error term for x ∈ Ω as ε → 0. The error term is uniform due to our assumptions on v.
Now, Player II follows a strategy S_II^0 such that at a point x_{k−1} he chooses to step to a point that almost minimizes v, so that M_k = v(x_k) + Cε^{2+α}k is a supermartingale. Indeed,

(3.7) E^{x_0}_{S_I,S_II^0}[v(x_k) | x_0, ..., x_{k−1}] ≤ ½ inf_{A_ε(x_{k−1})} v + ½ sup_{A_ε(x_{k−1})} v ≤ v(x_{k−1}) + Cε^{2+α}.

The first inequality follows from the choice of the strategy and the second from (3.6). Now we can estimate u_II^ε(x_0) by using Fatou's lemma and the optional stopping theorem for supermartingales. We have

u_II^ε(x_0) ≤ v(x_0) + Cε^{2+α}E^{x_0}[τ].

This inequality and the analogous argument for Player I imply, for u_ε = u_I^ε = u_II^ε,

(3.8) |u_ε(x_0) − v(x_0)| ≤ Cε^{2+α}E^{x_0}[τ].

Letting ε → 0, the proof is completed if we prove that there exists C such that E^{x_0}[τ] ≤ Cε^{−2}. To establish this bound, we show that M̃_k = −v(x_k)² + C_2ε²k is a supermartingale for small enough ε > 0. If Player II wins the toss, we have v(x_k) − v(x_{k−1}) ≤ −C_3ε because Dv ≠ 0, as we can choose C_3 depending on min_{x∈Ω}|Dv| and on the sets A(x). By subtracting a constant if necessary, we may assume that v < 0, so in this case

(3.9) v(x_k)² ≥ v(x_{k−1})² + C_3²ε².

Moreover, v(x_{k−1}) is determined by the point x_{k−1}, and thus we can estimate the term corresponding to Player I winning the toss by means of (3.6), similarly to estimate (3.7). This together with (3.8) and (3.9) implies

E^{x_0}_{S_I,S_II^0}[v(x_k)² | x_0, ..., x_{k−1}] ≥ v(x_{k−1})² + C_2ε².

This holds if we choose, for example, C_2 such that C_3² ≥ 2C_2 and take ε small enough. Thus, M̃_k is a supermartingale. According to the optional stopping theorem for supermartingales, E[M̃_{τ∧k}] ≤ M̃_0, that is, C_2ε²E[τ ∧ k] ≤ E[v(x_{τ∧k})²] − v(x_0)² ≤ sup_Ω v². The result follows by passing to the limit as k → +∞, since v is bounded in Ω.
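The ε^{−2} bound on the expected duration can be illustrated by a toy Monte-Carlo computation of our own (not from the paper): in dimension one, when both players simply pull the token by ε towards "their" endpoint, the position performs a ±ε random walk, whose exit time from (0, 1) started at 1/2 has expectation (1/4)ε^{−2}.

```python
import numpy as np

def expected_duration(eps, trials=2000, seed=1):
    """Monte-Carlo estimate of the duration of a 1D eps-Tug-of-War on (0, 1),
    started at 1/2, when each winner of the toss pulls the token by eps
    towards his endpoint: a symmetric +-eps random walk."""
    rng = np.random.default_rng(seed)
    n = int(round(1.0 / eps))            # number of eps-steps across (0, 1)
    total = 0
    for _ in range(trials):
        k, steps = n // 2, 0             # token starts at the midpoint
        while 0 < k < n:
            k += 1 if rng.random() < 0.5 else -1
            steps += 1
        total += steps
    return total / trials

d1 = expected_duration(0.05)    # theory: 0.25 / 0.05^2  = 100 steps
d2 = expected_duration(0.025)   # theory: 0.25 / 0.025^2 = 400 steps
```

Halving ε roughly quadruples the expected number of turns, matching the ε^{−2} scaling used in the proof.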
Above we have obtained the convergence result under the extra assumptions that v ∈ C³ and that Dv ≠ 0. Now we give a proof of the uniform convergence result without these hypotheses, but assuming that Ω has boundary with strictly positive curvature, Theorem 9. The proof is based on a variant of the classical Arzelà-Ascoli compactness lemma, Lemma 12 below, whose proof is contained in [23]; we include the details here for the sake of completeness. Note that the functions u_ε are not continuous in general, see [23]. Nonetheless, the jumps can be controlled and we will show that they are asymptotically uniformly continuous.
Lemma 12. Let {u_ε : Ω → R, δ ≥ ε > 0} be a set of functions such that
(1) there exists C > 0 so that |u_ε(x)| < C for every δ ≥ ε > 0 and every x ∈ Ω,
(2) given η > 0 there are constants r_0 and ε_0 such that for every ε < ε_0 and any x, y ∈ Ω with |x − y| < r_0 it holds |u_ε(x) − u_ε(y)| < η.
Then, there exists a uniformly continuous function v : Ω → R and a subsequence, still denoted by {u_ε}, such that u_ε → v uniformly in Ω as ε → 0.
Proof. First, we find a candidate to be the uniform limit v. Let X ⊂ Ω be a dense countable set. Because the functions are uniformly bounded, a diagonal procedure provides a subsequence, still denoted by {u_ε}, that converges for all x ∈ X. Let v(x) denote this limit. Note that at this point v is defined only for x ∈ X.
By assumption, given η > 0, there exists r_0 such that for any x, y ∈ X with |x − y| < r_0 it holds |v(x) − v(y)| < η/3. Hence, we can extend v to the whole Ω continuously by setting v(z) = lim_{X∋x→z} v(x). Our next step is to prove that {u_ε} converges to v uniformly. We choose a finite covering Ω ⊂ ∪_{i=1}^{N} B_r(x_i), with x_i ∈ X and 2r < r_0, and ε_0 > 0 such that

|u_ε(x) − u_ε(x_i)| < η/3 for every x ∈ B_r(x_i) and ε < ε_0,

as well as

|u_ε(x_i) − v(x_i)| < η/3 for every x_i and ε < ε_0.

To obtain the last inequality, we used the fact that N < ∞. Thus, for any x ∈ Ω we can find x_i so that x ∈ B_r(x_i) and

|u_ε(x) − v(x)| ≤ |u_ε(x) − u_ε(x_i)| + |u_ε(x_i) − v(x_i)| + |v(x_i) − v(x)| < η,

for every ε < ε_0.

Next we show that, for fixed F, the family of values of the game, with ε as the parameter, satisfies the conditions of Lemma 12. First observe that the values of the game are bounded, since min_Γ F ≤ F(x_τ) ≤ max_Γ F for any x_τ ∈ Γ. This implies the following result:

Lemma 13. The value of the game u_ε with datum F in Γ satisfies min_Γ F ≤ u_ε(x) ≤ max_Γ F.

Next, to show that the values of the game are asymptotically uniformly continuous, we assume that ∂Ω is C² with strictly positive curvature. The proof of this fact applies Theorem 5 and the fact that linear functions are solutions to the limit problem (these are trivial solutions since all the second derivatives are identically zero). We also use Theorem 11 for these linear solutions, which satisfy the conditions of the theorem. The proof follows closely ideas from [29] and [22].

Lemma 14. The family {u_ε} satisfies condition (2) in Lemma 12; that is, given η > 0 there are constants r_0 and ε_0 such that for every ε < ε_0 and any x, y ∈ Ω with |x − y| < r_0 it holds |u_ε(x) − u_ε(y)| < η.

Proof. Observe that the case x, y ∈ Γ follows from the Lipschitz continuity of F, and thus we can concentrate on the cases x ∈ Ω, y ∈ Γ, and x, y ∈ Ω.
We divide the proof into three steps. First, for x ∈ Ω, y ∈ Γ, we employ comparison with a function (the value of the game played in the region between two parallel hyperplanes) that converges to a linear function (thanks to our previous result on convergence for C³ solutions with non-vanishing gradient). It follows that the value of the game with the datum F is bounded close to y ∈ Γ by a slightly smaller constant than the maximum of the values of F. Next, we iterate this argument to show that the value of the game is close to the boundary values near y ∈ Γ when ε is small. Finally, we extend this result to the case x, y ∈ Ω by translation, taking the boundary values from the strip already controlled in the previous steps.
To start, we choose a hyperplane π_0 that is tangent to ∂Ω at some point y_0 ∈ ∂Ω such that y lies in the exterior normal direction to ∂Ω at y_0. By a translation and a rotation of coordinates we can assume that y_0 = 0, π_0 = {x_N = 0} and, moreover, using that ∂Ω has positive curvature, that there are a constant K and a neighbourhood of y_0 in which ∂Ω stays on one side of the paraboloid x_N = K|x′|². For a point w = (w_1, ..., w_{N−1}, w_N) ∈ Γ_ε, there exists z ∈ ∂Ω such that |w − z| ≤ εL (recall (2.8)), and this allows us to control the values of F near y_0 in terms of the heights −8δ ≤ z_N ≤ 8δ, for every ε small enough. Therefore, we conclude that, for ε small, u_ε is bounded above by the value of the game played between two parallel hyperplanes. This problem has an explicit linear solution of the form v(x) = ax_N + b. First, assume that a ≠ 0, extend the solution to the slightly larger set {L > x_N > −4δ − εL} (here, as before, L stands for a uniform bound for diam(A(x)), (2.1)), and use the same notation for the extension. Now, because Dv ≠ 0, Theorem 11 implies convergence for the values of the game, u_ε = v + o(1), where o(1) → 0 uniformly as ε → 0. For ε small enough, the comparison principle, Theorem 5, implies the corresponding bound for u_ε in the strip. From the uniform convergence and the fact that v is linear in x_N, (3.11), we obtain a bound for u_ε in {x_N < 4δ} ∩ Ω by a convex combination, with weight 0 < θ < 1 independent of δ and ε, of the values of F near y_0 and max_Γ F. Now, we iterate this bound with an analogous construction in the set {x_N < 4δ/4}. Continuing in this way, we see that, for small enough ε > 0, after k iterations u_ε is controlled near y_0 by sup_{|y′−y_0|<δ} F(y′) up to an error of order θ^k. This gives an upper bound for u_ε.
In the case a = 0 the upper bound is easier, since in this case the comparison function is constant. The argument needed to obtain an analogous lower bound is similar.
Now we observe that, using that F is Lipschitz, we have |F(y′) − F(y_0)| ≤ Cδ for |y′ − y_0| < δ.
Therefore, we conclude that, given η > 0, we can choose small enough δ > 0, large enough k, and small enough ε > 0 so that for x ∈ Ω, y ∈ Γ with |x − y| < δ/4^k it holds

(3.12) |u_ε(x) − F(y)| < η.

This shows that the second condition in Lemma 12 holds when x ∈ Ω and y ∈ Γ.
Next we extend the estimate to the interior of the domain. First choose small enough δ and large enough k so that

(3.13) |F(x′) − F(y′)| < η whenever |x′ − y′| < δ/4^k,

and ε > 0 small enough so that (3.12) holds.
Next we consider a slightly smaller domain Ω̃ = {z ∈ Ω : dist(z, ∂Ω) > δ/4^{k+2}} with a boundary strip Γ̃ ⊂ Ω \ Ω̃. Suppose that x, y ∈ Ω with |x − y| < δ/4^{k+2}. First, if x, y ∈ Γ̃, then we can estimate |u_ε(x) − u_ε(y)| by comparing the values at x and y to the nearby boundary values and using (3.12). Finally, let x, y ∈ Ω̃ and define F̃ on Γ̃ by translation, F̃(z) = u_ε(z + x − y) + 2η. We have F̃(z) ≥ u_ε(z) in Γ̃ by (3.12), (3.13), and (3.14). Let ũ_ε be the value of the game in Ω̃ with the boundary values F̃ in Γ̃. By the comparison principle and uniqueness, we deduce u_ε(y) ≤ ũ_ε(y) ≤ u_ε(x) + 2η, which gives the desired estimate. The lower bound follows by a similar argument.
The previous lemmas give the proof of Theorem 9.
Proof of Theorem 9. Lemma 13 and Lemma 14 show that the family {u_ε} verifies the hypotheses of Lemma 12, and hence we have convergence to a uniformly continuous limit v along a subsequence of u_ε.
Remark 15. The hypothesis on the curvature of the boundary of the domain could be avoided if we had, for each point y ∈ Γ, a C³ solution of the limit equation that behaves like a cone centered at y; see [23] for the details. This can be done, for example, when one plays using balls in ℓ^q (1 < q < ∞ fixed) in R². Note that in the examples where such cones are available the limit equation involves a function J that does not depend on x. For the general case in which J_x depends on x it seems difficult to obtain the existence of a C³ solution like a cone centered at y ∈ Γ. Hence we used linear functions instead of cones to obtain the estimates, but this involves an extra requirement on the set Ω (the boundary is assumed to have strictly positive curvature).

Viscosity solutions to the limit PDE
4.1. Viscosity solutions. Recall that, associated with the family of sets A(x), we have functions J_x that, given a direction v, give the point in A(x) − x where the function z ↦ ⟨v; z⟩ attains its minimum (see Section 2.1).
As in [15], consider

(4.1) G(M, ξ, x) = −⟨M · J_x(ξ); J_x(ξ)⟩,

defined for symmetric matrices M, vectors ξ ≠ 0 and points x ∈ Ω, and denote by G* and G_* the upper and lower semicontinuous envelopes of G, defined by

G*(M, ξ, x) = lim sup_{(M_n,ξ_n,x_n)→(M,ξ,x)} G(M_n, ξ_n, x_n), G_*(M, ξ, x) = lim inf_{(M_n,ξ_n,x_n)→(M,ξ,x)} G(M_n, ξ_n, x_n).

The next result characterizes the upper and lower envelopes of the function G given in (4.1).
Proof. Let us begin with G*. Let M_n → M, ξ_n → ξ and x_n → x. We want to compute G*(M, ξ, x) = lim sup_n G(M_n, ξ_n, x_n).
Assume first that ξ ≠ 0, and therefore ξ_n ≠ 0 for n large. By the definition of G we have G(M_n, ξ_n, x_n) = −⟨M_n · J_{x_n}(ξ_n); J_{x_n}(ξ_n)⟩.
Let us prove that J_{x_n}(ξ_n) → J_x(ξ), where J_{x_n}(ξ_n) is the point where the function f_n(z) = ⟨ξ_n; z⟩ attains its minimum in the set A(x_n) − x_n, and J_x(ξ) is the point where the function f(z) = ⟨ξ; z⟩ attains its minimum in A(x) − x. Now we observe that there exists a ball B_L(0) such that, for every large n, J_{x_n}(ξ_n) ∈ B_L(0). Then we can extract a subsequence J_{x_{n_j}}(ξ_{n_j}) ∈ A(x_{n_j}) − x_{n_j} such that J_{x_{n_j}}(ξ_{n_j}) → v with v ∈ A(x) − x, by the property (2.4) that we assume on the family of sets A(x). Then ⟨ξ; v⟩ ≥ ⟨ξ; J_x(ξ)⟩. Assume that ⟨ξ; v⟩ > ⟨ξ; J_x(ξ)⟩. By property (2.3), there exists a sequence z_{n_j} ∈ A(x_{n_j}) − x_{n_j} such that z_{n_j} → J_x(ξ). Hence we have ⟨ξ_{n_j}; z_{n_j}⟩ → ⟨ξ; J_x(ξ)⟩, and then, for j large enough, it holds ⟨ξ_{n_j}; J_{x_{n_j}}(ξ_{n_j})⟩ > ⟨ξ_{n_j}; z_{n_j}⟩. But as J_{x_{n_j}}(ξ_{n_j}) is the point where z ↦ ⟨ξ_{n_j}; z⟩ attains its minimum in the set A(x_{n_j}) − x_{n_j} and z_{n_j} ∈ A(x_{n_j}) − x_{n_j}, the previous inequality yields a contradiction. Hence ⟨ξ; v⟩ = ⟨ξ; J_x(ξ)⟩ and then, by the uniqueness of the minimum, v = J_x(ξ), so that G*(M, ξ, x) = G(M, ξ, x) for ξ ≠ 0.

Assume next that ξ = 0. If ξ_n ≠ 0 for n large, then G(M_n, ξ_n, x_n) = −⟨M_n · J_{x_n}(ξ_n); J_{x_n}(ξ_n)⟩ and its lim sup_n is of the form −⟨M · z; z⟩ for some z ∈ A(x) − x (using condition (2.4) on the sets A(x)). Then we get

G*(M, 0, x) ≤ max_{z∈A(x)−x} −⟨M · z; z⟩.

To see that equality holds, we observe that there exists Z ∈ A(x) − x such that max_{z∈A(x)−x} −⟨M · z; z⟩ = −⟨M · Z; Z⟩. If Z = 0 we take the sequence M_n = M, ξ_n = 0, x_n = x and we obtain G*(M, 0, x) ≥ 0 = −⟨M · Z; Z⟩. If Z ≠ 0, then we can assume that Z ∈ ∂(A(x) − x) and, by condition (2.2), there exists a direction v ∈ S^{N−1} such that J_x(v) = Z. Note that also J_x(av) = Z for any a > 0. In this case we take M_n = M, ξ_n = (1/n)v, x_n = x, and we get G*(M, 0, x) ≥ lim_n −⟨M · J_x(ξ_n); J_x(ξ_n)⟩ = −⟨M · Z; Z⟩. The characterization of G_* is analogous.

Now we can introduce the definition of a viscosity solution to our PDE, see [10] and [15].
Definition 17. A function v ∈ C(Ω) is a viscosity solution to the problem if v(x) = F(x) for every x ∈ ∂Ω and the following two conditions hold: (i) for every φ ∈ C³(Ω) such that v − φ has a strict minimum at a point x₀ ∈ Ω we have G*(D²φ(x₀), Dφ(x₀), x₀) ≥ 0; (ii) for every ψ ∈ C³(Ω) such that v − ψ has a strict maximum at a point x₀ ∈ Ω we have G_*(D²ψ(x₀), Dψ(x₀), x₀) ≤ 0. Now we are ready to prove that a uniform limit of the values u_ε of the game is a viscosity solution of the limit PDE.
in the sense of Definition 17, where G is given by (4.1).
Proof. From the uniform convergence and the fact that u_ε = F on ∂Ω, we obtain that v = F on ∂Ω.
Now, let us begin with the proof of condition (i) in Definition 17. Let φ be such that v − φ has a strict minimum at x₀ ∈ Ω. Assume first that Dφ(x₀) ≠ 0. By the uniform convergence of u_ε to v there exists a sequence x_ε such that x_ε → x₀ and u_ε(z) − φ(z) ≥ u_ε(x_ε) − φ(x_ε) − ε³ for every z; that is, (4.5) u_ε(z) ≥ u_ε(x_ε) + φ(z) − φ(x_ε) − ε³. Now we use the fact that the values of the game satisfy the Dynamic Programming Principle, that is, u_ε verifies u_ε(x) = ½ max_{y ∈ A_ε(x)} u_ε(y) + ½ min_{y ∈ A_ε(x)} u_ε(y). Hence, by (4.5), we get (4.6) 0 ≥ ½ max_{y ∈ A_ε(x_ε)} φ(y) + ½ min_{y ∈ A_ε(x_ε)} φ(y) − φ(x_ε) − ε³. Now, let x_ε^m be a point where φ attains its minimum in A_ε(x_ε), that is, (4.7) φ(x_ε^m) = min_{y ∈ A_ε(x_ε)} φ(y). Let x̂_ε^m be the symmetric point of x_ε^m with respect to x_ε, that is, given by (4.8) x̂_ε^m = 2x_ε − x_ε^m. By the symmetry of A_ε(x_ε), we have that x̂_ε^m ∈ A_ε(x_ε) and hence (4.9) max_{y ∈ A_ε(x_ε)} φ(y) ≥ φ(x̂_ε^m). From (4.7) and (4.9) in (4.6) it follows that (4.10) 0 ≥ ½ φ(x̂_ε^m) + ½ φ(x_ε^m) − φ(x_ε) − ε³. As φ ∈ C³(Ω), we obtain the following Taylor expansions: φ(x_ε^m) = φ(x_ε) + ⟨Dφ(x_ε); x_ε^m − x_ε⟩ + ½⟨D²φ(x_ε)·(x_ε^m − x_ε); x_ε^m − x_ε⟩ + O(ε³), and the analogous one at x̂_ε^m. Adding the two previous expansions and using (4.8) we obtain φ(x_ε^m) + φ(x̂_ε^m) − 2φ(x_ε) = ⟨D²φ(x_ε)·(x_ε^m − x_ε); x_ε^m − x_ε⟩ + O(ε³). Hence, (4.10) becomes, after dividing by ε², −⟨D²φ(x_ε)·z_ε; z_ε⟩ ≥ −Cε, with z_ε = (x_ε^m − x_ε)/ε. Now we claim that z_ε → J_{x₀}(Dφ(x₀)) when Dφ(x₀) ≠ 0. From this claim, taking the limit ε → 0, we get −⟨D²φ(x₀)·J_{x₀}(Dφ(x₀)); J_{x₀}(Dφ(x₀))⟩ ≥ 0, and we have proved condition (i) in this case.
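The key cancellation in the argument, namely that adding the Taylor expansions at the two symmetric points kills the first-order term and leaves a pure second-order quotient, can be checked numerically. A minimal sketch (the test function φ and the base point are arbitrary illustrative choices, not objects from the proof):

```python
import numpy as np

# Numerical check of the symmetry + Taylor step: for smooth phi,
# phi(x+h) + phi(x-h) - 2 phi(x) = <D^2 phi(x) h; h> + higher order terms.
def phi(p):  # any C^3 test function (an illustrative assumption)
    x, y = p
    return np.sin(x) * np.exp(y) + x**3 * y

def hess(p):  # its Hessian, computed by hand
    x, y = p
    return np.array([[-np.sin(x)*np.exp(y) + 6*x*y, np.cos(x)*np.exp(y) + 3*x**2],
                     [np.cos(x)*np.exp(y) + 3*x**2,  np.sin(x)*np.exp(y)]])

x0 = np.array([0.3, -0.2])
h = 1e-3 * np.array([0.6, -0.8])   # plays the role of x_eps^m - x_eps

second_diff = phi(x0 + h) + phi(x0 - h) - 2 * phi(x0)
quad = h @ hess(x0) @ h
print(abs(second_diff - quad))   # tiny: the odd-order terms cancel by symmetry
```

By symmetry the error is in fact of order |h|⁴ here, even better than the O(ε³) used in the proof.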
To prove the claim, consider the function Φ_ε(z) = (φ(x_ε + εz) − φ(x_ε))/ε. We have that A(x_ε) − x_ε ⊂ B_L(0) for every ε small enough (recall that L is a bound for the diameter of the sets A). Note that Φ_ε is well defined in (1/ε)(Ω − x_ε) ⊇ B_L(0) for ε small enough. The point where Φ_ε attains its minimum in A(x_ε) − x_ε is given by z_ε^m = (x_ε^m − x_ε)/ε. Hence, to prove the claim, we have to show that z_ε^m → J_{x₀}(Dφ(x₀)). As φ is smooth, we have that Φ_ε(z) = ⟨Dφ(x_ε); z⟩ + O(ε), and then Φ_ε converges uniformly to z ↦ ⟨Dφ(x₀); z⟩ in the ball B_L(0) as ε → 0. Let us call this limit function Φ₀(z).
Since z_ε^m ∈ A(x_ε) − x_ε ⊂ B_L(0), there exists a subsequence z_{ε_j}^m → z̃ with z̃ ∈ B_L(0) and, as z_{ε_j}^m ∈ A(x_{ε_j}) − x_{ε_j}, by the property (2.4) of the family A(x) we get z̃ ∈ A(x₀) − x₀. By the uniform convergence we have Φ_{ε_j}(z_{ε_j}^m) → Φ₀(z̃). Assume, for contradiction, that Φ₀(z̃) > Φ₀(J_{x₀}(Dφ(x₀))). By property (2.3), there exist z_{ε_j} ∈ A(x_{ε_j}) − x_{ε_j} such that z_{ε_j} converge to J_{x₀}(Dφ(x₀)), and hence Φ_{ε_j}(z_{ε_j}) converge to Φ₀(J_{x₀}(Dφ(x₀))). Then, for j large enough, Φ_{ε_j}(z_{ε_j}^m) > Φ_{ε_j}(z_{ε_j}), which is a contradiction with the fact that z_{ε_j}^m is a point where Φ_{ε_j} attains its minimum in A(x_{ε_j}) − x_{ε_j}. Therefore, we have ⟨Dφ(x₀); z̃⟩ = min_{z ∈ A(x₀)−x₀} ⟨Dφ(x₀); z⟩.
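The mechanism used here, that uniform convergence of functions forces convergence of their minimizers when the limit has a unique minimum, can be seen in a one-dimensional toy example (the functions below are illustrative choices, not the Φ_ε of the proof):

```python
import numpy as np

# Uniform O(eps) perturbations of a function with a unique minimizer:
# the minimizers of the perturbed functions converge to that of the limit.
zs = np.linspace(-1.0, 1.0, 20001)
Phi0 = lambda z: (z - 0.25) ** 2          # limit: unique minimum at z = 0.25

mins = []
for eps in [0.1, 0.01, 0.001]:
    Phi_eps = Phi0(zs) + eps * np.sin(3.0 * zs)   # |Phi_eps - Phi0| <= eps
    mins.append(zs[np.argmin(Phi_eps)])
print(mins)   # approaches 0.25 as eps -> 0
```

Uniqueness of the limiting minimizer is essential: with two tied minimizers the perturbed minimizers may oscillate between them, which is exactly the degenerate situation analyzed in Section 5.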
By Property (3) of the family of sets A(x) (uniqueness of the minimum of a nontrivial linear function in A(x) − x), we have z̃ = J_{x₀}(Dφ(x₀)). Since the limit does not depend on the subsequence, the whole family z_ε^m converges, and this proves the claim.
Now, for the case Dφ(x₀) = 0, we argue as above until we reach (4.10), and we just observe that, taking a subsequence if necessary and using condition (2.4), z_{ε_j}^m → z̃ for some z̃ ∈ A(x₀) − x₀. Hence we obtain −⟨D²φ(x₀)·z̃; z̃⟩ ≥ 0, and therefore G*(D²φ(x₀), 0, x₀) = max_{z ∈ A(x₀)−x₀} −⟨D²φ(x₀)·z; z⟩ ≥ 0. That is, condition (i) holds also in this case.
The proof of the fact that v verifies condition (ii) in Definition 17 is analogous, and we omit the details. 4.2. Uniqueness of the limit. When we consider the sets A_ε(x) as balls in ℓ^q with q > 1 independent of x (recall the introduction), there is a unique solution to the limit PDE; see [3]. Therefore, in this case the whole family u_ε converges uniformly to the solution of the limit PDE.
In the general case, when we have spatial dependence of J on x, the results in [3] are not directly applicable, since only translation-invariant equations are considered in the above mentioned reference. The general uniqueness question (as well as the regularity issue; see the explicit example below) for solutions to the limit PDE remains a challenging open problem.

An explicit solution.
To end this section, let us present an explicit example in R² of a solution to the equation −⟨D²v · J_x(Dv); J_x(Dv)⟩(x) = 0 when J does not depend on x and is associated to the set A = {(y₁, y₂) ∈ R² : (y₁)² − 1 ≤ y₂ ≤ 1 − (y₁)²}. Let v(x₁, x₂) = 2x₁ + f(x₂).
Then, if f is such that −2 ≤ f′(s) ≤ 2, we have J(Dv) = J((2, f′(s))) = (1, 0), and hence −⟨D²v · J(Dv); J(Dv)⟩ = −∂²v/∂x₁² = 0, since v is affine in x₁. Note that we can even consider a non-differentiable f, for example f(s) = |s|, and obtain that v(x₁, x₂) = 2x₁ + |x₂| is a viscosity solution to our equation. The fact that this solution is not C¹ has to be contrasted with the regularity results for solutions to the infinity Laplacian; see [31] and also [11] and [12]. Note that solutions to the pseudo-infinity Laplacian are not necessarily C¹; see [30]. At this point one can consider the following question: is there a connection between the C¹ regularity of solutions to the equation −⟨D²v · J_x(Dv); J_x(Dv)⟩(x) = 0 and the smoothness of the sets A(x) associated to J_x? We leave this open.
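A quick numerical sanity check of this example is possible. The sketch below discretizes the set A and computes the extremizer of z ↦ ⟨ξ; z⟩ for ξ = Dv = (2, f′); whether the relevant corner is (1, 0) or (−1, 0) depends on the min/max sign convention for J (the code below uses the minimum), but either way the corner lies on the x₁-axis, so ⟨D²v·J(Dv); J(Dv)⟩ = ∂²v/∂x₁² = 0 for v = 2x₁ + f(x₂):

```python
import numpy as np

# Discretize A = {(y1, y2) : y1^2 - 1 <= y2 <= 1 - y1^2}.
pts = []
for t in np.linspace(-1.0, 1.0, 401):
    for s in np.linspace(t**2 - 1.0, 1.0 - t**2, 101):
        pts.append((t, s))
pts = np.array(pts)

def J(xi):
    """Extremizer of z -> <xi; z> over the discretized A (minimum convention)."""
    return pts[np.argmin(pts @ xi)]

# For Dv = (2, a) with moderate |a| the extremizer is a corner of A on the
# x1-axis; its second coordinate vanishes, so the second derivative of
# v = 2*x1 + f(x2) in that direction is v_{x1 x1} = 0.
for a in [-0.5, 0.0, 0.5]:
    z = J(np.array([2.0, a]))
    print(a, z)   # second coordinate is 0
```

The moderate-slope restriction on f′ is what keeps the extremizer pinned at the corner; for steep f′ it migrates onto the parabolic arcs and the computation changes.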

5. Playing with balls in ℓ^q with q < 1 or in the lattice x₀ + Z^N.
In this last section we analyze the case in which the movements of the game are given by balls in ℓ^q with q < 1, or by the nearest neighbors of the current position in the lattice x₀ + Z^N. Note that in this last case the positions of the game are restricted to the lattice x₀ + Z^N, where x₀ is the initial position of the game.
Also remark that these families of sets do not verify the uniqueness of the minimum for linear functions condition that we assumed previously. In fact, if we take the direction v = (1, 1) (we restrict ourselves to N = 2 for simplicity), then, for both examples, the minimum of z ↦ ⟨v; z⟩ is attained at two different points. The proof of the fact that a uniform limit of u_ε is a viscosity solution of the limit PDE, which is known as the pseudo-infinity Laplacian and involves the set of indices I(Dv) = {i : |∂v/∂x_i| = max_j |∂v/∂x_j|}, runs as before, but let us point out where there is a difference (see below). In this case there is uniqueness of solutions to the limit PDE; see [3] and [30]. Therefore, the whole family u_ε converges uniformly to the unique solution of the limit PDE with Dirichlet datum F.
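The failure of uniqueness for the direction v = (1, 1) can be seen directly. A minimal sketch, taking the lattice moves {±e₁, ±e₂} as the set A(x) − x (with unit step for simplicity): the linear function z ↦ ⟨v; z⟩ attains its minimum value −1 at both −e₁ and −e₂, and the same degeneracy occurs at the "corners" of the ℓ^q balls with q < 1:

```python
import numpy as np

# The four lattice moves: A - x = {±e1, ±e2}.
moves = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
v = np.array([1.0, 1.0])

vals = moves @ v
minimizers = moves[np.isclose(vals, vals.min())]
print(minimizers)   # two minimizers: (-1, 0) and (0, -1)
```

This is precisely the situation ruled out by the uniqueness assumption of the previous sections, and it is why the limit operator selects among the tied coordinate directions.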
To emphasize the difference with the previous analysis, let us look at the case in which there is no uniqueness of the point where the minimum of the linear function z ↦ ⟨Dφ(x₀); z⟩ in A(x₀) is attained. The other cases can be treated exactly as in the previous section, Section 4.
Recall that, to simplify, we have assumed that we are in R², and let us focus on the case when |∂φ/∂x(x₀, y₀)| = |∂φ/∂y(x₀, y₀)|.
Otherwise, the minimum of z ↦ ⟨Dφ(x₀); z⟩ in A(x₀) is attained at a unique point. Also to simplify, we restrict ourselves to the case Dφ(x₀, y₀) ≠ 0.
In this case, arguing as before, we are led to consider x_ε^m, a point where φ attains its minimum in A_ε(x_ε); here this minimum is attained at one of the four points x_ε ± εe₁, x_ε ± εe₂. In any of the first two possibilities we obtain 0 ≥ ∂²φ/∂x²(x₀, y₀), and in any of the other two, 0 ≥ ∂²φ/∂y²(x₀, y₀). Therefore, we have 0 ≥ min{∂²φ/∂x²(x₀, y₀); ∂²φ/∂y²(x₀, y₀)}, which is the inequality for the discontinuous function that is involved in the definition of viscosity solution to (5.1), cf. Definition 17. See also [15].
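The way min{∂²φ/∂x², ∂²φ/∂y²} emerges can be illustrated: when the two components of the gradient tie, the minimizing move selects the coordinate direction with the smaller second derivative. A toy check (the quadratic φ below is an illustrative choice, not from the text):

```python
import numpy as np

eps = 1e-3
A, B = -2.0, 5.0   # so phi_xx = 2A < phi_yy = 2B at the origin
phi = lambda x, y: x + y + A * x**2 + B * y**2   # Dphi(0,0) = (1,1): a tie

# The four possible moves from the origin with step eps:
moves = [(eps, 0.0), (-eps, 0.0), (0.0, eps), (0.0, -eps)]
vals = [phi(*m) for m in moves]
best = moves[int(np.argmin(vals))]
print(best)   # (-eps, 0.0): the direction with the smaller second derivative
```

At first order the four moves ±εe₁, ±εe₂ are tied in pairs, so the ε² term decides, and the minimum over the moves picks up min{φ_xx, φ_yy}, the discontinuous envelope appearing above.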