Tug-of-war games with varying probabilities and the normalized $p(x)$-Laplacian

We study a tug-of-war game with varying probabilities. In particular, we show that the value of the game is locally asymptotically H\"{o}lder continuous. We also show the existence and uniqueness of the value of the game. As an application, we prove that the value function of the game converges to a solution of the normalized $p(x)$-Laplace equation.


1. Introduction. The seminal works of Crandall, Evans, Ishii, Lions, Souganidis and others in the early 1980s established a connection between stochastic differential games and viscosity solutions to Bellman-Isaacs equations. However, a similar connection between the p-Laplace or ∞-Laplace equations and tug-of-war games with noise was discovered only rather recently in [19, 20].
In this paper we study a tug-of-war with noise with space dependent probabilities, which is a natural generalization of the original tug-of-war from both mathematical and applied points of view. In particular, we prove that the value functions of the game in this setting are asymptotically Hölder continuous (Theorem 4.1). Here the main difficulty is the loss of translation invariance, so the global and local regularity methods of [19], [16] or [12] are not directly applicable. Instead, we employ the method of [11].
The main idea is to consider two game sequences simultaneously. Heuristically speaking, in a higher dimensional space, the sequences can be linked to a single higher dimensional game by introducing a probability measure that has the measures of the original game as marginals through suitable couplings. It is interesting to note that couplings of stochastic processes can be employed in the study of regularity for second order linear uniformly parabolic equations with continuous highest order coefficients; see for example [14], [21], and [10]. The method also has some similarities to the Ishii-Lions method [7], see also [18]. However, the method we use relies neither on the theorem of sums from the theory of viscosity solutions nor on stochastic tools. Indeed, it applies directly to functions satisfying a dynamic programming equation, whether they arise from stochastic games or from numerical methods for PDEs.
One of the key tools in studying tug-of-war games is the dynamic programming principle. For the game in this paper, with a given boundary cut-off function δ, a boundary function F and probability functions α(x), β(x), the dynamic programming principle (DPP) reads as

u(x) = δ(x)F(x) + (1 − δ(x)) (1/2) [ sup_{|ν|=ε} ( α(x) u(x + ν) + β(x) ⨍_{B^ν_ε} u(x + h) dL^{n−1}(h) ) + inf_{|ν|=ε} ( α(x) u(x + ν) + β(x) ⨍_{B^ν_ε} u(x + h) dL^{n−1}(h) ) ].    (1)

Here, B^ν_ε denotes the (n − 1)-dimensional ball of radius ε orthogonal to ν. For more details, see Section 2. The heuristic idea behind the DPP is that the value at a point can be obtained by considering a single step of the game and summing up all the possible outcomes. At the point x, the game continues with probability 1 − δ(x). In this case, the maximizer selects the direction ν_max of fixed length ε maximizing the expected payoff at the point. Similarly, the minimizer selects the direction ν_min of the same length minimizing the expectation. Then with probability α(x)/2, the game moves to x + ν_max in the single step, and with the same probability, the game moves to x + ν_min. With probability β(x)/2, the next game point is x + ν′_max, where ν′_max is chosen according to the uniform distribution in the (n − 1)-dimensional ball orthogonal to ν_max. Similarly, with the same probability, the next game point is x + ν′_min, where ν′_min is chosen uniformly at random from the (n − 1)-dimensional ball orthogonal to ν_min. If, on the other hand, the game stops at x, the payoff is given by the boundary function F at that point.
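The single game step described above is straightforward to prototype. The sketch below is our own illustrative Python rendition, not code from the paper: the planar setting n = 2, the probability function `alpha` passed in, and the fixed "pull right / pull left" directions are all assumptions made for the demo.

```python
import numpy as np

def random_in_orthogonal_ball(nu, eps, rng):
    """Sample uniformly from the (n-1)-dimensional ball of radius eps
    orthogonal to the direction nu (for n = 2 this 'ball' is a segment)."""
    n = len(nu)
    # Orthonormal basis of the orthogonal complement of nu, via SVD.
    basis = np.linalg.svd(nu.reshape(1, -1))[2][1:]        # (n-1) x n
    d = rng.standard_normal(n - 1)
    d /= np.linalg.norm(d)                                 # uniform direction
    r = eps * rng.uniform() ** (1.0 / (n - 1))             # radial density ~ r^(n-2)
    return (r * d) @ basis

def one_step(x, eps, alpha, rng):
    """One non-terminal step: a fair coin picks the maximizer or the
    minimizer; the winner's direction is followed with probability
    alpha(x), otherwise the token jumps uniformly in the orthogonal
    (n-1)-ball (probability beta(x) = 1 - alpha(x))."""
    nu_max = np.array([eps, 0.0])    # toy maximizing direction (assumption)
    nu_min = np.array([-eps, 0.0])   # toy minimizing direction (assumption)
    nu = nu_max if rng.uniform() < 0.5 else nu_min
    if rng.uniform() < alpha(x):
        return x + nu
    return x + random_in_orthogonal_ball(nu, eps, rng)
```

In a faithful implementation the directions ν_max, ν_min would come from optimizing the expected payoff; here they are fixed only to keep the step mechanics visible.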
The first step in the paper is to show that a value function satisfies the dynamic programming principle above and that the value is unique. This is Theorem 3.7. We first prove the existence of a measurable function satisfying the DPP by iterating the operator on the right hand side of (1). To this end, we guarantee the continuity, and thus the Borel measurability, of the iterates by the boundary correction in the DPP above. Otherwise it would be difficult to guarantee the measurability in such iterations. Then, the uniqueness and the continuity of the solution are obtained by using game theoretic arguments. In particular, we show that the solution coincides with the game value.
As an application, by using the regularity result, the Arzelà-Ascoli theorem and the DPP, we show in Theorem 6.2 that the values of the game converge to a continuous viscosity solution of the normalized p(x)-Laplace equation

Δu + (p(x) − 2) Δ^N_∞ u = 0,

where Δ^N_∞ u := |∇u|^{−2} Σ_{i,j=1}^n u_{x_i x_j} u_{x_i} u_{x_j} is the normalized infinity Laplacian, and p : Ω → (1, ∞) is a continuous function on the closure of the game domain Ω with inf_Ω p > 1 and sup_Ω p < ∞. Observe that we cover the whole range 1 < p(x) < ∞. To guarantee that the limit attains the same boundary values, we need boundary estimates, which are obtained in Theorem 5.2 by using barrier arguments.

2. Preliminaries. In this section, we introduce the notation, the game setting and the dynamic programming principle. Thus, we define the open sets

I_ε := {x ∈ Ω : dist(x, ∂Ω) < ε},    O_ε := {x ∈ R^n \ Ω : dist(x, ∂Ω) < ε},

and the set Ω_ε := Ω ∪ O_ε. The boundary cut-off function δ : Ω_ε → [0, 1] is given by

δ(x) := 1 − dist(x, ∂Ω)/ε for x ∈ I_ε,    δ(x) := 0 for x ∈ Ω \ I_ε,    δ(x) := 1 for x ∈ O_ε.

Let p be a continuous function on the closure of Ω satisfying

1 < p_min ≤ p(x) ≤ p_max < ∞.    (2)

We require the finite upper bound p_max to make sure that the tug-of-war game defined below ends almost surely regardless of the strategies. Similarly, the upper bound comes into play in the techniques we use in Section 3.2. On the other hand, the regularity and convergence results below require the lower bound in (2) for the function p. To prove existence and uniqueness of continuous solutions to (1) in Section 3, we utilize the uniform continuity of p. In Sections 4 and 5, the regularity techniques do not require the continuity of p, but in Section 6 we do apply it.
We define the functions α, β : Ω → (0, 1), depending on p(x) and the dimension n, in such a way that α(x) + β(x) = 1. By the assumptions on p(x), the functions α and β are uniformly continuous. In addition, we have 0 < α_min ≤ α(x) ≤ α_max < 1 for all x ∈ Ω. We also denote β_min := 1 − α_max > 0. We consider averages of the form

⨍_{B^ν_ε} u(x + h) dL^{n−1}(h) := (1 / L^{n−1}(B^ν_ε)) ∫_{B^ν_ε} u(x + h) dL^{n−1}(h),

where L^{n−1} denotes the (n − 1)-dimensional Lebesgue measure. The open ball of radius ε in the (n − 1)-dimensional hyperplane ν^⊥ orthogonal to ν ∈ R^n is denoted by B^ν_ε, i.e.,

B^ν_ε := B_ε(0) ∩ ν^⊥ := {z ∈ R^n : |z| < ε and ⟨z, ν⟩ = 0}.

Throughout the paper, we denote open n-dimensional balls of radius r > 0 by B_r(x), or by B_r if the center point x ∈ R^n plays no role.
For brevity, the compact boundary strip of the game domain is denoted by Γ_{ε,ε}. Let F : Γ_{ε,ε} → R be a continuous boundary function. In addition, we define an auxiliary function

W(u; x, ν) := α(x) u(x + ν) + β(x) ⨍_{B^ν_ε} u(x + h) dL^{n−1}(h)    (4)

and an operator

T_ε u(x) := δ(x) F(x) + (1 − δ(x)) (1/2) [ sup_{|ν|=ε} W(u; x, ν) + inf_{|ν|=ε} W(u; x, ν) ]    (5)

for all x ∈ Ω_ε and continuous functions u ∈ C(Ω_ε). By using this operator, we can identify the solutions to (1) with the fixed points of T_ε. Note that, despite the fact that α(x) and β(x) are not defined in the outside strip Ω_ε \ Ω, the operator is well defined there, since δ = 1 on O_ε and thus T_ε u = F on O_ε. The same boundary correction as above is also applied in [6, 13]. For an alternative approach, see [2]. Here, this correction is used in order to preserve measurability when iterating the operator. Indeed, in such iterations the measurability can rather easily be lost; see for example [13, Example 2.4]. In addition, an asymptotic expansion close to (1) is studied in [8].
2.1. The two-player tug-of-war game. In this subsection, we introduce the stochastic zero-sum tug-of-war game used in this work. Most of the methods of this paper arise from game theory, and some of the results are even directly proved by using game-theoretic arguments (for example, the uniqueness proof in Theorem 3.6).
Let us consider a game involving two players (say P_I and P_II). A token is placed at a starting point x_0 ∈ Ω. Suppose that, after j = 0, 1, 2, . . . movements, the token is at a point x_j ∈ Ω. Then:
• if x_j ∈ Ω \ I_ε, the players P_I and P_II decide their possible movements ν^I_{j+1} and ν^II_{j+1}, respectively, with |ν^I_{j+1}| = |ν^II_{j+1}| = ε. A fair coin is tossed, and if P_i wins the toss, we have two possibilities: with probability α(x_j), the token is moved to x_{j+1} = x_j + ν^i_{j+1}, and with probability β(x_j), the token is moved to a point chosen uniformly at random from the (n − 1)-dimensional ball B^{ν^i_{j+1}}_ε(x_j) orthogonal to ν^i_{j+1};
• if x_j ∈ I_ε, the game ends with probability δ(x_j), and then P_II pays P_I the amount given by F(x_j); with probability 1 − δ(x_j), the players play a round as in the previous case x_j ∈ Ω \ I_ε.
Let τ denote the time when the game ends, and denote by x_τ ∈ Γ_{ε,ε} the position where the game ends. Then, P_II pays P_I the quantity F(x_τ). We can construct the game described above by the following procedure. Let (c_j)_{j=0}^∞ be a sequence of random variables such that c_j ∈ {0, 1} =: C for all j ≥ 0, with c_0 := 0. The random variable c_j tells whether the game is still running at the jth movement. If c_j = 0, the position x_j is selected by playing the game. On the other hand, if c_j = 1, the game has already ended and we have x_j = x_{j−1}.
Let ξ_0, ξ_1, ξ_2, . . . be independent and identically distributed random variables such that ξ_0 is distributed uniformly on [0, 1]. Moreover, the process (ξ_j)_{j=0}^∞ is independent of the game process (x_j)_{j=0}^∞. Then for all j ≥ 1, given that c_{j−1} = 0, the probability distribution of the random variable c_j is determined by comparing ξ_j with the stopping probability at the current position. If c_{j−1} = 1, then c_j = 1. By this definition of c_j for all j ≥ 1, we can define the random variable τ by

τ := inf{j ≥ 1 : c_j = 1}.

We define a history of the game as the vector

((c_0, x_0), (c_1, x_1), . . . , (c_j, x_j))

describing the positions of the token and the information whether the positions have been obtained by playing the game at each step after j repetitions. A strategy S_i for the player i ∈ {I, II} is a sequence of Borel measurable functions that give the next game position given the history of the game. Given a starting point x_0 ∈ Ω and strategies S_I, S_II, we define a probability measure P^{x_0}_{S_I,S_II} on the natural product σ-algebra of the space of all game trajectories. This measure is built by applying Kolmogorov's extension theorem to the family of transition densities

π_{S_I,S_II}((c_0, x_0), (c_1, x_1), . . . , (c_j, x_j); C, A)

for all Borel subsets A ⊂ R^n and C ⊂ C. With a slight abuse of notation, for all points z and sets B, the measure I_z(B) is one if z ∈ B, and zero otherwise. Moreover, we write ω_{n−1} := L^{n−1}(B^z_1) for any z ∈ R^n \ {0}. Furthermore, we denote B^z_ε(y) := y + B^z_ε for z ∈ R^n \ {0} and y ∈ R^n.

Lemma 2.1. For any starting point x_0 ∈ Ω and any strategies S_I, S_II, the game ends almost surely, i.e., P^{x_0}_{S_I,S_II}(τ < ∞) = 1.

Here, we follow the ideas from [6], where the constant α case is covered. For the benefit of the reader, and since the setting is slightly different, we give a self-contained proof.

Proof. The idea of the proof is to consider solely random movements and to find a uniform lower bound for the probability of the event that the modulus |x_j| grows in a suitable fashion.
In the proof, we need the fact β_min > 0. Let x_0 ∈ Ω, j ≥ 0 and let x_{j+1} = x_j + h_j, where h_j denotes the displacement at the step. By vector calculus, we have

|x_{j+1}|^2 = |x_j|^2 + 2⟨x_j, h_j⟩ + |h_j|^2.

In addition, by the definition of the game, h_j is randomly chosen from B^ν_ε with probability β(x_j)/2 for the vector ν := ν^I_{j+1}. Moreover, given that a random movement is chosen from B^ν_ε, we have ⟨x_j, h_j⟩ ≥ 0 with a probability of at least 1/2, and the event |h_j| ≥ ε/2 has a probability of 1 − 2^{1−n}. Consequently, there is a positive probability of a random movement h_j such that |h_j| ≥ ε/2 and ⟨x_j, h_j⟩ ≥ 0. In this case, we have

|x_{j+1}|^2 ≥ |x_j|^2 + ε^2/4    (7)

with a probability of at least

θ := (β_min/2) · (1/2) · (1 − 2^{1−n}).

Note that the universal constant θ does not depend on j, and the fact β_min > 0 implies θ > 0. Now, let j_0 := j_0(ε, n, Ω) ≥ 1 be an integer so large that j_0 consecutive movements in the way (7) force the token out of the bounded domain Ω. Then, after j_0 consecutive movements in the way (7), the token has exited the game domain for any starting point x_0, with a probability of at least θ^{j_0}. Consequently, the probability of not exiting the game domain after j_0 steps is bounded above by 1 − θ^{j_0}. By repeating the argument k times, the probability of not exiting Ω after kj_0 steps is bounded above by (1 − θ^{j_0})^k. Thus, by letting k → ∞, this probability goes to zero, and the proof is completed.
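The termination argument is easy to test numerically. The sketch below is a crude Monte Carlo surrogate of our own (not from the paper): a planar token makes uniformly random steps of length ε inside the unit disc, mimicking the "solely random movements" of the proof, and we record how long it takes to exit.

```python
import numpy as np

def exit_time(eps, rng, max_steps=200_000):
    """Token in the unit disc makes uniformly random steps of length eps
    until it leaves the disc; returns the number of steps taken."""
    x = np.zeros(2)
    for j in range(max_steps):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x += eps * np.array([np.cos(theta), np.sin(theta)])
        if np.linalg.norm(x) >= 1.0:
            return j + 1
    return max_steps

rng = np.random.default_rng(1)
times = [exit_time(0.1, rng) for _ in range(200)]
```

With step length ε = 0.1, the empirical exit times stay far below the step cap, consistent with the exponential bound (1 − θ^{j_0})^k → 0 above.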
For all starting points x_0 ∈ Ω, we define the value function for P_I and for P_II by

u_I(x_0) := sup_{S_I} inf_{S_II} E^{x_0}_{S_I,S_II}[F(x_τ)]  and  u_II(x_0) := inf_{S_II} sup_{S_I} E^{x_0}_{S_I,S_II}[F(x_τ)].    (8)

3. Existence and uniqueness. In this section, the goal is to prove that there exists a unique continuous solution satisfying the dynamic programming principle (1). The proof is divided into two parts. In Section 3.1, by iterating the operator T_ε defined in (5), we show that there exist a lower and an upper semicontinuous solution to (1). Then in Section 3.2, we show that every measurable solution to (1) lies between the lower and the upper semicontinuous solutions. Further, we prove by using the tug-of-war game defined in Section 2.1 that, in fact, both semicontinuous solutions coincide.

3.1. Existence of semicontinuous solutions to (1). In this subsection, by iterating the operator T_ε, we construct monotone sequences of bounded continuous functions. As a consequence, these sequences converge to semicontinuous functions which turn out to be solutions to (1). With that purpose, we first need to show that T_ε maps continuous functions into continuous functions.
Proof. For fixed |ν| = ε, we have for any x, y ∈ Ω an estimate in terms of ω_f, a modulus of continuity of the uniformly continuous function f. In a similar way, we obtain a corresponding estimate for x, y ∈ Ω. Together, these inequalities imply that W(·, ν) is a continuous function for fixed ν, with modulus of continuity ω_u + ‖u‖_∞ (ω_α + ω_β). For the continuity in ν, fix a point x ∈ Ω. Then the modulus of continuity of α does not play any role. In addition, since the function u is continuous by the hypothesis, we only need to check the continuity of the function ν ↦ ⨍_{B^ν_ε} u(x + h) dL^{n−1}(h). Let |ν| = |χ| = ε and define a rotation P := P_{ν,χ} sending ν to χ, so that |Ph − h| ≤ C|ν − χ| for all h ∈ ν^⊥, where C > 0 is a constant not depending on the choices of ν and χ. Therefore, we obtain the estimate (11). By recalling (10), together with the fact that we can choose ω_u to be increasing, we can estimate the expression in brackets in the equation (11) from above by ω_u(C|ν − χ|) for h ∈ B^ν_ε. Then this same bound also holds for (11), and the continuity of W(x, ·) for fixed x ∈ Ω follows.
Proof. The monotonicity of T_ε follows easily from the definition (5). Let u ∈ C(Ω_ε) be a function with a modulus of continuity ω_u. By (5) and the fact that F is continuous on O_ε, the function T_ε u is continuous on the outside strip O_ε. Thus, it remains to check that T_ε u is continuous on Ω.
First, let x, y ∈ Ω \ I ε and recall the elementary inequalities Then by the inequality (9) for any |ν| = ε, we get that Here, we use the shorthand notation Therefore, since δ = 0 on Ω \ I ε , we have shown that T ε u is continuous on Ω \ I ε .
Then, let x, y ∈ I_ε and recall that sup_{x∈Ω} (1 − δ(x)) = 1 and ω_δ(t) = t/ε for t ≥ 0. Thus, we can estimate the difference accordingly. Consequently, T_ε u is continuous in I_ε. Since the limiting values of the function T_ε u coincide with the function values on the boundary ∂I_ε, there must exist a modulus of continuity for T_ε u, and hence T_ε u ∈ C(Ω_ε).
For the next result, let T^k_ε denote the kth iterate of the operator T_ε for k ∈ N, i.e., T^k_ε = T_ε ∘ T^{k−1}_ε and T^0_ε = Id, with the identity operator Id(u) = u for all u ∈ C(Ω_ε). By (5) and the monotonicity of T_ε, the sequence {T^k_ε(inf F)} is increasing and the sequence {T^k_ε(sup F)} is decreasing in k ∈ N. Consequently, we can define the pointwise limits

$\underline{u}(x) := \lim_{k\to\infty} T^k_\varepsilon(\inf F)(x)$  and  $\overline{u}(x) := \lim_{k\to\infty} T^k_\varepsilon(\sup F)(x)$    (13)

for all x ∈ Ω_ε. In addition, since $\underline{u}$ and $\overline{u}$ are defined as limits of monotone sequences of continuous functions, they are lower and upper semicontinuous functions, respectively.
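The monotone iteration can be mimicked in a toy one-dimensional setting (our own illustration, not the paper's operator: in one dimension the sup- and inf-directions always cancel, so the interior part of the operator collapses to the average of the two neighbours; grid size, boundary values and tolerance are arbitrary choices). Iterating from the constants inf F and sup F gives an increasing and a decreasing sequence converging to the same fixed point, mirroring the construction of the lower and upper semicontinuous solutions.

```python
import numpy as np

def T(u, F0, F1):
    """Toy 1-d analogue of T_eps: boundary nodes carry the boundary data
    (delta = 1 there), interior nodes are averaged (delta = 0, and the
    sup- and inf-directions cancel in one dimension)."""
    v = u.copy()
    v[0], v[-1] = F0, F1
    v[1:-1] = 0.5 * (u[:-2] + u[2:])
    return v

def iterate(u0, F0, F1, tol=1e-13, max_iter=200_000):
    """Apply T until the update is below tol; monotone in the seed u0."""
    u = u0
    for _ in range(max_iter):
        v = T(u, F0, F1)
        if np.max(np.abs(v - u)) < tol:
            return v
        u = v
    return u

lower = iterate(np.full(21, 0.0), 0.0, 1.0)   # iterate from inf F
upper = iterate(np.full(21, 1.0), 0.0, 1.0)   # iterate from sup F
```

Both runs converge to the same (linear) profile, the toy counterpart of the coincidence of the two semicontinuous solutions established in Theorem 3.6.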
Proof. The inequality (14) follows easily from (12). We only show that $\underline{u}$ is a solution to (1), since a similar argument applies to $\overline{u}$. To establish the result, we use Lemmas 3.1 and 3.2 and the fact that {T^k_ε(inf F)} is increasing to show that we can interchange the limit and the infimum in the definition of $\underline{u}$.
Let x ∈ Ω_ε and u_k := T^k_ε(inf F) for k ≥ 0, where W denotes the auxiliary function defined in (4). Thus, we need to prove that the limit k → ∞ can be interchanged both with the supremum and with the infimum of W(u_k; x, ·). The first interchange follows from the fact that the sequence {u_k} is pointwise increasing. For the second, we can assume x ∈ Ω. Lemmas 3.1 and 3.2 imply that W(u_k; x, ν) is continuous with respect to ν for all k ≥ 1. Therefore, we can define the compact sets

C_k(λ) := {ν : |ν| = ε and W(u_k; x, ν) ≤ λ}.

This, together with the fact that {u_k} is increasing, yields that the sets C_k(λ) are nested and C_k(λ) ≠ ∅ for all k ≥ 1. Thus, by Cantor's intersection theorem, there exists a direction ν̄ belonging to every C_k(λ), and we get

Á. ARROYO, J. HEINO AND M. PARVIAINEN
The first inequality follows from the choice of λ and the fact that {u_k} is increasing. In addition, we use the monotone convergence theorem in the first equality and the choice of ν̄ in the last inequality. Therefore, the proof is complete.

3.2. Uniqueness of solutions to (1). In this subsection, we prove the uniqueness of solutions to (1). To establish the result, we first show that any measurable solution of the equation (1) lies between the solutions $\underline{u}$ and $\overline{u}$. Then, we show that, in fact, the functions $\underline{u}$ and $\overline{u}$ coincide. For the first result, we need the following technical lemma.
Lemma 3.4. Let u be a measurable solution to (1). Assume that sup F < sup_Ω u, and let x ∈ Ω be a point satisfying (15). Then there exists a displacement h_0 such that the inequalities (16) and (17) hold with a constant c(α) > 1.
Proof. We obtain the inequalities (16) and (17) by analyzing the dynamic programming principle (1). The proof is similar to the proof of Lemma 2.1. Since u satisfies (1), we have the corresponding identity at x. In addition, by utilizing the assumption (15) and u = F on O_ε, we get an estimate, and thus (15) gives a further bound. By the definition of supremum, there must exist |ν_0| = ε for which W(u; x, ν_0) is close to the supremum. Next, we define a set S ⊂ B^{ν_0}_ε depending on x and ν_0, with one definition in the case where x = 0 or ν_0 ∈ span{x}, and another one otherwise. Observe that in both cases the Lebesgue measure of the set S is the same; this follows by symmetry. In addition, because (3/4) 2 n−1 ≥ 2 2 1−n, the inequality (16) holds for each h ∈ S. The equality (19), together with (15) and (18), implies an estimate which, after rearranging the terms and multiplying by 4, shows that there must exist h_0 ∈ S ⊂ B^{ν_0}_ε satisfying (17).

Proposition 3.5. Any measurable solution u to (1) satisfies $\underline{u}$ ≤ u ≤ $\overline{u}$, with $\underline{u}$ and $\overline{u}$ the semicontinuous functions defined in (13).
Proof. By the monotonicity of the operator T_ε and the definitions of $\underline{u}$ and $\overline{u}$, it suffices to prove the estimate (20). Because u is a solution to (1), we have u(x) = F(x) for x ∈ O_ε. Hence, we need to show the estimate (20) for all x ∈ Ω. We focus our attention on the second inequality, since the proof of the first inequality is analogous. We proceed by contradiction and assume that sup_Ω u > sup F.
By the assumption, for η > 0 there exists a point x_1 ∈ Ω such that u(x_1) > sup_Ω u − η. The idea of the proof consists of finding a sequence of points {x_j} satisfying u(x_j) > sup F for all j, such that |x_{j_0}| is big enough for some large j_0 ≥ 1. This is a contradiction, because u = F on O_ε. We obtain the sequence of points by using Lemma 3.4 iteratively.
Choose an integer j_0 := j_0(ε, n, Ω) ≥ 1 big enough such that (21) holds for all j ≥ j_0. Then, we fix the constant η > 0 small enough such that (22) holds, with the constant c(α) > 1 from Lemma 3.4. We start from x_1 and choose x_2 = x_1 + h_0 with h_0 given by Lemma 3.4. If x_2 ∉ Ω, we get a contradiction. Otherwise, we continue in the same way: we choose x_3 = x_2 + h_1 with h_1 given by Lemma 3.4. After j_0 − 1 repetitions, assume that x_{j_0} ∈ Ω. By the inequalities (21) and (22), it holds for the point x_{j_0+1} that u(x_{j_0+1}) > sup F while x_{j_0+1} ∉ Ω, and the contradiction follows.
The next theorem, together with (14), implies that the semicontinuous solutions $\underline{u}$ and $\overline{u}$ to (1) coincide.
Theorem 3.6. Let $\underline{u}$ and $\overline{u}$ be the semicontinuous functions defined in (13). In addition, let u_I and u_II be the value functions defined in (8). Then, we have that

u_II ≤ $\underline{u}$  and  $\overline{u}$ ≤ u_I.

Proof. From the properties of inf and sup, it is clear that u_I ≤ u_II. Thus, we need to prove that u_II ≤ $\underline{u}$ and $\overline{u}$ ≤ u_I.
We only show u_II ≤ $\underline{u}$, since the argument in the other case is similar. To establish the result, we find a suitable strategy for P_II and a function (c, x) ↦ Φ(c, x), depending on $\underline{u}$ and F, such that the process Φ(c_k, x_k) becomes a supermartingale regardless of what the opponent does. Then, we will be able to compare the functions $\underline{u}$ and u_II by the optional stopping theorem.
Let x_0 ∈ Ω and define a strategy S*_II for P_II that, for all j ≥ 0, selects a direction nearly minimizing the expectation W, where W denotes the auxiliary function defined in (4). By a measure-theoretic analysis, we can prove that this strategy is Borel measurable. For more details, see for example [22, Theorem 5.3.1].
Fix any strategy S_I for P_I, and let us define a function Φ = Φ(c, x) depending on $\underline{u}$ and F. Then, we can estimate the conditional expectation of Φ over a single game step. Since $\underline{u}$ is a solution to (1), this estimate yields that

E[Φ(c_{j+1}, x_{j+1}) | (c_0, x_0), . . . , (c_j, x_j)] ≤ Φ(c_j, x_j)

for all j ≥ 0. Thus, the stochastic process (Φ(c_k, x_k))_{k=0}^∞ is a supermartingale when P_II uses the strategy S*_II. By recalling (6), and since F is bounded, we get by using the optional stopping theorem that u_II(x_0) ≤ $\underline{u}$(x_0), because it holds c_0 = 0. Therefore, the proof is complete.

Now, Theorem 3.6 and Proposition 3.5 imply the uniqueness of solutions to (1). In addition, this unique function is continuous and it is the value function of the game.
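The supermartingale-plus-optional-stopping pattern can be seen in a minimal toy model (ours, not the game of the paper): for a symmetric ±1 walk stopped on {0, N}, any concave payoff f makes f(X_k) a supermartingale, so the expected stopped payoff cannot exceed f(x_0), the same one-sided comparison that bounds u_II by the initial value of Φ above.

```python
import numpy as np

def stopped_payoff(x0, N, f, rng):
    """Run a symmetric +-1 random walk from x0 until it hits 0 or N,
    then evaluate the payoff f at the stopping position."""
    x = x0
    while 0 < x < N:
        x += 1 if rng.uniform() < 0.5 else -1
    return f(x)

rng = np.random.default_rng(2)
f = lambda t: float(np.sqrt(t))   # concave, so f(X_k) is a supermartingale
estimate = np.mean([stopped_payoff(9, 16, f, rng) for _ in range(4000)])
# Optional stopping: E[f(X_tau)] = (9/16) * f(16) = 2.25 <= f(9) = 3.
```

The empirical mean lands near 2.25, strictly below f(x_0) = 3, exactly as the optional stopping theorem predicts for a supermartingale.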
Theorem 3.7. Let ε > 0 and let F : Γ_{ε,ε} → R be a continuous function. Then, there exists a continuous function u_ε : Ω_ε → R with the boundary data F that satisfies the dynamic programming principle (1). Moreover, this function is unique, and it is the value function of the game, i.e., u_ε = u_I = u_II with u_I and u_II defined in (8).
4. Local regularity. In this section, we give a local regularity estimate for functions satisfying (1) in Ω \ I_ε, where the dynamic programming principle reduces to the equation

u(x) = (1/2) [ sup_{|ν|=ε} W(u; x, ν) + inf_{|ν|=ε} W(u; x, ν) ].    (23)

The regularity result is based on a method established by Luiro and Parviainen in [11]. The method consists of several steps. First, we choose a comparison function f having the desired regularity properties. Then, the idea is to analyze two different cases separately. At a small scale, we need to control the effects arising from the discretization. At a bigger scale, the key term of the comparison function is C|x − z|^γ with x, z ∈ R^n, 0 < γ < 1 and C > 0 big enough.
In the second step, we aim to estimate the error u(x) − u(z) by the comparison function in the sets B_1 × B_1 and T, with both sets belonging to R^{2n}. The set T is the set of points (x, z) ∈ R^{2n} such that x = z. Then, we strive for a contradiction by assuming that the error is bigger than the comparison function somewhere. As a final step, we reach the contradiction by using a multidimensional dynamic programming principle for the comparison function f. In the proof below, intuition based on suitable strategies is helpful, even though we do not write down stochastic arguments.
Proof. By using the scaling x ↦ Rx, we can assume that R = 1. In addition, by translation, it is enough to consider the claim (25) in the case z = −x. For simplicity, we assume sup_{B_2×B_2} (u(x) − u(z)) ≤ 1. Given C > 1, let N ∈ N be such that N ≥ 10^2 C^γ.
Then, we define in R^{2n} the comparison function f := f_1 − f_2, where

f_1(x, z) := C|x − z|^γ + |x + z|^2

and f_2 is constant on each of the annuli A_i, i = 0, 1, . . . , N. The function f_2 is called an annular step function, and it is needed to control the small-scale jumps; its supremum sup f_2 is attained near the diagonal. It holds that f_1 ≥ 1 in (B_2 × B_2) \ (B_1 × B_1). Here, we need the term |x + z|^2 in the function f_1, because |x + z|^2 ≥ 1 for all x, z ∈ (B_2 × B_2) \ (B_1 × B_1) such that |x − z| ≤ 1. Therefore, together with u(x) − u(z) ≤ 1 in B_2 × B_2 and u(x) − u(z) = 0 in T, we have u(x) − u(z) ≤ f(x, z) if (x, z) ∈ T or (x, z) ∈ (B_2 × B_2) \ (B_1 × B_1). We have to show that this inequality is also true in (B_1 × B_1) \ T. Striving for a contradiction, write

M := sup_{(x,z) ∈ B_1×B_1} (u(x) − u(z) − f(x, z))

and suppose that M > sup f_2.

By (26), this is equivalent to
For all η > 0, we choose a pair of points (x, z) ∈ (B_1 × B_1) \ T for which u(x) − u(z) − f(x, z) ≥ M − η. Then by (23), we have an estimate where W is the auxiliary function defined in (4).

This, together with (32), yields an upper bound for the infimum over ν_x, ν_z. Therefore, by applying this inequality and (33) to (29), we get a further estimate. Combining this with (28), we see that it suffices to show a corresponding estimate for G. The main part of the section is devoted to showing this estimate for G, which is done in several steps below.

Proof of Proposition 4.2.
Let V ⊂ R^n be the one-dimensional space spanned by x − z, where x ≠ z, and denote its orthogonal complement by V^⊥. Given any y ∈ R^n, we can decompose

y = y_V (x − z)/|x − z| + y_{V^⊥},

where y_V ∈ R is the scalar projection of y onto V and y_{V^⊥} ∈ V^⊥. For the decomposed point it holds |y_{V^⊥}| = (|y|^2 − y_V^2)^{1/2}. By using this notation, the second-order Taylor expansion of f_1 is (34), where E_{x,z}(h_x, h_z) is the error term; in the expansion, we used the gradient and the Hessian of f_1. The matrix I stands for the n × n identity matrix, and we denote the tensor product of two vectors by ⊗, i.e., h ⊗ s := hs^T for vectors h, s ∈ R^n. By recalling the elementary formula h^T (s ⊗ s) h = ⟨h, s⟩^2 for all h, s ∈ R^n, we get (34). By Taylor's theorem, the error term satisfies (35), because |h_x|, |h_z| ≤ ε. Therefore, to prove the result, we distinguish two separate cases: |x − z| ≤ Nε/10 and |x − z| > Nε/10.

Proof of Proposition 4.2: Case |x − z| ≤ Nε/10. In this case, we do not utilize the formula (34). We use concavity and convexity estimates for the terms in f_1 and the properties of the annular step function f_2. For x, z ∈ B_1 and |h_x|, |h_z| < ε < 1, the concavity of C|x − z|^γ and the convexity of |x + z|^2 give elementary estimates. Together with f_2 ≥ 0, these estimates yield an upper bound for the supremum over h_x, h_z. Find i ∈ {1, 2, . . . , N} such that (i − 1)ε/10 < |x − z| ≤ iε/10 and choose |ν_x|, |ν_z| < ε such that (x + ν_x, z + ν_z) ∈ A_{i−1}. Then for C > 1 large enough, we can estimate the supremum over h_x, h_z, where we use f_2 ≥ 0 in the second inequality and α(z) > α_min > 0 for all z ∈ Ω in the last inequality. Therefore, by f = f_1 − f_2 and (36), we obtain a lower bound for the infimum over h_x, h_z. Combining this inequality with (36), we get the required bound for the supremum. Hence, the proof of the case is complete.

Proof of Proposition 4.2: Case |x − z| > Nε/10. In this case, f_2(x, z) = 0 and hence f ≡ f_1. We apply (34) to get the result. For η > 0, let ν_x, ν_z be directions almost attaining the supremum over h_x, h_z. Therefore, for any |ϱ_x|, |ϱ_z| ≤ ε, we get the inequality (37). By (35) and |h_x|, |h_z| ≤ ε, the last two terms in (34) are bounded from above accordingly. We recall the notation P_{h,s} for the rotation sending h to s for any vectors |h| = |s| in R^n. By (37) and (31), it suffices to study the resulting expression, which for simplicity we decompose into the terms [II], [III] and [IV], to be examined separately. Then by (34), we can estimate each of them; note that the first-order terms in [III] vanish when we integrate over the ball. We distinguish between two cases depending on the value of (ν_x − ν_z)_V^2 and fix τ_0 < τ < 1, with 0 < τ_0 < 1 defined later.

a) Case |(ν_x − ν_z)_V| ≥ (τ + 1)ε: In this case, we choose ϱ_x = −ν_x and ϱ_z = −ν_z. By replacing these vectors in the inequalities [II], [III] and [IV] and using symmetry, we obtain the corresponding estimates; we used γ − 1 < 0 and the choice P_{ν_z,−ν_x} = P_{−ν_z,ν_x} in the estimate for [III]. Thus, we need to obtain uniform bounds for the terms involving (ν_x − ν_z)_V and (ν_x − ν_z)_{V^⊥}. The assumption |(ν_x − ν_z)_V| ≥ (τ + 1)ε, together with |ν_x|, |ν_z| ≤ ε and Pythagoras' theorem, implies a first bound; moreover, the same facts yield another one. By combining these and using Pythagoras' theorem again, we get an identity which, applied together with (43) and |h_{V^⊥}| ≤ |h| ≤ ε, gives the estimates (44) and the related bounds. We can assume that τ_0 is close enough to 1 to guarantee the positivity of the quantity τ − τ^{−1}(1 − τ^2)^{1/2}. In order to obtain the last estimate needed, we recall that P_{ν_z,−ν_x} is any rotation sending the vector ν_z to −ν_x. In particular, we choose a rotation satisfying |h − P_{ν_z,−ν_x} h| ≤ |ν_z − P_{ν_z,−ν_x} ν_z| = |ν_z + ν_x| for every |h| ≤ ε.
Hence, by recalling (44), we get an estimate which, by (39), can be developed further. The assumption on γ in (24) implies that we can choose τ_0 := τ_0(κ) < 1 close enough to 1 such that the previous expression is negative. Now, by recalling (38) and choosing C > 1 large enough, we obtain the desired bound, and this estimate yields the claim in this case.

b) Case |(ν_x − ν_z)_V| < (τ + 1)ε: In this case, the first-order terms in (34) imply the result. By choosing ϱ_x = −ε(x − z)/|x − z| and ϱ_z = ε(x − z)/|x − z| in V and utilizing these in (40), (41) and (42), we get the corresponding estimates. The second-order terms in these inequalities can be estimated from above accordingly. In addition, we deduce that |(ν_x − ν_z)_V| < (1 + ((τ + 1)/2)^2) ε. Therefore, we have

[IV] ≤ 4|x + z| ε + 3Cγ |x − z|^{γ−2} ε^2.
By combining all these and recalling (38) and (39), we get the final estimate. As in the previous case, we can choose the constant C > 1 large enough to ensure the negativity of the resulting expression. Thus, the proof is complete.

5. Regularity near the boundary. In this section, we show that the value function of the game is also asymptotically continuous near the boundary, provided that we assume some regularity of the boundary of the set. The proof is based on finding a suitable barrier function and a strategy for one of the players so that the process under the barrier function is a super- or submartingale, depending on the form of the function. Then, the result follows by analyzing the barrier function and iterating the argument. Fix r > 0 and z ∈ R^n and define a barrier function

v(x) := a|x − z|^σ + b    (48)

for all x ∈ R^n \ B_r(z), with some constants σ < 0, a < 0 and b ≥ 0. Recall the auxiliary function W defined in (4). First, we prove the following properties of the function v.
Lemma 5.1. Let r > 0 and z ∈ R^n, and define the function v as in (48) with constants a < 0, b ≥ 0 and σ < 0. Then, there is a constant C > 0 such that the expansion (49) and the estimate (50) hold for all ε > 0 and x ∈ R^n \ B_r(z).
Proof. To establish the result, we apply Taylor's formula to the function v. The function v is real-analytic in R^n \ {z}, so Taylor's formula gives

v(x + h) = v(x) + ⟨∇v(x), h⟩ + (1/2) ⟨D^2 v(x) h, h⟩ + O(|h|^3)

for all h ∈ R^n. Let C > 0 be big enough so that O(|h|^3) ≤ Cε^3 for all |h| ≤ ε. The function v is radially increasing, and the average integral over the first-order term of v vanishes. Thus, Taylor's formula proves the equation (49). Next, we prove the equation (50). Recall the notation V = span{x − z} and the orthogonal complement V^⊥. Also, for any vector y ∈ B^{z−x}_ε, we have ⟨y, x − z⟩ = 0. Hence, by a short calculation, we have
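The Taylor computation can be written out explicitly under the radial form v(x) = a|x − z|^σ + b suggested by the constants a, b and σ in the text (this explicit form is our assumption; the normalization in (48) may differ):

```latex
% Assumed radial barrier: v(x) = a|x - z|^sigma + b, a < 0, sigma < 0, rho := |x - z|.
\begin{align*}
\nabla v(x) &= a\sigma \rho^{\sigma-2}(x-z),\\
D^2 v(x)    &= a\sigma \rho^{\sigma-2} I
             + a\sigma(\sigma-2)\rho^{\sigma-4}\,(x-z)\otimes(x-z).
\end{align*}
% For h orthogonal to x - z the first-order term vanishes and
% <D^2 v(x) h, h> = a sigma rho^{sigma-2} |h|^2, while the mean of |h|^2
% over the (n-1)-dimensional ball of radius eps equals ((n-1)/(n+1)) eps^2.
\[
\fint_{B^{z-x}_{\varepsilon}} v(x+h)\,d\mathcal{L}^{n-1}(h)
 = v(x) + \frac{a\sigma}{2}\,\frac{n-1}{n+1}\,\rho^{\sigma-2}\varepsilon^{2}
   + O(\varepsilon^{3}).
\]
```

The sign a σ > 0 of the second-order coefficient is what makes v radially increasing and usable as a barrier.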

Thus, this inequality, together with the Taylor expansion above, proves the equation (50).
Next, we prove the main theorem of this section. To get the result, we need to assume some regularity on the boundary of the set Ω.
Proof. The idea is to find a suitable barrier function so that, by Lemma 5.1, if P_II pulls towards the point z ∈ B_r(y) \ Ω, the game process under the barrier function is a supermartingale. Then, by utilizing the properties of the barrier function, we get the result by iteration. Choose a constant 0 < θ < 1, independent of r, such that

θ := (s^σ − 2^σ) / (s^σ − 4^σ)

with the parameter s > 0 from the boundary condition and a parameter σ < 0 that will be chosen later. We extend the function F continuously to the set Γ_{1,1} and use the same notation for the extension. Then, we choose k ≥ 1 big enough so that the required smallness condition holds. In addition, we denote the constants b_U and b_{4r}. Thus, for the chosen k, independent of ε, the corresponding estimate holds. We define a function v_k whose boundary values determine the constants a and b. If b_U = b_{4r}, it holds a = 0 and b = b_U. Otherwise, the values are a < 0 and b ≥ 0. We consider the case with a < 0, since the proof of the other case is clear. We extend the function v_k to the set R^n \ B_{4^{1−k} sr − 2ε}(z) and use the same notation for the extension. We may assume that x_0 ∈ Ω. Assume that P_II plays the game by pulling towards the point z given a turn, i.e., he moves the game token by the vector −ε(x_m − z)/|x_m − z| if he wins the mth toss. This strategy is denoted by S*_II. Also, fix a strategy for P_I and denote it by S_I. By using Lemma 5.1 for all m ≥ 1, we can estimate the one-step expectation of v_k up to an error Cε^3 for some C > 0. Next, we need to choose the constant σ < 0 small enough. Recall that α_min > 0, and fix the value of σ so that the relevant inequalities hold for all x_m ∈ Ω. Thus, by choosing ε_0 := ε_0(α_min, r, Ω, k) > 0 small enough, we can ensure the estimate for all ε < ε_0. We have shown that the process is a supermartingale when P_II uses the strategy S*_II and P_I uses any strategy S_I. Define a boundary function F_{v_k} : Γ_{ε,ε} → R such that (54) holds. By Theorem 3.7, we have u = u_I with u_I the value function for P_I defined in (8).
Since the process is a supermartingale, F_{v_k} is bounded and τ < ∞ almost surely, we can estimate with the help of the optional stopping theorem. By using the boundary values (54), we can calculate the constants a and b in the function v_k, and deduce the required bound by (55). Hence, by (53), we have shown the estimate (52).
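For the reader's convenience, the stopping argument above is the standard supermartingale form of Doob's optional stopping theorem; the notation M_m below is generic and not the paper's:

```latex
% Doob's optional stopping theorem (supermartingale form):
% if (M_m)_{m \ge 0} is a supermartingale, uniformly bounded, and the
% stopping time \tau is almost surely finite, then
\mathbb{E}[M_\tau] \le \mathbb{E}[M_0].
% Here, roughly, M_m is the barrier v_k evaluated along the game
% positions x_m under the strategies S_I and S^*_{II}, so the expected
% value at the stopping time is controlled by v_k(x_0).
```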

Á. ARROYO, J. HEINO AND M. PARVIAINEN
The boundary function F is continuous on the compact set Γ_{ε,ε}, so there is a modulus of continuity ω_F for the function F. Thus, we can estimate as follows. Since |z − y| < r, we have the estimate |y* − y| ≤ |y* − z| + |z − y| < 5r. We choose r̄ > 0 so small that ω_F(5r) < η/10 for all r < r̄. This yields that, for any r < r̄, the desired bound holds for all ε < ε_0 and x_0 ∈ B_{4^{1−k}r}(y) ∩ Ω_ε. Next, assume that y ∈ ∂Ω. Pick a point y_b ∈ ∂Ω such that y ∈ B_ε(y_b). We choose ε_0 > 0 so small that the corresponding inequality holds, which implies the estimate for all ε < ε_0. Since y_b ∈ ∂Ω, we can use the estimates above to get the result.
6. Application. In this section, we prove that the uniform limit of functions satisfying (1) as ε → 0 is a weak solution to the normalized homogeneous p(x)-Laplace equation. This equation is in non-divergence form, so we define weak solutions via viscosity theory. There is a related version of the equation (56) in divergence form, called the strong p(x)-Laplacian, which has recently received attention and has been studied using the theory of distributional weak solutions (see for example [1, 23, 17]). For some questions, the viscosity point of view is very natural, since the equation (56) admits the Pucci operator bounds used, for example, in Section 4 of [3].
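For orientation, the normalized p(x)-Laplacian is commonly written in the following standard form from the tug-of-war literature; the symbols here are the usual ones and are not taken from the paper's own displays:

```latex
% Normalized p(x)-Laplacian (standard form in the literature):
\Delta^N_{p(x)} u := \Delta u + (p(x)-2)\,\Delta^N_\infty u,
\qquad
\Delta^N_\infty u :=
  \Big\langle D^2u \,\frac{\nabla u}{|\nabla u|},\,
              \frac{\nabla u}{|\nabla u|}\Big\rangle,
% so that, wherever \nabla u \neq 0, the homogeneous equation reads
\Delta^N_{p(x)} u = 0.
```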
We define, for all vectors x, h ∈ ℝⁿ and symmetric n × n matrices X, the relevant functions. These functions are discontinuous when h = 0. Therefore, we define viscosity solutions via semicontinuous extensions. For more details about the extensions, see for example [5, 4]. We denote by λ_min(X) and λ_max(X) the smallest and largest eigenvalues of a symmetric matrix X. Definition 6.1. A continuous function u : Ω → ℝ is a viscosity solution to the equation (56) if, for all x ∈ Ω and φ ∈ C² such that u(x) = φ(x) and u(y) > φ(y) for y ≠ x, we have 0 ≥ (p(x) − 2)⟨D²φ(x) ∇φ(x)/|∇φ(x)|, ∇φ(x)/|∇φ(x)|⟩ + trace(D²φ(x)), if ∇φ(x) ≠ 0, and 0 ≥ λ_min((p(x) − 2)D²φ(x)) + trace(D²φ(x)), if ∇φ(x) = 0. (57) We also require that, for all x ∈ Ω and φ ∈ C² such that u(x) = φ(x) and u(y) < φ(y) for y ≠ x, all the inequalities are reversed, and we use λ_max in the role of λ_min.
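The discontinuity at h = 0 is typically handled by replacing the operator with its semicontinuous envelopes; a standard way to write this (the operator F and its envelopes below are the usual choice for this equation, assumed here rather than copied from the paper's displays) is:

```latex
% A common operator for (56), discontinuous at h = 0:
F(x,h,X) := \operatorname{trace}(X)
  + (p(x)-2)\Big\langle X\,\frac{h}{|h|},\,\frac{h}{|h|}\Big\rangle,
  \qquad h \neq 0.
% Its lower and upper semicontinuous envelopes at h = 0 are
F_*(x,0,X) = \operatorname{trace}(X) + \lambda_{\min}\big((p(x)-2)X\big),
\qquad
F^*(x,0,X) = \operatorname{trace}(X) + \lambda_{\max}\big((p(x)-2)X\big),
% which is exactly why \lambda_{\min} and \lambda_{\max} appear in the
% two halves of Definition 6.1.
```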
It is equivalent to require that u − φ has a strict local minimum at x instead of u(x) = φ(x) and u(y) > φ(y) for y ≠ x (see for example [9]). Next, we prove that, by passing to a subsequence if necessary, the value function of the game converges uniformly to a solution of the equation (56). To prove that the limiting function u is a viscosity solution to (56), we use an argument similar to the stability principle for viscosity solutions: we apply the DPP (1) to a test function φ ∈ C² and deduce the connection by utilizing the uniform convergence. Theorem 6.2. Let u_ε denote the unique continuous solution to (1) with ε > 0 and with a continuous boundary function F : Γ_{ε,ε} → ℝ. Then, there are a function u : Ω → ℝ and a subsequence {ε_i} such that u_{ε_i} converges uniformly to u on Ω, and the function u is a viscosity solution to (56) with the boundary data F. Proof. To find the function u, we use a variant of the Arzelà–Ascoli theorem (see for example [16, p. 15–16]). By Theorems 4.1 and 5.2 together with Remark 5.4, the assumptions of the Arzelà–Ascoli theorem are satisfied, and hence there exist a continuous function u on Ω with the boundary values F and a subsequence {ε_i} such that u_{ε_i} → u uniformly on Ω as i → ∞. Thus, it is enough to show that u is a viscosity solution to (56). Let x ∈ Ω and φ ∈ C² be such that u − φ has a strict local minimum at x. Then, for some r > 0, we have u(z) − φ(z) > u(x) − φ(x) for all z ∈ B_r(x) \ {x}. The uniform convergence yields an analogous inequality for u_ε, up to an arbitrarily small error, for all z ∈ B_r(x) \ {x} and for all ε > 0 small enough. Thus, we can use the definition of the infimum and deduce that for all η_ε > 0 there exists a point x_ε ∈ B_r(x) ⊂ Ω such that u_ε(x_ε) − φ(x_ε) ≤ u_ε(z) − φ(z) + η_ε for all z ∈ B_r(x) and ε > 0 small enough, with x_ε → x as ε → 0. We define ϕ := φ + u_ε(x_ε) − φ(x_ε), so that ϕ(x_ε) = u_ε(x_ε) and u_ε(z) ≥ ϕ(z) − η_ε for all z ∈ B_r(x).
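The variant of the Arzelà–Ascoli theorem used in the proof replaces equicontinuity by an asymptotic version suited to the family {u_ε}; roughly, and paraphrasing the cited lemma rather than quoting it:

```latex
% Asymptotic Arzel\`a--Ascoli (cf. [16]): suppose \{u_\varepsilon\} is
% uniformly bounded and asymptotically equicontinuous, i.e. for every
% \eta > 0 there exist r_0, \varepsilon_0 > 0 such that
|u_\varepsilon(x) - u_\varepsilon(y)| < \eta
\quad\text{whenever } |x - y| < r_0 \text{ and } \varepsilon < \varepsilon_0.
% Then there is a subsequence \{\varepsilon_i\} and a continuous limit u
% with u_{\varepsilon_i} \to u uniformly.
% Theorems 4.1 and 5.2 supply exactly this asymptotic equicontinuity
% in the interior and up to the boundary, respectively.
```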
Therefore, these together with the fact that u_ε is a solution to (1) imply the estimate below, where we use the monotonicity of T_ε and denote Λ_ε := η_ε + φ(x_ε) − u_ε(x_ε). This inequality yields the desired estimate. By Taylor's expansion of φ at x_ε with |ν| = 1, we get the corresponding expansions. In (60), we utilize the orthonormal basis V including ν and an orthonormal basis for ν⊥ to obtain the estimate in a similar way to [15]. Here, the operator ∆_{ν⊥} denotes the Laplacian on the plane ν⊥, i.e., ∆_{ν⊥}φ(x_ε) = Σ_{j=2}^{n} ⟨D²φ(x_ε)ν_j, ν_j⟩ with ν_2, . . . , ν_n the orthonormal basis vectors for ν⊥. Observe that ∆φ(z) = trace(D²φ(z)) = ∆_{ν⊥}φ(z) + ⟨D²φ(z)ν, ν⟩ for any |ν| = 1 and z ∈ Ω. To see this, we apply the orthonormal basis V and a change of variables x_1 = ν, x_2 = ν_2, . . . , x_n = ν_n. Then, we deduce the equation (61) by the chain rule. There exists a vector ν_min := ν_min(ε) minimizing
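The Taylor step referred to above can be made explicit; averaging the second-order expansions at x_ε ± εν cancels the first-order term, which is the standard computation behind estimates of this type:

```latex
% Second-order Taylor expansion of \varphi at x_\varepsilon
% in a unit direction \nu:
\varphi(x_\varepsilon + \varepsilon\nu)
  = \varphi(x_\varepsilon)
  + \varepsilon\,\langle \nabla\varphi(x_\varepsilon), \nu\rangle
  + \tfrac{\varepsilon^2}{2}\,
    \langle D^2\varphi(x_\varepsilon)\nu, \nu\rangle
  + o(\varepsilon^2).
% Averaging the expansions at x_\varepsilon \pm \varepsilon\nu cancels
% the gradient term:
\tfrac{1}{2}\big(\varphi(x_\varepsilon + \varepsilon\nu)
  + \varphi(x_\varepsilon - \varepsilon\nu)\big)
  = \varphi(x_\varepsilon)
  + \tfrac{\varepsilon^2}{2}\,
    \langle D^2\varphi(x_\varepsilon)\nu, \nu\rangle
  + o(\varepsilon^2).
```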